\section{Introduction}
\label{sec:intro}
Interactions of cosmic-ray particles with detector materials can produce radioactive isotopes that create backgrounds for experiments searching for rare events such as dark matter interactions and neutrinoless double beta decay. Silicon is a widely used detector material because it is available with very high purity, which leads to low intrinsic radioactive backgrounds. In particular, solid-state silicon-based detector technologies show promise because their eV-scale energy thresholds~\cite{PhysRevLett.123.181802,Abramoff:2019dfb,Agnese:2018col} provide sensitivity to scattering events between atoms and ``low-mass'' dark matter particles with masses below 1\,GeV/c$^{2}$~\cite{Essig:2015cda}.
Three prominent low-mass dark matter efforts that employ silicon detectors are DAMIC~\cite{aguilararevalo2020results}, SENSEI~\cite{Abramoff:2019dfb}, and SuperCDMS~\cite{PhysRevD.95.082002}. All three use the highest-purity single-crystal silicon as detector substrates~\cite{VONAMMON198494}, with sensors fabricated on the surfaces for readout of charge or phonons and installed in low-background facilities to reduce the event rate from environmental backgrounds.
A primary challenge in these rare-event searches is to distinguish potential signal events from the much higher rate of interactions due to conventional sources of radiation, both from the terrestrial environment and in the detector materials. A variety of mitigation strategies are used to minimize backgrounds; nevertheless, a nonzero residual background expectation is generally unavoidable. Beta-emitting radiocontaminants in the bulk and on the surfaces of the detectors are especially challenging in the search for dark matter because the decay products can produce energy signals that are indistinguishable from the expected signal. Both DAMIC and SuperCDMS have investigated these detector backgrounds (see, e.g., Refs.~\cite{Aguilar-Arevalo:2015lvd,aguilararevalo2020results,PhysRevD.95.082002,Orrell:2017rid}), and they have identified $^3$H~(tritium), $^{32}$Si (intrinsic to the silicon) and $^{210}$Pb (surface contamination) as the leading sources of background for future silicon-based dark matter experiments. Unlike for $^{32}$Si, there are not yet any direct measurements of the tritium background in silicon; current estimates are based on models that have yet to be validated.
Tritium and other radioactive isotopes such as $^7$Be~and $^{22}$Na~are produced in silicon detectors as a result of cosmic-ray exposure, primarily due to interactions of high-energy cosmic-ray neutrons with silicon nuclei in the detector substrates~\cite{cebrian,Agnese:2018kta}.
The level of background from cosmogenic isotopes in the final detector is effectively determined by the above-ground exposure time during and following detector production, the cosmic-ray flux, and the isotope-production cross sections. The neutron-induced production cross sections for tritium, $^7$Be, and to a lesser extent $^{22}$Na, are not experimentally known except for a few measurements at specific energies. There are several estimates of the expected cross sections; however, they vary significantly, leading to large uncertainties in the expected cosmogenic background for rare-event searches that employ silicon detectors. To address this deficiency, we present measurements of the integrated isotope-production rates from a neutron beam at the Los Alamos Neutron Science Center (LANSCE) ICE HOUSE facility \cite{lisowski2006alamos, icehouse}, which has a similar energy spectrum to that of cosmic-ray neutrons at sea level. This spectral-shape similarity allows for a fairly direct extrapolation from the measured beam production rates to the expected cosmogenic production rates. While the spectral shape is similar, the flux of neutrons from the LANSCE beam greater than \SI{10}{MeV} is roughly \num{5E8} times larger than the cosmic-ray flux, which enables production of measurable amounts of cosmogenic isotopes in short periods of time. Our measurement will allow the determination of acceptable above-ground residency times for future silicon detectors, as well as improve cosmogenic-related background estimates and thus sensitivity forecasts.
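For context, this flux scaling means that one hour in the beam is roughly equivalent, in integrated neutron exposure above \SI{10}{MeV}, to $5\times10^{8}$ hours (about 57,000 years) at sea level.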
We begin in Sec.~\ref{sec:isotopes} with a discussion of radioisotopes that can be cosmogenically produced in silicon, and we identify those most relevant for silicon-based dark matter searches: $^3$H, $^7$Be, and $^{22}$Na. For these three isotopes, we review previous measurements of the production cross sections and present the cross-section models that we use in our analysis. Section~\ref{sec:exposure} introduces our experimental approach, in which several silicon targets---a combination of charge-coupled devices (CCDs) and wafers---were irradiated at LANSCE. In Sec.~\ref{sec:counting} and Sec.~\ref{sec:production_rates} we present our measurements and predictions of the beam-activated activities, respectively. These results are combined in Sec.~\ref{sec:cosmogenic_rates} to provide our best estimates of the production rates from cosmogenic neutrons. In Sec.~\ref{sec:alternate} we evaluate other (non-neutron) production mechanisms and we conclude in Sec.~\ref{sec:discussion} with a summarizing discussion.
\section{Cosmogenic Radioisotopes}
\label{sec:isotopes}
\begin{table}[t]
\centering
\begin{tabular}{c c c c}
\hline
Isotope & Half-life & Decay & Q-value \\
& [yrs] & mode & [keV]\\
\hline
\vrule width 0pt height 2.2ex
$^3$H & 12.32\,$\pm$\,0.02 & $\beta^-$ & 18.591\,$\pm$\,0.003 \\
$^7$Be & 0.1457\,$\pm$\,0.0020 & EC & 861.82\,$\pm$\,0.02\\
$^{10}$Be & (1.51\,$\pm$\,0.06)$\times$10$^6$ & $\beta^-$ & 556.0\,$\pm$\,0.6\\
$^{14}$C & 5700\,$\pm$\,30 & $\beta^-$ & 156.475\,$\pm$\,0.004\\
$^{22}$Na & 2.6018\,$\pm$\,0.0022 & $\beta^+$ & 2842.2\,$\pm$\,0.2\\
$^{26}$Al & (7.17\,$\pm$\,0.24)$\times$10$^5$ & EC & 4004.14\,$\pm$\,6.00\\
\hline
\end{tabular}
\caption{List of all radioisotopes with half-lives $>$\,30 days that can be produced by cosmogenic interactions with natural silicon. All data are taken from NNDC databases \cite{dunford1998online}. \protect\footnotemark[1]}
\footnotetext{Unless stated otherwise, all uncertainties quoted in this paper are at 1$\sigma$ (68.3\%) confidence.}
\label{tab:rad_isotopes}
\end{table}
Most silicon-based dark matter experiments use high-purity ($\gg$\,99\%) natural silicon (92.2\% $^{28}$Si, 4.7\% $^{29}$Si, 3.1\% $^{30}$Si \cite{meija2016isotopic}) as the target detector material. The cosmogenic isotopes of interest for these experiments are therefore any long-lived radioisotopes that can be produced by cosmic-ray interactions with silicon; Table~\ref{tab:rad_isotopes} lists all isotopes with half-lives greater than 30 days that are lighter than $^{30}$Si + n/p. None of them have radioactive daughters that may contribute additional backgrounds. Assuming that effectively all non-silicon atoms present in the raw material are driven out during growth of the single-crystal silicon boules used to fabricate detectors, and that the time between crystal growth and moving the detectors deep underground is typically less than 10 years, cosmogenic isotopes with half-lives greater than 100 years (i.e., $^{10}$Be, $^{14}$C, and $^{26}$Al) do not build up sufficient activity~\cite{reedy2013cosmogenic, caffee2013cross} to produce significant backgrounds. Thus the cosmogenic isotopes most relevant to silicon-based rare-event searches are tritium, $^7$Be, and $^{22}$Na. Tritium is a particularly dangerous background for dark matter searches because it decays by pure beta emission and its low Q-value (\SI{18.6} {\keV}) results in a large fraction of decays that produce low-energy events in the expected dark matter signal region. $^7$Be~decays by electron capture, either directly to the ground state of $^7$Li (89.56\%) or via the \SI{477}{\keV} excited state of $^7$Li (10.44\%). $^7$Be~is not a critical background for dark matter searches, because it has a relatively short half-life (\SI{53.22}{\day}); however, the \SI{54.7}{\eV} atomic de-excitation following electron capture may provide a useful energy-calibration tool. $^{22}$Na~decays primarily by positron emission (90.3\%) or electron capture (9.6\%) to the 1275 keV level of $^{22}$Ne. For thin silicon detectors $^{22}$Na~can be a significant background as it is likely that both the \SI{1275}{\keV} $\gamma$ ray and the \SI{511}{\keV} positron-annihilation photons will escape undetected, with only the emitted positron or atomic de-excitation following electron capture depositing any energy in the detector. Note that compared to $^3$H, the higher $\beta^+$ endpoint (\SI{546}{keV}) means that a smaller fraction of the $^{22}$Na~decays produce signals in the energy range of interest for dark matter searches.
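This can be made quantitative with the standard activation buildup formula: for a constant production rate $R$, the activity after an above-ground exposure time $t$ is
\begin{linenomath*}
\begin{align*}
A(t) = R\left(1 - e^{-t\ln 2/T_{1/2}}\right) \approx R\,\frac{t\ln 2}{T_{1/2}} \quad \text{for } t \ll T_{1/2},
\end{align*}
\end{linenomath*}
so ten years of exposure brings even $^{14}$C, the shortest-lived of the three, to only $\sim$0.1\% of its saturation activity.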
\subsection{Tritium Production}
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{si_h3_crosssections.pdf}
\caption{Experimental measurements (magenta error bars) \cite{QAIM1978150, Tippawan:2004sy, benck2002secondary} and model estimates (continuous curves) of neutron-induced tritium production in silicon. Measurements of the proton-induced cross section \cite{goebel1964production, kruger1973high} are also shown for reference (gray error bars).}
\label{fig:si_3h_cross_sections}
\end{figure}
Tritium production in silicon at sea level is dominated by spallation interactions of high-energy cosmogenic neutrons with silicon nuclei. Tritium is a pure $\beta$ emitter, so it is not possible to directly measure the production cross section using conventional methods that rely on $\gamma$-ray~detectors to tag the reaction products. There are three previous experimental measurements of the neutron-induced tritium production cross section in silicon (shown in Fig.~\ref{fig:si_3h_cross_sections}), which either extracted tritium from a silicon target and measured the activity in a proportional counter \cite{QAIM1978150} or measured the triton nuclei ejected from a silicon target using $\Delta E-E$ telescopes \cite{Tippawan:2004sy,benck2002secondary}. The proton-induced cross section is expected to be similar to that of neutrons, so we also show previous measurements with proton beams \cite{goebel1964production, kruger1973high}. While these measurements provide useful benchmarks at specific energies, they are insufficient to constrain the cosmogenic production cross section across the full range of relevant neutron energies (from $\sim$10\,MeV to a few GeV).
For this reason, previous estimates of tritium production in silicon dark matter detectors have relied on estimates of the cross section from calculations and simulations of the nuclear interactions or compiled databases that combine calculations with experimental data \cite{martoff1987limits, zhang2016cosmogenic, agnese2019production}. The production of tritons due to spallation is difficult to model, because the triton is a very light nucleus that is produced not only during the evaporation or de-excitation phase but also from coalescence of nucleons emitted during the high-energy intra-nuclear cascade stage \cite{leray2010improved, leray2011results, filges2009handbook}. Due to large variations among the predictions of different cross-section models, we consider several models for comparison to our experimental results and extraction of cosmogenic production rates. Shown in Fig.~\ref{fig:si_3h_cross_sections} are the semi-empirical formulae of Konobeyev and Korovin (K\&K) \cite{konobeyev1993tritium} (extracted from the commonly used ACTIVIA code \cite{back2008activia}) and results from nuclear reaction calculations and Monte Carlo simulations that are performed by codes such as TALYS \cite{koning2008talys}, INCL \cite{boudard2013new} and ABLA \cite{kelic2008deexcitation}.\footnote{The Konobeyev and Korovin ($^3$H), and Silberberg and Tsao ($^7$Be, $^{22}$Na) cross sections were obtained from the ACTIVIA code package \cite{activia2017}, the TALYS cross sections were calculated using TALYS-1.9 \cite{talys1.9}, and the INCL cross sections were calculated using the INCL++ code (v6.0.1) with the ABLA07 de-excitation model \cite{mancusi2014extension}. The default parameters were used for all programs. We note that the TALYS models are optimized in the \SI{1}{\keV} to \SI{200}{\MeV} energy range though the maximum energy has been formally extended to \SI{1}{\GeV} \cite{koning2014extension}.} We also compared effective cross sections (extracted through simulation) from built-in physics libraries of the widely used Geant4 simulation package \cite{agostinelli2003geant4,allison2016recent} such as INCLXX \cite{boudard2013new,mancusi2014extension}, BERTINI \cite{bertini1963low, guthrie1968calculation, bertini1969intranuclear, bertini1971news}, and Binary Cascades (BIC) \cite{folger2004binary}.\footnote{We used Geant4.10.3.p02 with physics lists QGSP\_INCLXX 1.0 (INCL++ v5.3), QGSP\_BERT 4.0, and QGSP\_BIC 4.0.}
\subsection{$^7$Be~Production}
$^7$Be~is produced as an intermediate-mass nuclear product of cosmogenic particle interactions with silicon. The neutron-induced production cross section has been measured at only two energies \cite{ninomiya2011cross}, as shown in Fig.~\ref{fig:si_7be_cross_sections}. Although the neutron- and proton-induced cross sections are not necessarily the same, especially for neutron-deficient nuclides such as $^7$Be~and $^{22}$Na~\cite{ninomiya2011cross}, there are a large number of measurements with protons
that span the entire energy range of interest \cite{otuka2014towards, zerkin2018experimental}, which we show in Fig.~\ref{fig:si_7be_cross_sections} for comparison.\footnote{We have excluded measurements from Ref.~\cite{rayudu1968formation}, because there are well-known discrepancies with other measurements \cite{ michel1995nuclide, schiekel1996nuclide}.} For ease of evaluation, we fit the proton cross-section data with a continuous 4-node spline, hereafter referred to as ``$^{\text{nat}}$Si(p,x)$^7$Be Spline Fit''.
As with tritium, we also show predictions from different nuclear codes and semi-empirical calculations, including the well-known Silberberg and Tsao (S\&T) semi-empirical equations \cite{silberberg1973partial,silberberg1973partial2, silberberg1977cross, silberberg1985improved, silberberg1990spallation, silberberg1998updated} as implemented in the ACTIVIA code. We note that the model predictions for the $^7$Be~production cross section in silicon vary greatly, with significantly different energy thresholds, energy dependence, and magnitude. $^7$Be~is believed to be produced predominantly as a fragmentation product rather than as an evaporation product or residual nucleus \cite{michel1995nuclide}, and fragmentation is typically underestimated in most theoretical models \cite{michel1995nuclide, titarenko2006excitation}. We note that unlike for the tritium cross-section models, there is a significant difference between the predictions obtained by evaluating the INCL++ v6.0.1 model directly versus simulating with Geant4 (INCL++ v5.3), probably due to updates to the model.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{si_be7_crosssections.pdf}
\caption{Experimental measurements (magenta error bars) \cite{ninomiya2011cross} and model estimates (continuous curves) of the neutron-induced $^7$Be~production cross section in silicon. Measurements of the proton-induced cross section \cite{otuka2014towards, zerkin2018experimental} are also shown for reference (gray error bars).}
\label{fig:si_7be_cross_sections}
\end{figure}
\subsection{$^{22}$Na~Production}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{si_na22_crosssections.pdf}
\caption{Experimental measurements (magenta and pink error bars) \cite{michel2015excitation, hansmann2010production, yashima2004measurement, sisterson2007cross, ninomiya2011cross} and model estimates (continuous curves) of the neutron-induced $^{22}$Na~production cross section in silicon. Measurements of the proton-induced cross section \cite{otuka2014towards, zerkin2018experimental} are also shown for reference (gray error bars).}
\label{fig:si_22na_cross_sections}
\end{figure}
$^{22}$Na~is produced as a residual nucleus following cosmogenic interactions with silicon. Compared to tritium and $^7$Be, the production of $^{22}$Na~is the best studied. Measurements of the neutron-induced cross section were carried out by Michel et~al.\ using quasi-monoenergetic neutrons between 33 and 175 MeV, with TALYS-predicted cross sections used as the initial guess to unfold the experimentally measured production yields \cite{michel2015excitation, hansmann2010production}. These, along with six other data points between 66 and 370 MeV \cite{yashima2004measurement, sisterson2007cross, ninomiya2011cross}, are shown in Fig.~\ref{fig:si_22na_cross_sections}. Proton-induced cross-section measurements\footnote{Similar to $^7$Be, we have excluded measurements from Ref.~\cite{rayudu1968formation}.} \cite{otuka2014towards, zerkin2018experimental} span the entire energy range of interest and are significantly larger than the measured neutron-induced cross sections. As before, we also show the predicted cross sections from Silberberg and Tsao, TALYS, INCL++ (ABLA07) and Geant4 models. In order to compare the existing neutron cross-section measurements to our data, we use a piecewise model that follows the measurements in Refs.~\cite{michel2015excitation, hansmann2010production} below 180\,MeV and follows the TALYS model at higher energies. This model is hereafter referred to as ``Michel-TALYS'' (see Fig.~\ref{fig:si_22na_cross_sections}). $^{22}$Na~can also be produced indirectly through the production of the short-lived isotopes $^{22}$Mg, $^{22}$Al, and $^{22}$Si, which eventually decay to $^{22}$Na, but for the models considered the total contribution from these isotopes is $<$\,\SI{1}{\percent}, and it is ignored here.
\section{Beam Exposure}
\label{sec:exposure}
To evaluate the production rate of cosmogenic isotopes through the interaction of high-energy neutrons, we irradiated silicon charge-coupled devices (CCDs) and silicon wafers at the LANSCE neutron beam facility. Following the irradiation, the CCDs were read out to measure the beam-induced $\beta$ activity within the CCD active region, and the $\gamma$ activity induced in the wafers was measured using $\gamma$-ray spectroscopy. In this section we describe the details of the targets and beam exposure, while in Sec.~\ref{sec:counting} we present the measurement results.
\subsection{CCDs}
\label{sec:ccds}
The irradiated CCDs were designed and procured by Lawrence Berkeley National Laboratory (LBNL)~\cite{ccdtech} for the DAMIC Collaboration.
CCDs from the same fabrication lot were extensively characterized in the laboratory and deployed underground at SNOLAB to search for dark matter~\cite{Aguilar-Arevalo:2016zop, PhysRevD.94.082006}.
The devices are three-phase scientific CCDs with a buried $p$-channel fabricated on a \SI{670}{\micro\meter}-thick $n$-type high-resistivity (10--20\,\si{\kilo\ohm\cm}) silicon substrate, which can be fully depleted by applying $>$\,\SI{40}{\volt} to a thin backside contact.
The CCDs feature a 61.44$\times$30.72\,mm$^2$ rectangular array of 4096$\times$2048 pixels (each 15$\times$15 \si{\micro\meter\squared}) and an active thickness of \SI{661 \pm 10}{\micro\meter}.
By mass, the devices are $>$\,\SI{99}{\percent} elemental silicon with natural isotopic abundances. Other elements present are oxygen ($\sim$\,\SI{0.1}{\percent}) and nitrogen ($<$\,\SI{0.1}{\percent}) in the dielectrics, followed by phosphorus and boron dopants ($<$\,\SI{0.01}{\percent}) in the silicon.
Ionizing particles produce charge in the CCD active region; e.g., a fast electron or $\beta$ particle will produce on average one electron-hole pair for every \SI{3.8}{\eV} of deposited energy. The ionization charge is drifted by the applied electric field and collected on the pixel array. The CCDs are read out serially by moving the charge vertically row-by-row into the serial register (the bottom row) where the charge is moved horizontally pixel-by-pixel to the output readout node.
Before irradiation, the charge-transfer inefficiency from pixel to pixel was $< 10^{-6}$~\cite{ccdtech}, the dark current was $<$\SI{1}{e^- \per pixel \per \hour}, and the uncertainty in the measurement of the charge collected by a pixel was $\sim$2\,$e^-$ RMS. Further details on the response of DAMIC CCDs can be found in Sec.~IV of Ref.~\cite{PhysRevD.94.082006}.
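The following minimal Python sketch (an illustration, not the DAMIC reconstruction code) shows this energy-to-charge conversion, assuming Poisson fluctuations in the pair creation (ignoring the sub-Poissonian Fano correction) and the $\sim$2\,$e^-$ RMS readout noise quoted above:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

EV_PER_PAIR = 3.8   # mean energy per electron-hole pair in Si [eV]
READ_NOISE = 2.0    # pre-irradiation pixel readout noise [e- RMS]

def measured_charge(e_dep_ev):
    """Toy model: deposited energy [eV] -> measured charge [e-]."""
    n_pairs = rng.poisson(e_dep_ev / EV_PER_PAIR)   # charge generation
    return n_pairs + rng.normal(0.0, READ_NOISE)    # add readout noise

# e.g., an 18.6 keV deposit (tritium beta endpoint) -> ~4900 e-
print(measured_charge(18.6e3))
\end{verbatim}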
Even after the significant increase in CCD noise following irradiation (e.g., due to shot noise associated with an increase in dark current), the CCD can still resolve most of the tritium $\beta$-decay spectrum.
Irradiation generates defects in silicon devices that can trap charges and negatively impact the performance of CCDs. Fully depleted devices are resilient to irradiation damage in the bulk silicon because the ionization charge is collected over a short period of time, which minimizes the probability of charge being trapped by defects before it is collected.
For this reason LBNL CCDs have been considered for space-based imaging where the devices are subjected to high levels of cosmic radiation~\cite{snap}.
Measurements at the LBNL cyclotron demonstrated the remarkable radiation tolerance of the CCDs proposed for the SNAP satellite, which follow the same design principles and fabrication process as the DAMIC CCDs.
For the measurements presented in this paper, there is a trade-off between activation rate and CCD performance.
Higher irradiation leads to a higher activity of radioisotopes in the CCD and hence a lower statistical uncertainty in the measurement.
On the other hand, higher irradiation also decreases the CCD performance, which needs to be modeled and can thus introduce significant systematic uncertainty.
The two most relevant performance parameters affected by the irradiation are the charge-transfer inefficiency (CTI) and the pixel dark current (DC).
Ref.~\cite{snap} provides measurements of CTI and DC after irradiation with 12.5 and \SI{55}{MeV} protons.
Following irradiation doses roughly equivalent to a LANSCE beam fluence of $2.4\times10^{12}$ neutrons above \SI{10}{\MeV}, the CCDs were still functional with the CTI worsened to $\sim$\,$10^{-4}$ and asymptotic DC rates (after days of operation following a room-temperature anneal) increased to $\sim$\SI{100}{e^- \per pixel \per \hour}.
These values depend strongly on the specific CCD design and the operation parameters, most notably the operating temperature.
Considering the available beam time, the range of estimated production rates for the isotopes of interest, and the CCD background rates, we decided to irradiate three CCDs with different levels of exposure, roughly corresponding to $2.4\times10^{12}$, $1.6\times10^{12}$, and $0.8\times10^{12}$ neutrons above \SI{10}{MeV} at the LANSCE neutron beam. Furthermore, we used a collimator (see Sec.~\ref{sec:lansce_beam}) to suppress irradiation of the serial register at the edge of the CCDs by one order of magnitude and thus mitigate CTI in the horizontal readout direction. Following the beam exposure, we found that the least irradiated CCD had an activity sufficiently above the background rate while maintaining good instrumental response and was therefore selected for analysis in Sec.~\ref{sec:ccd_counting}.
The CCDs were packaged at the University of Washington following the procedure developed for the DAMIC experiment.
The CCD die and a flex cable were glued onto a silicon support piece such that the electrical contact pads for the signal lines are aligned.
The CCDs were then wedge bonded to the flex cable with \SI{25}{\micro\meter}-thick aluminum wire.
A connector on the tail of the flex cable can be connected to the electronics for device control and readout.
Each packaged device was fixed inside an aluminum storage box, as shown in Fig.~\ref{fig:CCDphoto}. The CCDs were kept inside their storage boxes during irradiation to preserve the integrity of the CCD package, in particular to prevent the wire bonds from breaking during handling and to reduce any possibility of electrostatic discharge, which can damage the low-capacitance CCD microelectronics.
To minimize the attenuation of neutrons along the beam path and activation of the storage box, the front and back covers that protect each CCD were made from relatively thin (0.5\,mm) high-purity aluminum (alloy 1100).
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{CCD_photo.pdf}
\caption{Photograph of the CCD package inside its aluminum storage box. Left: Package before wire bonding. Right: After wire bonding, with aluminum frame to keep the CCD package fixed in place.}
\label{fig:CCDphoto}
\end{figure}
\subsection{Wafers}
In addition to the CCDs, we exposed several Si wafers, a Ge wafer, and two Cu plates to the neutron beam. These samples served both as direct targets for activation and measurement of specific radioisotopes, and as witness samples of the neutron beam. In this paper, we focus on the Si wafers; however, the Ge wafer and Cu plates were also measured and may be the subject of future studies.
A total of eight Si wafers (4 pairs) were used: one pair matched to each of the three CCDs (such that they had the same beam exposure time) and a fourth pair that served as a control sample. The eight wafers were purchased together and have effectively identical properties. Each wafer was sliced from a Czochralski-grown single-crystal boule with a 100-mm diameter and a resistivity of $>$\SI{20}{\ohm\cm}. The wafers are undoped, were polished on one side, and have a $\langle$100$\rangle$ crystal-plane alignment. The thickness of each individual wafer is \SI{500 \pm 17}{\micro\meter} (based on information from the vendor). The control sample was not exposed to the neutron beam and thus provides a background reference for the gamma counting. Note that because the wafers were deployed and counted in pairs, henceforth we distinguish and refer to only pairs of wafers rather than individual wafers. The (single) Ge wafer is also \SI{100}{\milli\meter} in diameter and undoped, with a thickness of \SI{525 \pm 25}{\micro\meter}, while the Cu plates have dimensions of $114.7 \times 101.6 \times$ \SI{3.175}{\milli\meter}.
\subsection{LANSCE Beam Exposure}
\label{sec:lansce_beam}
\begin{figure*}
\centering
\includegraphics[width=0.32\textwidth]{config1-pers.pdf}
\includegraphics[width=0.32\textwidth]{config2-pers.pdf}
\includegraphics[width=0.32\textwidth]{config3-pers.pdf}
\caption{Geant4 renderings of the three setups used to position targets in the neutron beam, with the beam passing from right to left.
Aluminum (Al) boxes holding the CCDs (yellow) were held in place by an Al rack (dark gray).
For the initial setup (left), the Al box is made transparent to show the positioning of the CCD (red), air (grey), and other structures (light brown).
The other targets include pairs of Si wafers (green), a Ge wafer (blue), and Cu plates (copper brown).
The polyethylene wafer holder (purple) is simplified to a rectangle of the same thickness and height as the actual object, with the sides and bottom removed.
All targets were supported on an acetal block (light gray).}
\label{fig:g4rendering}
\end{figure*}
The samples were irradiated at the LANSCE WNR ICE-HOUSE II facility~\cite{icehouse} on Target 4 Flight Path 30 Right (4FP30R). A broad-spectrum (0.2--800 MeV) neutron beam was produced via spallation of 800 MeV protons on a tungsten target. A 2.54-cm (1") diameter beam collimator was used to restrict the majority of the neutrons to within the active region of the CCD and thus prevent unwanted irradiation of the serial registers on the perimeter of the active region. The neutron fluence was measured with $^{238}$U foils by an in-beam fission chamber~\cite{wender1993fission} placed downstream of the collimator. The beam has a pulsed time structure, which allows the incident neutron energies to be determined using the time-of-flight (TOF) technique, via a measurement of the time between the proton beam pulse and the fission chamber signals~\cite{lisowski2006alamos,wender1993fission}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=\columnwidth]{InBeamLayout_cropped.jpg}
\end{center}
\caption{Layout of the samples as placed in the beam during the final irradiation setup (cf.\ Fig.~\ref{fig:g4rendering} right). The beam first passes through the cylindrical fission chamber (far right) and then through the samples (from right to left): 3~CCDs in Al boxes (with flex cables emerging at the top), 3~pairs of Si wafers, 1~Ge wafer, and 2~Cu plates.}
\label{Fig:CCDlayout}
\end{figure}
The beam exposure took place over four days between September 18$^{\mathrm{th}}$ and 22$^{\mathrm{nd}}$, 2018. On Sept.\,18, CCD\,1 was placed in the beam line at 18:03 local time, located closest to the fission chamber, along with a pair of Si wafers, one Ge wafer, and one Cu plate placed downstream (in that order; cf.\ Fig.~\ref{fig:g4rendering} left). The front face of the Al box containing CCD\,1 was \SI{260}{\mm} from the face of the fission chamber. At 17:16 on Sept.\,20, CCD\,2 was added directly downstream from CCD\,1, along with another pair of Si wafers. The front face of the Al box for CCD\,2 was \SI{14.3}{\mm} from the front face of CCD\,1. At 09:11 on Sept.\,22, CCD\,3 was added downstream with an equidistant spacing relative to the other CCDs, along with another pair of Si wafers and a second Cu plate. Figure~\ref{fig:g4rendering} shows schematics of these three exposure setups, while Fig.~\ref{Fig:CCDlayout} shows a photograph of the final setup in which all three CCDs were on the beam line. The exposure was stopped at 08:00 on Sept.\,23, and all parts exposed to the beam were kept in storage for approximately seven weeks to allow short-lived radioactivity to decay prior to shipment for counting.
\subsection{Target Fluence}
The fluence measured by the fission chamber during the entire beam exposure is shown in Fig.~\ref{fig:lanscebeamenergy}, with a total of \num{2.91 \pm 0.22 E12} neutrons above 10 MeV. The uncertainty is dominated by the systematic uncertainty in the $^{238}$U(n, f) cross section used to monitor the fluence, shown in Fig.~\ref{fig:fission_cs}. Below 200 MeV the assumed LANSCE cross section and various other experimental measurements and evaluations \cite{lisowski1991fission, carlson2009international, tovesson2014fast, marcinkevicius2015209} agree to better than 5\%. Between 200 and 300 MeV there are only two measurements of the cross section \cite{lisowski1991fission, miller2015measurement}, which differ by 5--10\%. Above \SI{300}{\MeV} there are no experimental measurements. The cross section used by the LANSCE facility assumes a constant cross section above \SI{380}{\MeV} at roughly the same value as that measured at \SI{300}{\MeV} \cite{miller2015measurement}. This is in tension with evaluations based on extrapolations from the $^{238}$U(p, f) cross section that recommend an increasing cross section to a constant value of roughly \SI{1.5}{\barn} at 1 GeV \cite{duran2017search,carlson2018evaluation}. We have used the LANSCE cross section and assumed a 5\% systematic uncertainty below \SI{200}{\MeV}, a 10\% uncertainty between 200 and \SI{300}{\MeV}, and a constant 20\% uncertainty between 300 and \SI{750}{\MeV}. The uncertainty in the neutron energy spectrum due to the timing uncertainty in the TOF measurement (FWHM $\sim$ \SI{1.2}{\nano\second}) is included in all calculations but is sub-dominant (2.5--3.5\%) for the estimates of isotope production rates.
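The piecewise systematic band described above can be propagated onto the integrated fluence as in the following sketch (the binned spectrum and the correlation treatment, fully correlated within a band and independent between bands, are our assumptions for illustration):
\begin{verbatim}
import numpy as np

def fluence_and_uncertainty(E, n):
    """E: bin centers [MeV]; n: neutrons per bin from the fission
    chamber. Returns the fluence above 10 MeV and its systematic
    uncertainty using the piecewise 238U(n,f) bands from the text."""
    frac = np.where(E < 200, 0.05, np.where(E < 300, 0.10, 0.20))
    sel = E > 10
    total = n[sel].sum()
    # Sum linearly within each band (fully correlated), then combine
    # the bands in quadrature (assumed independent).
    err_sq = 0.0
    for lo, hi in [(10, 200), (200, 300), (300, 750)]:
        band = sel & (E >= lo) & (E < hi)
        err_sq += (frac[band] * n[band]).sum() ** 2
    return total, np.sqrt(err_sq)
\end{verbatim}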
While the nominal beam diameter was set by the 1" collimator, the cross-sectional beam profile has significant tails at larger radii. At the fission chamber approximately 38.8\% of neutrons fall outside a 1" diameter, as calculated with the beam profile provided by LANSCE.
Additionally, the beam is slightly diverging, with an estimated cone opening angle of 0.233\degree. A Geant4 \cite{agostinelli2003geant4,allison2016recent} simulation that included the measured beam profile and beam divergence, the measured neutron spectrum, and the full geometry and materials of the targets, mounting apparatus, and fission chamber was used to calculate the neutron fluence through each material, accounting for any attenuation of the neutrons through the targets.
To reduce computational time, a biasing technique was used to generate neutrons. Instead of following the beam profile, neutrons were generated uniformly in a \SI{16}{\cm}$\times$\SI{16}{\cm} square in front of the fission chamber, covering the entire cross-sectional area of the setup. After running the Geant4 simulation, each event was assigned a weight which is proportional to the intensity of the beam at the simulated neutron location, as obtained from the two-dimensional beam profile supplied by LANSCE. This allows reuse of the same simulation results for different beam profiles and alignment offsets. A total of \num{5.5 E10} neutrons above 10 MeV were simulated for each setup and physics list.
At this level of statistics, the statistical uncertainties in the simulation are sub-dominant to the total neutron fluence uncertainty.
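A minimal sketch of this re-weighting scheme follows (the Gaussian stand-in profile and the function names are placeholders; the real input is the two-dimensional profile supplied by LANSCE):
\begin{verbatim}
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Placeholder beam profile on a regular grid [mm]; the measured
# LANSCE profile would be loaded here instead.
x = np.linspace(-80.0, 80.0, 161)
y = np.linspace(-80.0, 80.0, 161)
intensity = np.exp(-(x[:, None]**2 + y[None, :]**2) / (2 * 12.7**2))

beam = RegularGridInterpolator((x, y), intensity,
                               bounds_error=False, fill_value=0.0)

def event_weight(x_mm, y_mm, dx_mm=0.0, dy_mm=0.0):
    """Weight for a neutron generated uniformly over the 16x16 cm^2
    plane: proportional to the beam intensity at its position. The
    offsets allow the same simulated events to be re-used for
    alignment studies."""
    return float(beam((x_mm - dx_mm, y_mm - dy_mm)))
\end{verbatim}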
The simulations show that each CCD receives about \SI{83}{\percent} of the whole beam. To assess the uncertainty in the neutron fluence due to misalignment of the beam with the center of the CCDs, the profile of the beam was reconstructed by measuring the dark current rate in the CCDs as a function of position (see Sec.~\ref{sec:ccd_counting}). The beam misalignment is calculated to be about $-2.3$\,mm in the $x$ direction and $+0.5$\,mm in the $y$ direction, which, when input into the Geant4 simulation, yields a systematic uncertainty in the neutron fluence of less than 1\%. The total neutron fluence ($>$ \SI{10}{\MeV}) through each CCD and its Si-wafer matched pair is listed in Table~\ref{tab:neutron_fluences}; corresponding energy spectra are shown in Fig.~\ref{fig:lanscebeamenergy} (the spectral shape of the fluence through each Si-wafer pair is very similar to that of the corresponding CCD and has been omitted for clarity).
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{neutron_flux_targets.pdf}
\caption{Comparison of the LANSCE 4FP30R/ICE II neutron beam with sea-level cosmic-ray neutrons. The black data points and left vertical axis show the number of neutrons measured by the fission chamber during the entire beam exposure used for this measurement. Uncertainties shown are statistical only (see main text for discussion of systematic uncertainties). The colored markers show the simulated fluence for each of the CCDs in the setup. For comparison, the red continuous line and the right vertical axis show the reference cosmic-ray neutron flux at sea level for New York City during the midpoint of solar modulation~\cite{gordon2004measurement}.}
\label{fig:lanscebeamenergy}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{fission_cs.pdf}
\end{center}
\caption{Experimental measurements (circles) \cite{lisowski1991fission, tovesson2014fast, miller2015measurement} and evaluations (squares) \cite{carlson2009international, marcinkevicius2015209, duran2017search, carlson2018evaluation} of the $^{238}$U(n, f) cross section. The cross section assumed by the LANSCE facility to convert the fission chamber counts to a total neutron fluence is shown by the black line, with the shaded grey band indicating the assumed uncertainty.}
\label{fig:fission_cs}
\end{figure}
\begin{table}
\centering
\begin{tabular}{c c c}
\hline
Target & Exposure time & Neutrons through target \\
& [hrs] & ($> 10$ MeV)\\
\hline
\vrule width 0pt height 2.2ex
CCD 1 & 109.4 & \num{2.39 \pm 0.18 E12}\\
Wafer 1 & 109.4 & \num{2.64 \pm 0.20 E12}\\
\hline
\vrule width 0pt height 2.2ex
CCD 2 & 62.7 & \num{1.42 \pm 0.11 E12}\\
Wafer 2 & 62.7 & \num{1.56 \pm 0.12 E12}\\
\hline
\vrule width 0pt height 2.2ex
CCD 3 & 22.8 & \num{5.20 \pm 0.39 E11}\\
Wafer 3 & 22.8 & \num{5.72 \pm 0.43 E11}\\
\hline
\end{tabular}
\caption{Beam exposure details for each CCD and its Si-wafer matched pair.}
\label{tab:neutron_fluences}
\end{table}
\section{Counting}
\label{sec:counting}
\subsection{Wafers}
\label{ssec:wafer_counting}
\begin{table*}[ht]
\centering
\begin{tabular}{ccccc}
\hline
& Wafer 0 & Wafer 1 & Wafer 2 & Wafer 3 \\
\hline
\vrule width 0pt height 2.2ex
Si areal density [atoms/cm$^2$] & \multicolumn{4}{c}{\num{4.99 \pm 0.17 e21}} \\
Beam to meas.\ time [days] & - & \num{184.107} & \num{187.131} & \num{82.342} \\
Ge counting time [days] & \num{7.000} & \num{1.055} & \num{3.005} & \num{7.000} \\
\hline
\vrule width 0pt height 2.2ex
Measured $^7$Be~activity [mBq] & $<$\num{40} & \num{161 \pm 24} & \num{75 \pm 12} & \num{149 \pm 12}\\
Decay-corrected $^7$Be~activity [mBq] & - & \num{1830 \pm 270} & \num{870 \pm 140} & \num{437 \pm 34}\\
Beam-avg.\ $^7$Be~cross section [cm$^2$] & - & \num{0.92 \pm 0.16 E-27} & \num{0.74 \pm 0.13 E-27} & \num{1.01 \pm 0.12 E-27}\\
\hline
\vrule width 0pt height 2.2ex
Measured $^{22}$Na~activity [mBq] & $<$\num{5.1} & \num{606 \pm 29} & \num{370 \pm 16} & \num{139.5 \pm 6.3}\\
Decay-corrected $^{22}$Na~activity [mBq] & - & \num{694 \pm 33} & \num{424 \pm 19} & \num{148.2 \pm 6.6}\\
Beam-avg.\ $^{22}$Na~cross section [cm$^2$] & - & \num{6.23 \pm 0.60 E-27} & \num{6.44 \pm 0.61 E-27} & \num{6.15 \pm 0.58 E-27}\\
\hline
\end{tabular}
\caption{Gamma-counting results for the Si-wafer pairs. Measured activities are corrected for isotope decay that occurred during the beam exposure, as well as between the end of the beam exposure and the time of the gamma counting. Uncertainties are listed at 1$\sigma$ (68.3\%) confidence while upper limits quoted for the unirradiated pair (``Wafer 0'') represent the spectrometer's minimum detectable activity (Currie MDA with a 5\% confidence factor~\cite{currie}) at the corresponding peak energy.}
\label{tab:wafer_counting}
\end{table*}
The gamma-ray activities of the Si-wafer pairs (including the unirradiated pair) were measured with a low-background counter at Pacific Northwest National Laboratory (PNNL). Measurements were performed using a Canberra Broad Energy Ge (BEGe) gamma-ray spectrometer (model BE6530) situated within the shallow underground laboratory (SUL) at PNNL \cite{aalseth2012shallow}. The SUL is designed for low-background measurements, with a calculated depth of \SI{30}{\meter} water equivalent.
The BEGe spectrometer is optimized for the measurement of fission and activation products, combining the spectral advantages of low-energy and coaxial detectors, with an energy range from \SI{3}{\keV} to \SI{3}{\MeV}.
The detector is situated within a lead shield (200\,mm), lined with tin (1\,mm) and copper (1\,mm). It is equipped with a plastic scintillator counter \cite{burnett2017development, burnett2014cosmic, burnett2012development, burnett2013further} to veto cosmic rays, which improves sensitivity by reducing the cosmic-ray-induced detector background by a further 25\%.
The detector was operated with a Canberra Lynx MCA to provide advanced time-stamped list mode functionality.
\begin{figure*}[t!]
\centering
\includegraphics[width=\textwidth]{ge_counting.pdf}
\caption{Spectral comparison of the gamma-counting results for the Si-wafer pairs. Inspection of the full energy range (top panel) reveals two peaks in the irradiated samples (1, 2, and 3) at \SI{478}{\keV} (bottom left) and \SI{1275}{\keV} (bottom right) that are not present in the unirradiated sample (0), corresponding to $^7$Be\ and $^{22}$Na\ activated by the LANSCE neutron beam, respectively.}
\label{fig:ge_counting}
\end{figure*}
Each wafer pair was measured independently, with wafer pair 3 and the unexposed wafer pair 0 counted for longer periods because their expected activities were the lowest. Table~\ref{tab:wafer_counting} shows the gamma-counting details, and Fig.~\ref{fig:ge_counting} shows the measured gamma-ray spectra. Spectral analysis was performed using the Canberra Genie 2000 Gamma Acquisition \& Analysis software (version 3.4) and all nuclear data were taken from the Evaluated Nuclear Data File (ENDF) database \cite{chadwick2011endf} hosted at the National Nuclear Data Center by Brookhaven National Laboratory. Compared to the unirradiated wafer-pair spectrum, the only new peaks identified in the spectra of the irradiated wafer pairs are at 478 and \SI{1275}{\keV}, corresponding to $^7$Be~(10.44\% intensity per decay) and $^{22}$Na~(99.94\% intensity per decay), respectively (cf.\,Fig.\,\ref{fig:ge_counting}). Note that each of the irradiated wafer pairs also has a significant excess at \SI{511}{\keV}, corresponding to positron-annihilation photons from $^{22}$Na\ decays, and an associated sum peak at \SI{1786}{\keV} ($= 511 +$ \SI{1275}{\keV}).
The $^7$Be\ and $^{22}$Na\ activities in each wafer pair were calculated using the 478 and \SI{1275}{\keV} peaks, respectively. The measured values listed in Table~\ref{tab:wafer_counting} include the detector efficiency and true-coincidence summing corrections for the sample geometry and gamma-ray energies considered (calculated using the Canberra In Situ Object Counting Systems, or ISOCS, calibration software \cite{venkataraman1999validation}). The activity uncertainties listed in Table~\ref{tab:wafer_counting} include both the statistical and systematic contributions, with the latter dominated by uncertainty in the efficiency calibration ($\sim$\SI{4}{\percent}). Each measured activity is then corrected for isotope decay that occurred during the beam exposure, as well as between the end of the beam exposure and the time of the gamma counting.
To compare among the results of the different wafer pairs, we divide each decay-corrected activity by the total number of incident neutrons and the number of target Si atoms to obtain a beam-averaged cross section (also listed in Table~\ref{tab:wafer_counting}). The values are in good agreement for both $^7$Be\ and $^{22}$Na\ (even if the common systematic uncertainty associated with the neutron beam fluence is ignored), which serves as a cross-check of the neutron-beam exposure calculations. The lack of any other identified peaks confirms that there are no other significant long-lived gamma-emitting isotopes produced by high-energy neutron interactions in silicon. Specifically, the lack of an identifiable peak at \SI{1808.7}{\keV} allows us to place an upper limit on the produced activity of $^{26}$Al at the minimum detectable activity level of \SI{12}{\milli\becquerel} (Currie MDA with a 5\% confidence factor~\cite{currie}), i.e.\ at least 58$\times$ lower than the $^{22}$Na\ activity in wafer pair 1.
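As a concrete check of the numbers in Table~\ref{tab:wafer_counting}, the following sketch reproduces the Wafer~3 $^{22}$Na\ beam-averaged cross section from the measured activity (the decay-correction factors written here, an exponential cool-down term and a saturation factor for decay during the exposure, are our reading of the procedure described above):
\begin{verbatim}
import numpy as np

T_HALF = 2.6018 * 365.25 * 86400        # 22Na half-life [s]
TAU = T_HALF / np.log(2)                # mean life [s]
LAM = 1.0 / TAU                         # decay constant [1/s]

A_meas = 139.5e-3                       # measured activity [Bq]
t_cool = 82.342 * 86400                 # beam end to counting [s]
T_beam = 22.8 * 3600                    # beam exposure [s]

# Correct for decay after the beam and for decay during the exposure.
A_corr = (A_meas * np.exp(LAM * t_cool)
          * (LAM * T_beam) / (1.0 - np.exp(-LAM * T_beam)))

n_areal = 4.99e21                       # Si atoms per cm^2 (wafer pair)
n_beam = 5.72e11                        # neutrons > 10 MeV through target

sigma = A_corr * TAU / (n_areal * n_beam)
print(f"{A_corr*1e3:.1f} mBq, sigma = {sigma:.2e} cm^2")
# -> ~148 mBq and ~6.2e-27 cm^2, matching the tabulated values
\end{verbatim}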
\subsection{CCDs}
\label{sec:ccd_counting}
Images from CCD\,3 were acquired at The University of Chicago in a custom vacuum chamber. Prior to counting, the CCD was removed from the aluminum transport box and placed in a copper box inside the vacuum chamber. Images taken were 4200 columns by 2100 rows in size, with 52 rows and 104 columns constituting the ``overscan'' (i.e., empty pixel reads past the end of the CCD pixel array). These overscan pixels contain no charge and thus provide a direct measurement of the pixel readout noise. A total of 8030 post-irradiation images with \SI{417}{\second} of exposure were acquired, for a total counting time of 38.76 days. Data were taken in long continuous runs of many images, with interruptions in data taking for CCD testing demarcating separate data runs.
Background data were taken prior to shipment to the LANSCE facility for neutron irradiation. These background data consist of the combined spectrum from all radioactive backgrounds in the laboratory environment, including the vacuum chamber, the intrinsic contamination in the CCD, and cosmic rays. A total of 1236 images were acquired using the same readout settings as post-irradiation images, but with a longer exposure of \SI{913}{\second}, for a total counting time of 13.06 days.
CCD images were processed with the standard DAMIC analysis software~\cite{PhysRevD.94.082006}, which subtracts the image pedestal, generates a ``mask'' to exclude repeating charge patterns in the images caused by defects, and groups pixels into clusters that correspond to individual ionization events. The high dark current caused by damage to the CCD from the irradiation (see Fig.~\ref{fig:darkcurrentprofile}) necessitated a modification to this masking procedure because the average CCD pixel values were no longer uniform across the entire CCD, as they were before irradiation. The images were therefore split into 20-column segments which were treated separately for the pedestal subtraction and masking steps.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{dark_current_profile.pdf}
\caption{Post-irradiation dark-current profile for CCD\,3, obtained from the median pixel values across multiple images. The elevated number of dark counts in the center of the CCD shows the effect of the neutron damage on the CCD.}
\label{fig:darkcurrentprofile}
\end{figure}
Simulations of $^3$H{}, $^{22}$Na{}, and $^7$Be{} decays in the bulk silicon of the CCD were performed with a custom Geant4 simulation employing the Penelope physics list, with a simplified geometry that included only the CCD and the surrounding copper box. Radioactive-decay events were simulated according to the beam profile, assumed to be proportional to the dark-current profile (shown in Fig.~\ref{fig:darkcurrentprofile}). The CCD response was simulated for every ionization event, including the stochastic processes of charge generation and transport that were validated in Ref.~\cite{PhysRevD.96.042002}.
To include the effects of noise and dark current on the clustering algorithm, simulated ``blank'' images were created with the same noise and dark-current profile as the post-irradiation data. The simulated ionization events were pixelated and added onto the blank images, which were then processed with the standard DAMIC reconstruction code to identify clusters.
The increase in the vertical (row-to-row) charge transfer inefficiency (CTI) observed in the post-irradiation data was simulated with a Poissonian kernel, which assumes a constant mean probability, $\lambda$, of charge loss for each pixel transfer along a column~\cite{janesick}. We assume a dependence of $\lambda$ as a function of column number that is proportional to the dark current profile. The total effect of CTI on a particular cluster depends on the number of vertical charge transfers $n$. The continuous CCD readout scheme, chosen to optimize the noise while minimizing overlap of charge clusters, results in a loss of information about the true number of vertical charge transfers for each cluster. For every simulated cluster we therefore pick a random $n$ uniformly from 1 to 2000 to simulate events distributed from the bottom row to the top row of the CCD and apply the Poissonian kernel. We determined the maximum value of $\lambda$ near the center of the CCD to be $9\times10^{-4}$ by matching the distribution of the vertical spread of clusters in the simulation to the data.\footnote{The data from CCD\,1 and CCD\,2, which experienced significantly higher neutron irradiation than CCD\,3, were discarded from the analysis because the vertical CTI could not be well described with a Poissonian kernel. We suspect that the CTI in these CCDs is dominated by the effect of charge traps introduced by the neutron irradiation. During the readout procedure these traps are filled with charge from ionization clusters. The charge is then released on the time scale of milliseconds, corresponding to $\sim$25 vertical transfers. This effect is difficult to model and results in considerable loss of charge from clusters in these two CCDs.}
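A toy implementation of this kernel follows (our reading of the model; single-pixel deferral per transfer is an assumption made for illustration):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def apply_cti(cluster, n_transfers, lam):
    """Apply n vertical transfers to a 1-D charge column [e-]: in
    each transfer every electron is left behind with probability lam
    and lands in the trailing pixel."""
    q = np.zeros(len(cluster) + 50, dtype=np.int64)  # room for trail
    q[:len(cluster)] = cluster
    for _ in range(n_transfers):
        lost = rng.binomial(q, lam)
        q -= lost
        q[1:] += lost[:-1]
    return q

# A 5000 e- point-like cluster after 1000 transfers with lam = 9e-4
print(apply_cti(np.array([5000]), 1000, 9e-4)[:5])
\end{verbatim}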
The identified clusters in the background data acquired prior to irradiation at LANSCE were also introduced on simulated blank images to include the effect of dark current, defects, and CTI on the background spectrum in the activated region of the CCD.
The post-irradiation energy spectrum was fit using a model that includes components for the CCD background, $^{22}$Na{} decays, and $^3$H{} decays. $^7$Be{} was excluded from the fit because the decay does not produce a significant contribution to the total energy spectrum, even if the activity were many times the value we expect based on the wafer measurement.
We constructed a binned Poissonian log-likelihood as the test statistic for the fit, which was minimized using Minuit \cite{James:1994vla} to find the best-fit parameters.
Due to the relatively low statistics in the background template compared to post-irradiation data, statistical errors were corrected using a modified Barlow-Beeston method \cite{BARLOW1993219}, allowing each bin of the model to fluctuate by a Gaussian-constrained term with a standard deviation proportional to the bin statistical uncertainty.
The data spectrum was fit from 2 to \SI{25}{\kilo\eV} to contain most of the $^3$H{} spectrum, while excluding clusters from noise at low energies.
A \SI{2}{\kilo\eV}-wide energy region around the copper K-shell fluorescence line at \SI{8}{\kilo\eV} was masked from the fit because it is not well-modeled in the simulation.
This peak-like feature is more sensitive to the details of the energy response than the smooth $^3$H{} spectrum. We have verified that including this K-shell line in the fit has a negligible effect on the fitted $^3$H\ activity.
The background rate for the fit was fixed to the pre-irradiation value, while keeping the amplitude of the $^{22}$Na{} spectrum free.
This choice has a negligible impact on the $^3$H{} result because the background and $^{22}$Na{} spectra are highly degenerate within the fit energy range, with a correlation coefficient of 0.993.
Figure~\ref{fig:finalfitresults} shows the measured energy spectrum and the best-fit result ($\chi^2/\mathrm{NDF} = 104/87$).
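For concreteness, a compact sketch of such a likelihood (a simplified stand-in for the actual fit code, with the Barlow-Beeston-style correction implemented as one unit-Gaussian-constrained nuisance parameter per bin):
\begin{verbatim}
import numpy as np

def nll(data, templates, amps, sigma_mc, thetas):
    """Binned Poisson negative log-likelihood. data: counts per bin;
    templates: list of model spectra; amps: template amplitudes;
    sigma_mc: fractional MC statistical uncertainty per bin;
    thetas: per-bin nuisance parameters (unit-Gaussian constrained)."""
    mu = np.sum([a * t for a, t in zip(amps, templates)], axis=0)
    mu = np.clip(mu * (1.0 + thetas * sigma_mc), 1e-12, None)
    # Poisson term (dropping the data-only constant) + constraints
    return np.sum(mu - data * np.log(mu)) + 0.5 * np.sum(thetas**2)
\end{verbatim}
In practice such a function is minimized over the template amplitudes and nuisance parameters with Minuit.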
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{plot_final_fit.pdf}
\caption{Data spectrum and best-fit model with the spectral components stacked in different colors. The spectrum was fit from 2 to \SI{25}{\keV} with the shaded region around the \SI{8}{\keV} copper K-shell fluorescence line excluded from the fit. The rise in the spectrum below \SI{18}{\keV} from $^3$H{} decay is clearly visible above the nearly flat background and $^{22}$Na{} spectrum.}
\label{fig:finalfitresults}
\end{figure}
After the fit was performed, the activities were calculated by dividing the fitted counts by the cumulative data exposure. This number was corrected for the isotope-specific event detection efficiency obtained from the simulation for the energy region of interest.
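In other words, $A = N_\text{fit} / (\varepsilon\, t_\text{exp})$, where $N_\text{fit}$ is the number of fitted decays, $t_\text{exp} = 8030 \times \SI{417}{\second} \approx 38.76$\,days is the cumulative exposure, and $\varepsilon$ is the simulation-derived detection efficiency for the energy region of interest.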
Systematic errors were estimated from a series of fits under different configurations, including varying the energy range of the fit, varying the energy response and charge transfer parameters within their uncertainties, and floating versus constraining the amplitudes of the background and/or $^{22}$Na{} components in the fit.
The best estimate for the tritium activity in CCD\,3 (after correcting for radioactive decay) is $45.7 \pm 0.5 $ (stat) $\pm 1.5 $ (syst) \si{\milli\becquerel}.
The precision of the $^{22}$Na\ measurement in the CCDs is limited because the relatively flat $^{22}$Na{} spectrum is degenerate with the shape of the background spectrum. Unfortunately, there are no features in the CCD spectrum at low energies that can further constrain the $^{22}$Na{} activity. Further, the damage to the CCD renders the spectrum at higher energies unreliable because events with energies $>$\SI{50}{\kilo\eV} create large extended tracks where the effects of CTI, dark current, and pileup with defects become considerable, preventing reliable energy reconstruction.
Notably, characteristic full-absorption $\gamma$ lines are not present in the CCD spectrum because $\gamma$ rays do not deposit their full energy in the relatively thin CCDs. As a cross-check of the post-irradiation background rate, we separately fit the first and last 400 columns of the CCD (a region mostly free of neutron exposure) and found values consistent with the pre-irradiation background to within $\sim$\SI{7}{\percent}. Constraining the background to within this range has a negligible effect on the fitted tritium activity but leads to significant variation in the estimated $^{22}$Na\ activity, which dominates the overall systematic uncertainty. The best estimate for the $^{22}$Na~activity in CCD\,3 is $126 \pm 5 $ (stat) $ \pm 26 $ (syst) \si{\milli\becquerel}. This is consistent with the more precise measurement of the $^{22}$Na~activity in the silicon wafers, which corresponds to a CCD\,3 activity of \SI{88.5 \pm 5.3}{\milli\becquerel}.
\section{Predicted Beam Production Rate}
\label{sec:production_rates}
If the neutron beam had an energy spectrum identical to that of cosmic-ray neutrons, we could simply estimate the cosmogenic production rate by scaling the measured activity by the ratio of the cosmic-ray neutron flux to that of the neutron beam. However, the beam spectrum falls off faster at higher energies than that of cosmic rays (see Fig.~\ref{fig:lanscebeamenergy}). Thus we must rely on a model for the production cross sections to extrapolate from the beam measurement to the cosmogenic production rate.
We can evaluate the accuracy of the different cross-section models by comparing the predicted $^3$H, $^7$Be, and $^{22}$Na~activity produced by the LANSCE neutron beam irradiation to the decay-corrected measured activities. We note that measurements of the unirradiated targets confirm that any non-beam-related isotope concentrations (e.g., due to cosmogenic activation) are negligible compared to the beam-induced activity.
For a given model of the isotope production cross section $\sigma(E)$ [cm$^2$], the predicted isotope activity, $P$ [Bq], produced by the beam (correcting for decays) is given by
\begin{linenomath*}
\begin{align}
\label{eq:beam_act}
P = \frac{n_a}{\tau} \int S(E) \cdot \sigma(E)~dE
\end{align}
\end{linenomath*}
where $n_a$ is the areal number density of the target silicon atoms [\si{\atoms\per \cm\squared}], $\tau$ is the mean life [\si{\second}] of the isotope decay, and $S(E)$ is the energy spectrum of neutrons [\si{\neutrons \per \MeV}]. The second column of Table~\ref{tab:trit_pred} shows the predicted activity in CCD 3, $P_\text{CCD3}$, for the different $^3$H~cross-section models considered. The corresponding numbers for $^7$Be~and $^{22}$Na~in Wafer 3 ($P_\text{W3}$) are shown in Tables~\ref{tab:ber_pred} and \ref{tab:sod_pred}, respectively. The uncertainties listed include the energy-dependent uncertainties in the LANSCE neutron beam spectrum and the uncertainty in the target thickness.
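Numerically, Eq.~(\ref{eq:beam_act}) amounts to a single quadrature over the measured beam spectrum; a minimal sketch with placeholder inputs (the real spectrum and cross sections come from the references above):
\begin{verbatim}
import numpy as np

def predicted_activity(E, S, sigma, n_areal, tau):
    """Evaluate P = (n_a / tau) * integral S(E) sigma(E) dE.
    E [MeV], S [neutrons/MeV], sigma [cm^2], n_areal [atoms/cm^2],
    tau [s]; returns the produced activity P [Bq]."""
    return n_areal / tau * np.trapz(S * sigma, E)
\end{verbatim}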
\begin{table*}[t!]
\centering
\begin{tabular}{cccccc}
\hline
Model & Predicted LANSCE & Ejected & Implanted & Predicted LANSCE & Measured/Predicted\\
& $^3$H~produced act. & activity & activity & $^3$H~residual act. & $^3$H~residual activity\\
& $P_\text{CCD3}$ [\si{\milli\becquerel}] & $E_\text{CCD3}$ [\si{\milli\becquerel}] & $I_\text{CCD3}$ [\si{\milli\becquerel}] & $R_\text{CCD3}$ [\si{\milli\becquerel}] & \\
\hline
K\&K (ACTIVIA) & \num{40.8 \pm 4.5} & & &\num{41.5 \pm 5.6} & \num{1.10 \pm 0.15}\\
TALYS & \num{116 \pm 16} & \num{46.70 \pm 0.12} & \num{53.8 \pm 2.1} & \num{123 \pm 17} & \num{0.370 \pm 0.053} \\
INCL++(ABLA07) & \num{41.8 \pm 4.8} & & & \num{42.5 \pm 5.9} & \num{1.07 \pm 0.15}\\
GEANT4 BERTINI & \num{13.0 \pm 1.5} & \num{3.354 \pm 0.072} & \num{3.699 \pm 0.045} & \num{13.3 \pm 1.6} & \num{3.43 \pm 0.42}\\
GEANT4 BIC & \num{17.8 \pm 1.8} & \num{4.995 \pm 0.084} & \num{6.421 \pm 0.059} & \num{19.2 \pm 2.0} & \num{2.38 \pm 0.26}\\
GEANT4 INCLXX & \num{42.3 \pm 5.1} & \num{20.65 \pm 0.11} & \num{16.94 \pm 0.10} & \num{38.5 \pm 4.6} & \num{1.19 \pm 0.15}\\
\hline
\end{tabular}
\caption{Predicted $^3$H~activity in CCD 3 based on different cross-section models. The second column lists the total predicted activity produced in the CCD. The third and fourth columns list the activity ejected and implanted respectively with listed uncertainties only due to simulation statistics. The fifth column shows the final predicted residual activity calculated from the second, third, and fourth columns, including systematic uncertainties due to the geometry. For models without ejection and implantation information we use the average of the other models---see text for details. The final column shows the ratio of the experimentally measured activity to the predicted residual activity.}
\label{tab:trit_pred}
\end{table*}
\begin{table*}[t!]
\centering
\begin{tabular}{cccccc}
\hline
Model & Predicted LANSCE & Ejected & Implanted & Predicted LANSCE & Measured/Predicted\\
& $^7$Be~produced act. & activity & activity & $^7$Be~residual act. & $^7$Be~residual act.\\
& $P_\text{W3}$ [\si{\milli\becquerel}] & $E_\text{W3}$ [\si{\milli\becquerel}] & $I_\text{W3}$ [\si{\milli\becquerel}] & $R_\text{W3}$ [\si{\milli\becquerel}] & \\
\hline
S\&T (ACTIVIA) & \num{408 \pm 46} & & & \num{405 \pm 49} & \num{1.08 \pm 0.16}\\
TALYS & \num{294 \pm 41} & & & \num{292 \pm 42} & \num{1.50 \pm 0.25}\\
INCL++(ABLA07) & \num{141 \pm 21} & & & \num{140 \pm 22} & \num{3.12 \pm 0.55}\\
$^{\text{nat}}$Si(p,x)$^7$Be Spline Fit & \num{518 \pm 68} & & & \num{514 \pm 72} & \num{0.85 \pm 0.14}\\
GEANT4 BERTINI & \num{0.99 \pm 0.20} & $<0.33$ & \num{0.64 \pm 0.14} & \num{1.63 \pm 0.43} & \num{268 \pm 74} \\
GEANT4 BIC & \num{1.27 \pm 0.24} & $<0.33$ & \num{0.61 \pm 0.16} & \num{1.98 \pm 0.50} & \num{221 \pm 59}\\
GEANT4 INCLXX & \num{21.6 \pm 3.0} & \num{3.59 \pm 0.85} & \num{3.42 \pm 0.38} & \num{21.4 \pm 3.1} & \num{20.4 \pm 3.4}\\
\hline
\end{tabular}
\caption{Predicted $^7$Be~activity in Wafer 3 based on different cross-section models. See Table~\ref{tab:trit_pred} caption for a description of the columns. Upper limits are 90\% C.L.}
\label{tab:ber_pred}
\end{table*}
\begin{table*}[t!]
\centering
\begin{tabular}{cccccc}
\hline
Model & Predicted LANSCE & Ejected & Implanted & Predicted LANSCE & Measured/Predicted\\
& $^{22}$Na~produced act. & activity & activity & $^{22}$Na~residual act. & $^{22}$Na~residual act.\\
& $P_\text{W3}$
[\si{\milli\becquerel}] & $E_\text{W3}$ [\si{\milli\becquerel}] & $I_\text{W3}$ [\si{\milli\becquerel}] & $R_\text{W3}$ [\si{\milli\becquerel}] & \\
\hline
S\&T (ACTIVIA) & \num{295 \pm 29} & & & \num{295 \pm 29} & \num{0.502 \pm 0.054}\\
TALYS & \num{209 \pm 18}& & & \num{208 \pm 18} & \num{0.711 \pm 0.070}\\
INCL++(ABLA07) & \num{207 \pm 21} & & & \num{206 \pm 21} & \num{0.718 \pm 0.081}\\
Michel-TALYS & \num{151 \pm 14} & & & \num{151 \pm 14} & \num{0.98 \pm 0.10}\\
GEANT4 BERTINI & \num{97 \pm 11} & $< 0.88$ & $<0.008$ & \num{96 \pm 11} & \num{1.54 \pm 0.18}\\
GEANT4 BIC & \num{393 \pm 40} & $<2.0$ & $<0.02$ & \num{392 \pm 40} & \num{0.378 \pm 0.042}\\
GEANT4 INCLXX & \num{398 \pm 40} & $<2.0$ & $<0.03$ & \num{398 \pm 40} & \num{0.373 \pm 0.041}\\
\hline
\end{tabular}
\caption{Predicted $^{22}$Na~activity in Wafer 3 based on different cross-section models. See Table~\ref{tab:trit_pred} caption for a description of the columns. Upper limits are 90\% C.L.}
\label{tab:sod_pred}
\end{table*}
\begin{figure}
\centering
\includegraphics[width=0.75\columnwidth]{tritium_ejection_implantation.pdf}
\caption{Schematic diagram showing triton ejection and implantation. The filled circles indicate example triton production locations while the triton nuclei show the final implantation locations. Production rate estimates include trajectories (a) and (b), while counting the tritium decay activity in the CCD measures (a) and (c).}
\label{fig:trit_ejec_schematic}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{transmatices-logcolor-altstyle.pdf}
\caption{Shown are the activities [mBq] of $^3$H (left), $^7$Be (middle), and $^{22}$Na (right) produced and implanted in various volumes (i.e., $T_{ij}\cdot P_j$) as predicted by the GEANT4 INCLXX model. CCD\,1, CCD\,2, CCD\,3 are the CCDs, with CCD\,1 being closest to the fission chamber. Box\,1, Box\,2, and Box\,3 are the aluminum boxes that contain CCD\,1, CCD\,2, and CCD\,3, respectively. Si\,1, Si\,2, Si\,3, and Ge are the silicon and germanium wafers downstream of the CCDs. World represents the air in the irradiation room.}
\label{fig:transmat}
\end{figure*}
\subsection{Ejection and Implantation}
Light nuclei, such as tritons, can be produced with significant fractions of the neutron kinetic energy. Due to their small mass, these nuclei have relatively long ranges and can therefore be ejected from their volume of creation and implanted into another volume. The situation is shown schematically in Fig.~\ref{fig:trit_ejec_schematic}. While we would like to estimate the total production rate in the silicon targets, what is actually measured is a combination of the nuclei produced in the target that are not ejected and nuclei produced in surrounding material that are implanted in the silicon target. The measured activity therefore depends not only on the thickness of the target but also on the nature and geometry of the surrounding materials.
The residual activity, $R_i$, eventually measured in volume $i$, can be written as
\begin{align}
\label{eq:transfer}
R_i = \sum_j T_{ij} \cdot P_j
\end{align}
where $P_j$ is the total activity produced in volume $j$ (see Eq.~\ref{eq:beam_act}) and $T_{ij}$ is the transfer probability---the probability that a nucleus produced in volume $j$ is eventually implanted in volume $i$. Because the ejection and implantation of light nuclei is also an issue for dark matter detectors during fabrication and transportation, we have also explicitly factored the transfer probability into ejected activity ($E_i$) and activity implanted from other materials ($I_i$) to give the reader an idea of the relative magnitudes of the two competing effects:
\begin{align}
\label{eq:ejection}
E_i &= (1 - T_{ii})\cdot P_i\\
\label{eq:implantation}
I_i &= \sum_{j \neq i} T_{ij} \cdot P_j\\
R_i &= P_i - E_i + I_i
\end{align}
For nuclear models that are built-in as physics lists within Geant4, explicit calculations of transfer probabilities are not necessary, because the nuclei produced throughout the setup are propagated by Geant4 as part of the simulation. For the TALYS model, which does calculate the kinematic distributions for light nuclei such as tritons but is not included in Geant4, we modeled the propagation of the nuclei separately. Since the passage of nuclei through matter in the relevant energy range is dominated by electromagnetic interactions, which are independent of nuclear production models and can be reliably calculated by Geant4, we used TALYS to evaluate the initial kinetic energy and angular distributions of triton nuclei produced by the LANSCE neutron beam and then ran the Geant4 simulation starting with nuclei whose momenta are drawn from the TALYS-produced distributions. For the remaining models, which do not predict kinematic distributions of the resulting nuclei, we simply used the average and standard deviation of the transfer probabilities from the models that do provide this information. As an example, the transfer matrix (expressed in terms of activity $T'_{ij} = T_{ij}\cdot P_j$) from the Geant4 INCLXX model for all three isotopes of interest is shown in Fig.~\ref{fig:transmat}. The uncertainties are calculated by propagating the statistical errors from the simulations through Eqs.~(\ref{eq:transfer}), (\ref{eq:ejection}), and (\ref{eq:implantation}). Additionally, we have evaluated a 1\% systematic uncertainty on the ejection and implantation of $^3$H{} and $^7$Be~due to the uncertainty in the target thicknesses.
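As a concrete numerical illustration of Eqs.~(\ref{eq:transfer})--(\ref{eq:implantation}), the sketch below propagates a set of produced activities through an assumed transfer matrix. All values are placeholders chosen for illustration, not the simulated Geant4 INCLXX values of Fig.~\ref{fig:transmat}.
\begin{verbatim}
import numpy as np

# Volumes: 0 = CCD, 1 = Al box, 2 = surrounding air; placeholders.
P = np.array([40.0, 120.0, 5.0])   # produced activity [mBq]
T = np.array([[0.65, 0.10, 0.00],  # T[i, j]: probability that a
              [0.30, 0.80, 0.01],  # nucleus produced in j ends up
              [0.05, 0.10, 0.99]]) # implanted in i

R = T @ P                          # R_i = sum_j T_ij P_j
E = (1.0 - np.diag(T)) * P         # E_i = (1 - T_ii) P_i
I = R - np.diag(T) * P             # I_i = sum_{j != i} T_ij P_j
assert np.allclose(R, P - E + I)   # R_i = P_i - E_i + I_i
print(f"CCD: produced {P[0]:.1f}, ejected {E[0]:.1f}, "
      f"implanted {I[0]:.1f}, residual {R[0]:.1f} mBq")
\end{verbatim}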
\subsubsection{Tritium}
The model predictions for the ejected and implanted activity of tritons in CCD 3 are shown in the third and fourth columns of Table~\ref{tab:trit_pred}. One can see that, depending on the model, 25\%--50\% of the tritons produced in the CCDs are ejected and that there is significant implantation of tritons from the protective aluminum boxes surrounding the CCDs.
Due to the similarity of the aluminum and silicon nuclei and the fact that the reaction Q-values for triton production differ by only \SI{5.3}{MeV}, at high energies the production of tritons in aluminum is very similar to that in silicon. In Ref.~\cite{benck2002secondary}, the total triton production cross section as well as the single and double differential cross sections for neutron-induced triton ejection were found to be the same for silicon and aluminum, within the uncertainty of the measurements. This led the authors to suggest that results for aluminum, which are more complete and precise, can also be used for silicon. We show all existing measurements for neutron- and proton-induced triton production in aluminum \cite{benck2002fast, otuka2014towards, zerkin2018experimental} in Fig.~\ref{fig:al_3h_cross_sections} along with model predictions. Comparison to Fig.~\ref{fig:si_3h_cross_sections} shows that all models considered have very similar predictions for aluminum and silicon.
This similarity in triton production, as well as the similar stopping powers of aluminum and silicon, leads to a close compensation of the tritons ejected from the silicon CCD by the tritons implanted into the CCD from the aluminum box. If the materials of the box and CCD were identical and there were sufficient material surrounding the CCD, the compensation would be exact, with no correction to the production required (ignoring attenuation of the neutron flux). In our case, the ratio of produced to residual tritons is predicted to be \num{0.985 \pm 0.078}, based on the mean and RMS over all models with kinematic information, and we apply this ratio to the rest of the cross-section models.
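The quoted ratio can be reproduced directly from the second and fifth columns of Table~\ref{tab:trit_pred} using the four models with kinematic information, as in the short check below; the small difference in the mean arises from the rounding of the tabulated values.
\begin{verbatim}
import numpy as np

# P (produced) and R (residual) 3H activities [mBq] from the table
# above, for the models with triton kinematics
# (TALYS, BERTINI, BIC, INCLXX).
P = np.array([116.0, 13.0, 17.8, 42.3])
R = np.array([123.0, 13.3, 19.2, 38.5])

ratio = P / R
print(f"P/R = {ratio.mean():.3f} +/- {ratio.std(ddof=1):.3f}")
# prints P/R = 0.987 +/- 0.078
\end{verbatim}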
\subsubsection{$^7$Be}
Because the $^7$Be~nucleus is heavier, the fraction of ejected $^7$Be~nuclei is expected to be smaller than for tritons. As listed in Table~\ref{tab:ber_pred}, the Geant4 INCLXX model predicts that $\sim17\%$ of the $^7$Be~produced in the silicon wafers is ejected. For the BIC and BERTINI models, the predicted production rates in silicon are roughly 400 times smaller than our measurement, and within the statistics of our simulations we could only place upper limits on the fraction ejected from the wafers at roughly 30\%. We chose to use Wafer 3 for our estimation because it has the largest amount of silicon upstream of the target, allowing for the closest compensation of the ejection through implantation. However, for $^7$Be~there is also a contribution of implantation from production in the $\sim$\num{0.5}" of air between the wafer targets, which varies between \SIrange[range-phrase = --]{0.4}{0.6}{\milli\becquerel} for the different models. Because this is significant compared to the severely underestimated production and ejection in silicon for the BERTINI and BIC models, the ratio of the production to residual activity is also greatly underestimated, and we have therefore chosen not to use the BERTINI and BIC models for estimations of the $^7$Be~production rate from here onward. For all models without kinematic information we have used the ratio of production to residual $^7$Be~activity from the Geant4 INCLXX model, i.e., \num{1.008 \pm 0.046}.
\subsubsection{$^{22}$Na}
As seen in the third and fourth columns of Table~\ref{tab:sod_pred}, both the ejection and implantation fraction of $^{22}$Na~nuclei are negligible due to the large size of the residual nucleus and no correction needs to be made to the predicted production activity.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{al_vs_si_h3_crosssections_2.pdf}
\caption{Experimental measurements (data points) and model estimates (continuous lines) of the neutron-induced tritium production in aluminum. Measurements of the proton-induced cross section are also shown for reference. For direct comparison, we also show the corresponding model predictions for silicon (dashed lines) from Fig.~\ref{fig:si_3h_cross_sections}.}
\label{fig:al_3h_cross_sections}
\end{figure}
\subsection{Comparison to Experimental Measurements}
The ratios of the experimentally measured activities to the residual activities predicted by the different models are shown in the final column of Tables~\ref{tab:trit_pred}, \ref{tab:ber_pred}, and \ref{tab:sod_pred} for $^3$H{}, $^7$Be{}, and $^{22}$Na{}, respectively. For tritium, it can be seen that the predictions of the K\&K and INCL models are in fairly good agreement with the measurement, while the TALYS model overpredicts and the Geant4 BERTINI and BIC models underpredict the activity by more than a factor of two. For $^7$Be, the best agreement with the data comes from the S\&T model and the spline fit to measurements of the proton-induced cross section. We note that the proton cross sections do slightly overpredict the production from neutrons, as found in Ref.~\cite{ninomiya2011cross}, but the value is within the measurement uncertainty. For $^{22}$Na, there is good agreement between our measured activity and the predictions from the experimental measurements of the neutron-induced activity by Michel et al. \cite{michel2015excitation, hansmann2010production}, extrapolated at high energies using the TALYS model. For comparison, the use of the proton-induced production cross section (shown in Fig.~\ref{fig:si_22na_cross_sections}) leads to a value that is roughly 1.9$\times$ larger than our measured activity.
If we assume that the energy dependence of the cross-section model is correct, the ratio of the experimentally measured activity to the predicted activity is the normalization factor that must be applied to each model to match the experimental data. In the next section we will use this ratio to estimate the production rates from cosmic-ray neutrons at sea level.
\section{Cosmogenic Neutron Activation}
\label{sec:cosmogenic_rates}
The isotope production rate per unit target mass from the interaction of cosmic-ray neutrons, $P'$ [\si{\atoms\per\kg\per\second}], can be written as
\begin{linenomath*}
\begin{align}
P' = n \int \Phi(E) \cdot \sigma(E)~dE,
\end{align}
\end{linenomath*}
where $n$ is the number of target atoms per unit mass of silicon [atoms/kg], $\sigma(E)$ is the isotope production cross section [cm$^2$], $\Phi(E)$ is the cosmic-ray neutron flux [\si{\neutrons\per\cm\squared\per\second\per\MeV}], and the integral is evaluated from 1\,MeV to 10\,GeV.\footnote{The TALYS cross sections only extend up to 1 GeV \cite{koning2014extension}. We have assumed a constant extrapolation of the value at 1\,GeV for energies $>$1\,GeV.} While the cross section is not known across the entire energy range and each of the models predicts a different energy dependence, the overall normalization of each model is determined by the comparison to the measurements on the LANSCE neutron beam. The similar shapes of the LANSCE beam and the cosmic-ray neutron spectrum allow us to greatly reduce the systematic uncertainty arising from the unknown cross section.
There have been several measurements and calculations of the cosmic-ray neutron flux (see, e.g., Refs.~\cite{hess1959cosmic, armstrong1973calculations, ziegler1996terrestrial}). The intensity of the neutron flux varies with altitude, location in the geomagnetic field, and solar magnetic activity---though the spectral shape does not vary as significantly---and correction factors must be applied to calculate the appropriate flux \cite{desilets2001scaling}. The most commonly used reference spectrum for sea-level cosmic-ray neutrons is the so-called ``Gordon'' spectrum \cite{gordon2004measurement} (shown in Fig.~\ref{fig:lanscebeamenergy}), which is based on measurements at five different sites in the United States, scaled to sea level at the location of New York City during the mid-point of solar modulation. We used the parameterization given in Ref.~\cite{gordon2004measurement}, which agrees with the data to within a few percent. The spectrum uncertainties at high energies are dominated by uncertainties in the spectrometer detector response function ($<4$\% below 10 MeV and 10--15\% above 150 MeV). We have assigned an average uncertainty of 12.5\% across the entire energy range.
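A minimal sketch of this integral in Python is given below; both the flux and the cross section are rough stand-ins that must be replaced by the parameterization of Ref.~\cite{gordon2004measurement} and a tabulated cross-section model.
\begin{verbatim}
import numpy as np

def flux(E):
    """Stand-in for the sea-level neutron flux parameterization
    [n/(cm^2 s MeV)]; substitute the published Gordon fit."""
    return 2e-4 * E**-1.5

def sigma(E):
    """Stand-in isotope-production cross section [cm^2]."""
    return 25e-27 * (1 - np.exp(-(E - 4.0) / 50.0)).clip(min=0)

n = 6.02214076e23 / 28.085 * 1e3   # Si atoms per kg
E = np.logspace(0, 4, 2000)        # 1 MeV -- 10 GeV
P_rate = n * np.trapz(flux(E) * sigma(E), E)   # atoms/(kg s)
print(f"production rate = {P_rate * 86400:.3g} atoms/(kg day)")
\end{verbatim}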
\begin{table}[t!]
\centering
\begin{tabular}{ccc}
\hline
Model & Predicted & Scaled \\
& cosmogenic $^3$H & cosmogenic $^3$H \\
& production rate & production rate\\
& [\si{\atoms\per\kilogram\per\dayshort}] & [\si{\atoms\per\kilogram\per\dayshort}] \\
\hline
K\&K (ACTIVIA) & \num{98 \pm 12} & \num{108 \pm 20} \\
TALYS & \num{259 \pm 33} & \num{96 \pm 18}\\
INCL++(ABLA07) & \num{106 \pm 13} & \num{114 \pm 22}\\
G4 BERTINI & \num{36.1 \pm 4.5} & \num{124 \pm 22}\\
G4 BIC & \num{42.8 \pm 5.4} & \num{102 \pm 17}\\
G4 INCLXX & \num{110 \pm 14} & \num{130 \pm 23}\\
\hline
\end{tabular}
\caption{Predicted $^3$H\ production rates (middle column) from sea-level cosmic-ray neutron interactions in silicon for different cross-section models. The final column provides our best estimate of the production rate for each model after scaling by the ratio of the measured to predicted $^3$H~activities for the LANSCE neutron beam.}
\label{tab:trit_cosmic}
\end{table}
\begin{table}[t!]
\centering
\begin{tabular}{ccc}
\hline
Model & Predicted & Scaled \\
& cosmogenic $^7$Be & cosmogenic $^7$Be \\
& production rate & production rate\\
& [\si{\atoms\per\kilogram\per\dayshort}] & [\si{\atoms\per\kilogram\per\dayshort}] \\
\hline
S\&T (ACTIVIA) & \num{8.1 \pm 1.0} & \num{8.7 \pm 1.6}\\
TALYS & \num{4.17\pm 0.52} & \num{6.2 \pm 1.3}\\
INCL++(ABLA07) & \num{2.81 \pm 0.35} & \num{8.8 \pm 1.9}\\
$^{\text{nat}}$Si(p,x)$^7$Be Spl. & \num{9.8 \pm 1.2} & \num{8.3 \pm 1.7}\\
G4 INCLXX & \num{0.411 \pm 0.052} & \num{8.4 \pm 1.7}\\
\hline
\end{tabular}
\caption{Predicted $^7$Be\ production rates (middle column) from sea-level cosmic-ray neutron interactions in silicon for different cross-section models. The final column provides our best estimate of the production rate for each model after scaling by the ratio of the measured to predicted $^7$Be~activities for the LANSCE neutron beam.}
\label{tab:ber_cosmic}
\end{table}
\begin{table}[t!]
\centering
\begin{tabular}{ccc}
\hline
Model & Predicted & Scaled \\
& cosmogenic $^{22}$Na & cosmogenic $^{22}$Na\\
& production rate & production rate\\
& [\si{\atoms\per\kilogram\per\dayshort}] & [\si{\atoms\per\kilogram\per\dayshort}] \\
\hline
S\&T (ACTIVIA) & \num{86 \pm 11} & \num{43.2 \pm 7.1}\\
TALYS & \num{60.5 \pm 7.6} &\num{43.0 \pm 6.8}\\
INCL++(ABLA07) & \num{60.0 \pm 7.5} & \num{43.1 \pm 7.2}\\
Michel-TALYS & \num{42.8 \pm 5.4} & \num{42.0 \pm 6.8}\\
G4 BERTINI & \num{28.0 \pm 3.5} & \num{43.0 \pm 7.3}\\
G4 BIC & \num{115 \pm 14} & \num{43.4 \pm 7.2}\\
G4 INCLXX & \num{116 \pm 15} & \num{43.1 \pm 7.1}\\
\hline
\end{tabular}
\caption{Predicted $^{22}$Na\ production rates (middle column) from sea-level cosmic-ray neutron interactions in silicon for different cross-section models. The final column provides our best estimate of the production rate for each model after scaling by the ratio of the measured to predicted $^{22}$Na~activities for the LANSCE neutron beam.}
\label{tab:sod_cosmic}
\end{table}
The predicted production rates per unit target mass for the cross-section models considered are shown in the second columns of Tables~\ref{tab:trit_cosmic}, ~\ref{tab:ber_cosmic}, and~\ref{tab:sod_cosmic} for $^3$H, $^7$Be, and $^{22}$Na~respectively. Scaling these values by the ratio of the measured to predicted activities for the LANSCE neutron beam, we obtain our best estimates for the neutron-induced cosmogenic production rates per unit target mass, shown in the corresponding final columns. The spread in the values for the different cross-section models is an indication of the systematic uncertainty in the extrapolation from the LANSCE beam measurement to the cosmic-ray neutron spectrum. If the LANSCE neutron-beam spectral shape were the same as that of the cosmic-ray neutrons, or if the cross-section models all agreed in shape, the central values in the final column of each table would be identical.
Our best estimate of the activation rate of tritium in silicon from cosmic-ray neutrons is \mbox{$(112 \pm 15_\text{exp} \pm 12_\text{cs} \pm 14_\text{nf})$} \si{\atomstrit\per\kg\per\day}, where the first uncertainty listed is due to experimental measurement uncertainties (represented by the average uncertainty on the ratio of the measured to predicted activities from the LANSCE beam irradiation for a specific cross-section model), the second is due to the uncertainty in the energy dependence of the cross section (calculated as the standard deviation of the scaled cosmogenic production rates of the different models), and the third is due to the uncertainty in the sea-level cosmic-ray neutron flux. Similarly, the neutron-induced cosmogenic activation rates for $^7$Be\ and $^{22}$Na\ in silicon are \mbox{$(8.1 \pm 1.3_\text{exp} \pm 1.1_\text{cs} \pm 1.0_\text{nf})$} \si{\atomsber\per\kg\per\day} and \mbox{$(43.0 \pm 4.7_\text{exp} \pm 0.4_\text{cs} \pm 5.4_\text{nf})$} \si{\atomssod\per\kg\per\day}.
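The total uncertainties quoted here and carried into Table~\ref{tab:final_cosmic_prod} follow from adding the three contributions in quadrature (up to rounding of the individual contributions), as the short check below illustrates for the neutron-induced rates.
\begin{verbatim}
import numpy as np

# Central values and (experimental, cross-section, neutron-flux)
# uncertainties of the neutron-induced rates [atoms/(kg day)].
rates = {"3H":   (112.0, (15.0, 12.0, 14.0)),
         "7Be":  (8.1,   (1.3,  1.1,  1.0)),
         "22Na": (43.0,  (4.7,  0.4,  5.4))}

for iso, (val, errs) in rates.items():
    total = np.sqrt(np.sum(np.square(errs)))
    print(f"{iso}: {val} +/- {total:.1f} atoms/(kg day)")
# prints totals of 23.8, 2.0, and 7.2, respectively
\end{verbatim}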
\section{Activation from other particles}
\label{sec:alternate}
In addition to activity induced by fast neutrons, interactions of protons, gamma rays, and muons also contribute to the total production rate of $^3$H, $^7$Be~and $^{22}$Na. In the following subsections we describe the methods we used to estimate the individual contributions using existing measurements and models. In some cases the experimental data are very limited and we have had to rely on rough approximations based on other targets and related processes.
\subsection{Proton Induced Activity}
At sea level the flux of cosmic-ray protons is lower than that of cosmic-ray neutrons due to the attenuation effects of additional electromagnetic interactions in the atmosphere. To estimate the production rate from protons we have used the proton spectra from Ziegler \cite{ziegler1979effect, ziegler1981effect} and Diggory et al.\ \cite{diggory1974momentum} (scaled by the angular distribution from the PARMA analytical model \cite{sato2016analytical} as implemented in the EXPACS software program \cite{expacs}), shown in Fig.~\ref{fig:alt_flux_comp}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{alt_flux_comparison.pdf}
\caption{Comparison of sea-level cosmic-ray fluxes of protons \cite{diggory1974momentum, ziegler1979effect, ziegler1981effect}, gamma rays \cite{expacs}, and neutrons \cite{gordon2004measurement}.}
\label{fig:alt_flux_comp}
\end{figure}
Experimental measurements of the proton-induced tritium production cross section have been made only at a few energies (see Fig.~\ref{fig:si_3h_cross_sections}). We have therefore based our estimates on the neutron cross-section models, scaled by the same factor used in Table~\ref{tab:trit_pred}. To account for possible differences between the proton- and neutron-induced cross sections, we have included a 30\% uncertainty based on the measured differences between the cross sections in aluminum (see Fig.~\ref{fig:al_3h_cross_sections}). Similar to the neutron-induced production, we have used the mean and sample standard deviation of the production rates calculated with all the different combinations of the proton spectra and cross-section models as our estimate of the central value and uncertainty, yielding a sea-level production rate from protons of \SI{10.0 \pm 4.5}{\atomstrit\per\kg\per\day}.
For $^7$Be~and $^{22}$Na, measurements of the proton cross section across the entire energy range have been made; we have used spline fits to the data with an overall uncertainty of roughly 10\% based on the experimental uncertainties (see Figs.~\ref{fig:si_7be_cross_sections}~and \ref{fig:si_22na_cross_sections}). Our best estimates for the $^7$Be~and $^{22}$Na~production rates from protons are \SI{1.14 \pm 0.14}{\atomsber\per\kg\per\day} and \SI{3.96 \pm 0.89}{\atomssod\per\kg\per\day}.
\begin{table*}[t!]
\centering
\begin{tabular}{cccc}
\hline
\vrule width 0pt height 2.2ex
Source & $^3$H~production rate & $^7$Be~production rate & $^{22}$Na~production rate \\
& [\si{\atoms\per\kilogram\per\day}] & [\si{\atoms\per\kilogram\per\day}] & [\si{\atoms\per\kilogram\per\day}] \\
\hline
Neutrons & \num{112 \pm 24} & \num{8.1 \pm 1.9} & \num{43.0 \pm 7.2}\\
Protons & \num{10.0 \pm 4.5} & \num{1.14 \pm 0.14} & \num{3.96 \pm 0.89}\\
Gamma Rays & \num{0.73 \pm 0.51} & \num{0.118 \pm 0.083} & \num{2.2 \pm 1.5}\\
Muon Capture & \num{1.57 \pm 0.92} & \num{0.09 \pm 0.09} & \num{0.48 \pm 0.11}\\
\hline
Total & \num{124 \pm 25} & \num{9.4 \pm 2.0} & \num{49.6 \pm 7.4}\\
\hline
\end{tabular}
\caption{Final estimates of the radioisotope production rates in silicon exposed to cosmogenic particles at sea level.}
\label{tab:final_cosmic_prod}
\end{table*}
\subsection{Gamma Ray Induced Activity}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{si_gamma_crosssections.pdf}
\caption{Estimated photonuclear cross-section models for production of $^3$H, $^7$Be, and $^{22}$Na. The dashed lines indicate the original models from TALYS while the solid lines indicate the models scaled to match yield measurements made with bremsstrahlung radiation \cite{matsumura2000target, currie1970photonuclear}.}
\label{fig:gamma_cs}
\end{figure}
The flux of high-energy gamma rays at the Earth's surface was obtained using the PARMA analytical model \cite{sato2016analytical} as implemented in the EXPACS software program \cite{expacs}. Similar to the neutron spectrum, we used New York City as our reference location for the gamma spectrum, which is shown in Fig.~\ref{fig:alt_flux_comp}.
Photonuclear yields of $^7$Be~and $^{22}$Na~in silicon have been measured using bremsstrahlung beams with endpoints ($E_0$) up to \SI{1}{\giga\eV} \cite{matsumura2000target}. We are not aware of any measurements of photonuclear tritium production in silicon, though there is a measurement in aluminum with $E_0 =$ \SI{90}{\MeV} \cite{currie1970photonuclear} which we assume to be the same as for silicon. The yields, $Y(E_0)$, are typically quoted in terms of the cross section per equivalent quanta (eq.q), defined as
\begin{align}
Y(E_0) = \frac{\displaystyle\int_0^{E_0} \sigma(k)N(E_0,k)dk}{\displaystyle \frac{1}{E_0}\int_0^{E_0} kN(E_0,k)dk}
\end{align}
where $\sigma(k)$ is the cross section as a function of photon energy $k$, and $N(E_0, k)$ is the bremsstrahlung energy spectrum.
To obtain an estimate for $\sigma(k)$, we assume a $1/k$ energy dependence for $N(E_0, k)$~\cite{tesch1971accuracy} and scale the TALYS photonuclear cross-section models to match the measured yields of \SI{72}{\micro\barn \per \eqquanta} at $E_0 =$ \SI{90}{\MeV} for tritium and \SI{227}{\micro\barn \per \eqquanta} and \SI{992}{\micro\barn \per \eqquanta} at $E_0 =$ \SI{1000}{\MeV} for $^7$Be~and $^{22}$Na, respectively (see Fig.~\ref{fig:gamma_cs}).
This corresponds to estimated photonuclear production rates of \SI{0.73}{\atomstrit\per\kilogram\per\day}, \SI{0.12}{\atomsber\per\kilogram\per\day}, and \SI{2.2}{\atomssod\per\kilogram\per\day}. Given the large uncertainties in the measured yields, the cross-section spectral shape, and the bremsstrahlung spectrum, we assume a $\sim 70\%$ overall uncertainty on these rates.
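A sketch of this scaling for the tritium case is given below. Assuming $N(E_0,k) \propto 1/k$ over the full photon energy range, the denominator of the yield definition reduces to the spectrum normalization, leaving $Y \approx \int \sigma(k)/k\,dk$; the cross-section shape used here is an arbitrary placeholder for the TALYS model.
\begin{verbatim}
import numpy as np

# Placeholder photonuclear cross section [microbarn] on a photon
# energy grid k [MeV]; replace with the tabulated TALYS model.
k = np.linspace(25.0, 90.0, 500)
sigma_model = 8.0 * np.exp(-((k - 45.0) / 20.0) ** 2)

# Yield per equivalent quantum for a 1/k bremsstrahlung spectrum.
y_model = np.trapz(sigma_model / k, k)

scale = 72.0 / y_model   # match the measured 72 microbarn/eq.q
print(f"scale factor applied to the model: {scale:.2f}")
\end{verbatim}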
\subsection{Muon Capture Induced Activity}
The production rate of a specific isotope $X$ from sea-level cosmogenic muon capture can be expressed as
\begin{align}
P_\mu(X) = R_0 \cdot \frac{\lambda_c\text{(Si)}}{Q\lambda_d + \lambda_c\text{(Si)}}\cdot f_\text{Si}(X)
\end{align}
where $R_0 = \SI{484 \pm 52}{\muons\per\kg\per\day}$ is the rate of stopped negative muons at sea level at geomagnetic latitudes of about \SI{40}{\degree} \cite{charalambus1971nuclear} and the middle term is the fraction of muons that capture on silicon (as opposed to decaying), with the capture rate on silicon $\lambda_c$(Si) = \SI{8.712 \pm 0.018 E5}{\per\sec} \cite{suzuki1987total}, the muon decay rate $\lambda_d$ = \SI{4.552E5}{\per\sec} \cite{tanabashi2018m}, and the Huff correction factor $Q = 0.992$ accounting for bound-state decay \cite{measday2001nuclear}. The final term, $f_\text{Si}(X)$, is the fraction of muon captures on silicon that produce isotope $X$.
For $^{28}$Si the fraction of muon captures with charged particles emitted has been measured to be \SI{15 \pm 2}{\percent} with theoretical estimates \cite{lifshitz1980nuclear} predicting the composition to be dominated by protons ($f_\text{Si}(^1$H) = \SI{8.8}{\percent}), alphas ($f_\text{Si}(^4$He) = \SI{3.4}{\percent}), and deuterons ($f_\text{Si}(^2$H) = \SI{2.2}{\percent}). The total fraction of muon captures that produce tritons has not been experimentally measured\footnote{A direct measurement of triton production from muon capture in silicon was performed by the \href{http://muon.npl.washington.edu/exp/AlCap/index.html}{AlCap
Collaboration} and a publication is in preparation. }, but a lower limit can be set at \SI{7 \pm 4 e-3}{\percent} from an experimental measurement of tritons emitted above 24 MeV \cite{budyashov1971charged}.
Recent measurements of the emission fractions of protons and deuterons following muon capture on aluminum have found values of $f_\text{Al}(^1$H) = \SI{4.5 \pm 0.3}{\percent} and $f_\text{Al}(^2$H) = \SI{1.8 \pm 0.2}{\percent} \cite{gaponenko2020charged}, and those same data can be used to calculate a rough triton emission fraction of $f_\text{Al}(^3$H) = \SI{0.4}{\percent} \cite{gaponenkopersonal}. If one assumes the same triton kinetic energy distribution in silicon as estimated for aluminum \cite{gaponenko2020charged} and uses it to scale the value measured above 24 MeV, one obtains a triton production estimate of $f_\text{Si}(^3$H) = \SI{0.49 \pm 0.28}{\percent}. The production rate of tritons from muon capture is then estimated to be \SI{1.57 \pm 0.92}{\atomstrit\per\kg\per\day}.
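The rate quoted above can be reproduced from the stated inputs in a few lines, with the triton emission fraction taken from the estimate derived above; the result agrees with the quoted value within rounding.
\begin{verbatim}
# Muon-capture production rate of tritium in silicon (inputs as above).
R0 = 484.0         # stopped negative muons [muons/(kg day)]
lam_c = 8.712e5    # capture rate on Si [1/s]
lam_d = 4.552e5    # muon decay rate [1/s]
Q = 0.992          # Huff correction factor
f_t = 0.0049       # estimated triton emission fraction f_Si(3H)

P_mu = R0 * lam_c / (Q * lam_d + lam_c) * f_t
print(f"P_mu(3H) = {P_mu:.2f} atoms/(kg day)")   # prints 1.56
\end{verbatim}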
The fraction of muon captures that produce $^{22}$Na~has been measured at $f_\text{Si}$($^{22}$Na) = \SI{0.15 \pm 0.03}{\percent} \cite{heisinger2002production}, corresponding to a production rate from muon captures of \SI{0.48 \pm 0.11}{\atomssod\per\kg\per\day}. To our knowledge there have been no measurements of the production of $^7$Be~through muon capture on silicon. We assume the ratio of $^7$Be~to $^{22}$Na~production is the same for muon capture as it is for the neutron production rates calculated earlier, with roughly \SI{100}{\percent} uncertainty, resulting in an estimated production rate from muon captures of \SI{0.09 \pm 0.09}{\atomsber\per\kg\per\day}.
\section{Discussion}
\label{sec:discussion}
The final estimates for the total cosmogenic production rates of $^3$H, $^7$Be, and $^{22}$Na~at sea level are listed in Table~\ref{tab:final_cosmic_prod}. These rates can be scaled by the known variations of particle flux with altitude or depth, location in the geomagnetic field, and solar activity, to obtain the total expected activity in silicon-based detectors for specific fabrication, transportation, and storage scenarios. The production rate at sea level is dominated by neutron-induced interactions, but for shallow underground locations muon capture may be the dominant production mechanism. For estimates of the tritium background, implantation of tritons generated in surrounding materials and ejection of tritons from thin silicon targets should also be taken into account.
Tritium is the main cosmogenic background of concern for silicon-based dark matter detectors. At low energies, 0--5\,keV, the estimated production rate corresponds to an activity of roughly \SI{0.002}{\decays\per\keV\per\kg\per\day} per day of sea-level exposure. This places strong restrictions on the fabrication and transportation of silicon detectors for next-generation dark matter experiments. In order to mitigate the tritium background, we are currently exploring the possibility of using low-temperature baking to remove implanted tritium from fabricated silicon devices.
Aside from silicon-based dark matter detectors, silicon is also widely used in sensors and electronics for rare-event searches due to the widespread use of silicon in the semiconductor industry and the availability of high-purity silicon. The relative contributions of $^3$H, $^7$Be, and $^{22}$Na~to the overall background rate of an experiment depend not only on the activation rate but also on the location of these components within the detector and the specific energy region of interest. The cosmogenic production rates determined here can be used to calculate experiment-specific background contributions and shielding requirements for all silicon-based materials.
\section{Acknowledgements}
We are grateful to John Amsbaugh and Seth Ferrara for designing the beamline holders, Larry Rodriguez for assistance during the beam time, and Brian Glasgow and Allan Myers for help with the gamma counting. We would also like to thank Alan Robinson and Andrei Gaponenko for useful discussions on production mechanisms from other particles. This work was performed, in part, at the Los Alamos Neutron Science Center (LANSCE), an NNSA User Facility operated for the U.S.\ Department of Energy (DOE) by Los Alamos National Laboratory (Contract 89233218CNA000001), and we thank John O'Donnell for his assistance with the beam exposure and data acquisition.
Pacific Northwest National Laboratory (PNNL) is operated by Battelle Memorial Institute for the U.S.\ Department of Energy (DOE) under Contract No.\ DE-AC05-76RL01830;
the experimental approach was originally developed under the Nuclear-physics, Particle-physics, Astrophysics, and Cosmology (NPAC) Initiative, a Laboratory Directed Research and Development (LDRD) effort at PNNL, while the application to CCDs was performed under the DOE Office of High Energy Physics' Advanced Technology R\&D subprogram. We acknowledge the financial support from National Science Foundation through Grant No.\ NSF PHY-1806974 and from the Kavli Institute for Cosmological Physics at The University of Chicago through an endowment from the Kavli Foundation. The CCD development work was supported in part by the Director, Office of Science, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.
\section{Motivation}
Chromium is considered the archetypical itinerant antiferromagnet~\cite{1988_Fawcett_RevModPhys, 1994_Fawcett_RevModPhys}. Interestingly, it shares its body-centered cubic crystal structure $Im\overline{3}m$ with the archetypical itinerant ferromagnet $\alpha$-iron and, near the melting temperature, with all compositions Fe$_{x}$Cr$_{1-x}$~\cite{2010_Okamoto_Book}. As a result, the Cr--Fe system offers the possibility to study the interplay of two fundamental forms of magnetic order in the same crystallographic environment.
Chromium exhibits transverse spin-density wave order below a N\'{e}el temperature $T_{\mathrm{N}} = 311$~K and longitudinal spin-density wave order below $T_{\mathrm{SF}} = 123$~K~\cite{1988_Fawcett_RevModPhys}. Under substitutional doping with iron, the longitudinal spin-density wave order becomes commensurate at $x = 0.02$. For $0.04 < x$, only commensurate antiferromagnetic order is observed~\cite{1967_Ishikawa_JPhysSocJpn, 1980_Babic_JPhysChemSolids, 1983_Burke_JPhysFMetPhys_I}. The N\'{e}el temperature decreases at first linearly with increasing $x$ and vanishes around $x \approx 0.15$~\cite{1967_Ishikawa_JPhysSocJpn, 1976_Suzuki_JPhysSocJpn, 1978_Burke_JPhysFMetPhys, 1980_Babic_JPhysChemSolids, 1983_Burke_JPhysFMetPhys_I}. Increasing $x$ further, a putative lack of long-range magnetic order~\cite{1978_Burke_JPhysFMetPhys} is followed by the onset of ferromagnetic order at $x \approx 0.18$ with a monotonic increase of the Curie temperature up to $T_{\mathrm{C}} = 1041$~K in pure $\alpha$-iron~\cite{1963_Nevitt_JApplPhys, 1975_Loegel_JPhysFMetPhys, 1980_Fincher_PhysRevLett, 1981_Shapiro_PhysRevB, 1983_Burke_JPhysFMetPhys_II, 1983_Burke_JPhysFMetPhys_III}.
The suppression of magnetic order is reminiscent of quantum critical systems under pressure~\cite{2001_Stewart_RevModPhys, 2007_Lohneysen_RevModPhys, 2008_Broun_NatPhys}, where substitutional doping of chromium with iron decreases the unit cell volume. In comparison to stoichiometric systems tuned by hydrostatic pressure, however, disorder and local strain are expected to play a crucial role in Fe$_{x}$Cr$_{1-x}$. This conjecture is consistent with reports on superparamagnetic behavior for $0.20 \leq x \leq 0.29$~\cite{1975_Loegel_JPhysFMetPhys}, mictomagnetic behavior~\footnote{In mictomagnetic materials, the virgin magnetic curves recorded in magnetization measurements as a function of field lie outside of the hysteresis loops recorded when starting from high field~\cite{1976_Shull_SolidStateCommunications}.} gradually evolving towards ferromagnetism for $0.09 \leq x \leq 0.23$~\cite{1975_Shull_AIPConferenceProceedings}, and spin-glass behavior for $0.14 \leq x \leq 0.19$~\cite{1979_Strom-Olsen_JPhysFMetPhys, 1980_Babic_JPhysChemSolids, 1981_Shapiro_PhysRevB, 1983_Burke_JPhysFMetPhys_I, 1983_Burke_JPhysFMetPhys_II, 1983_Burke_JPhysFMetPhys_III}.
Despite the rather unique combination of properties, notably a metallic spin glass emerging at the border of both itinerant antiferromagnetic and ferromagnetic order, comprehensive studies addressing the magnetic properties of Fe$_{x}$Cr$_{1-x}$ in the concentration range of putative quantum criticality are lacking. In particular, a classification of the spin-glass regime, to the best of our knowledge, has not been addressed before.
Here, we report a study of polycrystalline samples of Fe$_{x}$Cr$_{1-x}$ covering the concentration range $0.05 \leq x \leq 0.30$, i.e., from antiferromagnetic doped chromium well into the ferromagnetically ordered state of doped iron. The compositional phase diagram inferred from magnetization and ac susceptibility measurements is in agreement with previous reports~\cite{1983_Burke_JPhysFMetPhys_I, 1983_Burke_JPhysFMetPhys_II, 1983_Burke_JPhysFMetPhys_III}. As the perhaps most notable new observation, we identify a precursor phenomenon preceding the onset of spin-glass behavior in the imaginary part of the ac susceptibility. For the spin-glass state, analysis of ac susceptibility data recorded at different excitation frequencies by means of the Mydosh parameter, power-law fits, and a Vogel--Fulcher ansatz establishes a crossover from cluster-glass to superparamagnetic behavior as a function of increasing $x$. Microscopic evidence for this evolution is provided by neutron depolarization, indicating an increase of the size of ferromagnetic clusters with $x$.
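To make the analysis strategy concrete, the sketch below illustrates how the Mydosh parameter and a Vogel--Fulcher fit may be extracted from the frequency dependence of the freezing temperature; the $(f, T_{\mathrm{g}})$ pairs are hypothetical placeholders rather than our measured values.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical freezing temperatures T_g(f) [K] at excitation
# frequencies f [Hz]; replace with values read off Im(chi_ac).
f = np.array([1e1, 1e2, 1e3, 1e4])
Tg = np.array([23.0, 23.4, 23.8, 24.2])

# Mydosh parameter: relative shift of T_g per decade of frequency.
K = (Tg[-1] - Tg[0]) / (Tg[0] * np.log10(f[-1] / f[0]))
print(f"Mydosh parameter K = {K:.3f}")

# Vogel-Fulcher ansatz f = f0 exp[-E_a / (k_B (T_g - T_0))],
# fitted in log space with E_a expressed as E_a/k_B in kelvin.
def log_vf(Tg, ln_f0, Ea, T0):
    return ln_f0 - Ea / (Tg - T0)

(ln_f0, Ea, T0), _ = curve_fit(log_vf, Tg, np.log(f),
                               p0=(25.0, 70.0, 20.0))
print(f"f0 = {np.exp(ln_f0):.3g} Hz, E_a/k_B = {Ea:.1f} K, "
      f"T_0 = {T0:.1f} K")
\end{verbatim}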
Our paper is organized as follows. In Sec.~\ref{sec:methods}, the preparation of the samples and their metallurgical characterization by means of x-ray powder diffraction is reported. In addition, experimental details are briefly described. Providing a first point of reference, the presentation of the experimental results starts in Sec.~\ref{sec:results} with the compositional phase diagram as inferred in our study, before turning to a detailed description of the ac susceptibility and magnetization data. Next, neutron depolarization data are presented, allowing to extract the size of ferromagnetically ordered clusters from exponential fits. Exemplary data on the specific heat, electrical resistivity, and high-field magnetization for $x = 0.15$ complete this section. In Sec.~\ref{sec:discussion}, information on the nature of the spin-glass behavior in Fe$_{x}$Cr$_{1-x}$ and its evolution under increasing $x$ is inferred from an analysis of ac susceptibility data recorded at different excitation frequencies. Finally, in Sec.~\ref{sec:conclusion} the central findings of this study are summarized.
\section{Experimental methods}
\label{sec:methods}
Polycrystalline samples of Fe$_{x}$Cr$_{1-x}$ for $0.05 \leq x \leq 0.30$ ($x = 0.05$, 0.10, 0.15, 0.16, 0.17, 0.18, 0.19, 0.20, 0.21, 0.22, 0.25, 0.30) were prepared from iron (4N) and chromium (5N) pieces by means of radio-frequency induction melting in a bespoke high-purity furnace~\cite{2016_Bauer_RevSciInstrum}. No losses in weight or signatures of evaporation were observed. In turn, the composition is denoted in terms of the weighed-in amounts of starting material. Prior to the synthesis, the furnace was pumped to ultra-high vacuum and subsequently flooded with 1.4~bar of argon (6N) treated by a point-of-use gas purifier yielding a nominal purity of 9N. For each sample, the starting elements were melted in a water-cooled Hukin crucible and the resulting specimen was kept molten for about 10~min to promote homogenization. Finally, the sample was quenched to room temperature. With this approach, the imminent exsolution of the compound into two phases upon cooling was prevented, as suggested by the binary phase diagram of the Fe--Cr system reported in Ref.~\cite{2010_Okamoto_Book}. From the resulting ingots, samples were cut with a diamond wire saw.
\begin{figure}
\includegraphics[width=1.0\linewidth]{figure1}
\caption{\label{fig:1}X-ray powder diffraction data of Fe$_{x}$Cr$_{1-x}$. (a)~Diffraction pattern for $x = 0.15$. The Rietveld refinement (red curve) is in excellent agreement with the experimental data and confirms the $Im\overline{3}m$ structure. (b)~Diffraction pattern around the (011) peak for all concentrations studied. For clarity, the intensities are normalized and curves are offset by 0.1. Inset: Linear decrease of the lattice constant $a$ with increasing $x$. The solid gray line represents a guide to the eye.}
\end{figure}
Powder was prepared of a small piece of each ingot using an agate mortar. X-ray powder diffraction at room temperature was carried out on a Huber G670 diffractometer using a Guinier geometry. Fig.~\ref{fig:1}(a) shows the diffraction pattern for $x = 0.15$, representing typical data. A Rietveld refinement based on the $Im\overline{3}m$ structure yields a lattice constant $a = 2.883$~\AA.
Refinement and experimental data are in excellent agreement, indicating a high structural quality and homogeneity of the polycrystalline samples. With increasing $x$, the diffraction peaks shift to larger angles, as shown for the (011) peak in Fig.~\ref{fig:1}(b), consistent with a linear decrease of the lattice constant in accordance with Vegard's law.
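As a simple cross-check of this linear behavior, Vegard's law can be evaluated with literature room-temperature lattice constants of bcc chromium and $\alpha$-iron; the values below are assumed from the literature and are not fitted to our diffraction data.
\begin{verbatim}
# Vegard's law a(x) = (1 - x) a_Cr + x a_Fe for bcc Fe-Cr alloys.
a_Cr, a_Fe = 2.8848, 2.8665   # literature values [Angstrom] (assumed)

for x in (0.05, 0.15, 0.30):
    a = (1 - x) * a_Cr + x * a_Fe
    print(f"x = {x:.2f}: a = {a:.4f} Angstrom")
# x = 0.15 yields 2.8821 Angstrom, close to the refined 2.883 Angstrom
\end{verbatim}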
Measurements of the magnetic properties and neutron depolarization were carried out on thin discs with a thickness of ${\sim}0.5$~mm and a diameter of ${\sim}10$~mm. Specific heat and electrical transport for $x = 0.15$ were measured on a cube of 2~mm edge length and a platelet of dimensions $5\times2\times0.5~\textrm{mm}^{3}$, respectively.
The magnetic properties, the specific heat, and the electrical resistivity were measured in a Quantum Design physical properties measurement system. The magnetization was measured by means of an extraction technique. If not stated otherwise, the ac susceptibility was measured at an excitation amplitude of 0.1~mT and an excitation frequency of 1~kHz. Additional ac susceptibility data for the analysis of the spin-glass behavior were recorded at frequencies ranging from 10~Hz to 10~kHz. The specific heat was measured using a quasi-adiabatic large-pulse technique with heat pulses of about 30\% of the current temperature~\cite{2013_Bauer_PhysRevLett}. For the measurements of the electrical resistivity the samples were contacted in a four-terminal configuration and a bespoke setup was used based on a lock-in technique at an excitation amplitude of 1~mA and an excitation frequency of 22.08~Hz. Magnetic field and current were applied perpendicular to each other, corresponding to the transverse magneto-resistance.
Neutron depolarization measurements were carried out at the instrument ANTARES~\cite{2015_Schulz_JLarge-ScaleResFacilJLSRF} at the Heinz Maier-Leibniz Zentrum~(MLZ). The incoming neutron beam had a wavelength $\lambda = 4.13$~\AA\ and a wavelength spread $\Delta\lambda / \lambda = 10\%$. It was polarized using V-cavity supermirrors. The beam was transmitted through the sample and its polarization analyzed using a second polarizing V-cavity. While nonmagnetic samples do not affect the polarization of the neutron beam, the presence of ferromagnetic domains in general results in a precession of the neutron spins. In turn, the transmitted polarization with respect to the polarization axis of the incoming beam is reduced. This effect is referred to as neutron depolarization. Low temperatures and magnetic fields for this experiment were provided by a closed-cycle refrigerator and water-cooled Helmholtz coils, respectively. A small guide field of 0.5~mT was generated by means of permanent magnets. For further information on the neutron depolarization setup, we refer to Refs.~\cite{2015_Schmakat_PhD, 2017_Seifert_JPhysConfSer, 2019_Jorba_JMagnMagnMater}.
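To relate the measured polarization to a cluster size, one commonly used random-walk approximation treats the sample as $d/\delta$ randomly oriented clusters of size $\delta$ and internal field $B$, giving $P/P_0 = \exp[-\tfrac{1}{3}(\gamma_{\mathrm{n}} B \delta / v)^{2}\, d/\delta]$. The sketch below evaluates this expression with illustrative values for $B$ and $\delta$ that are not fitted to our data.
\begin{verbatim}
import numpy as np

gamma_n = 1.832e8      # neutron gyromagnetic ratio [rad/(s T)]
v = 3956.0 / 4.13      # neutron speed at 4.13 Angstrom [m/s]
d = 0.5e-3             # sample thickness [m]
B = 0.2                # assumed internal field of a cluster [T]

for delta in (10e-9, 50e-9, 200e-9):        # assumed cluster sizes [m]
    phi2 = (gamma_n * B * delta / v) ** 2   # precession per cluster
    P = np.exp(-phi2 * (d / delta) / 3.0)   # transmitted polarization
    print(f"delta = {delta * 1e9:5.0f} nm: P/P0 = {P:.3f}")
\end{verbatim}
Larger clusters produce stronger depolarization at fixed sample thickness, which underlies the cluster-size estimates discussed below.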
All data shown as a function of temperature in this paper were recorded at a fixed magnetic field under increasing temperature. Depending on how the sample was cooled to 2~K prior to the measurement, three temperature versus field histories are distinguished. The sample was either cooled (i)~in zero magnetic field (zero-field cooling, zfc), (ii)~with the field at the value applied during the measurement (field cooling, fc), or (iii)~in a field of 250~mT (high-field cooling, hfc). For the magnetization data as a function of field, the sample was cooled in zero field. Subsequently, data were recorded during the initial increase of the field to $+250$~mT corresponding to a magnetic virgin curve, followed by a decrease to $-250$~mT, and a final increase back to $+250$~mT.
\section{Experimental results}
\label{sec:results}
\subsection{Phase diagram and bulk magnetic properties}
\begin{figure}
\includegraphics[width=1.0\linewidth]{figure2}
\caption{\label{fig:2}Zero-field composition--temperature phase diagram of Fe$_{x}$Cr$_{1-x}$. Data inferred from ac susceptibility, $\chi_{\mathrm{ac}}$, and neutron depolarization are combined with data reported by Burke and coworkers~\cite{1983_Burke_JPhysFMetPhys_I, 1983_Burke_JPhysFMetPhys_II, 1983_Burke_JPhysFMetPhys_III}. Paramagnetic~(PM), antiferromagnetic~(AFM), ferromagnetic~(FM), and spin-glass~(SG) regimes are distinguished. A precursor phenomenon is observed above the dome of spin-glass behavior (purple line). (a)~Overview. (b) Close-up view of the regime of spin-glass behavior as marked by the dashed box in panel (a).}
\end{figure}
The presentation of the experimental results starts with the compositional phase diagram of Fe$_{x}$Cr$_{1-x}$, illustrating central results of our study. An overview of the entire concentration range studied, $0.05 \leq x \leq 0.30$, and a close-up view around the dome of spin-glass behavior are shown in Figs.~\ref{fig:2}(a) and \ref{fig:2}(b), respectively. Characteristic temperatures inferred in this study are complemented by values reported by Burke and coworkers~\cite{1983_Burke_JPhysFMetPhys_I, 1983_Burke_JPhysFMetPhys_II, 1983_Burke_JPhysFMetPhys_III}, in good agreement with our results. Comparing the different physical properties in our study, we find that the imaginary part of the ac susceptibility displays the most pronounced signatures at the various phase transitions and crossovers. Therefore, the imaginary part was used to define the characteristic temperatures as discussed in the following. The same values are then marked in the different physical properties to highlight the consistency with alternative definitions of the characteristic temperatures based on these properties.
Four regimes may be distinguished in the phase diagram, namely paramagnetism at high temperatures (PM, no shading), antiferromagnetic order for small values of $x$ (AFM, green shading), ferromagnetic order for larger values of $x$ (FM, blue shading), and spin-glass behavior at low temperatures (SG, orange shading). We note that faint signatures reminiscent of those attributed to the onset of ferromagnetic order are observed in the susceptibility and neutron depolarization for $0.15 \leq x \leq 0.18$ (light blue shading). In addition, a distinct precursor phenomenon preceding the spin-glass behavior is observed at the temperature $T_{\mathrm{X}}$ (purple line) across a wide concentration range. Before elaborating on the underlying experimental data, we briefly summarize the key characteristics of the different regimes.
We attribute the onset of antiferromagnetic order below the N\'{e}el temperature $T_{\mathrm{N}}$ for $x = 0.05$ and $x = 0.10$ to a sharp kink in the imaginary part of the ac susceptibility, where values of $T_{\mathrm{N}}$ are consistent with previous reports~\cite{1978_Burke_JPhysFMetPhys, 1983_Burke_JPhysFMetPhys_I}. As may be expected, the transition is not sensitive to changes of the magnetic field, excitation frequency, or cooling history. The absolute value of the magnetization is small and it increases essentially linearly as a function of field in the parameter range studied.
We identify the emergence of ferromagnetic order below the Curie temperature $T_{\mathrm{C}}$ for $0.18 \leq x$ from a maximum in the imaginary part of the ac susceptibility that is suppressed in small magnetic fields of a few millitesla. This interpretation is corroborated by the onset of neutron depolarization. The transition is not sensitive to changes of the excitation frequency or cooling history. The magnetic field dependence of the magnetization exhibits a characteristic S-shape with almost vanishing hysteresis, reaching quasi-saturation at small fields. Both characteristics are expected for a soft ferromagnetic material such as iron. For $0.15 \leq x \leq 0.18$, faint signatures reminiscent of those observed for $0.18 \leq x$, such as a small shoulder instead of a maximum in the imaginary part of the ac susceptibility, are interpreted in terms of an incipient onset of ferromagnetic order.
We identify reentrant spin-glass behavior below a freezing temperature $T_{\mathrm{g}}$ for $0.10 \leq x \leq 0.25$ from a pronounced maximum in the imaginary part of the ac susceptibility that is suppressed at intermediate magnetic fields of the order of 50~mT. The transition shifts to lower temperatures with increasing excitation frequency, representing a hallmark of spin glasses. Further key indications for spin-glass behavior below $T_{\mathrm{g}}$ are a branching between different cooling histories in the temperature dependence of the magnetization and neutron depolarization as well as mictomagnetic behavior in the field dependence of the magnetization, i.e., the virgin magnetic curve lies outside the hysteresis loop obtained when starting from high magnetic field.
In addition, we identify a precursor phenomenon preceding the onset of spin-glass behavior at a temperature $T_{\mathrm{X}}$ based on a maximum in the imaginary part of the ac susceptibility that is suppressed in small magnetic fields, reminiscent of the ferromagnetic transition. With increasing excitation frequency, the maximum shifts to lower temperatures, albeit at a smaller rate than the freezing temperature $T_{\mathrm{g}}$. Interestingly, the magnetization and neutron depolarization exhibit no signatures at $T_{\mathrm{X}}$.
\subsection{Zero-field ac susceptibility}
\begin{figure}
\includegraphics[width=1.0\linewidth]{figure3}
\caption{\label{fig:3}Zero-field ac susceptibility as a function of temperature for all samples studied. For each concentration, real part (Re\,$\chi_{\mathrm{ac}}$, left column) and imaginary part (Im\,$\chi_{\mathrm{ac}}$, right column) of the susceptibility are shown. Note the logarithmic temperature scale and the increasing scale on the ordinate with increasing $x$. Triangles mark temperatures associated with the onset of antiferromagnetic order at $T_{\mathrm{N}}$ (green), spin-glass behavior at $T_{\mathrm{g}}$ (red), ferromagnetic order at $T_{\mathrm{C}}$ (blue), and the precursor phenomenon at $T_{\mathrm{X}}$ (purple). The corresponding values are inferred from Im\,$\chi_{\mathrm{ac}}$, see text for details.}
\end{figure}
The real and imaginary parts of the zero-field ac susceptibility on a logarithmic temperature scale are shown in Fig.~\ref{fig:3} for each sample studied. Characteristic temperatures are inferred from the imaginary part and marked by colored triangles in both quantities. While the identification of the underlying transitions and crossovers will be justified further in terms of the dependence of the signatures on magnetic field, excitation frequency, and history, as elaborated below, the corresponding temperatures are referred to as $T_{\mathrm{N}}$, $T_{\mathrm{C}}$, $T_{\mathrm{g}}$, and $T_{\mathrm{X}}$ already in the following.
For small iron concentrations, such as $x = 0.05$ shown in Fig.~\ref{fig:3}(a), the real part is small and essentially featureless, with the exception of an increase at low temperatures that may be attributed to the presence of ferromagnetic impurities, i.e., a so-called Curie tail~\cite{1972_DiSalvo_PhysRevB, 2014_Bauer_PhysRevB}. The imaginary part is also small but displays a kink at the N\'{e}el temperature $T_{\mathrm{N}}$. In metallic specimens, such as Fe$_{x}$Cr$_{1-x}$, part of the dissipation detected via the imaginary part of the ac susceptibility arises from the excitation of eddy currents at the surface of the sample. Eddy current losses scale with the resistivity~\cite{1998_Jackson_Book, 1992_Samarappuli_PhysicaCSuperconductivity}, and in turn the kink at $T_{\mathrm{N}}$ reflects the distinct change of the electrical resistivity at the onset of long-range antiferromagnetic order.
When increasing the iron concentration to $x = 0.10$, as shown in Fig.~\ref{fig:3}(b), both the real and imaginary parts increase by one order of magnitude. Starting at $x = 0.10$, a broad maximum may be observed in the real part, indicating an onset of magnetic correlations; the lack of further fine structure, however, renders the extraction of more detailed information impossible. In contrast, the imaginary part exhibits several distinct signatures that, in combination with the data presented below, allow us to infer the phase diagram shown in Fig.~\ref{fig:2}. For $x = 0.10$, in addition to the kink at $T_{\mathrm{N}}$, a maximum may be observed at 3~K, which we attribute to the spin freezing at $T_{\mathrm{g}}$.
Further increasing the iron concentration to $x = 0.15$, as shown in Fig.~\ref{fig:3}(c), results again in an increase of both the real and imaginary parts by one order of magnitude. The broad maximum in the real part shifts to slightly larger temperatures. In the imaginary part, two distinct maxima are resolved, accompanied by a shoulder at their high-temperature side. From low to high temperatures, these signatures may be attributed to $T_{\mathrm{g}}$, $T_{\mathrm{X}}$, and a potential onset of ferromagnetism at $T_{\mathrm{C}}$. No signatures related to antiferromagnetism may be discerned. For $x = 0.16$ and 0.17, shown in Figs.~\ref{fig:3}(d) and \ref{fig:3}(e), both the real and imaginary part remain qualitatively unchanged while their absolute values increase further. The characteristic temperatures shift slightly to larger values.
For $x = 0.18$, 0.19, 0.20, 0.21, and 0.22, shown in Figs.~\ref{fig:3}(f)--\ref{fig:3}(j), the size of the real and imaginary parts of the susceptibility remains essentially unchanged. The real part is best described in terms of a broad maximum that becomes increasingly asymmetric as the low-temperature extrapolation of the susceptibility increases with $x$. In the imaginary part, the signature ascribed to the onset of ferromagnetic order at $T_{\mathrm{C}}$ at larger concentrations develops into a clear maximum, overlapping with the maximum at $T_{\mathrm{X}}$ up to $x = 0.20$. For $x = 0.21$ and $x = 0.22$, three well-separated maxima may be attributed to the characteristic temperatures $T_{\mathrm{g}}$, $T_{\mathrm{X}}$, and $T_{\mathrm{C}}$. While both $T_{\mathrm{g}}$ and $T_{\mathrm{X}}$ stay almost constant with increasing $x$, $T_{\mathrm{C}}$ distinctly shifts to higher temperatures.
For $x = 0.25$, shown in Fig.~\ref{fig:3}(k), the signature attributed to $T_{\mathrm{X}}$ has vanished while $T_{\mathrm{g}}$ is suppressed to about 5~K. For $x = 0.30$, shown in Fig.~\ref{fig:3}(l), only the ferromagnetic transition at $T_{\mathrm{C}}$ remains and the susceptibility is essentially constant below $T_{\mathrm{C}}$. Note that the suppression of spin-glass behavior around $x = 0.25$ coincides with the percolation limit of 24.3\% in the crystal structure $Im\overline{3}m$, i.e., the limit above which long-range magnetic order is expected in spin-glass systems~\cite{1978_Mydosh_JournalofMagnetismandMagneticMaterials}. Table~\ref{tab:1} summarizes the characteristic temperatures for all samples studied, including an estimate of the associated errors.
\subsection{Magnetization and ac susceptibility under applied magnetic fields}
\begin{figure*}
\includegraphics[width=1.0\linewidth]{figure4}
\caption{\label{fig:4}Magnetization and ac susceptibility in magnetic fields up to 250~mT for selected concentrations (increasing from top to bottom). Triangles mark the temperatures $T_{\mathrm{N}}$ (green), $T_{\mathrm{g}}$ (red), $T_{\mathrm{C}}$ (blue), and $T_{\mathrm{X}}$ (purple). The values shown in all panels correspond to those inferred from Im\,$\chi_{\mathrm{ac}}$ in zero field. \mbox{(a1)--(f1)}~Real part of the ac susceptibility, Re\,$\chi_{\mathrm{ac}}$, as a function of temperature on a logarithmic scale for different magnetic fields. \mbox{(a2)--(f2)}~Imaginary part of the ac susceptibility, Im\,$\chi_{\mathrm{ac}}$. \mbox{(a3)--(f3)}~Magnetization for three different field histories, namely high-field cooling~(hfc), field cooling (fc), and zero-field cooling (zfc). \mbox{(a4)--(f4)}~Magnetization as a function of field at a temperature of 2~K after initial zero-field cooling. Arrows indicate the sweep directions. The scales of the ordinates for all quantities increase from top to bottom.}
\end{figure*}
To further corroborate the assignment of the signatures in the ac susceptibility to the different phases, their evolution under increasing magnetic field up to 250~mT and their dependence on the cooling history are illustrated in Fig.~\ref{fig:4}. For selected values of $x$, the temperature dependences of the real part of the ac susceptibility, the imaginary part of the ac susceptibility, and the magnetization, shown in the first three columns, are complemented by the magnetic field dependence of the magnetization at low temperature, $T = 2$~K, shown in the fourth column.
For small iron concentrations, such as $x = 0.05$ shown in Figs.~\ref{fig:4}(a1)--\ref{fig:4}(a4), both Re\,$\chi_{\mathrm{ac}}$ and Im\,$\chi_{\mathrm{ac}}$ remain qualitatively unchanged up to the highest fields studied. The associated stability of the transition at $T_{\mathrm{N}}$ under magnetic field represents a key characteristic of itinerant antiferromagnetism, which is also observed in pure chromium. Consistent with this behavior, the magnetization is small and increases essentially linearly in the field range studied. No dependence on the cooling history is observed.
For intermediate iron concentrations, such as $x = 0.15$, $x = 0.17$, and $x = 0.18$ shown in Figs.~\ref{fig:4}(b1) to \ref{fig:4}(d4), the broad maximum in Re\,$\chi_{\mathrm{ac}}$ is suppressed under increasing field. Akin to the situation in zero field, the evolution of the different characteristic temperatures is tracked in Im\,$\chi_{\mathrm{ac}}$. Here, the signatures associated with $T_{\mathrm{X}}$ and $T_{\mathrm{C}}$ prove to be highly sensitive to magnetic fields and are already suppressed in fields above about 2~mT. The maximum associated with the spin freezing at $T_{\mathrm{g}}$ is only suppressed at higher field values.
In the magnetization as a function of temperature, shown in Figs.~\ref{fig:4}(b3) to \ref{fig:4}(d3), a branching between different cooling histories may be observed below $T_{\mathrm{g}}$. Compared to data recorded after field cooling (fc), for which the temperature dependence of the magnetization is essentially featureless at $T_{\mathrm{g}}$, the magnetization at low temperatures is reduced for data recorded after zero-field cooling (zfc) and enhanced for data recorded after high-field cooling (hfc). Such a history dependence is typical for spin glasses~\cite{2015_Mydosh_RepProgPhys}, but also observed in materials where the orientation and population of domains with a net magnetic moment plays a role, such as conventional ferromagnets.
The spin-glass character below $T_{\mathrm{g}}$ is corroborated by the field dependence of the magnetization shown in Figs.~\ref{fig:4}(b4) to \ref{fig:4}(d4), which is perfectly consistent with the temperature dependence. Most notably, in the spin-glass regime at low temperatures, mictomagnetic behavior is observed, i.e., the magnetization of the magnetic virgin state obtained after initial zero-field cooling (red curve) is partly outside the hysteresis loop obtained when starting from the field-polarized state at large fields (blue curves)~\cite{1976_Shull_SolidStateCommunications}. This peculiar behavior is not observed in ferromagnets and represents a hallmark of spin glasses~\cite{1978_Mydosh_JournalofMagnetismandMagneticMaterials}.
For slightly larger iron concentrations, such as $x = 0.22$ shown in Figs.~\ref{fig:4}(e1) to \ref{fig:4}(e4), three maxima at $T_{\mathrm{g}}$, $T_{\mathrm{X}}$, and $T_{\mathrm{C}}$ are clearly separated. With increasing field, first the high-temperature maximum associated with $T_{\mathrm{C}}$ is suppressed, followed by the maxima at $T_{\mathrm{X}}$ and $T_{\mathrm{g}}$. The hysteresis loop at low temperatures is narrower, becoming akin to that of a conventional soft ferromagnet. For large iron concentrations, such as $x = 0.30$ shown in Figs.~\ref{fig:4}(f1) to \ref{fig:4}(f4), the evolution of Re\,$\chi_{\mathrm{ac}}$, Im\,$\chi_{\mathrm{ac}}$, and the magnetization as a function of magnetic field consistently corresponds to that of a conventional soft ferromagnet with a Curie temperature $T_{\mathrm{C}}$ of more than 200~K. For the ferromagnetic state observed here, all domains are aligned in fields exceeding ${\sim}50$~mT.
\begin{table}
\caption{\label{tab:1}Summary of the characteristic temperatures in Fe$_{x}$Cr$_{1-x}$ as inferred from the imaginary part of the ac susceptibility and neutron depolarization data. We distinguish the N\'{e}el temperature $T_{\mathrm{N}}$, the Curie temperature $T_{\mathrm{C}}$, the spin freezing temperature $T_{\mathrm{g}}$, and the precursor phenomenon at $T_{\mathrm{X}}$. Temperatures inferred from neutron depolarization data are denoted with the superscript `D'. For $T_{\mathrm{C}}^{\mathrm{D}}$, the errors were extracted from the fitting procedure (see below), while all other errors correspond to estimates of read-out errors.}
\begin{ruledtabular}
\begin{tabular}{ccccccc}
$x$ & $T_{\mathrm{N}}$ (K) & $T_{\mathrm{g}}$ (K) & $T_{\mathrm{X}}$ (K) & $T_{\mathrm{C}}$ (K) & $T_{\mathrm{g}}^{\mathrm{D}}$ (K) & $T_{\mathrm{C}}^{\mathrm{D}}$ (K) \\
\hline
0.05 & $240 \pm 5$ & - & - & - &- & - \\
0.10 & $190 \pm 5$ & $3 \pm 5$ & - & - & - & - \\
0.15 & - & $11 \pm 2$ & $23 \pm 3$ & $30 \pm 10$ & - & - \\
0.16 & - & $15 \pm 2$ & $34 \pm 3$ & $42 \pm 10$ & $18 \pm 5$ & $61 \pm 10$ \\
0.17 & - & $20 \pm 2$ & $36 \pm 3$ & $42 \pm 10$ & $23 \pm 5$ & $47 \pm 2$ \\
0.18 & - & $22 \pm 2$ & $35 \pm 3$ & $42 \pm 10$ & $22 \pm 5$ & $73 \pm 1$ \\
0.19 & - & $19 \pm 2$ & $37 \pm 5$ & $56 \pm 10$ & $25 \pm 5$ & $93 \pm 1$ \\
0.20 & - & $19 \pm 2$ & $35 \pm 5$ & $50 \pm 10$ & $24 \pm 5$ & $84 \pm 1$ \\
0.21 & - & $14 \pm 2$ & $35 \pm 5$ & $108 \pm 5$ & $25 \pm 5$ & $101 \pm 1$ \\
0.22 & - & $13 \pm 2$ & $32 \pm 5$ & $106 \pm 5$ & $21 \pm 5$ & $100 \pm 1$ \\
0.25 & - & $5 \pm 5$ & - & $200 \pm 5$ & - & - \\
0.30 & - & - & - & $290 \pm 5$ & - & - \\
\end{tabular}
\end{ruledtabular}
\end{table}
\subsection{Neutron depolarization}
\begin{figure}
\includegraphics{figure5}
\caption{\label{fig:5}Remaining neutron polarization after transmission through 0.5~mm of Fe$_{x}$Cr$_{1-x}$ as a function of temperature for $0.15 \leq x \leq 0.22$ (increasing from top to bottom). Data were measured in zero magnetic field under increasing temperature following initial zero-field cooling (zfc) or high-field cooling (hfc). Colored triangles mark the Curie transition $T_{\mathrm{C}}$ and the freezing temperature $T_{\mathrm{g}}$. Orange solid lines are fits to the experimental data, see text for details.}
\end{figure}
The neutron depolarization of samples in the central composition range $0.15 \leq x \leq 0.22$ was studied to gain further insight into the microscopic nature of the different magnetic states. Figure~\ref{fig:5} shows the polarization, $P$, of the transmitted neutron beam with respect to the polarization axis of the incoming neutron beam as a function of temperature. In the presence of ferromagnetically ordered domains or clusters that are large enough to induce a Larmor precession of the neutron spin during its transit, adjacent neutron trajectories pick up different Larmor phases due to the domain distribution in the sample. When averaged over the pixel size of the detector, this process results in polarization values below 1, also referred to as neutron depolarization. For a pedagogical introduction to the time and space resolution of this technique, we refer to Refs.~\cite{2008_Kardjilov_NatPhys, 2010_Schulz_PhD, 2015_Schmakat_PhD, _Seifert_tobepublished}.
For $x = 0.15$, shown in Fig.~\ref{fig:5}(a), no depolarization is observed. For $x = 0.16$, shown in Fig.~\ref{fig:5}(b), a weak decrease of polarization emerges below a point of inflection at $T_{\mathrm{C}} \approx 60$~K (blue triangle). The value of $T_{\mathrm{C}}$ may be inferred from a fit to the experimental data as described below and is in reasonable agreement with the value inferred from the susceptibility. The partial character of the depolarization, $P \approx 0.96$ in the low-temperature limit, indicates that ferromagnetically ordered domains of sufficient size occupy only a fraction of the sample volume. At lower temperatures, a weak additional change of slope may be attributed to the spin freezing at $T_{\mathrm{g}}$ (red triangle).
For $x = 0.17$, shown in Fig.~\ref{fig:5}(c), both signatures become more pronounced. In particular, data recorded after zero-field cooling (zfc) and high-field cooling (hfc) branch below $T_{\mathrm{g}}$, akin to the branching observed in the magnetization. The underlying dependence of the microscopic magnetic texture on the cooling history is typical for a spin glass. Note that the amount of branching varies from sample to sample. Such a pronounced sample dependence is not uncommon in spin-glass systems, though the microscopic origin of these irregularities in Fe$_{x}$Cr$_{1-x}$ remains to be resolved.
Upon further increasing $x$, as shown in Figs.~\ref{fig:5}(c)--\ref{fig:5}(h), the transition temperature $T_{\mathrm{C}}$ shifts to larger values and the depolarization becomes more pronounced until essentially reaching $P = 0$ at low temperatures for $x = 0.22$. No qualitative changes are observed around $x = 0.19$, i.e., the composition for which the onset of long-range ferromagnetic order was reported previously~\cite{1983_Burke_JPhysFMetPhys_II}. Instead, the gradual evolution as a function of $x$ suggests that ferromagnetically ordered domains start to emerge already for $x \approx 0.15$ and continuously increase in size and/or number with $x$. This conjecture is also consistent with the appearance of faint signatures in the susceptibility. Note that there are no signatures related to $T_{\mathrm{X}}$.
In order to infer quantitative information, the neutron depolarization data were fitted using the formalism of Halpern and Holstein~\cite{1941_Halpern_PhysRev}. Here, spin-polarized neutrons are considered as they travel through a sample with randomly oriented ferromagnetic domains. When the rotation of the neutron spin is small for each domain, i.e., when $\omega_{\mathrm{L}}t \ll 2\pi$ with the Larmor frequency $\omega_{\mathrm{L}}$ and the time required for transiting the domain $t$, the temperature dependence of the polarization of the transmitted neutrons may be approximated as
\begin{equation}\label{equ1}
P(T) = \mathrm{exp}\left[-\frac{1}{3}\gamma^{2}B^{2}_{\mathrm{0}}(T)\frac{d\delta}{v^{2}}\right].
\end{equation}
Here, $\gamma$ is the gyromagnetic ratio of the neutron, $B_{\mathrm{0}}(T)$ is the temperature-dependent average magnetic flux per domain, $d$ is the sample thickness along the flight direction, $\delta$ is the mean magnetic domain size, and $v$ is the speed of the neutrons. In mean-field approximation, the temperature dependence of the magnetic flux per domain is given by
\begin{equation}\label{equ2}
B_{\mathrm{0}}(T) = \mu_{0} M_{0} \left(1 - \frac{T}{T_{\mathrm{C}}}\right)^{\beta}
\end{equation}
where $\mu_{0}$ is the vacuum permeability, $M_{0}$ is the spontaneous magnetization in each domain, and $\beta$ is the critical exponent. In the following, we use the magnetization value measured at 2~K in a magnetic field of 250~mT as an approximation for $M_{0}$ and set $\beta = 0.5$, i.e., the textbook value for a mean-field ferromagnet. Note that $M_{0}$ more than triples when increasing the iron concentration from $x = 0.15$ to $x = 0.22$, as shown in Tab.~\ref{tab:2}, suggesting that correlations become increasingly important.
Fitting the temperature dependence of the polarization for temperatures above $T_{\mathrm{g}}$ according to Eq.~\eqref{equ1} yields mean values for the Curie temperature $T_{\mathrm{C}}$ and the domain size $\delta$, cf.\ solid orange lines in Fig.~\ref{fig:5} tracking the experimental data. The results of the fitting are summarized in Tab.~\ref{tab:2}. The values of $T_{\mathrm{C}}$ inferred this way are typically slightly higher than those inferred from the ac susceptibility, cf.\ Tab.~\ref{tab:1}. This shift could be related to depolarization caused by slow ferromagnetic fluctuations prevailing at temperatures just above the onset of static magnetic order. Yet, both values of $T_{\mathrm{C}}$ are in reasonable agreement. The mean size of ferromagnetically aligned domains or clusters, $\delta$, increases with increasing $x$, reflecting the increased density of iron atoms. As will be shown below, this general trend is corroborated also by an analysis of the Mydosh parameter indicating that Fe$_{x}$Cr$_{1-x}$ transforms from a cluster glass for small $x$ to a superparamagnet for larger $x$.
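To illustrate the fitting procedure, a minimal numerical sketch is given below. It implements Eqs.~\eqref{equ1} and \eqref{equ2} and fits synthetic placeholder data; the neutron wavelength, the data values, and the initial guesses are assumptions made for illustration and do not correspond to the actual measurement.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

GAMMA_N = 1.832e8    # neutron gyromagnetic ratio (rad s^-1 T^-1)
MU_0 = 4e-7 * np.pi  # vacuum permeability (T m A^-1)

d = 0.5e-3           # sample thickness (m)
v = 3956.0 / 4.0     # neutron speed (m/s) for an assumed 4 Angstrom wavelength
M0 = 0.84e5          # magnetization at 2 K and 250 mT (A/m), here x = 0.16
BETA = 0.5           # mean-field critical exponent

def polarization(T, Tc, delta):
    """Eq. (1) with the mean-field B0(T) of Eq. (2)."""
    B0 = MU_0 * M0 * np.maximum(1.0 - T / Tc, 0.0) ** BETA
    return np.exp(-GAMMA_N**2 * B0**2 * d * delta / (3.0 * v**2))

# Synthetic stand-in for the measured polarization above T_g
T_data = np.linspace(25.0, 70.0, 10)
P_data = polarization(T_data, 61.0, 0.61e-6)

popt, _ = curve_fit(polarization, T_data, P_data, p0=(55.0, 1.0e-6))
print(f"T_C = {popt[0]:.1f} K, delta = {popt[1] * 1e6:.2f} um")
\end{verbatim}
In the actual analysis, only data above $T_{\mathrm{g}}$ enter the fit and $M_{0}$ is fixed to the measured value listed in Tab.~\ref{tab:2}.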
\begin{table}
\caption{\label{tab:2}Summary of the Curie temperature, $T_{\mathrm{C}}$, and the mean domain size, $\delta$, in Fe$_{x}$Cr$_{1-x}$ as inferred from neutron depolarization studies. Also shown is the magnetization measured at a temperature of 2~K in a magnetic field of 250~mT, ${M_{0}}$.}
\begin{ruledtabular}
\begin{tabular}{cccc}
$x$ & $T_{\mathrm{C}}^{\mathrm{D}}$ (K) & $\delta$ ($\upmu$m) & $M_{0}$ ($10^{5}$A/m) \\
\hline
0.15 & - & - & 0.70 \\
0.16 & $61 \pm 10$ & $0.61 \pm 0.10$ & 0.84 \\
0.17 & $47 \pm 2$ & $2.12 \pm 0.15$ & 0.96 \\
0.18 & $73 \pm 1$ & $3.17 \pm 0.07$ & 1.24 \\
0.19 & $93 \pm 1$ & $3.47 \pm 0.02$ & 1.64 \\
0.20 & $84 \pm 1$ & $4.67 \pm 0.03$ & 1.67 \\
0.21 & $101 \pm 1$ & $3.52 \pm 0.03$ & 2.18 \\
0.22 & $100 \pm 1$ & $5.76 \pm 0.13$ & 2.27\\
\end{tabular}
\end{ruledtabular}
\end{table}
\subsection{Specific heat, high-field magnetometry, and electrical resistivity}
\begin{figure}
\includegraphics{figure6}
\caption{\label{fig:6}Low-temperature properties of Fe$_{x}$Cr$_{1-x}$ with $x = 0.15$. (a)~Specific heat as a function of temperature. Zero-field data (black curve) and an estimate for the phonon contribution using the Debye model (gray curve) are shown. Inset: Specific heat at high temperatures approaching the Dulong--Petit limit. (b)~Specific heat divided by temperature. After subtraction of the phonon contribution, magnetic contributions at low temperatures are observed (green curve). (c)~Magnetic contribution to the entropy obtained by numerical integration. (d)~Magnetization as a function of field up to $\pm9$~T for different temperatures. (e)~Electrical resistivity as a function of temperature for different applied field values.}
\end{figure}
To obtain a complete picture of the low-temperature properties of Fe$_{x}$Cr$_{1-x}$, the magnetic properties at low fields presented so far are complemented by measurements of the specific heat, high-field magnetization, and electrical resistivity, using Fe$_{x}$Cr$_{1-x}$ with $x = 0.15$ as an example.
The specific heat as a function of temperature measured in zero magnetic field is shown in Fig.~\ref{fig:6}(a). At high temperatures, the specific heat approaches the Dulong--Petit limit of $C_{\mathrm{DP}} = 3R = 24.9~\mathrm{J}\,\mathrm{mol}^{-1}\mathrm{K}^{-1}$, as illustrated in the inset. With decreasing temperature, the specific heat monotonically decreases, lacking pronounced anomalies at the different characteristic temperatures.
The specific heat at high temperatures is dominated by the phonon contribution that is described well by a Debye model with a Debye temperature $\mathit{\Theta}_{\mathrm{D}} = 460$~K, which is slightly smaller than the values reported for $\alpha$-iron (477~K) and chromium (606~K)~\cite{2003_Tari_Book}. As shown in terms of the specific heat divided by temperature, $C/T$, in Fig.~\ref{fig:6}(b), the subtraction of this phonon contribution from the measured data highlights the presence of magnetic contributions to the specific heat below ${\sim}$30~K (green curve). As is typical for spin-glass systems, no sharp signatures are observed and the total magnetic contribution to the specific heat is rather small~\cite{2015_Mydosh_RepProgPhys}. This finding is substantiated by the entropy $S$ as calculated by extrapolating $C/T$ to zero temperature and numerically integrating
\begin{equation}
S(T) = \int_{0}^{T}\frac{C(T')}{T'}\,\mathrm{d}T'.
\end{equation}
As shown in Fig.~\ref{fig:6}(c), the magnetic contribution to the entropy released up to 30~K amounts to about $0.04~R\ln2$, corresponding to only a small fraction of the entropy expected for the full magnetic moment.
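For reference, the phonon subtraction and entropy integration may be sketched numerically as follows; the Debye integral is evaluated by quadrature, the measured specific heat is replaced by a synthetic stand-in, and the extrapolation of $C/T$ to zero temperature is simplified to zero.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad, cumulative_trapezoid

R = 8.314          # molar gas constant (J mol^-1 K^-1)
THETA_D = 460.0    # Debye temperature (K) from the phonon fit

def c_debye(T):
    """Debye phonon specific heat (J mol^-1 K^-1) per mole of atoms."""
    integrand = lambda x: x**4 * np.exp(x) / np.expm1(x)**2
    return 9.0 * R * (T / THETA_D)**3 * quad(integrand, 0.0, THETA_D / T)[0]

# T, C: temperature (K) and total specific heat; synthetic stand-ins here
T = np.linspace(2.0, 30.0, 60)
C_ph = np.array([c_debye(t) for t in T])
C = C_ph + 1e-3 * T * np.exp(-((T - 10.0) / 8.0) ** 2)  # placeholder data

C_mag = C - C_ph                                         # magnetic part
S_mag = cumulative_trapezoid(C_mag / T, T, initial=0.0)  # S = int C/T dT
print(f"S(30 K) / (R ln 2) = {S_mag[-1] / (R * np.log(2)):.3f}")
\end{verbatim}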
Insight into the evolution of the magnetic properties under high magnetic fields may be inferred from the magnetization as measured up to $\pm9$~T, shown in Fig.~\ref{fig:6}(d). The magnetization is unsaturated up to the highest fields studied and qualitatively unchanged under increasing temperature, only moderately decreasing in absolute value. The value of 0.22~$\mu_{\mathrm{B}}/\mathrm{f.u.}$ obtained at 2~K and 9~T corresponds to a moment of 1.46~$\mu_{\mathrm{B}}/\mathrm{Fe}$, i.e., the moment per iron atom in Fe$_{x}$Cr$_{1-x}$ with $x = 0.15$ stays below the value of 2.2~$\mu_{\mathrm{B}}/\mathrm{Fe}$ observed in $\alpha$-iron~\cite{2001_Blundell_Book}.
Finally, the electrical resistivity as a function of temperature is shown in Fig.~\ref{fig:6}(e). As is typical for a metal, the resistivity is of the order of several tens of $\upmu\Omega\,\mathrm{cm}$ and, starting from room temperature, decreases essentially linearly with temperature. However, around 60~K, i.e., well above the onset of magnetic order, a minimum is observed before the resistivity increases towards low temperatures.
Such an incipient divergence of the resistivity with decreasing temperature due to magnetic impurities is reminiscent of single-ion Kondo systems~\cite{1934_deHaas_Physica, 1964_Kondo_ProgTheorPhys, 1987_Lin_PhysRevLett, 2012_Pikul_PhysRevLett}. When a magnetic field is applied perpendicular to the current direction, this low-temperature increase is suppressed and a point of inflection emerges around 100~K. This sensitivity with respect to magnetic fields clearly indicates that the additional scattering at low temperatures is of magnetic origin. Qualitatively, the present transport data are in agreement with earlier reports on Fe$_{x}$Cr$_{1-x}$ for $0 \leq x \leq 0.112$~\cite{1966_Arajs_JApplPhys}.
\section{Characterization of the spin-glass behavior}
\label{sec:discussion}
In spin glasses, random site occupancy of magnetic moments, competing interactions, and geometric frustration lead to a collective freezing of the magnetic moments below a freezing temperature $T_{\mathrm{g}}$. The resulting irreversible metastable magnetic state shares many analogies with structural glasses. Depending on the densities of magnetic moments, different types of spin glasses may be distinguished. For small densities, the magnetic properties may be described in terms of single magnetic impurities diluted in a nonmagnetic host, referred to as canonical spin-glass behavior. These systems are characterized by strong interactions and the cooperative spin freezing represents a phase transition. For larger densities, clusters form with local magnetic order and frustration between neighboring clusters, referred to as cluster glass behavior, developing superparamagnetic characteristics as the cluster size increases. In these systems, the inter-cluster interactions are rather weak and the spin freezing takes place in the form of a gradual blocking. When the density of magnetic moments surpasses the percolation limit, long-range magnetic order may be expected.
For compositions close to the percolation limit, so-called reentrant spin-glass behavior may be observed. In such cases, as a function of decreasing temperature first a transition from a paramagnetic to a magnetically ordered state occurs before a spin-glass state emerges at lower temperatures. As both the paramagnetic and the spin-glass state lack long-range magnetic order, the expression ‘reentrant’ alludes to the disappearance of long-range magnetic order after a finite temperature interval and consequently the re-emergence of a state without long-range order~\cite{1993_Mydosh_Book}.
The metastable nature of spin glasses manifests itself in terms of a pronounced history dependence of both the microscopic spin arrangement and the macroscopic magnetic properties, translating into four key experimental observations: (i) a frequency-dependent shift of the maximum at $T_{\mathrm{g}}$ in the ac susceptibility, (ii) a broad maximum in the specific heat located 20\% to 40\% above $T_{\mathrm{g}}$, (iii) a splitting of the magnetization for different cooling histories, and (iv) a time-dependent creep of the magnetization~\cite{2015_Mydosh_RepProgPhys}. The splitting of the magnetization and the broad signature in the specific heat were addressed in Figs.~\ref{fig:5} and \ref{fig:6}.
In the following, the frequency dependence of the ac susceptibility is analyzed in three different ways, namely by means of the Mydosh parameter, power-law fits, and the Vogel--Fulcher law, permitting a classification of the spin-glass behavior in Fe$_{x}$Cr$_{1-x}$ and of its change as a function of composition.
\begin{figure}
\includegraphics[width=0.97\linewidth]{figure7}
\caption{\label{fig:7}Imaginary part of the zero-field ac susceptibility as a function of temperature for Fe$_{x}$Cr$_{1-x}$ with $x = 0.15$ measured at different excitation frequencies $f$. Analysis of the frequency-dependent shift of the spin freezing temperature $T_{\mathrm{g}}$ provides insight into the microscopic nature of the spin-glass state.}
\end{figure}
In the present study, the freezing temperature $T_{\mathrm{g}}$ was inferred from a maximum in the imaginary part of the ac susceptibility as measured at an excitation frequency of 1~kHz. However, in a spin glass the temperature below which spin freezing is observed depends on the excitation frequency $f$, as illustrated in Fig.~\ref{fig:7} for the example of Fe$_{x}$Cr$_{1-x}$ with $x = 0.15$. Under increasing frequency, the imaginary part remains qualitatively unchanged but increases in absolute size and the maximum indicating $T_{\mathrm{g}}$ shifts to higher temperatures. Analyzing this shift in turn provides information on the microscopic nature of the spin-glass behavior.
The first and perhaps most straightforward approach utilizes the empirical Mydosh parameter $\phi$, defined as
\begin{equation}
\phi = \left[\frac{T_{\mathrm{g}}(f_{\mathrm{high}})}{T_{\mathrm{g}}(f_{\mathrm{low}})} - 1\right] \left[\ln\left(\frac{f_{\mathrm{high}}}{f_{\mathrm{low}}}\right)\right]^{-1}
\end{equation}
where $T_{\mathrm{g}}(f_{\mathrm{high}})$ and $T_{\mathrm{g}}(f_{\mathrm{low}})$ are the freezing temperatures as experimentally observed at high and low excitation frequencies, $f_{\mathrm{high}}$ and $f_{\mathrm{low}}$, respectively~\cite{1993_Mydosh_Book, 2015_Mydosh_RepProgPhys}. Small shifts associated with Mydosh parameters below 0.01 are typical for canonical spin glasses such as Mn$_{x}$Cu$_{1-x}$, while cluster glasses exhibit intermediate values up to 0.1. Values exceeding 0.1 suggest superparamagnetic behavior~\cite{1993_Mydosh_Book, 2015_Mydosh_RepProgPhys, 1980_Tholence_SolidStateCommun, 1986_Binder_RevModPhys}.
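For illustration, the Mydosh parameter may be evaluated directly from freezing temperatures read off at two excitation frequencies, as sketched below; the numerical values are purely illustrative.
\begin{verbatim}
import numpy as np

def mydosh_parameter(Tg_high, Tg_low, f_high, f_low):
    """Empirical Mydosh parameter from Tg at two frequencies."""
    return (Tg_high / Tg_low - 1.0) / np.log(f_high / f_low)

# Illustrative read-outs spanning three decades in frequency
phi = mydosh_parameter(Tg_high=15.5, Tg_low=10.0, f_high=1e4, f_low=1e1)
print(f"phi = {phi:.3f}")  # about 0.08: cluster-glass regime
\end{verbatim}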
\begin{figure}
\includegraphics[width=1.0\linewidth]{figure8}
\caption{\label{fig:8}Evolution of the Mydosh parameter in Fe$_{x}$Cr$_{1-x}$. (a)~Schematic depiction of the five different sequences of magnetic regimes observed as a function of temperature for different $x$. The following regimes are distinguished: paramagnetic~(PM), antiferromagnetic~(AFM), ferromagnetic~(FM), and spin-glass~(SG). A precursor phenomenon~(PC) may be observed between FM and SG. (b)~Mydosh parameter $\phi$ as a function of the iron concentration $x$, allowing a classification of the spin-glass behavior as canonical ($\phi \leq 0.01$, gray shading), cluster-glass ($0.01 \leq \phi \leq 0.1$, yellow shading), or superparamagnetic ($\phi \geq 0.1$, brown shading).}
\end{figure}
\begin{table*}
\caption{\label{tab:3}Parameters inferred from the analysis of the spin-glass behavior in Fe$_{x}$Cr$_{1-x}$, namely the Mydosh parameter $\phi$, the zero-frequency extrapolation of the spin freezing temperature $T_\mathrm{g}(0)$, the characteristic relaxation time $\tau_{0}$, the critical exponent $z\nu$, the Vogel--Fulcher temperature $T_{0}$, and the cluster activation energy $E_{a}$. The errors were determined by means of Gaussian error propagation ($\phi$), the distance of neighboring data points ($T_\mathrm{g}(0)$), and statistical deviations of the linear fits ($\tau_{0}$, $z\nu$, $T_{0}$, and $E_{a}$).}
\begin{ruledtabular}
\begin{tabular}{ccccccc}
$x$ & $\phi$ & $T_\mathrm{g}(0)$ (K) & $\tau_{0}$ ($10^{-6}$~s) & $z\nu$ & $T_{0}$ (K) & $E_{a}$ (K) \\
\hline
0.05 & - & - & - & - & - & - \\
0.10 & $0.064 \pm 0.011$ & - & - & - & - & - \\
0.15 & $0.080 \pm 0.020$ & $9.1 \pm 0.1$ & $0.16 \pm 0.03$ & $5.0 \pm 0.1$ & $8.5 \pm 0.1$ & $19.9 \pm 0.8$ \\
0.16 & $0.100 \pm 0.034$ & $13.4 \pm 0.1$ & $1.73 \pm 0.15$ & $2.2 \pm 0.0$ & $11.9 \pm 0.1$ & $14.4 \pm 0.3$ \\
0.17 & $0.107 \pm 0.068$ & $18.3 \pm 0.1$ & $6.13 \pm 1.52$ & $1.5 \pm 0.1$ & $16.3 \pm 0.3$ & $12.8 \pm 0.9$ \\
0.18 & $0.108 \pm 0.081$ & $14.5 \pm 0.1$ & $1.18 \pm 0.46$ & $7.0 \pm 0.5$ & $16.9 \pm 0.5$ & $24.2 \pm 2.3$ \\
0.19 & $0.120 \pm 0.042$ & $14.2 \pm 0.1$ & $0.47 \pm 0.15$ & $4.5 \pm 0.2$ & $14.6 \pm 0.4$ & $16.3 \pm 1.4$ \\
0.20 & $0.125 \pm 0.043$ & $13.5 \pm 0.1$ & $1.29 \pm 0.34$ & $4.1 \pm 0.2$ & $13.6 \pm 0.3$ & $18.8 \pm 1.3$ \\
0.21 & $0.138 \pm 0.048$ & $9.5 \pm 0.1$ & $1.67 \pm 0.21$ & $4.7 \pm 0.1$ & $10.3 \pm 0.4$ & $12.0 \pm 1.3$ \\
0.22 & $0.204 \pm 0.071$ & $11.7 \pm 0.1$ & $2.95 \pm 0.80$ & $2.6 \pm 0.1$ & $11.3 \pm 0.4$ & $11.3 \pm 1.2$ \\
0.25 & $0.517 \pm 0.180$ & $2.8 \pm 0.1$ & $75.3 \pm 5.34$ & $1.8 \pm 0.1$ & - & - \\
0.30 & - & - & - & - & - & - \\
\end{tabular}
\end{ruledtabular}
\end{table*}
As summarized in Tab.~\ref{tab:3} and illustrated in Fig.~\ref{fig:8}, the Mydosh parameter in Fe$_{x}$Cr$_{1-x}$ monotonically increases as a function of increasing iron concentration. For small $x$, the values are characteristic of cluster-glass behavior, while for large $x$ they lie well within the regime of superparamagnetic behavior. This evolution reflects the increase of the mean size of ferromagnetic clusters as inferred from the analysis of the neutron depolarization data.
\begin{figure}
\includegraphics[width=1.0\linewidth]{figure9}
\caption{\label{fig:9}Analysis of the spin-glass behavior using power-law fits and the Vogel--Fulcher law for Fe$_{x}$Cr$_{1-x}$ with $x = 0.15$. (a)~Logarithm of the relaxation time as a function of the logarithm of the normalized shift of the freezing temperature. The red solid line is a power-law fit from which the characteristic relaxation time $\tau_{0}$ and the critical exponent $z\nu$ are inferred. Inset: Goodness of fit for different estimated zero-frequency extrapolations of the freezing temperature, $T_{\mathrm{g}}^{\mathrm{est}}(0)$. The value $T_{\mathrm{g}}(0)$ used in the main panel is defined as the temperature of highest $R^{2}$. (b)~Spin freezing temperature as a function of the inverse of the logarithm of the ratio of characteristic frequency and excitation frequency. The red solid line is a fit according to the Vogel--Fulcher law from which the cluster activation energy $E_{a}$ and the Vogel--Fulcher temperature $T_{0}$ are inferred.}
\end{figure}
The second approach applies the standard theory of dynamical scaling near phase transitions to the spin freezing at $T_{\mathrm{g}}$~\cite{1977_Hohenberg_RevModPhys, 1993_Mydosh_Book}. The relaxation time $\tau = \frac{1}{2\pi f}$ is expressed in terms of the power law
\begin{equation}
\tau = \tau_{0} \left[\frac{T_{\mathrm{g}}(f)}{T_{\mathrm{g}}(0)} - 1\right]^{-z\nu}
\end{equation}
where $\tau_{0}$ is the characteristic relaxation time of a single moment or cluster, $T_{\mathrm{g}}(0)$ is the zero-frequency limit of the spin freezing temperature, and $z\nu$ is the critical exponent. In the archetypical canonical spin glass Mn$_{x}$Cu$_{1-x}$, one obtains values such as $\tau_{0} = 10^{-13}~\mathrm{s}$, $T_{\mathrm{g}}(0) = 27.5~\mathrm{K}$, and $z\nu = 5$~\cite{1985_Souletie_PhysRevB}.
The corresponding analysis is illustrated in Fig.~\ref{fig:9}(a) for Fe$_{x}$Cr$_{1-x}$ with $x = 0.15$. First the logarithm of the ratio of relaxation time and characteristic relaxation time, $\ln(\frac{\tau}{\tau_{0}})$, is plotted as a function of the logarithm of the normalized shift of the freezing temperature, $\ln\left[\frac{T_{\mathrm{g}}(f)}{T_{\mathrm{g}}(0)} - 1\right]$, for a series of estimated values of the zero-frequency extrapolation $T_{\mathrm{g}}^{\mathrm{est}}(0)$. For each value of $T_{\mathrm{g}}^{\mathrm{est}}(0)$ the data are fitted linearly and the goodness of fit is compared by means of the $R^{2}$ coefficient, cf.\ inset of Fig.~\ref{fig:9}(a). The best approximation for the zero-frequency freezing temperature, $T_{\mathrm{g}}(0)$, is defined as the temperature of highest $R^{2}$. Finally, the characteristic relaxation time $\tau_{0}$ and the critical exponent $z\nu$ are inferred from a linear fit to the experimental data using this value $T_{\mathrm{g}}(0)$, as shown in Fig.~\ref{fig:9}(a) for Fe$_{x}$Cr$_{1-x}$ with $x = 0.15$.
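The following sketch outlines this procedure, scanning the estimate $T_{\mathrm{g}}^{\mathrm{est}}(0)$, selecting the value with the highest $R^{2}$, and extracting $\tau_{0}$ and $z\nu$ from the final linear fit; the frequency--temperature pairs are synthetic values chosen for illustration.
\begin{verbatim}
import numpy as np
from scipy.stats import linregress

# Freezing temperatures Tg(f) at excitation frequencies f (illustrative)
f = np.array([1e1, 1e2, 1e3, 1e4])
Tg_f = np.array([9.82, 10.30, 11.06, 12.26])
tau = 1.0 / (2.0 * np.pi * f)

best = (None, -np.inf, None)
for Tg0_est in np.linspace(8.0, 9.7, 171):      # scan T_g^est(0)
    fit = linregress(np.log(Tg_f / Tg0_est - 1.0), np.log(tau))
    if fit.rvalue**2 > best[1]:
        best = (Tg0_est, fit.rvalue**2, fit)

Tg0, r2, fit = best
z_nu = -fit.slope                    # ln(tau) = ln(tau0) - z*nu ln(t)
tau0 = np.exp(fit.intercept)
print(f"Tg(0) = {Tg0:.2f} K, z nu = {z_nu:.1f}, "
      f"tau0 = {tau0:.1e} s, R^2 = {r2:.4f}")
\end{verbatim}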
The same analysis was carried out for all compositions Fe$_{x}$Cr$_{1-x}$ featuring spin-glass behavior, yielding the parameters summarized in Tab.~\ref{tab:3}. Characteristic relaxation times of the order of $10^{-6}~\mathrm{s}$ are inferred, i.e., several orders of magnitude larger than those observed in canonical spin glasses and consistent with the presence of comparably large magnetic clusters, as may be expected for the large values of $x$. Note that these characteristic times are also distinctly larger than the $10^{-12}~\mathrm{s}$ to $10^{-8}~\mathrm{s}$ that neutrons require to traverse the magnetic clusters in the depolarization experiments. Consequently, the clusters appear quasi-static to the neutrons, which in turn is a prerequisite for the observation of net depolarization across a macroscopic sample. The critical exponents range from 1.5 to 7.0, i.e., within the range expected for glassy systems~\cite{1980_Tholence_SolidStateCommun, 1985_Souletie_PhysRevB}. The lack of a systematic evolution of both $\tau_{0}$ and $z\nu$ as a function of iron concentration $x$ suggests that these parameters may in fact be rather sensitive to details of the microscopic structure, potentially varying substantially between individual samples.
The third approach uses the Vogel--Fulcher law, developed to describe the viscosity of supercooled liquids and glasses, to interpret the properties around the spin freezing temperature $T_{\mathrm{g}}$~\cite{1993_Mydosh_Book, 1925_Fulcher_JAmCeramSoc, 1980_Tholence_SolidStateCommun, 2013_Svanidze_PhysRevB}. Calculating the characteristic frequency $f_{0} = \frac{1}{2\pi\tau_{0}}$ from the characteristic relaxation time $\tau_{0}$ as determined above, the Vogel--Fulcher law for the excitation frequency $f$ reads
\begin{equation}
f = f_{0} \exp\left\lbrace-\frac{E_{a}}{k_{\mathrm{B}}[T_{\mathrm{g}}(f)-T_{0}]}\right\rbrace
\end{equation}
where $k_{\mathrm{B}}$ is the Boltzmann constant, $E_{a}$ is the activation energy for aligning a magnetic cluster by the applied field, and $T_{0}$ is the Vogel--Fulcher temperature providing a measure of the strength of the cluster interactions. As a point of reference, it is interesting to note that values such as $E_{a}/k_{\mathrm{B}} = 11.8~\mathrm{K}$ and $T_{0} = 26.9~\mathrm{K}$ are observed in the archetypical canonical spin glass Mn$_{x}$Cu$_{1-x}$~\cite{1985_Souletie_PhysRevB}.
For each composition Fe$_{x}$Cr$_{1-x}$, the spin freezing temperature $T_{\mathrm{g}}(f)$ is plotted as a function of the inverse of the logarithm of the ratio of characteristic frequency and excitation frequency, $\frac{1}{\ln(f_{0}/f)}$, as shown in Fig.~\ref{fig:9}(b) for Fe$_{x}$Cr$_{1-x}$ with $x = 0.15$. A linear fit to the experimental data allows one to infer $E_{a}$ and $T_{0}$ from the slope and the intercept. The corresponding values for all compositions Fe$_{x}$Cr$_{1-x}$ featuring spin-glass behavior are summarized in Tab.~\ref{tab:3}. All values of $T_{0}$ and $E_{a}$ are positive and of the order of 10~K, indicating the presence of strongly correlated clusters~\cite{2012_Anand_PhysRevB, 2011_Li_ChinesePhysB, 2013_Svanidze_PhysRevB}. Both $T_{0}$ and $E_{a}$ roughly follow the evolution of the spin freezing temperature $T_{\mathrm{g}}$, reaching their maximum values around $x = 0.17$ or $x = 0.18$.
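A corresponding sketch of the Vogel--Fulcher analysis, using the linearized form $T_{\mathrm{g}}(f) = T_{0} + (E_{a}/k_{\mathrm{B}})/\ln(f_{0}/f)$ and the same illustrative data as above, reads:
\begin{verbatim}
import numpy as np
from scipy.stats import linregress

f = np.array([1e1, 1e2, 1e3, 1e4])            # excitation frequencies (Hz)
Tg_f = np.array([9.82, 10.30, 11.06, 12.26])  # illustrative Tg(f) (K)
f0 = 1.0 / (2.0 * np.pi * 1.0e-7)             # from tau0 of the power-law fit

# Tg(f) is linear in 1/ln(f0/f); slope = Ea/kB, intercept = T0
fit = linregress(1.0 / np.log(f0 / f), Tg_f)
print(f"T0 = {fit.intercept:.1f} K, Ea/kB = {fit.slope:.1f} K")
\end{verbatim}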
\section{Conclusions}
\label{sec:conclusion}
In summary, a comprehensive study of the magnetic properties of polycrystalline Fe$_{x}$Cr$_{1-x}$ in the composition range $0.05 \leq x \leq 0.30$ was carried out by means of x-ray powder diffraction as well as measurements of the magnetization, ac susceptibility, and neutron depolarization, complemented by specific heat and electrical resistivity data for $x = 0.15$. As our central result, we present a detailed composition--temperature phase diagram based on the combination of a large number of quantities. Under increasing iron concentration $x$, antiferromagnetic order akin to pure Cr is suppressed above $x = 0.15$, followed by the emergence of weak magnetic order developing distinct ferromagnetic character above $x = 0.18$. At low temperatures, a wide dome of reentrant spin-glass behavior is observed for $0.10 \leq x \leq 0.25$, preceded by a precursor phenomenon. Analysis of the neutron depolarization data and the frequency-dependent shift in the ac susceptibility indicate that with increasing $x$ the size of ferromagnetically ordered clusters increases and that the character of the spin-glass behavior changes from a cluster glass to a superparamagnet.
\acknowledgments
We wish to thank P.~B\"{o}ni and S.~Mayr for fruitful discussions and assistance with the experiments. This work has been funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under TRR80 (From Electronic Correlations to Functionality, Project No.\ 107745057, Project E1) and the excellence cluster MCQST under Germany's Excellence Strategy EXC-2111 (Project No.\ 390814868). Financial support by the Bundesministerium f\"{u}r Bildung und Forschung (BMBF) through Project No.\ 05K16WO6 as well as by the European Research Council (ERC) through Advanced Grants No.\ 291079 (TOPFIT) and No.\ 788031 (ExQuiSid) is gratefully acknowledged. G.B., P.S., S.S., M.S., and P.J.\ acknowledge financial support through the TUM Graduate School.
\section{Introduction}
With technological advancements in the automotive industry in recent times, modern vehicles are no longer made up of only mechanical devices but are also an assemblage of complex electronic devices called electronic control units (ECUs), which provide advanced vehicle functionality and facilitate independent decision making. ECUs receive input from sensors and run computations for their required tasks~\cite{Alam:2018}. These vehicles are also fitted with an increasing number of sensing and communication technologies to facilitate driving decisions and to be \textit{self aware}~\cite{Anupam:2018}. However, the proliferation of these technologies has been found to facilitate the remote exploitation of the vehicle [7]. Malicious entities could inject malware into ECUs to compromise the internal network of the vehicle~\cite{Anupam:2018}. The internal network of a vehicle refers to the communications between the multiple ECUs in the vehicle over on-board buses such as the controller area network (CAN)~\cite{Han:2014}. The authors in [7] and [8] demonstrated the possibility of such remote exploitation on a connected and autonomous vehicle (CAV), which allowed the malicious entity to gain full control of the driving system and bring the vehicle to a halt.\\
To comprehend the extent to which smart vehicles are vulnerable, we conducted a risk analysis for connected vehicles in [1] and identified likely threats and their sources. Furthermore, using the Threat Vulnerability Risk Assessment (TVRA) methodology, we classified identified threats based on their impact on the vehicles and found that compromising one or more of the myriad of ECUs installed in the vehicles poses a considerable threat to the security of smart vehicles and the vehicular network. Vehicular network here refers to communication between smart vehicles and roadside units (RSUs) which are installed and managed by the transport authority. These entities exchange routine and safety messages according to the IEEE802.11p standard [4]. By compromising ECUs fitted in a vehicle, a malicious entity could for example, broadcast false information in the network to affect the driving decisions of other vehicles. Therefore, in this paper, we focus on monitoring the state of the in-vehicle network to enable the detection of an ECU compromise.
Previous efforts on the security of in-vehicle networks have focused on intrusion and anomaly detection, which enables the detection of unauthorized access to the in-vehicle network [9-11], [15], [23] and the identification of deviations from acceptable vehicle behavior~\cite{Wasicek:2014}. However, several challenges persist. First, proposed security solutions are based on a centralized design which relies on a Master ECU that is responsible for ensuring valid communications between in-vehicle ECUs [9-10] [23]. Such solutions are vulnerable to a single point of failure attack, where an attacker's aim is to compromise the centralized security design. Furthermore, if the Master ECU is either compromised or faulty, the attacker could easily execute actions that undermine the security of the in-vehicle network. In addition, efforts that focus on intrusion detection by comparing ECU firmware versions [10] [11] [15] are also vulnerable to a single point of exploitation, whereby the previous version, which is centrally stored, could be altered. These works [11] [15] also rely on the vehicle manufacturer to ultimately verify the state of ECUs. However, vehicle manufacturers could be motivated to execute malicious actions for their benefit, such as to evade liability [3].
Therefore, decentralization of the ECU state verification among entities in the vehicular ecosystem is desirable for the security of smart vehicles. Finally, the solution proposed in [24], which focuses on observing deviations from acceptable behavior, utilized data generated from a subset of ECUs. However, this presents a data reliability challenge when an ECU not included in the ECU subset is compromised. \\
We argue in this paper that Blockchain (BC) [12] technology has the potential to address the aforementioned challenges including centralization, availability and data reliability. \\
\textbf{BC } is an immutable and distributed ledger technology that provides a verifiable record of transactions in the form of an interconnected series of data blocks. BC can be public or permissioned [3] to differentiate user capabilities, including who has the right to participate in the BC network. BC replaces centralization with a trustless consensus which, when applied to our context, can ensure that no single entity can assume full control of verifying the state of ECUs in a smart vehicle. The decentralized consensus provided by BC is well-suited for securing the internal network of smart vehicles by keeping track of historical operations executed on the vehicle's ECUs, such as firmware updates, thus easily identifying any change to the ECU and who was responsible for that change. Also, the distributed structure of BC provides robustness to a single point of failure.
\subsection{Contributions and Paper Layout}
Having identified the limitations of existing works, we propose a Blockchain based Framework for sEcuring smaRt vehicLes (B-FERL). B-FERL is an apposite countermeasure for in-vehicle network security that exposes threats in smart vehicles by ascertaining the state of the vehicle's internal controls. Also, given that data modification indicates a successful attempt to alter the state of an ECU, B-FERL also suffices as a data reliability solution that ensures that a vehicle's data is trustworthy. We utilize a permissioned BC to allow only trusted entities to manage the record of vehicles in the BC network. This means that state changes of an ECU are summarized, stored and managed in a distributed manner in the BC.\\
\textit{The key contributions of this paper are summarized as follows:} \\
\textbf{(1)} We present B-FERL, a decentralized security framework for in-vehicle networks. B-FERL ascertains the integrity of in-vehicle ECUs and highlights the existence of threats in a smart vehicle. To achieve this, we define a two-tier blockchain-based architecture, which introduces an initialization operation used to create records for vehicles for authentication purposes and a challenge-response mechanism where the integrity of a vehicle's internal network is queried when it connects to an RSU to ensure its security.\\
\textbf{(2)} We conduct a qualitative evaluation of B-FERL to assess its resilience to identified attacks. We also conduct a comparative evaluation with existing approaches and highlight the practical benefits of B-FERL. Finally, we characterize the performance of B-FERL via extensive simulations using the CORE simulator against key performance measures such as the time and storage overheads for smart vehicles and RSUs.\\
\textbf{(3)} Our proposal is tailored to meet the integrity requirement for securing smart vehicles and the availability requirement for securing vehicular networks, and we provide a succinct discussion of the applicability of our proposal to various critical automotive functions such as vehicular forensics, secure vehicular communication and trust management. \\
This paper is an extension of our preliminary ideas presented in [1]. Here, we present a security framework for detecting when an in-vehicle network compromise occurs and provide evidence that reflects actions on ECUs in a vehicle. Also, we present extensive evaluations to demonstrate the efficacy of B-FERL. \\
The rest of the paper is structured as follows. In Section 2, we discuss related works. Section 3 presents an overview of our proposed framework, where we describe our system, network and threat models. Section 4 describes the details of our proposed framework. In Section 5, we discuss results of the performance evaluation. Section 6 presents discussions on the potential use cases of B-FERL and a comparative evaluation with closely related works, and we conclude the paper in Section 7.
\section{Related Work}
BC has been proposed as a security solution for vehicular networks. However, proposed solutions have not focused on the identification of compromised ECUs for securing vehicular networks.
The author in~\cite{Blackchain:2017} proposed Blackchain, a BC based message revocation and accountability system for secure vehicular communication. However, their proposal does not consider the reliability of data communicated in the vehicular network, which could be threatened when an in-vehicle ECU is compromised. The author in~\cite{Ali:2017} presents a BC based architecture for securing automotive networks. However, they have not described how their architecture is secured from insider attacks, where authorised entities could be motivated to execute rogue actions for their benefit. Also, their proposal does not consider the veracity of data from vehicles. The authors in~\cite{cube:2018} proposed a security platform for autonomous vehicles based on blockchain but have not presented a description of their architecture and its applicability for practical scenarios. Also, their security is directed towards the prevention of unauthorized network entry using a centralized intrusion detector, which is vulnerable to a single point of failure attack. Their proposal also does not consider the malicious tendencies of authorized entities as described in~\cite{Oham:2018}.
The authors in~\cite{Coin:2018} proposed CreditCoin, a privacy-preserving BC based incentive announcement and reputation management scheme for smart vehicles. Their proposal is based on threshold authentication, where a number of vehicles agree on a message generated by a vehicle and the agreed message is then sent to a nearby roadside unit. However, in addition to the possibility of collusion attacks, the requirement that vehicles manage a copy of the blockchain presents a significant storage and scalability constraint for vehicles. The authors in~\cite{BARS:2018} have proposed a Blockchain-based Anonymous Reputation System (BARS) for trust management in VANETs; however, they have not presented details on how reputation is built for vehicles and have also not presented justifications for their choice of reputation evaluation parameters. The authors in~\cite{Contract:2018} have proposed an enhanced Delegated Proof-of-Stake (DPoS) consensus scheme with a two-stage soft security solution for secure vehicular communications. However, their proposal is directed at establishing reputation for roadside infrastructure and preventing collusion attacks in the network. These authors~\cite{Coin:2018}~\cite{BARS:2018}~\cite{Contract:2018} have also not considered the security of in-vehicle networks.
\section{B-FERL Overview and Threat Model}
In this section, we present a brief overview of B-FERL including the roles of interacting entities, and a description of the network and threat models.
\subsection{Architecture overview}
The architecture of our proposed security solution (B-FERL) is described in Figure~\ref{fig:framework}.
Due to the need to keep track of changes to ECU states and to monitor the behaviour of a vehicle while operational, B-FERL consists of two main BC tiers namely, upper and lower tiers. Furthermore, these tiers clarify the role of interacting entities and ensure that entities are privy to only information they need to know.
The upper tier comprises vehicle manufacturers, service technicians, insurance companies, legal and road transport authorities. The integration of these entities in the upper tier makes it easy to keep track of actions executed by vehicle manufacturers and service technicians on ECUs, such as firmware updates, which change the state of an ECU, and allows only trusted entities such as transport and legal authorities to verify such ECU state changes. Interactions between entities in this tier focus on vehicle registration and maintenance. The initial registration data of a vehicle is used to create a record (block) for the vehicle in the upper tier. This record stores the state of the vehicle and the hash values of all ECUs in the vehicle and is used to perform vehicle validation in the lower tier BC. This is accomplished by comparing the current state of the vehicle and the firmware hashes of each ECU in the vehicle to their values in the lower tier BC. Also, the upper tier stores scheduled maintenance or diagnostics data that reflects the actions of vehicle manufacturers and service technicians on a smart vehicle. This information is useful for the monitoring of the vehicle while operational and for making liability decisions in the multi-entity liability attribution model~\cite{Oham:2018}.\\
In the following, we describe actions that trigger interactions in the upper tier. In the rest of the paper, unless specifically mentioned, we refer to smart vehicles as \textit{CAVs}.
\begin{itemize}
\item When a \textit{CAV} is assembled, the vehicle manufacturer obtains the ECU Merkle root value ($SS_{ID}$) by computing hash values of all ECUs in the vehicle and forwards this value to the road transport and legal authorities to create a public record (block) for the vehicle. This record is utilized by RSUs to validate vehicles in the lower tier. We present a detailed description of this process in Section~\ref{sec:b-ferl}.
\item When maintenance occurs in the vehicle, vehicle manufacturers or service technicians follow the process of obtaining the updated $SS_{ID}$ value described above and communicate this value to the transport and legal authorities to update the record of the vehicle and assess the integrity of its ECUs. We present a detailed description of this process in Section~\ref{sec:b-ferl}. Maintenance here means any activity that alters the state of any of the vehicle's ECUs.
\end{itemize}
The lower tier comprises roadside units (\textit{RSUs}), smart vehicles, legal and road transport authorities. Interactions in this tier focus on identifying when an ECU in a vehicle has been compromised. To achieve this, a vehicle needs to prove its ECUs' firmware integrity whenever it connects to an \textit{RSU}. When a vehicle approaches the area of coverage of an \textit{RSU}, the \textit{RSU} sends the vehicle a challenge request to prove the state of its ECUs. To provide a response, the vehicle computes the cumulative hash value of all of its ECUs, i.e., its ECU Merkle root ($SS_{ID}$). The response provided by the vehicle is then used to validate its ECUs' current state in comparison to the previous state in the lower tier. Also, as a vehicle moves from one \textit{RSU} to the other, an additional layer of verification is added by comparing the time stamp of its current response to that of the previous response to prevent the possibility of a replay attack. It is noteworthy that, in contrast to a traditional BC, which executes a consensus algorithm in order to insert transactions into a block, B-FERL relies on the appendable block concept (ABC) proposed in~\cite{Michelin:2018}, where transactions are added to the blocks by valid block owners represented by their public key. Therefore, no consensus algorithm is required in B-FERL to append transactions to the block. To ensure that the integrity of a block is not compromised, ABC decouples the block header from the transactions to enable network nodes to store transactions off-chain without compromising block integrity. Furthermore, to ensure scalability in the lower tier, we only store two transactions (which represent the previous and current ECU firmware states) per vehicle and push other transactions to the cloud, where historical data of the vehicle can be accessed when necessary.
However, this operation could introduce additional latency for pushing the extra transaction from the RSU to the cloud storage. This further imposes an additional computing and bandwidth requirement for the RSU. \\
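For concreteness, a minimal sketch of the RSU-side verification logic implied by the challenge-response mechanism described above is shown below; the record layout, message format, and freshness check are assumptions made for illustration and do not prescribe the wire format.
\begin{verbatim}
def verify_response(stored_record, response):
    """Validate a CAV's challenge response against its lower-tier record.

    stored_record: {'ss_id': str, 'last_timestamp': float} from the block
    response:      {'ss_id': str, 'timestamp': float} computed by the CAV
    """
    # Replay protection: response must be newer than the last recorded one
    if response["timestamp"] <= stored_record["last_timestamp"]:
        return False, "stale response (possible replay)"
    # ECU integrity: Merkle root must match the last validated state
    if response["ss_id"] != stored_record["ss_id"]:
        return False, "SS_ID mismatch (possible ECU compromise)"
    return True, "vehicle considered trustworthy until the next challenge"
\end{verbatim}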
Next, we discuss our network model which describes interacting entities in our proposed framework and their roles.
\begin{figure*}[h]
\centering
\includegraphics[width=0.9\textwidth]{B-CRIF.PNG}
\caption{The Proposed Blockchain Framework}
\label{fig:framework}
\end{figure*}
\subsection{Network model}
To restrict the flow of information to only concerned and authorized entities, we consider a two-tiered network model as shown in Figure \ref{fig:framework}. The upper tier features the road transport and legal authorities responsible for managing the vehicular network. This tier also integrates entities responsible for the maintenance of vehicles, such as vehicle manufacturers and service technicians. It could also include auto-insurance companies, who could request complementary evidence from transport and legal authorities for facilitating liability decisions. For simplicity, we focus on a single entity of each type; however, our proposal generalizes to the case of several entities of each type.\\
The lower tier features \textit{CAVs} as well as RSUs which are installed by the road transport authority for the management and monitoring of traffic situation in the road network.
For interactions between \textit{CAVs} and RSUs, we utilize the IEEE802.11p communication standard, which has been widely used to enable vehicle-to-vehicle and vehicle-to-infrastructure communications [4]. However, 5G is envisaged to bring about a new vehicular communication era with higher reliability, expedited data transmissions and reduced delay [5]. Also, we utilize PKI to issue identifiable digital identities to entities and establish secure communication channels for permissible communication.
The upper tier features a permissioned blockchain platform managed by the road transport and legal authorities. Vehicle manufacturers and service technicians participate in this BC network by sending sensor update notification transactions which are verified and validated by the BC network managers. Insurance companies on the other hand participate by sending request transactions for complimentary evidence to facilitate liability attribution and compensation payments. The lower tier also features a permissioned BC platform managed by the road transport, legal authorities and RSUs. In this tier, we maintain vehicle-specific profiles. To achieve this, once a vehicle enters the area of coverage of a roadside unit (RSU), the RSU sends a challenge request to the vehicle by which it reports the current state of its ECUs. Once a valid response is provided, the vehicle is considered trustworthy until another challenge-response activity. \\
We present a full description of the entire process involved in our proposed framework in Section~\ref{sec:b-ferl}.
\subsection{Threat Model}
Given the exposure of \textit{CAVs} to the Internet, they become susceptible to multiple security attacks which may impact the credibility of data communicated by a vehicle. In the attack model, we consider how relevant entities could execute actions to undermine the proposed framework. The considered attacks include: \\
\textbf{Fake data:} A compromised vehicle could try to send misleading information in the vehicular network for its benefit. For example, it could generate false messages about a traffic incident to gain advantage on the road. Also, to avoid being liable in the case of an accident, a vehicle owner could manipulate an ECU to generate false data.\\
\textbf{Code injection:} Likely liable entities such as the vehicle manufacturer and service technician could send malware to evade liability. Vehicle owners, on the other hand, could execute such actions to, for example, reduce the odometer value of the vehicle to increase its resale value.\\
\textbf{Sybil attack:} A vehicle could create multiple identities to manipulate the vehicular network, for example by raising false alarms such as a non-existent traffic jam.\\
\textbf{Masquerade attack (fake vehicle):} A compromised roadside unit, or an external adversary, could create a fake vehicle for the purpose of causing an accident or changing the facts of an accident. \\
\textbf{ECU State Reversal Attack: } A vehicle owner could extract the current firmware version of an ECU, install a malicious version, and revert to the original version for verification purposes.
\section{Blockchain based Framework for sEcuring smaRt vehicLes (B-FERL)} \label{sec:b-ferl}
This section outlines the architecture of the proposed framework. As described in Figure~\ref{fig:framework}, entities involved in our framework include vehicle manufacturers, service technicians, insurance companies, \textit{CAVs}, RSUs, road transport and legal authorities. Based on the entity roles described in Section 3, we categorize entities as verifiers and proposers. Verifiers are entities that verify and validate data sent to the BC. Verifiers in B-FERL include RSUs, road transport and legal authorities. Proposers are entities sending data to the BC or providing a response to a challenge request. Proposers in our framework include \textit{CAVs}, vehicle manufacturers, service technicians and insurance companies. \\
In the B-FERL architecture, we assume that the CAVs produce many transactions, especially in high-density smart city areas. Most blockchain implementations are designed to group transactions, add them into a block and only then append the new block to the blockchain, which leads to sequential transaction insertion. To tackle this limitation, in B-FERL we adopted the blockchain framework presented by Michelin et al.~\cite{Michelin:2018}, which introduces the appendable block concept (ABC). This blockchain solution enables multiple CAVs to append transactions to different blocks at the same time. The framework identifies each CAV by its public key, and for each different public key, a block is created in the blockchain data structure. The block is divided into two distinct parts: (i) the block header, which contains the CAV public key, the previous block header hash, and the timestamp; (ii) the block payload, where all the transactions are stored. The transaction storage follows a linked-list data structure: the first transaction contains the block header hash, while subsequent transactions contain the previous transaction hash. This data structure allows the solution to insert new transactions into existing blocks. Each transaction must be signed by the CAV private key; once the transaction signature is validated with the block's public key, the RSU can proceed to append the transaction to the block identified by the CAV public key. Based on the public key, the BC maps all the transactions from a specific entity to the same block.
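The sketch below illustrates, in simplified form, the appendable block data structure described above. It is an illustrative reconstruction rather than the reference implementation of~\cite{Michelin:2018}; hashing uses SHA-256 and signature verification is abstracted into a caller-supplied function.
\begin{verbatim}
import hashlib
import time
from dataclasses import dataclass, field

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

@dataclass
class Transaction:
    payload: bytes    # e.g. a serialized challenge response
    prev_hash: str    # header hash for the first tx, else previous tx hash
    signature: bytes  # produced with the CAV's private key

    def hash(self) -> str:
        return sha256(self.payload + self.prev_hash.encode() + self.signature)

@dataclass
class Block:
    cav_public_key: str   # one block per CAV, keyed by its public key
    prev_block_hash: str
    timestamp: float = field(default_factory=time.time)
    transactions: list = field(default_factory=list)

    def header_hash(self) -> str:
        header = f"{self.cav_public_key}{self.prev_block_hash}{self.timestamp}"
        return sha256(header.encode())

    def append(self, payload: bytes, signature: bytes, verify) -> None:
        """RSU-side append: only transactions signed by the block owner."""
        if not verify(self.cav_public_key, payload, signature):
            raise ValueError("signature does not match block owner")
        prev = (self.transactions[-1].hash() if self.transactions
                else self.header_hash())
        self.transactions.append(Transaction(payload, prev, signature))
\end{verbatim}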
\subsection{Transactions}
Transactions are the basic communication primitive in BC for the exchange of information among entities in B-FERL.
Having discussed the roles of entities in each tier of B-FERL, in this section we discuss the details of communication in each tier, facilitated by the different kinds of transactions. Transactions are secured using cryptographic hash functions (SHA-256), digital signatures and asymmetric encryption. \\
\textbf{\textit{Upper tier}}\\
Upper tier transactions include relevant information about authorized actions executed on a \textit{CAV}. They also record when a vehicle was assembled. In addition, insurance companies could seek complementary evidence from the road transport and legal authorities in the event of an accident; hence, a request transaction is also sent in this tier. \\
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{Merkletree.PNG}
\caption{Obtaining the Merkle tree root value}
\label{fig:merkle}
\end{figure}
\textbf{Genesis transaction:} This transaction is initiated by a vehicle manufacturer when a vehicle is assembled. The genesis transaction contains the initial $SS_{ID}$ value, which is the Merkle tree root computed from the \textit{CAV's} ECU firmware hashes at creation time; a time stamp; the firmware hashes of the ECUs with associated timestamps, ($H(ECU){_1}$, $T{_1}$), ($H(ECU){_2}$, $T{_2}$), \ldots, ($H(ECU){_n}$, $T{_n}$), which reflect when an action was executed on each ECU; and the public key and signature of the vehicle manufacturer. Figure \ref{fig:merkle} shows how the $SS_{ID}$ of a \textit{CAV} with 8 ECUs is derived.
\begin{center}
Genesis = [$SS_{ID}$, TimeStamp, ($H(ECU){_1}$, $T{_1}$), ($H(ECU){_2}$, $T{_2}$), \ldots, ($H(ECU){_n}$, $T{_n}$), PubKey, Sign]
\end{center}
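For illustration, the following sketch shows one way to derive the $SS_{ID}$ as the Merkle tree root of the ECU firmware hashes using SHA-256. The pairing order and the duplication of the last hash for an odd number of leaves are our own assumptions, since only the hash function is fixed above.
\begin{verbatim}
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    """Merkle tree root of a list of ECU firmware hashes (the SS_ID)."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2 == 1:       # odd count: duplicate last hash
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# a CAV with 8 ECUs, as in the Merkle tree figure
ecu_hashes = [sha256(b"firmware-of-ecu-%d" % i) for i in range(8)]
ss_id = merkle_root(ecu_hashes)
\end{verbatim}
Any change to a single ECU's firmware changes its leaf hash and therefore the root, which is what makes the $SS_{ID}$ a compact integrity fingerprint of the whole vehicle state.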
The genesis transaction is used by the transport and legal authorities to create a genesis block for a \textit{CAV}. This block is a permanent record of the \textit{CAV} and is used to validate its authenticity in the lower tier. It contains the genesis transaction, the public key of the \textit{CAV}, a time stamp recording the time of block creation, and an external address, such as the address of a cloud storage where \textit{CAV}-generated data is stored as the block size increases. \\
\textbf{Update transaction:} This transaction could be initiated by a vehicle manufacturer or a service technician. It is initiated when the firmware version of an ECU in the \textit{CAV} is updated during scheduled maintenance or diagnostics. An update transaction leads to a change in the initial $SS_{ID}$ value and contains the updated $SS_{ID}$ value, time stamp, public key of \textit{CAV}, public key of vehicle manufacturer or service technician and their signatures. \\
When an update transaction is received in the upper tier, it triggers an update to the record (block) of the \textit{CAV} in the lower tier. The updated \textit{CAV} block is then utilized by RSUs to validate the authenticity of the \textit{CAV} in the lower tier.\\
\textbf{Request transaction:} This transaction is initiated by an insurance company to facilitate liability decisions and compensation payments. It contains the signature of the insurance company, the data request and its public key.\\
\textbf{\textit{Lower tier}} \\
Communication in the lower tier reflects how transactions generated in the upper tier for CAVs are appended to their public record (block) in the lower tier. Additionally, we describe how the block is managed by an RSU in the lower tier and by the transport and legal authorities in the upper tier. Lower tier communications also cover the interactions between \textit{CAVs} and RSUs and describe how the integrity of the ECUs in a \textit{CAV} is verified. In the following, we describe the interactions that occur in the lower tier. \\
\textbf{Updating CAV block:} Updating the block of a \textit{CAV} is either performed by the road transport and legal authorities or by an RSU. It is performed by the road transport and legal authorities after an update transaction is received in the upper tier. It is performed by an RSU after it receives a response to a challenge request sent to the vehicle. The challenge-response scenario is described in the next type of transaction. The update executed by an RSU contains a \textit{CAV’s} response which includes the signature of the \textit{CAV}, time stamp, response to the challenge and \textit{CAV’s} public key. It also contains the hash of the previous transaction in the block computed by the RSU, the signature and public key of the RSU.\\
\textbf{Challenge-Response transaction:} The Challenge-Response transaction is a request from an RSU to a \textit{CAV} to prove the integrity of the \textit{CAV's} ECUs. This request is received when the \textit{CAV} comes into the RSU's area of coverage. When this occurs, the \textit{CAV} receives a twofold challenge from the RSU. The first is a challenge to compute its $SS_{ID}$, to ascertain the integrity of its state. The second is to compute the hash values of randomly selected ECUs, to prevent and detect the malicious actions of vehicle owners discussed in Section 3.\\
The \textit{CAV} responds by providing a digitally signed response to the request.
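A minimal sketch of this exchange, under our own simplifying assumptions, is given below: the RSU's twofold challenge asks for the recomputed $SS_{ID}$ and for the hashes of randomly chosen ECUs, and the \textit{CAV} signs its response. An HMAC stands in for the asymmetric signature only to keep the sketch self-contained; a real deployment would use public-key signatures as described above.
\begin{verbatim}
import hmac, hashlib, os, random, time

def sign(key: bytes, msg: bytes) -> bytes:
    # placeholder for the CAV's asymmetric signature
    return hmac.new(key, msg, hashlib.sha256).digest()

def rsu_challenge(num_ecus, sample_size=2):
    # twofold challenge: recompute SS_ID, plus selected ECU hashes
    return {"ecu_indices": random.sample(range(num_ecus), sample_size),
            "nonce": os.urandom(16)}

def cav_response(challenge, ecu_hashes, ss_id, cav_key):
    selected = [ecu_hashes[i] for i in challenge["ecu_indices"]]
    body = ss_id + b"".join(selected) + challenge["nonce"]
    return {"ss_id": ss_id,
            "ecu_hashes": selected,
            "timestamp": time.time(),
            "signature": sign(cav_key, body)}
\end{verbatim}
The nonce ties each response to a fresh challenge, so a compromised \textit{CAV} cannot simply replay an old, honest response.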
\subsection{Operation}
In this section we describe the key operations in our proposed framework. The proposed framework works in a permissioned mode where the road transport and legal authorities have rights to manage the BC in the upper and lower tiers. Service technicians as well as vehicle manufacturers generate data when they execute actions that alter the internal state of a \textit{CAV}, while \textit{CAVs} prove the integrity of their ECUs when they connect to an RSU. \\
We define two critical operations in our proposed framework:
\subsubsection{Initialization} Describes the process of creating a record for a vehicle in the vehicular network. Once a genesis transaction is generated for a \textit{CAV} by a vehicle manufacturer, upper tier verifiers verify the transaction and, upon successful verification, a genesis block is broadcasted in the lower tier for the \textit{CAV}. \\
\begin{figure}[h]
\centering
\includegraphics[width=0.53\textwidth]{tiert.PNG}
\caption{\textit{CAV} record initialization (black) and upper-tier update (blue) operations.}
\label{fig:operation}
\end{figure}
Figure \ref{fig:operation} describes the process of block creation (assembling) for \textit{CAVs}, outlining the requisite steps leading to the creation of a block (record) for a \textit{CAV}.
\subsubsection{Update} Describes the process of updating the record of the vehicle in the vehicular network. The update operation results in a change in the block of a \textit{CAV} in the lower tier. The update operation occurs in the upper and lower tier. In the upper tier, an update operation occurs when a vehicle manufacturer performs a diagnostic on a \textit{CAV} or when a scheduled maintenance is conducted by a service technician. In the lower tier, it occurs when a \textit{CAV} provides a response to the challenge request initiated by an RSU. In the following we discuss the update operation that occurs at both tiers. \\
\textbf{Upper-tier update:} Here, we describe how the earlier-mentioned actions of the vehicle manufacturer or service technician alter the existing record for a \textit{CAV} in the vehicular network.\\
Figure \ref{fig:operation} outlines the necessary steps to update the record of a vehicle. After completing the diagnostics or scheduled maintenance (step 1), the vehicle manufacturer or service technician retrieves the hashes of all sensors in the vehicle (step 2) and computes a new ECU Merkle root value (step 3). Next, an update transaction is created to reflect the action on the vehicle (step 4). This transaction includes the computed ECU Merkle root value, a time stamp reflecting when the diagnostics or maintenance was conducted, the signature of the entity conducting it, and a metadata field that describes what maintenance or diagnostics was conducted on the \textit{CAV}. Next, the transaction is broadcasted in the upper tier (step 5) and verified by the verifiers, i.e.\ the road transport and legal authorities (step 6), who validate the signature of the proposer (step 7). Upon signature validation, an update block is created by the verifiers for the \textit{CAV} (step 8) and broadcasted in the lower tier (step 9). \\
\textbf{Lower tier update:} We describe here how the update of a \textit{CAV’s} record is executed by an RSU after the initialization steps in the lower tier. \\
Figure \ref{fig:lowupdate} describes the necessary steps involved in updating the record of the \textit{CAV} in the lower tier. When a \textit{CAV} approaches the area of coverage of an RSU, the RSU sends the \textit{CAV} a challenge request, asking it to prove that it is a valid \textit{CAV} by proving its current sensor state (Step 1). For this, the \textit{CAV} computes its current $SS_{ID}$ value as well as the hash values of selected ECUs (Step 2) and forwards them to the RSU together with its signature, time stamp and public key (Step 3). \\
\begin{figure*}[h]
\centering
\includegraphics[width=0.85\textwidth]{lowupdate.PNG}
\caption{Lower-tier update operations.}
\label{fig:lowupdate}
\end{figure*}
When the RSU receives the response data from the \textit{CAV}, it first verifies that the vehicle is a valid \textit{CAV} by using its public key ($PubKey_{CAV}$) to check that the vehicle has a block in the BC (Step 4). Only valid vehicles have a block (record) in the BC. When the RSU retrieves $PubKey_{CAV}$, it validates the signature on the response data (Step 4.1). If validation succeeds, the RSU retrieves the firmware hash value in the \textit{CAV's} block (Step 5) and proceeds to compare the computed hash values with the value in the \textit{CAV's} block (Step 5.1). Otherwise, the RSU reports the presence of a malicious \textit{CAV} to the road transport and legal authorities, or of an illegal \textit{CAV} if there is no block for such a \textit{CAV} in the BC (Step 4.2). If the comparison of hash values succeeds, the RSU updates the \textit{CAV's} record in the lower tier to include the $SS_{ID}$ value, the time stamp, and the public key of the \textit{CAV} (Step 6). This becomes the latest record of the \textit{CAV} in the lower tier until another challenge-response round or another maintenance or diagnostic session. However, if the hash values differ, the RSU reports the presence of a malicious \textit{CAV} to the road transport and legal authorities (Step 5.2). \\
When the \textit{CAV} encounters another RSU, another challenge-response activity begins. This time, the RSU repeats steps (1--5); in addition, another layer of verification is executed. The RSU compares the time stamp on the response data to the immediately previous record stored on the lower tier blockchain (Step 5.1.2). The time stamp value is expected to increase continuously as the vehicle travels; if this is the case, the RSU updates the \textit{CAV's} block (Step 6). Otherwise, the RSU detects a malicious action and reports it to the road transport and legal authority (Step 5.2). If a malicious \textit{CAV} forges a time stamp greater than its previous time stamp, we rely on the assumption that one or more of its ECUs would have been compromised, so that it would produce an $SS_{ID}$ different from its record in the lower tier. Another alternative is to comparatively evaluate its time stamp against the time stamps of other vehicles in the RSU's area of coverage. To ensure that the blockchain in the lower tier scales efficiently, we store only two transactions per \textit{CAV} block. In this case, after successfully executing (Step 5.1.2), the RSU removes the genesis transaction from the block and stores it in a cloud storage, which can be accessed using the external address value in the \textit{CAV's} block. \\
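The checks in Steps 4--6, including the time stamp comparison, can be summarised by the following sketch. The names are illustrative: \texttt{record} stands for the \textit{CAV's} latest entry in the lower tier BC, and \texttt{verify\_signature} for whatever signature scheme is deployed.
\begin{verbatim}
def rsu_verify_and_update(record, response, verify_signature):
    # record: {"ss_id": ..., "timestamp": ...}, or None if the CAV
    # has no block in the BC; response: the CAV's signed reply
    if record is None:
        return "report illegal CAV: no block in BC"      # Step 4.2
    if not verify_signature(response):
        return "report malicious CAV: bad signature"     # Step 4.2
    if response["ss_id"] != record["ss_id"]:             # Steps 5, 5.1
        return "report malicious CAV: state mismatch"    # Step 5.2
    if response["timestamp"] <= record["timestamp"]:     # Step 5.1.2
        return "report malicious CAV: stale time stamp"
    record["ss_id"] = response["ss_id"]                  # Step 6
    record["timestamp"] = response["timestamp"]
    return "block updated"
\end{verbatim}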
With the challenge-response activity, we build a behaviour profile for \textit{CAVs} and continuously prove the trustworthiness of a vehicle while it is operational. Also, by keeping track of the actions of potentially liable entities such as the service technician and vehicle manufacturer, and by storing the vehicle's behaviour profile in the blockchain, we obtain historical proof that could be utilised as contributing evidence for liability decisions.
\section{Performance Evaluation}
The evaluation of B-FERL was performed in an emulated scenario using the Common Open Research Emulator (CORE), running in a Linux virtual machine with six processor cores and 12~GB of RAM. Based on the appendable blocks concept described in Section~\ref{sec:b-ferl}, B-FERL supports adding the transactions of a specific \textit{CAV} to a single block. This block is used to identify the \textit{CAV} in the lower tier and stores all of its records. \\
The initial experiments aim to assess the viability of the approach, and thus enable us to plan ahead for real-world experimentation. The evaluated scenario consists of multiple CAVs (varying from 10 to 200) exchanging information with a peer-to-peer network of five RSUs in the lower tier.
Initially, we evaluate the time it takes B-FERL to perform the system initialization. This refers to the time it takes the upper tier validators to create a record (block) for a \textit{CAV}. Recall from Figure~\ref{fig:operation} that creating a record for a \textit{CAV} is based on the successful verification of the genesis transaction sent by the vehicle manufacturer. The results presented are the average of ten runs, and we also show the standard deviation for each given scenario. In this first evaluation, we vary the number of genesis transactions received by validators from 10 to 200 to identify how B-FERL responds to an increasing number of simultaneous transactions.
The results are presented in Figure~\ref{fig:createBlock}. Time increases linearly as the number of \textit{CAVs} increases: from 0.31 ms (standard deviation 0.12 ms) for 10 \textit{CAVs} to 0.49 ms (standard deviation 0.22 ms) for 200 \textit{CAVs}, which is only a modest increase over the 10-\textit{CAV} scenario.\\
Once the blocks are created for the \textit{CAVs}, the upper tier validators broadcast the blocks to the RSUs. In the next evaluation, we measure the time taken for an RSU to update its BC with the new block. The time required for this action is 0.06 ms for 200 \textit{CAVs}, which reflects the efficiency of B-FERL given the number of \textit{CAVs}.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{TimeToAddBlock.pdf}
\caption{Time taken to create a block}
\label{fig:createBlock}
\end{figure}
The next evaluation measured the time that each RSU takes to evaluate the challenge response. This is an important measure in our proposed solution as it reflects the time taken by an RSU to verify the authenticity of a \textit{CAV} and conduct the ECU integrity check. This process is described in steps 4 to 6 presented in Figure~\ref{fig:lowupdate}. Figure~\ref{fig:validateChallenge} presents the average time, which increases linearly from 1.37 ms (standard deviation 0.15 ms) for 10 \textit{CAVs} to 2.02 ms (standard deviation 0.72 ms) for 200 \textit{CAVs}. From the results, we can see that the actual values are small even for a large group of \textit{CAVs}.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{TimeToValidateChallenge.pdf}
\caption{Time taken to validate a challenge from vehicles}
\label{fig:validateChallenge}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{TimeMerkleTree.pdf}
\caption{Time taken to calculate Merkle tree root}
\label{fig:merkleResult}
\end{figure}
In the next evaluation, we measure the time it takes a \textit{CAV} to compute its Merkle tree root, derived from the hashes of all its ECUs. According to NXP, a semiconductor supplier for the automotive industry~\cite{NXP:2017}, the number of ECUs ranges from 30 to 100 in a modern vehicle. In this evaluation, we assume that as vehicle functions become more automated, the number of ECUs is likely to increase. Therefore, in our experiments, we vary the number of ECUs from 10 to 1,000. Figure~\ref{fig:merkleResult} presents the time to compute the Merkle tree root. The results show linear growth as the number of ECUs increases. Even when the number of ECUs in a \textit{CAV} is 1,000, the time to compute the Merkle tree root is about 12 ms, which is still an acceptable time for a highly complex scenario.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{SizeBlockchain.pdf}
\caption{Blockchain size}
\label{fig:blocksize}
\end{figure}
In the final evaluation, we consider the amount of storage required by an RSU to store the BC for different numbers of \textit{CAVs}. To get a realistic picture of the required storage, we considered the number of vehicles in New South Wales (NSW), Australia in 2018. As presented in Figure~\ref{fig:blocksize}, the number of blocks (which represents the number of vehicles) was varied from 100,000, representing a small city in NSW, to 5,600,000\footnote{5,600,000 is the number of cars in the state of New South Wales, according to www.abs.gov.au}. Based on the results, an RSU needs around 5~GB to store the BC structure for the state of New South Wales. This result shows that it is feasible for an RSU to maintain the BC for all \textit{CAVs} in NSW.
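As a back-of-envelope check on these figures (our own estimate): $5$~GB spread over $5{,}600{,}000$ blocks corresponds to roughly $5\times10^{9}/(5.6\times10^{6})\approx 900$ bytes per vehicle block, which is consistent with each block holding only a short header and two transactions, as described in the lower tier update.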
\section{Discussion}
In this section, we provide a further discussion considering the security and use cases of B-FERL, as well as a comparative evaluation of B-FERL against related work.
\subsection{Security analysis}
In this section, we discuss how our proposal demonstrates resilience against attacks described in the attack model. \\
\textbf{Fake data:} For this to occur, one or more data-generating ECUs of a \textit{CAV} would have to be compromised. We can detect this attack during the challenge-response activity between the compromised \textit{CAV} and an RSU, where the \textit{CAV} is expected to prove the integrity of its ECUs by computing its ECU Merkle tree root value. \\
\textbf{Code injection:} Actions executed by service technicians and vehicle manufacturers are stored in the upper tier and can be traced back to them. Vehicle owners are not able to alter their odometer value, as such actions would make the $SS_{ID}$ value different from the value recorded in the lower tier. \\
\textbf{Sybil attack:} The only entities capable of creating identities in the vehicular network are the verifiers in the upper tier, who are assumed to be trusted. A vehicle trying to create multiple identities must be able to create valid blocks for those identities, which is infeasible in our approach. \\
\textbf{Masquerade attack (fake vehicles):} A compromised RSU cannot create a block for a \textit{CAV}. As such, this attack is unlikely to go undetected in B-FERL. Also, a \textit{CAV} is considered valid only if its public key exists in the BC managed by the road transport and legal authorities. \\
\textbf{ECU State Reversal Attack:} We address this attack using the random ECU integrity verification challenge. By randomly requesting the hash values of ECUs in a \textit{CAV}, RSUs can detect the reversal attack by comparing the ECUs' timestamps against their entries in the lower tier BC.
Having discussed our defense mechanisms, it is noteworthy that while the utilization of a public key introduces a trade-off that compromises the privacy and anonymity of a vehicle, the public key is only utilized by an RSU to identify a vehicle in the challenge-response transaction, which ascertains the state of a vehicle and does not require the transmission of sensitive and privacy-related information.
\subsection{Use case}
In this section, we discuss the applicability of our proposed solution to the following use cases in the vehicular networks domain: (1) Vehicular forensics, (2) Trust management, and (3) Secure vehicular communication. \\
\textbf{\textit{Vehicular forensics:}} In the liability attribution model proposed for \textit{CAVs} in~\cite{Oham:2018}, liability in the event of an accident could be split amongst entities responsible for the day-to-day operation of the \textit{CAVs}, including the vehicle manufacturers, service technicians and vehicle owners. Also, the authors in~\cite{Norton:2017} have identified conditions for the attribution of liability to the aforementioned entities. The consensus is to attribute liability to the vehicle manufacturer and technicians for product defects and service failures respectively, and to the vehicle owners for negligence. In our proposed work, we keep track of authorized actions of vehicle manufacturers and service technicians in the upper tier, and so we are able to identify which entity executed the last action on the vehicle before the accident. Also, with the challenge-response between RSUs and \textit{CAVs} in the lower tier, we are able to obtain historical proof of how honest or rogue a vehicle has been in the vehicular network. Consider the \textit{CAV} in Figure 1: if an accident occurs before it enters the coverage region of an RSU, the lower tier already holds evidence, generated before the accident, that reflects the behaviour of the \textit{CAV}, and such evidence could be utilized together with the accident data captured by the vehicle to facilitate liability decisions. \\
\textbf{\textit{Trust Management:}} Trust management in vehicular networks assesses either the veracity of data generated by a vehicle or the reputation of a vehicle [19]. This information is used to evaluate trust in the network. However, existing works on trust management for vehicular networks rely significantly on the presence of witness vehicles to make trust-based decisions [19-22], and could therefore make wrong trust decisions if few or no witnesses are available. Reliance on witnesses also facilitates tactical attacks like collusion and badmouthing. In our proposal, we rely solely on data generated by a \textit{CAV}, and we can confirm the veracity of data generated or communicated by the \textit{CAV} by obtaining evidence in the lower tier from the historical challenge-response activity between the \textit{CAV} and RSUs as it travels. \\
\textbf{\textit{Secure vehicular communication networks:}} The successful execution of a malicious action by a \textit{CAV} implies that at least one of the \textit{CAV's} ECUs has been compromised, which undermines the security of the vehicular network. We describe below how our proposal suffices as an apposite security solution for vehicular networks. \\
\textbf{Identifying compromised \textit{CAVs}}: By proving the state of the ECUs in \textit{CAVs}, we can quickly identify cases of ECU tampering and broadcast a notification of a malicious presence in the vehicular network to prevent other \textit{CAVs} from communicating with the compromised \textit{CAV}. \\
\textbf{Effective revocation mechanism:} Upon the identification of a malicious \textit{CAV} during the challenge-response activity, road transport authorities could efficiently revoke the communication rights of such a compromised \textit{CAV} to prevent further damage, such as the propagation of false messages in the network by the compromised \textit{CAV}.
\subsection{Comparative evaluation}
In this section, we comparatively evaluate B-FERL against the works proposed in [9-10], [15], [23] using identified requirements for securing in-vehicle networks. \\
\textbf{Adversaries}: The identified works are vulnerable to attacks executed by authorized entities (insider attacks), but in B-FERL we address this challenge by capturing all interactions between the entities responsible for the operation of the \textit{CAV}, including the owner, manufacturer and service technician. By recording these actions in the upper tier (BC), we ensure that no entity can repudiate its actions. Furthermore, by proving the state of the ECUs in a \textit{CAV}, we are able to identify possible attacks. \\
\textbf{Decentralization:} By storing vehicle-related data as well as actions executed by manufacturers and service technicians in the BC, we ensure that no entity can alter or modify any of its actions. Also, by verifying the internal state of a \textit{CAV} as it moves from one RSU to another, we preserve the security of the vehicular network. \\
\textbf{Privacy:} By restricting access to information to only authorized entities in B-FERL, we preserve the privacy of concerned entities in our proposed framework. \\
\textbf{Safety:} By verifying the current state of a \textit{CAV} against its record in the lower tier, we ensure communication occurs only between valid and honest \textit{CAVs} which ultimately translates to secure communications in the vehicular network.
\section{Conclusion}
In this paper, we have presented a Blockchain based Framework for sEcuring smaRt vehicLes (B-FERL). The purpose of B-FERL is to identify when an ECU of a smart vehicle has been compromised by querying the internal state of the vehicle, and to escalate any identified compromise to the requisite authorities, such as the road transport and legal authority, who take the necessary measures to prevent such compromised vehicles from causing harm to the vehicular network. Given this possibility, B-FERL doubles as a detection and reaction mechanism offering adequate security to vehicles and the vehicular network. Also, we demonstrated the practical applicability of B-FERL to critical applications in the vehicular networks domain, including trust management, secure vehicular communication and vehicular forensics, where we discussed how B-FERL could offer non-repudiable and reliable evidence to facilitate liability attribution. Furthermore, by qualitatively comparing B-FERL with earlier identified works, we demonstrated how it addresses their key challenges. The security analysis also confirms B-FERL's resilience to a broad range of attacks perpetrated by adversaries, including those executed by supposedly benign internal entities. Simulation results reflect the practical applicability of B-FERL in realistic scenarios. \\
Our current proposal provides security for smart vehicles by identifying when a vehicle becomes compromised, and secures the vehicle against possible exploitation by internal adversaries. An interesting future direction would be to consider the privacy implications for a smart vehicle as it travels from one roadside unit to another.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\label{sec:start}
\subsection{The landscape}\label{sect:landsc}
In this paper, we give a combinatorial description of the structures on
which diagonal groups, including those arising in the O'Nan--Scott Theorem,
act.
This is a rich area, with links not only to finite group theory (as in the
O'Nan--Scott Theorem) but also to designed experiments, and the combinatorics
of Latin squares and their higher-dimensional generalisations. We do not
restrict our study to the finite case.
Partitions lie at the heart of this study. We express the Latin hypercubes
we need in terms of partitions, and our final structure for diagonal groups
can be regarded as a join-semilattice of partitions. Cartesian products of sets
can be described in terms of the partitions induced by the coordinate projection
maps,
and this approach was introduced into the study of primitive permutation groups
by L.~G.~Kov\'acs~\cite{kov:decomp}. He called the collection of these coordinate partitions
a ``system of product imprimitivity''. The concept was further developed
in~\cite{ps:cartesian} where the same object was called a ``Cartesian decomposition''.
In preparation for introducing the join-semilattice of partitions for the diagonal
groups, we view Cartesian decompositions as lattices of partitions of the
underlying set.
Along the way, we also discuss a number of conditions on families of partitions
that have been considered in the literature, especially the statistical
literature.
\subsection{Outline of the paper}
As said above, our aim is to describe the geometry and combinatorics underlying
diagonal groups, in general. In the O'Nan--Scott Theorem, the diagonal groups
$D(T,m)$ depend on a non-abelian simple group $T$ and a positive integer~$m$.
But these groups can be defined for an arbitrary group $T$, finite or infinite,
and we investigate them in full generality.
Our purpose is to describe the structures on which diagonal groups act. This
takes two forms: descriptive, and axiomatic. In the former, we start with a
group $T$ and a positive integer $m$, build the structure on which the group
acts, and study its properties. The axiomatic approach is captured by the
following theorem, to be proved in Section~\ref{sec:diag}. Undefined terms
such as Cartesian lattice, Latin square,
paratopism, and diagonal semilattice will be introduced later, so that when
we get to the point of proving the theorem its statement should be clear.
We mention here that the automorphism group of a Cartesian lattice is,
in the simplest case,
a wreath product of two symmetric groups in its product action, while the
automorphism group of a diagonal semilattice $\dsl Tm$ is the diagonal group $D(T,m)$;
Latin squares, on the other hand, may (and usually do) have only the trivial
group of automorphisms.
\begin{theorem}\label{thm:main}
Let $\Omega$ be a set with $|\Omega|>1$, and $m$ an integer at least $2$. Let $Q_0,\ldots,Q_m$
be $m+1$ partitions of $\Omega$ satisfying the following property: any $m$
of them are the minimal non-trivial partitions in a Cartesian lattice on
$\Omega$.
\begin{enumerate}\itemsep0pt
\item If $m=2$, then the three partitions are the row, column, and letter
partitions of a Latin square on $\Omega$, unique up to paratopism.
\item If $m>2$, then there is a group $T$, unique up to isomorphism,
such that $Q_0,\ldots,Q_m$ are the minimal non-trivial partitions in a diagonal
semilattice $\dsl Tm$ on $\Omega$.
\end{enumerate}
\end{theorem}
The case $m=3$ in Theorem~\ref{thm:main}(b) can be phrased in the language
of Latin cubes and may thus be of independent interest. The proof is in
Theorems~\ref{thm:bingo} and \ref{th:upfront} (see also
Theorem~\ref{thm:regnice}). See Section~\ref{sec:whatis} for the definition
of a regular Latin cube of sort (LC2).
\begin{theorem}
\label{thm:bingo_}
Consider a Latin cube of sort (LC2) on an underlying set~$\Omega$,
with coordinate partitions $P_1$, $P_2$ and $P_3$, and letter partition~$L$.
Then the Latin cube is regular if and only if there is a group~$T$ such that, up to relabelling the letters
and the three sets of coordinates,
$\Omega=T^3$ and $L$ is the coset partition defined
by the diagonal subgroup $\{(t,t,t) \mid t \in T\}$.
Moreover, $T$ is unique up to group isomorphism.
\end{theorem}
Theorem~\ref{thm:main}
has a similar form to the axiomatisation of projective geometry
(see \cite{vy}). We give simple axioms, and show that diagonal structures of smallest
dimension satisfying them are ``wild'' and exist in great profusion, while
higher-dimensional structures can be completely described in terms of an
algebraic object. In our case, the algebraic object is a group, whereas,
for projective geometry, it is a division ring, also called a skew field.
Note that the group emerges naturally from the combinatorial axioms.
In Section~\ref{sec:prelim}, we describe the preliminaries required.
Section~\ref{sec:Cart} revisits Cartesian decompositions, as described
in~\cite{ps:cartesian}, and defines Cartesian lattices.
Section~\ref{sec:LC} specialises to the case that $m=3$. Not only does this
show that this case is very different from $m=2$; it also underpins the
proof by induction of Theorem~\ref{thm:main}, which is given in
Section~\ref{sec:diag}.
In the last two sections, we give further results on diagonal groups. In
Section~\ref{s:pqp}, we determine which diagonal groups are primitive,
and which are quasiprimitive (these two conditions turn out to be equivalent).
In Section~\ref{s:diaggraph}, we define a graph having a given diagonal
group as its automorphism group (except for four small diagonal groups),
examine some of its graph-theoretic properties, and briefly describe the
application of this to synchronization properties of permutation groups
from~\cite{bccsz} (finite primitive diagonal groups with $m\geqslant2$ are
non-synchronizing).
The final section poses a few open problems related to this work.
\subsection{Diagonal groups}\label{sect:diaggroups}
In this section we define the diagonal groups, in two ways: a ``homogeneous''
construction, where all factors are alike but the action is on a coset space;
and an ``inhomogeneous'' version
which gives an alternative way of labelling the elements of the underlying set
which is better for calculation
even though one of the factors has to be treated differently.
Let $T$ be a group with $|T|>1$, and $m$ an integer with $m\geqslant1$. We define the
\emph{pre-diagonal group} $\widehat D(T,m)$ as the semidirect
product of $T^{m+1}$ by $\operatorname{Aut}(T)\times S_{m+1}$, where $\operatorname{Aut}(T)$ (the
automorphism group of $T$) acts in the same way on each factor, and $S_{m+1}$
(the symmetric group of degree $m+1$) permutes the factors.
Let $\delta(T,m+1)$ be the diagonal subgroup $\{(t,t,\ldots,t) \mid t\in T\}$
of $T^{m+1}$,
and $\widehat H=\delta(T,m+1)\rtimes (\operatorname{Aut}(T)\times S_{m+1})$.
We represent $\widehat D(T,m)$ as a permutation group on the set of
right cosets of $\widehat H$. If $T$ is finite, the degree of this
permutation representation is $|T|^m$. In general, the action is not
faithful, since $\delta(T,m+1)$ (acting by conjugation)
induces inner automorphisms of $T^{m+1}$, which agree with the inner
automorphisms induced by $\operatorname{Aut}(T)$.
In fact, if $m\geqslant 2$ or $T$ is non-abelian, then the kernel of the $\widehat D(T,m)$-action
is
\begin{align}\label{eq:K}
\begin{split}
\widehat K
&=\{(t,\ldots,t)\alpha\in T^{m+1}\rtimes \operatorname{Aut}(T)\mid t\in T\mbox{ and}\\
&\mbox{$\alpha$ is the
inner automorphism induced by $
t^{-1}$}\},
\end{split}
\end{align}
and so $\widehat K\cong T$. Thus, if, in addition, $T$ is finite,
then the order of the permutation group induced by
$\widehat D(T,m)$ is $|\widehat D(T,m)|/| \widehat K|=
|T|^m(|\operatorname{Aut}(T)|\times|S_{m+1}|)$. If $m=1$ and $T$ is abelian, then
the factor $S_2$ induces the inversion automorphism $t\mapsto t^{-1}$ on $T$ and
the permutation group induced by $\widehat D(T,m)$ is the holomorph
$T\rtimes \operatorname{Aut}(T)$.
We define the \emph{diagonal group} $D(T,m)$ to be the permutation group
induced by $\widehat D(T,m)$ on the set of right cosets of $\widehat H$ as above.
So $D(T,m)\cong \widehat D(T,m)/\widehat K$.
To move to a more explicit representation of $D(T,m)$,
we choose coset representatives
for $\delta(T,m+1)$ in $T^{m+1}$. A convenient choice is to number the direct
factors of
$T^{m+1}$ as $T_0,T_1,\ldots,T_m$, and use representatives of
the form $(1,t_1,\ldots,t_m)$, with $t_i\in T_i$. We will denote this
representative by $[t_1,\ldots,t_m]$, and let $\Omega$ be the set of all
such symbols. Thus, as a set, $\Omega$ is bijective with~$T^m$.
\begin{remark}\label{rem:diaggens}
Now we can describe the action of $\widehat D(T,m)$ on $\Omega$ as follows.
\begin{itemize}\itemsep0pt
\item[(I)] For $1\leqslant i\leqslant m$, the factor $T_i$ acts by right multiplication
on symbols in the $i$th position in elements of $\Omega$.
\item[(II)] $T_0$ acts by simultaneous left multiplication of all coordinates by
the inverse. This is because, for $x\in T_0$, $x$ maps the coset containing
$(1,t_1,\ldots,t_m)$ to the coset containing $(x,t_1,\ldots,t_m)$, which is
the same as the coset containing $(1,x^{-1}t_1,\ldots,x^{-1}t_m)$.
\item[(III)] Automorphisms of $T$ act simultaneously on all coordinates; but
inner automorphisms are identified with the action of elements in the diagonal
subgroup $\delta(T,m+1)$ (the element $(x,x,\ldots,x)$ maps the coset containing
$(1,t_1,\ldots,t_m)$ to the coset containing $(x,t_1x,\ldots,t_mx)$, which is
the same as the coset containing $(1,x^{-1}t_1x,\ldots,x^{-1}t_mx)$).
\item[(IV)] Elements of $S_m$ (fixing coordinate $0$) act by permuting the
coordinates in elements of $\Omega$.
\item[(V)] Consider the element of $S_{m+1}$ which transposes coordinates $0$ and~$1$.
This maps the coset containing $(1,t_1,t_2,\ldots,t_m)$ to the coset containing
the tuple $(t_1,1,t_2,\ldots,t_m)$, which
also contains
$(1,t_1^{-1},t_1^{-1}t_2,\ldots,t_1^{-1}t_m)$. So the action of this
transposition is
\[[t_1,t_2,\ldots,t_m]\mapsto[t_1^{-1},t_1^{-1}t_2,\ldots,t_1^{-1}t_m].\]
Now $S_m$ and this transposition generate $S_{m+1}$.
\end{itemize}
\end{remark}
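As a quick consistency check on (V): applying the displayed map twice sends $[t_1,t_2,\ldots,t_m]$ first to $[t_1^{-1},t_1^{-1}t_2,\ldots,t_1^{-1}t_m]$ and then to $[(t_1^{-1})^{-1},(t_1^{-1})^{-1}t_1^{-1}t_2,\ldots,(t_1^{-1})^{-1}t_1^{-1}t_m]=[t_1,t_2,\ldots,t_m]$, as befits the action of a transposition.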
By~\eqref{eq:K}, the kernel $\widehat K$ of the $\widehat D(T,m)$-action
on~$\Omega$ is
contained in the subgroup generated by elements of type (I)--(III).
For example, in the case when $m=1$, the set $\Omega$ is bijective with
$T$; the factor $T_1$ acts by right multiplication, $T_0$ acts by left
multiplication by the inverse, automorphisms act in the natural way, and
transposition of the coordinates acts as inversion.
The following theorem states that the diagonal group $D(T,m)$ can be
viewed as the automorphism group of the corresponding diagonal join-semilattice
$\dsl Tm$ and the diagonal graph $\Gamma_D(T,m)$ defined in
Sections~\ref{sec:diag1} and~\ref{sec:dgds}, respectively. The two parts of
this theorem comprise Theorem~\ref{t:autDTm} and Corollary~\ref{c:sameag}
respectively.
\begin{theorem}
Let $T$ be a non-trivial group, $m\geqslant 2$, let $\dsl Tm$ be the diagonal semilattice and
$\Gamma_D(T,m)$ the diagonal graph. Then the following are valid.
\begin{enumerate}
\item The automorphism group of $\dsl Tm$ is $D(T,m)$.
\item If $(|T|,m)\not\in\{(2,2),(3,2),(4,2),(2,3)\}$, then the automorphism group of $\Gamma_D(T,m)$ is $D(T,m)$.
\end{enumerate}
\end{theorem}
\subsection{History}
The celebrated O'Nan--Scott Theorem describes the socle (the product of the
minimal normal subgroups) of a finite permutation group. Its original form
was different; it was a necessary condition for a finite permutation group
of degree~$n$ to be a maximal subgroup of the symmetric or alternating
group of degree~$n$. Since the maximal intransitive and imprimitive subgroups
are easily described, attention focuses on the primitive maximal subgroups.
The theorem was proved independently by Michael O'Nan and Leonard Scott,
and announced by them at the Santa Cruz conference on finite groups in 1979.
(Although both papers appeared in the preliminary conference proceedings, the
final published version contained only Scott's paper.) However, the roots
of the theorem are much older; a partial result appears in Jordan's
\textit{Trait\'e des Substitutions} \cite{jordan}
in 1870. The extension to arbitrary primitive groups is due to Aschbacher
and Scott~\cite{aschsc} and independently to Kov\'acs~\cite{kov:sd}. Further
information on the history of the theorem is given in
\cite[Chapter 7]{ps:cartesian} and~\cite[Sections~1--4]{kovacs}.
For our point of view, and avoiding various complications, the theorem
can be stated as follows:
\begin{theorem}\label{thm:ons}
Let $G$ be a primitive permutation group on a finite set $\Omega$. Then one
of the following four conditions holds:
\begin{enumerate}
\item $G$ is contained in an affine group $\operatorname{AGL}(d,p)\leqslant\operatorname{Sym}(\Omega)$,
with $d\geqslant1$ and $p$ prime, and so preserves the affine geometry of
dimension $d$ over the field with $p$ elements with point set $\Omega$;
\item $G$ is contained in a wreath product in its product action contained in
$\operatorname{Sym}(\Omega)$, and so preserves a Cartesian decomposition of $\Omega$;
\item $G$ is contained in the diagonal group $D(T,m)\leqslant\operatorname{Sym}(\Omega)$,
with $T$ a non-abelian finite simple group and $m\geqslant1$;
\item $G$ is almost simple (that is, $T\leqslant G\leqslant\operatorname{Aut}(T)$, where $T$
is a non-abelian finite simple group).
\end{enumerate}
\end{theorem}
Note that, in the first three cases of the theorem, the action of the group
is specified; indeed, in the first two cases, we have a geometric or
combinatorial structure which is preserved by the group. (Cartesian
decompositions are described in detail in~\cite{ps:cartesian}.) One of our
aims in this paper is to provide a similar structure preserved by diagonal
groups, although our construction is not restricted to the case where $T$ is
simple, or even finite.
It is clear that the Classification of Finite Simple Groups had a great
effect on the applicability of the O'Nan--Scott Theorem to the study of
finite primitive permutation groups; indeed, the landscape of the subject
and its applications has been completely transformed by CFSG.
In Section~\ref{s:pqp} we characterise primitive and quasiprimitive diagonal
groups as follows.
\begin{theorem}\label{th:primaut}
Suppose that $T$ is a non-trivial group, $m\geqslant 2$, and consider $D(T,m)$
as a permutation group on $\Omega=T^{m}$. Then the following
are equivalent.
\begin{enumerate}
\item $D(T,m)$ is a primitive permutation group;
\item $D(T,m)$ is a quasiprimitive permutation group;
\item $T$ is a characteristically simple group, and, if $T$ is
an elementary abelian $p$-group, then $p\nmid(m+1)$.
\end{enumerate}
\end{theorem}
Diagonal groups and the structures they preserve have occurred in other
places too. Diagonal groups with $m=1$ (which in fact are not covered by
our analysis) feature in the paper ``Counterexamples to a theorem of Cauchy''
by Peter Neumann, Charles Sims and James Wiegold~\cite{nsw}, while
diagonal groups over the group $T=C_2$ are automorphism groups of the
\emph{folded cubes}, a class of distance-transitive graphs, see~\cite[p.~264]{bcn}.
Much less explicit information is available about related questions on infinite symmetric groups.
Some maximal subgroups of infinite symmetric groups have been associated
with structures such as subsets, partitions~\cite{braziletal,macn,macpr},
and Cartesian decompositions~\cite{covmpmek}.
However, it is still not known if infinite symmetric groups have
maximal subgroups that are analogues of the maximal subgroups of simple
diagonal type in finite symmetric or alternating groups. If $T$ is a possibly
infinite simple group, then the diagonal group $D(T,m)$ is primitive and,
by~\cite[Theorem~1.1]{uniform}, it cannot be embedded into a wreath product in
product action. On the other hand, if $\Omega$ is a countable set, then, by
\cite[Theorem~1.1]{macpr}, simple diagonal type groups are properly contained
in maximal subgroups of $\operatorname{Sym}(\Omega)$. (This containment is proper since the
diagonal group itself is not maximal; its product with the finitary symmetric
group properly contains it.)
\section{Preliminaries}
\label{sec:prelim}
\subsection{The lattice of partitions}
\label{sec:part}
A partially ordered set (often abbreviated to \textit{poset}) is a set
equipped with a partial order, which we here write as $\preccurlyeq$.
A finite poset
is often represented by a \emph{Hasse diagram}.
This is a diagram drawn as a graph in the plane. The vertices of the diagram
are the elements of the poset; if $q$ \emph{covers} $p$ (that is, if $p\prec q$
but there is no element $r$ with $p \prec r \prec q$),
there is an edge joining $p$ to~$q$,
with $q$ above $p$ in the plane (that is, with larger $y$-coordinate).
Figure~\ref{f:hasse} represents the divisors of $36$, ordered by divisibility.
\begin{figure}[htbp]
\begin{center}
\setlength{\unitlength}{1mm}
\begin{picture}(20,20)
\multiput(0,10)(5,-5){3}{\circle*{2}}
\multiput(5,15)(5,-5){3}{\circle*{2}}
\multiput(10,20)(5,-5){3}{\circle*{2}}
\multiput(0,10)(5,-5){3}{\line(1,1){10}}
\multiput(0,10)(5,5){3}{\line(1,-1){10}}
\end{picture}
\end{center}
\caption{\label{f:hasse}A Hasse diagram}
\end{figure}
In a partially ordered set with order relation $\preccurlyeq$,
we say that an element $c$ is the \emph{meet}, or \emph{infimum},
of $a$ and $b$ if
\begin{itemize}
\renewcommand{\itemsep}{0pt}
\item $c\preccurlyeq a$ and $c\preccurlyeq b$;
\item for all $d$, $d\preccurlyeq a$ and $d\preccurlyeq b$ implies
$d\preccurlyeq c$.
\end{itemize}
The meet of $a$ and $b$, if it exists, is unique; we write it $a\wedge b$.
Dually, $x$ is the \emph{join}, or \emph{supremum} of $a$ and $b$ if
\begin{itemize}
\item $a\preccurlyeq x$ and $b\preccurlyeq x$;
\item for all $y$, if $a\preccurlyeq y$ and $b\preccurlyeq y$,
then $x\preccurlyeq y$.
\end{itemize}
Again the join, if it exists, is unique, and is written $a\vee b$.
The terms ``join'' and ``supremum'' will be used interchangeably.
Likewise, so will the terms ``meet'' and ``infimum''.
In an arbitrary poset, meets and joins may not exist. A poset in which every
pair of elements has a meet and a join is called a \emph{lattice}.
A subset of a lattice which is closed under taking joins is called a
\emph{join-semilattice}.
The poset shown in Figure~\ref{f:hasse} is a lattice. Taking it as described
as the set of divisors of $36$ ordered by divisibility, meet and join are
greatest common divisor and least common multiple respectively.
In a lattice, an easy induction shows that suprema and infima of arbitrary
finite sets exist and are unique. In particular, in a finite lattice there is
a unique minimal element and a unique maximal element. (In an infinite lattice,
the existence of least and greatest elements is usually assumed. But all
lattices in this paper will be finite.)
\medskip
The most important example for us is the \emph{partition lattice} on a set
$\Omega$, whose elements are all the partitions of $\Omega$. There are
(at least) three different ways of thinking about partitions. In one
approach, used in \cite{rab:as,pjc:ctta,ps:cartesian},
a partition of
$\Omega$ is a set $P$ of pairwise disjoint subsets of $\Omega$, called \textit{parts}
or \textit{blocks}, whose union is $\Omega$.
For $\omega$ in $\Omega$, we write $P[\omega]$ for the unique part of $P$
which contains~$\omega$.
A second approach uses equivalence relations. The ``Equivalence Relation
Theorem'' \cite[Section 3.8]{pjc:ctta} asserts that, if $R$ is an equivalence
relation on a set~$\Omega$, then the equivalence classes of~$R$ form a partition
of~$\Omega$. Conversely, if $P$~is a partition of~$\Omega$ then there is a
unique equivalence relation~$R$ whose equivalence classes are the parts of~$P$.
We call $R$ the \textit{underlying equivalence relation} of~$P$. We write
$x\equiv_Py$ to mean that $x$ and $y$ lie in the same part of~$P$ (and so are
equivalent in the corresponding relation).
The third approach to partitions, as kernels of functions,
is explained near the end of this subsection.
The ordering on partitions is given by
\begin{quote}
$P\preccurlyeq Q$ if and only if every part of $P$ is contained in a part of $Q$.
\end{quote}
Note that $P\preccurlyeq Q$ if and only if $R_P\subseteq R_Q$, where $R_P$
and $R_Q$ are the equivalence relations corresponding to $P$ and $Q$, and
a relation is regarded as a set of ordered pairs.
For any two partitions $P$ and $Q$, the parts of $P\wedge Q$ are all
\emph{non-empty} intersections of a part of $P$ and a part of $Q$. The join
is a little harder to define. The two elements $\alpha$, $\beta$ in $\Omega$
lie in the same part of $P\vee Q$ if and only if there is a finite sequence
$(\omega_0,\omega_1,\ldots,\omega_m)$ of elements of $\Omega$,
with $\omega_0=\alpha$ and $\omega_m=\beta$, such that $\omega_i$ and
$\omega_{i+1}$ lie in the same part of $P$ if $i$ is even, and
in the same part of $Q$ if $i$ is odd. In other words, there is a walk of finite
length from $\alpha$ to~$\beta$ in which each step remains within a part of
either $P$ or~$Q$.
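For a small example illustrating the difference between meet and join, let $\Omega=\{1,\ldots,6\}$, and take $P$ with parts $\{1,2\}$, $\{3,4\}$, $\{5,6\}$ and $Q$ with parts $\{1,2\}$, $\{3,5\}$, $\{4,6\}$. Then $P\wedge Q$ has parts $\{1,2\}$, $\{3\}$, $\{4\}$, $\{5\}$, $\{6\}$, while $P\vee Q$ has parts $\{1,2\}$ and $\{3,4,5,6\}$: for instance, $3$ and $6$ lie in the same part of $P\vee Q$ because of the walk $3,4,6$, whose first step remains within a part of $P$ and whose second within a part of $Q$.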
In the partition lattice on $\Omega$, the unique least element is the partition
(denoted by $E$) with all parts of size~$1$,
and the unique greatest element (denoted by $U$) is
the partition with a single part $\Omega$.
In a sublattice
of this, we shall call an element \textit{minimal} if it is minimal subject
to being different from~$E$.
(Warning: in some of the literature that we cite, this partial order is
written as~$\succcurlyeq$. Correspondingly, the Hasse diagram is the other
way up and the meanings of $\wedge$ and $\vee$ are interchanged.)
For a partition~$P$, we denote by $|P|$ the number of parts of~$P$.
For example, $|P|=1$ if and only if $P=U$. In the infinite case, we interpret
$|P|$ as the cardinality of the set of parts of~$P$.
There is a connection between partitions and functions which will be important
to us. Let $F\colon\Omega\to\mathcal{T}$ be a function, where $\mathcal{T}$
is an auxiliary set. We will assume, without loss of generality,
that $F$ is onto. Associated with~$F$ is a partition of $\Omega$,
sometimes denoted by $\widetilde F$, whose
parts are the inverse images of the elements of $\mathcal{T}$; in other words,
two points of $\Omega$ lie in the same part of~$\widetilde F$ if and only if they
have the same image under~$F$. In areas of algebra such as semigroup theory
and universal algebra, the partition~$\widetilde F$ is referred to as the
\emph{kernel} of $F$.
This point of view is common in experimental design in statistics, where
$\Omega$~is the set of experimental units, $\mathcal{T}$~the set of treatments
being compared, and $F(\omega)$~is the treatment applied to the unit~$\omega$:
see~\cite{rab:design}.
For example, an element $\omega$ in $\Omega$ might be a plot in an agricultural
field, or a single run of an industrial machine, or one person for one month.
The outcomes to be measured are thought of as functions on $\Omega$,
but variables like $F$ which partition~$\Omega$ in ways that may
affect the outcome are called \textit{factors}. If $F$ is a factor, then the
values $F(\omega)$, for $\omega$ in $\Omega$, are called \textit{levels}
of~$F$. In this context,
usually no distinction is made between the function~$F$ and the
partition $\widetilde F$ of $\Omega$ which it defines.
If $F\colon\Omega\to\mathcal{T}$ and $G\colon\Omega\to\mathcal{S}$ are two
functions on $\Omega$, then the partition $\widetilde F\wedge\widetilde G$ is the
kernel of the function $F\times G\colon\Omega\to\mathcal{T}\times\mathcal{S}$,
where $(F\times G)(\omega)=(F(\omega),G(\omega))$. In other words,
$\widetilde{F\times G}=\widetilde{F}\wedge\widetilde{G}$.
\begin{defn}
One type of partition which we make use of is the (right) \emph{coset
partition} of a group relative to a subgroup. Let $H$ be a subgroup of a
group~$G$, and let $P_H$ be the partition of $G$ into right cosets of $H$.
\end{defn}
We gather a few basic properties of coset partitions.
\begin{prop}
\label{prop:coset}
\begin{enumerate}
\item
If $H$ is a normal subgroup of $G$, then $P_H$ is the kernel (in the general
sense defined earlier) of the natural homomorphism from $G$ to $G/H$.
\item
$P_H\wedge P_K=P_{H\cap K}$.
\item
$P_H\vee P_K=P_{\langle H,K\rangle}$.
\item
The map $H\mapsto P_H$ is an isomorphism from the lattice of subgroups of~$G$
to a sublattice of the partition lattice on~$G$.
\end{enumerate}
\end{prop}
\begin{proof}
(a) and (b) are clear. (c) holds because elements of $\langle H,K\rangle$
are composed of elements from $H$ and $K$. Finally, (d) follows from (b) and
(c) and the fact that the map is injective.
\end{proof}
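For a small example, take $G$ to be the cyclic group $\{0,1,\ldots,5\}$ of order $6$ under addition modulo~$6$, with $H=\langle 2\rangle=\{0,2,4\}$ and $K=\langle 3\rangle=\{0,3\}$. Then $P_H$ has parts $\{0,2,4\}$ and $\{1,3,5\}$, while $P_K$ has parts $\{0,3\}$, $\{1,4\}$ and $\{2,5\}$. Their meet is the partition into singletons, in agreement with $H\cap K=\{0\}$, and their join is $U$, in agreement with $\langle H,K\rangle=G$.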
Subgroup lattices of groups have been extensively investigated: see, for
example, Suzuki~\cite{suzuki:book}.
\subsection{Latin squares}
\label{sec:LS}
A \emph{Latin square} of order~$n$ is usually defined as an $n\times n$
array~$\Lambda$ with entries from an alphabet~$T$ of size~$n$
with the property that each letter in~$T$ occurs once in each row and once
in each column of~$\Lambda$.
The diagonal structures in this paper can be regarded as generalisations, where
the dimension is not restricted to be $2$, and the alphabet is allowed to be
infinite. To ease our way in, we re-formulate the definition as follows. For
this definition we regard $T$ as indexing the rows and columns as well as the
letters. This form of the definition allows the structures to be infinite.
A \emph{Latin square} consists of a pair of sets $\Omega$ and $T$, together
with three functions $F_1,F_2,F_3\colon\Omega\to T$, with the property that, if
$i$ and $j$ are any two of $\{1,2,3\}$, the map
$F_i\times F_j\colon\Omega\to T\times T$ is a bijection.
We recover the original definition by specifying that the $(i,j)$ entry
of~$\Lambda$ is equal to~$k$ if the unique point $\omega$ of $\Omega$ for which
$F_1(\omega)=i$ and $F_2(\omega)=j$ satisfies $F_3(\omega)=k$. Conversely,
given the original definition, if we index rows and columns with $T$, then
$\Omega$ is the set of cells of the array, and $F_1,F_2,F_3$ map a cell to its
row, column, and entry respectively.
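For example, the Cayley table of the cyclic group of order $3$, with rows, columns and letters indexed by $T=\{0,1,2\}$ and with entry $i+j \bmod 3$ in cell $(i,j)$, is the Latin square
\[
\begin{array}{c|ccc}
 & 0 & 1 & 2\\ \hline
0 & 0 & 1 & 2\\
1 & 1 & 2 & 0\\
2 & 2 & 0 & 1
\end{array}
\]
In the functional form of the definition, $\Omega$ is the set of nine cells, and the cell $\omega=(1,2)$ has $F_1(\omega)=1$, $F_2(\omega)=2$ and $F_3(\omega)=0$.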
In the second version of the definition,
the set~$T$ acts as an index set for rows, columns and
entries of the square. We will need the freedom to change the indices
independently; so we now rephrase the definition in terms of the
three partitions $P_i=\widetilde F_i$ ($i=1,2,3$).
Two partitions $P_1$ and $P_2$ of $\Omega$ form a \emph{grid} if,
for all $p_i\in P_i$ ($i=1,2$), there is a unique point of $\Omega$ lying in
both $p_1$ and $p_2$. In other words, there is a bijection $F$ from
$P_1\times P_2$ to $\Omega$ so that $F(p_1,p_2)$ is the unique point in
$p_1\cap p_2$. This implies that $P_1\wedge P_2=E$ and $P_1\vee P_2=U$, but
the converse is not true.
For example, if $\Omega = \{1,2,3,4,5,6\}$ the partitions
$P_1 =\{\{1,2\},\{3,4\},\{5,6\}\}$ and $P_2=\{\{1,3\}, \{2,5\}, \{4,6\}\}$
have these properties but do not form a grid.
Three partitions $P_1,P_2,P_3$ of $\Omega$ form a \emph{Latin square} if
any two of them form a grid.
This third version of the definition is the one that we shall mostly use
in this paper.
\begin{prop}
\label{p:order}
If $\{P_1,P_2,P_3\}$ is a Latin square on $\Omega$, then $|P_1|=|P_2|=|P_3|$,
and this cardinality is also the cardinality of any part of any of the three
partitions.
\end{prop}
\begin{proof}
Let $F_{ij}$ be the bijection from $P_i\times P_j$ to $\Omega$, for
$i,j\in\{1,2,3\}$, $i\ne j$.
For any part~$p_1$ of~$P_1$,
there is a bijection $\phi$ between $P_2$ and~$p_1$:
simply put $\phi(p_2) = F_{12}(p_1,p_2) \in p_1$ for each part $p_2$ of~$P_2$.
Similarly there is a bijection~$\psi$ between $P_3$ and $p_1$
defined by $\psi(p_3) = F_{13}(p_1,p_3) \in p_1$ for each part $p_3$ of~$P_3$.
Thus $|P_2|=|P_3|=|p_1|$, and $\psi^{-1}\phi$ is an explicit bijection
from $P_2$ to $P_3$.
Similar bijections are defined by any part~$p_2$ of $P_2$ and any part~$p_3$
of~$P_3$.
The result follows.
\end{proof}
The three partitions are usually called \emph{rows}, \emph{columns} and
\emph{letters}, and denoted by $R,C,L$ respectively. This refers to the
first definition of the Latin square as a square array of letters. Thus,
the Hasse diagram of the three partitions is shown in Figure~\ref{f:ls}.
\begin{figure}[htbp]
\begin{center}
\setlength{\unitlength}{1mm}
\begin{picture}(30,30)
\multiput(15,5)(0,20){2}{\circle*{2}}
\multiput(5,15)(10,0){3}{\circle*{2}}
\multiput(5,15)(10,10){2}{\line(1,-1){10}}
\multiput(5,15)(10,-10){2}{\line(1,1){10}}
\put(15,5){\line(0,1){20}}
\put(10,3){$E$}
\put(10,13){$C$}
\put(10,23){$U$}
\put(0,13){$R$}
\put(27,13){$L$}
\end{picture}
\end{center}
\caption{\label{f:ls}A Latin square}
\end{figure}
The number defined in Proposition~\ref{p:order} is called the \emph{order} of
the Latin square. So, with our
second definition, the order of the Latin square is $|T|$.
\medskip
Note that the number of Latin squares of order $n$ grows faster than the
exponential of $n^2$, and the vast majority of these (for large $n$) are
not Cayley tables of groups. We digress slightly to discuss this.
The number of Latin squares of order $n$ is a rapidly growing function, so
rapid that allowing for paratopism (the natural notion of isomorphism for
Latin squares, regarded as sets of partitions; see before
Theorem~\ref{thm:albert} for the definition) does not affect the leading
asymptotics. There is
an elementary proof based on Hall's Marriage Theorem that the number is at least
\[n!(n-1)!\cdots1!\geqslant(n/c)^{n^2/2}\]
for a constant $c$. The van der Waerden permanent conjecture (proved by
Egory\v{c}ev and Falikman~\cite{e:vdwpc,f:vdwpc}) improves the
lower bound to $(n/c)^{n^2}$. An elementary argument using only Lagrange's
and Cayley's Theorems shows that the number of groups of order $n$ is much
smaller; the upper bound is $n^{n\log n}$. This has been improved to
$n^{(c\log n)^2}$ by Neumann~\cite{pmn:enum}. (His theorem was conditional on a
fact about finite simple groups, which follows from the classification of these
groups.) The elementary arguments referred to, which suffice for our claim,
can be found in \cite[Sections~6.3,~6.5]{pjc:ctta}.
Indeed, much more is true: almost all Latin squares have trivial
autoparatopism groups~\cite{pjc:asymm,mw}, whereas
the autoparatopism group of the Cayley table of a group of order~$n$
is the diagonal group, which has order at least
$6n^2$, as we shall see at the end of Section~\ref{sect:lsautgp}.
\medskip
There is a graph associated with a Latin square, as follows: see
\cite{bose:SRG,pjc:rsrg,phelps}. The
vertex set is $\Omega$; two vertices are adjacent if they lie in the same part
of one of the partitions $P_1,P_2,P_3$. (Note that, if points lie in the same
part of more than one of these partitions, then the points are equal.)
This is the \emph{Latin-square graph} associated with the Latin square.
In the finite case,
if $|T|=n$, then it is a regular graph with $n^2$~vertices, valency
$3(n-1)$, in which two adjacent vertices have $n$~common neighbours and two
non-adjacent vertices have $6$ common neighbours.
Any regular finite graph with the property that the number of common
neighbours of vertices $v$ and $w$ depends only on whether or not $v$ and $w$
are adjacent is called \textit{strongly regular}: see \cite{bose:SRG,pjc:rsrg}.
Its parameters are the number of vertices, the valency, and the numbers of
common neighbours of adjacent and non-adjacent vertices respectively. Indeed,
Latin-square graphs form one of the most prolific classes of strongly regular
graphs: the number of such graphs on a square number of vertices grows faster
than exponentially, in view of Proposition~\ref{p:lsgraphaut} below.
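For instance, a Latin-square graph of order $n=3$ has parameters
\[(n^2,\,3(n-1),\,n,\,6)=(9,\,6,\,3,\,6).\]
The graph in this case is the complete tripartite graph $K_{3,3,3}$ described
below: two adjacent vertices lie in different parts and have the third part
as their set of common neighbours, while two non-adjacent vertices lie in the
same part and have all six vertices of the other two parts as common
neighbours.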
A \emph{clique} is a set of vertices, any two of which are adjacent;
a \textit{maximum clique} is a clique of the largest possible size. Thus a
maximum clique must be maximal (with respect to inclusion), but the converse
is not necessarily true. The following
result is well known; we sketch a proof.
\begin{prop}
A Latin square of order $n>4$ can be recovered uniquely from its Latin-square
graph, up to the order of the three partitions and permutations of the rows,
columns and letters.
\label{p:lsgraphaut}
\end{prop}
\begin{proof}
If $n>4$, then any clique of size greater than~$4$ is contained in a unique
clique which is a part of one of the three partitions~$P_i$ for
$i=1,2,3$. In particular, the maximum cliques are the parts of the three
partitions.
Two maximum cliques are parts of the same partition if and only if they are
disjoint (since parts of different partitions intersect in a unique point).
So we can recover the three partitions $P_i$ ($i=1,2,3$) uniquely up to order.
\end{proof}
This proof shows why the condition $n>4$ is necessary. Any Latin-square graph
contains cliques of size $3$ consisting of three cells, two in the same row,
two in the same column, and two having the same entry; and there may also be
cliques of size $4$ consisting of the cells of an \emph{intercalate}, a
Latin subsquare of order~$2$.
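For instance, in the Cayley table of the cyclic group $C_4=\{0,1,2,3\}$, the
cells $(0,0)$, $(0,2)$, $(2,0)$ and $(2,2)$ carry the letters
\[
\begin{array}{|c|c|}
\hline
0 & 2\\
\hline
2 & 0\\
\hline
\end{array}
\]
and so form an intercalate; in the Latin-square graph, these four cells are
pairwise adjacent (two pairs share a row, two pairs share a column, and the
two diagonal pairs share a letter), giving a clique of size~$4$ contained in
no part of any of the three partitions.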
We examine what happens for $n\leqslant 4$.
\begin{itemize}
\renewcommand{\itemsep}{0pt}
\item For $n=2$, the unique Latin square is the Cayley table of the group $C_2$;
its Latin-square graph is the complete graph $K_4$.
\item For $n=3$, the unique Latin square is the Cayley table of $C_3$. The
Latin-square graph is the complete tripartite graph $K_{3,3,3}$: the nine
vertices are partitioned into three parts of size~$3$, and the edges join all
pairs of points in different parts.
\item For $n=4$, there are two Latin squares up to isotopy, the Cayley tables
of the Klein group and the cyclic group. Their Latin-square graphs
are most easily identified by
looking at their complements, which are strongly regular graphs on $16$ points
with parameters $(16,6,2,2)$: that is, all vertices have valency~$6$, and any
two vertices have just two common neighbours. Shrikhande~\cite{shrikhande}
showed that there are exactly two such graphs: the $4\times4$ square lattice
graph, sometimes written as $L_2(4)$, which is the line graph~$L(K_{4,4})$
of the complete bipartite graph $K_{4,4}$; and one further graph now called
the \emph{Shrikhande graph}. See Brouwer~\cite{Brouwer} for a detailed
description of this graph.
\end{itemize}
Latin-square graphs were introduced in two seminal papers by Bruck and Bose
in the \emph{Pacific Journal of Mathematics} in 1963~\cite{bose:SRG,Bruck:net}.
A special case of Bruck's main result is that a strongly regular graph having
the parameters $(n^2, 3(n-1), n, 6)$ associated with a Latin-square graph of
order~$n$ must actually be a Latin-square graph, provided that $n>23$.
\subsection{Quasigroups}
\label{sesc:quasi}
A \emph{quasigroup} consists of a set $T$ with a binary operation $\circ$ in
which each of the equations $a\circ x=b$ and $y\circ a=b$ has a unique solution
$x$ or $y$ for any given $a,b\in T$. These solutions are denoted by
$a\backslash b$ and $b/a$ respectively.
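For example, if $(T,\circ)$ is a group, written multiplicatively, then
\[a\backslash b=a^{-1}b \quad\hbox{and}\quad b/a=ba^{-1},\]
since $a(a^{-1}b)=b$ and $(ba^{-1})a=b$.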
According to the second of our three equivalent definitions,
a quasigroup $(T,\circ)$ gives rise to a Latin square
$(F_1,F_2,F_3)$ by the rules that $\Omega=T\times T$ and,
for $(a,b)$ in $\Omega$,
$F_1(a,b)=a$, $F_2(a,b)=b$, and $F_3(a,b)=a\circ b$.
Conversely, a Latin square with rows, columns and letters indexed by a
set $T$ induces a quasigroup structure
on $T$ by the rule that, if we use the pair $(F_1,F_2)$ to identify $\Omega$
with $T\times T$, then $F_3$ maps the pair $(a,b)$ to $a\circ b$. (More
formally, $F_1(\omega)\circ F_2(\omega)=F_3(\omega)$ for all $\omega\in\Omega$.)
In terms of partitions, if $a,b\in T$, and the unique point lying in the part
of $P_1$ labelled $a$ and the part of $P_2$ labelled $b$ also lies in the
part of $P_3$ labelled~$c$, then $a\circ b=c$.
In the usual representation of a Latin square as a square array, the Latin
square is the \emph{Cayley table} of the quasigroup.
Any permutation of $T$ induces a quasigroup isomorphism, by simply
re\-labelling the elements. However, the Latin square property is also
preserved if we choose three permutations
$\alpha_1$, $\alpha_2$, $\alpha_3$ of $T$ independently and define new functions
$G_1$, $G_2$, $G_3$ by $G_i(\omega)=(F_i(\omega))\alpha_i$ for $i=1,2,3$.
(Note that we write permutations on the right, but most other functions on
the left.)
Such a triple of maps is called an
\emph{isotopism} of the Latin square or quasigroup.
We can look at this another way. Each map $F_i$ defines a partition
$P_i$ of~$\Omega$, in which two points lie in the same part if their
images under $F_i$ are equal. Permuting elements of the three image sets
independently has no effect on the partitions. So an isotopism class of
quasigroups corresponds to a Latin square (using the partition definition)
with arbitrary labellings of rows, columns and letters by $T$.
A \emph{loop} is a quasigroup with a two-sided identity. Any quasigroup is
isotopic to a loop, as observed by Albert~\cite{albert}: indeed, any element
$e$ of the quasigroup can be chosen to be the identity. (Fix a cell
containing $e$, say in row~$r_0$ and column~$c_0$; relabel each column by the
letter in its cell in row~$r_0$, and each row by the letter in its cell in
column~$c_0$. The relabelled square is the Cayley table of a loop with
identity~$e$.)
A different equivalence on Latin squares is obtained by applying
a permutation to the three functions $F_1,F_2,F_3$. Two Latin squares (or
quasigroups) are said to be \emph{conjugate}~\cite{kd} or \emph{parastrophic}
\cite{shch:quasigroups} if they are related by such a
permutation. For example, the transposition of $F_1$ and $F_2$ corresponds
(under the original definition) to transposition (as matrix) of the Latin
square. Other conjugations are slightly harder to define: for example,
the $(F_1,F_3)$ conjugate is the square in which the $(i,j)$ entry is $k$ if
and only if the $(k,j)$ entry of the original square is $i$.
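For instance, take the Cayley table of $\mathbb{Z}_3$, with $(i,j)$ entry
$i+j$ reduced modulo $3$. Its $(F_1,F_3)$ conjugate has $(i,j)$ entry $i-j$:
\[
\begin{array}{|c|c|c|}
\hline
0 & 1 & 2\\
\hline
1 & 2 & 0\\
\hline
2 & 0 & 1\\
\hline
\end{array}
\qquad\longmapsto\qquad
\begin{array}{|c|c|c|}
\hline
0 & 2 & 1\\
\hline
1 & 0 & 2\\
\hline
2 & 1 & 0\\
\hline
\end{array}
\]
Indeed, the $(k,j)$ entry of the original square is $k+j$, and this equals
$i$ precisely when $k=i-j$.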
Combining the operations of isotopism and conjugation gives the relation of
\emph{paratopism}. The paratopisms form the group $\operatorname{Sym}(T)\wr S_3$. Given a
Latin square or quasigroup, its \emph{autoparatopism group} is the group of
all those paratopisms which preserve it, in the sense that they map the set
$\{(x,y,x\circ y):x,y\in T\}$ of triples to itself. This coincides with the
automorphism group of the Latin square (as set of partitions): take $\Omega$
to be the set of triples and let the three partitions correspond to the values
in the three positions. An autoparatopism is called an \emph{autotopism} if it
is an isotopism. See \cite{paratop} for details.
In the case of groups, a conjugation can be attained
by applying a suitable isotopism, and so the following result is a direct
consequence of Albert's well-known theorem~\cite[Theorem~2]{albert}.
\begin{theorem}\label{thm:albert}
If $\Lambda$ and $\Lambda'$ are Latin squares, isotopic to Cayley tables
of groups $G$ and $G'$ respectively, and if some paratopism maps $\Lambda$
to $\Lambda'$, then the groups $G$ and $G'$ are isomorphic.
\end{theorem}
Except for a small number of exceptional cases, the autoparatopism group of
a Latin square coincides with the automorphism group of its Latin-square graph.
\begin{prop}
\label{p:autlsg}
Let $\Lambda$ be a Latin square of order $n>4$. Then the automorphism group
of the Latin-square graph of $\Lambda$ is isomorphic to the autoparatopism
group of~$\Lambda$.
\end{prop}
\begin{proof}
It is clear that autoparatopisms of $\Lambda$ induce automorphisms of its
graph. The converse follows from Proposition~\ref{p:lsgraphaut}.
\end{proof}
A question which will be of great importance to us is the following: How do
we recognise Cayley tables of groups among Latin squares? The answer is given
by the following theorem, proved in \cite{brandt,frolov}. We first need
a definition, which is given in the statement of \cite[Theorem~1.2.1]{DK:book}.
\begin{defn}
\label{def:quad}
A Latin square satisfies the \textit{quadrangle criterion} if the following
holds for all choices of $i_1$, $i_2$, $j_1$, $j_2$, $i_1'$, $i_2'$, $j_1'$
and $j_2'$:
if the letter in $(i_1,j_1)$ is equal to the letter in $(i_1',j_1')$,
the letter in $(i_1,j_2)$ is equal to the letter in $(i_1',j_2')$,
and the letter in $(i_2,j_1)$ is equal to the letter in $(i_2',j_1')$,
then the letter in $(i_2,j_2)$ is equal to the letter in $(i_2',j_2')$.
\end{defn}
In other words, any pair of rows together with any pair of columns determines
four entries of the Latin square; if two such quadruples of entries agree in
three of their four positions, then they agree in the fourth as well.
If $(T,\circ)$ is a quasigroup, it satisfies the quadrangle criterion if and
only if, for any $a_1,a_2,b_1,b_2,a_1',a_2',b_1',b_2'\in T$, if
$a_1\circ b_1=a_1'\circ b_1'$, $a_1\circ b_2=a_1'\circ b_2'$, and
$a_2\circ b_1=a_2'\circ b_1'$, then $a_2\circ b_2=a_2'\circ b_2'$.
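For instance, the Cayley table of any group satisfies the quadrangle
criterion: writing the operation multiplicatively, the identity
\[a_2b_2=(a_2b_1)(a_1b_1)^{-1}(a_1b_2)\]
expresses $a_2b_2$ in terms of the three products which are assumed equal to
their primed counterparts, whence
$a_2b_2=(a_2'b_1')(a_1'b_1')^{-1}(a_1'b_2')=a_2'b_2'$.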
\begin{theorem}
\label{thm:frolov}
Let $(T,\circ)$ be a quasigroup. Then $(T,\circ)$ is isotopic to a group if
and only if it satisfies the quadrangle criterion.
\end{theorem}
In \cite{DK:book}, the ``only if'' part of this result is proved in its
Theorem 1.2.1 and the converse is proved in the text following Theorem~1.2.1.
A Latin square which satisfies the quadrangle criterion is called a
\textit{Cayley matrix} in~\cite{DOP:quad}.
If $(T, \circ)$ is isotopic to a group then we may assume that the rows,
columns and letters have been labelled in such a way that $a \circ b= a^{-1}b$
for all $a$, $b$ in~$T$. We shall use this format in the proof of
Theorems~\ref{t:autDT2} and~\ref{thm:bingo}.
\subsection{Automorphism groups}\label{sect:lsautgp}
Given a Latin square $\Lambda=\{R,C,L\}$ on a set $\Omega$, an
\emph{automorphism} of $\Lambda$ is a permutation of $\Omega$ preserving
the set of three partitions; it is a \emph{strong automorphism} if it
fixes the three partitions individually. (These maps are also called
\emph{autoparatopisms} and \emph{autotopisms}, as noted in the preceding
section.)
We will generalise this definition later, in Definition~\ref{def:weak}.
We denote the groups of automorphisms and strong automorphisms by
$\operatorname{Aut}(\Lambda)$ and $\operatorname{SAut}(\Lambda)$ respectively.
In this section we verify that, if $\Lambda$ is the Cayley table of a group
$T$, then $\operatorname{Aut}(\Lambda)$ is the diagonal group $D(T,2)$ defined in
Section~\ref{sect:diaggroups}.
We begin with a principle which we will use several times.
\begin{prop}
Suppose that the group $G$ acts transitively on a set~$\Omega$.
Let $H$ be a subgroup of $G$, and assume that
\begin{itemize}
\renewcommand{\itemsep}{0pt}
\item $H$ is also transitive on $\Omega$;
\item $G_\alpha=H_\alpha$, for some $\alpha\in\Omega$.
\end{itemize}
Then $G=H$.
\label{p:subgp}
\end{prop}
\begin{proof}
The transitivity of $H$ on $\Omega$ means that we can choose a set $X$ of
coset representatives for $G_\alpha$ in $G$ such that $X\subseteq H$. Then
$H=\langle H_\alpha,X\rangle=\langle G_\alpha,X\rangle=G$.
\end{proof}
The next result applies to any Latin square. As noted earlier, given a
Latin square $\Lambda$, there is a loop $Q$ whose Cayley table is $\Lambda$.
\begin{prop}
Let $\Lambda$ be the Cayley table of a loop $Q$ with identity $e$. Then
the subgroup $\operatorname{SAut}(\Lambda)$ fixing the cell in row and
column $e$ is equal to the automorphism group of $Q$.
\label{p:autlatin}
\end{prop}
\begin{proof}
A strong automorphism of $\Lambda$ is given by an isotopism $(\rho,\sigma,\tau)$
of $Q$, where $\rho$, $\sigma$, and $\tau$ are permutations of rows, columns
and letters, satisfying
\[(ab)\tau=(a\rho)(b\sigma)\]
for all $a,b\in Q$. If this isotopism fixes the element $(e,e)$ of $\Omega$,
then substituting
$a=e$ in the displayed equation shows that $b\tau=b\sigma$ for all $b\in Q$,
and so $\tau=\sigma$. Similarly, substituting $b=e$ shows that $\tau=\rho$.
Now the displayed equation shows that $\tau$ is an automorphism of $Q$.
Conversely, if $\tau$ is an automorphism of $Q$, then $(\tau,\tau,\tau)$ is
a strong automorphism of $\Lambda$ fixing the cell $(e,e)$.
\end{proof}
\begin{theorem}
Let $\Lambda$ be the Cayley table of a group $T$. Then $\operatorname{Aut}(\Lambda)$
is the diagonal group $D(T,2)$.
\label{t:autDT2}
\end{theorem}
\begin{proof}
First, we show that $D(T,2)$ is a subgroup of $\operatorname{Aut}(\Lambda)$.
We take $\Omega=T\times T$ and
represent $\Lambda=\{R,C,L\}$ as follows, using notation introduced in
Section~\ref{sec:part}:
\begin{itemize}
\renewcommand{\itemsep}{0pt}
\item $(x,y)\equiv_R(u,v)$ if and only if $x=u$;
\item $(x,y)\equiv_C(u,v)$ if and only if $y=v$;
\item $(x,y)\equiv_L(u,v)$ if and only if $x^{-1}y=u^{-1}v$.
\end{itemize}
(As an array, we take the $(x,y)$ entry to be $x^{-1}y$. As noted at the end
of Section~\ref{sesc:quasi}, this
is isotopic to the usual representation of the Cayley table.)
Routine verification shows that the generators of $D(T,2)$ given in
Section~\ref{sect:diaggroups} of types (I)--(III) preserve these
relations, while the map $(x,y)\mapsto(y,x)$ interchanges $R$ and $C$
while fixing $L$, and the map $(x,y)\mapsto(x^{-1},x^{-1}y)$ interchanges $C$
and $L$ while fixing $R$. (Here is one case: the element $(a,b,c)$ in $T^3$ maps
$(x,y)$ to $(a^{-1}xb,a^{-1}yc)$. If $x=u$ then $a^{-1}xb=a^{-1}ub$, and
if $x^{-1}y=u^{-1}v$ then $(a^{-1}xb)^{-1}a^{-1}yc=(a^{-1}ub)^{-1}a^{-1}vc$.)
Thus $D(T,2)\leqslant\operatorname{Aut}(\Lambda)$.
Now we apply Proposition~\ref{p:subgp} in two stages.
\begin{itemize}
\item First, take $G=\operatorname{Aut}(\Lambda)$ and $H=D(T,2)$. Then $G$ and $H$ both induce
$S_3$ on the set of three partitions; so it suffices to prove that the
group of strong automorphisms of $\Lambda$ is generated by elements of
types (I)--(III) in $D(T,2)$.
\item Second, take $G$ to be $\operatorname{SAut}(\Lambda)$,
and $H$ the group generated by translations and automorphisms of $T$
(the elements of type (I)--(III) in Remark~\ref{rem:diaggens}). Both $G$
and $H$ act transitively on $\Omega$, so it is enough to show that the
stabilisers of a cell (which we can take to be $(1,1)$) in $G$ and $H$ are
equal. Consideration of elements of types (I)--(III)
shows that $H_{(1,1)}=\operatorname{Aut}(T)$,
while Proposition~\ref{p:autlatin} shows that $G_{(1,1)}=\operatorname{Aut}(T)$.
\end{itemize}
The statement at the end of the second stage completes the proof.
\end{proof}
It follows from Proposition~\ref{p:lsgraphaut}
that, if $n>4$, the automorphism group of
the Latin-square graph derived from the Cayley table of a group $T$ of order~$n$
is also the diagonal group $D(T,2)$. For $n\leqslant4$, we described the
Latin-square graphs at the end of Section~\ref{sec:LS}. For the groups $C_2$,
$C_3$, and $C_2\times C_2$, the graphs are $K_4$, $K_{3,3,3}$, and the
complement of $L(K_{4,4})$ respectively; since a graph and its complement
have the same automorphism group, the automorphism groups are $S_4$,
$S_3\wr S_3$, and $S_4\wr S_2$ respectively. However, for $C_4$ the
Latin-square graph is the complement of the Shrikhande graph, and its
automorphism group is the diagonal group $D(C_4,2)$, of order $192$:
the full automorphism group has order $192$ (see Brouwer~\cite{Brouwer}),
and it contains $D(C_4,2)$, which also has order $192$, as a subgroup.
It also follows from Proposition~\ref{p:lsgraphaut} that,
if $T$ is a group, then the automorphism group of
the Latin-square graph is transitive on the vertex set. Vertex-transitivity
does not, however,
characterise Latin-square graphs that correspond to groups, as can be
seen by considering the examples in~\cite{wanlesspage}; the smallest example
which is not a group has order~$6$.
Finally, we justify the assertion made earlier, that the Cayley table of a
group of order $n$, as a Latin square, has at least $6n^2$ automorphisms. By
Theorem~\ref{t:autDT2}, this automorphism group is the diagonal group
$D(T,2)$; this group has a quotient $S_3$ acting on the three partitions, and
the group of strong automorphisms contains the right multiplications by
elements of $T^2$.
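Indeed, the right multiplications $(x,y)\mapsto(xb,yc)$, for $(b,c)\in T^2$,
are pairwise distinct strong automorphisms, so
\[|D(T,2)|\geqslant|S_3|\cdot|T^2|=6n^2.\]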
\subsection{More on partitions}\label{sect:moreparts}
Most of the work that we cite in this subsection has been about partitions of
finite sets.
See \cite[Sections 2--4]{rab:BCC} for a recent summary of this material.
\begin{defn}
\label{def:uniform}
A partition~$P$ of a set~$\Omega$ is \emph{uniform} if all its
parts have the same size in the sense that, whenever $\Gamma_1$
and $\Gamma_2$ are parts of $P$, there is a bijection from $\Gamma_1$
onto $\Gamma_2$.
\end{defn}
Many other words are used for this property for finite sets $\Omega$.
Tjur \cite{tjur84,tjur91} calls
such a partition \emph{balanced}. Behrendt \cite{behr} calls them
\emph{homogeneous}, but this conflicts with the use of this word
in \cite{ps:cartesian}. Duquenne \cite{duq} calls them \textit{regular},
as does Aschbacher~\cite{asch_over}, while Preece \cite{DAP:Oz} calls them
\emph{proper}.
Statistical work has made much use of the notion of orthogonality between
pairs of partitions. Here we explain it in the finite case, before
attempting to find a generalisation that works for infinite sets.
When $\Omega$ is finite, let $V$ be the real vector space $\mathbb{R}^\Omega$
with the usual inner product. Subspaces $V_1$ and $V_2$ of $V$ are defined
in \cite{tjur84} to be \textit{geometrically orthogonal} to each other if
$V_1 \cap(V_1 \cap V_2)^\perp \perp V_2 \cap(V_1\cap V_2)^\perp$.
This is equivalent to saying that the matrices $M_1$ and $M_2$ of orthogonal
projection onto $V_1$ and $V_2$ commute.
If $V_i$ is the set of vectors which are constant on each part of partition
$P_i$ then we say that partition $P_1$ is \textit{orthogonal} to partition $P_2$
if $V_1$ is geometrically orthogonal to $V_2$.
Here are two nice results in the finite case. See, for example,
\cite[Chapter 6]{rab:as}, \cite[Chapter 10]{rab:design} and \cite{tjur84}.
\begin{theorem}
For $i=1$, $2$, let $P_i$ be a partition of the finite set $\Omega$ with
projection matrix $M_i$. If $P_1$ is orthogonal to $P_2$ then the matrix
of orthogonal projection onto the subspace consisting of those
vectors which are constant on each part of the partition $P_1 \vee P_2$ is
$M_1M_2$.
\end{theorem}
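Concretely, $M_i$ is the averaging operator: $(M_iv)(\omega)$ is the mean of
the values of $v$ over the part $P_i[\omega]$. For instance, if $P_1$ and
$P_2$ form a grid on a finite set, then each part of either partition
contains exactly one point of each part of the other, so averaging over the
parts of $P_2$ and then over the parts of $P_1$, in either order, yields the
overall mean: thus
\[M_1M_2=M_2M_1=M_U,\]
the projection onto the constant vectors. This agrees with the theorem,
since $P_1\vee P_2=U$ for a grid.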
\begin{theorem}
\label{thm:addon}
If $P_1$, $P_2$ and $P_3$ are pairwise orthogonal partitions of a finite
set $\Omega$ then $P_1\vee P_2$ is orthogonal to $P_3$.
\end{theorem}
Let $\mathcal{S}$ be a set of partitions of $\Omega$ which are pairwise
orthogonal. A consequence of Theorem~\ref{thm:addon} is that, if $P_1$ and
$P_2$ are in $\mathcal{S}$, then $P_1 \vee P_2$ can be added to $\mathcal{S}$
without destroying orthogonality. This is one motivation for the
following definition.
\begin{defn}
\label{def:tjur}
A set of partitions of a finite set $\Omega$ is a \emph{Tjur block structure}
if every pair of its elements is orthogonal, it is closed under taking
suprema, and it contains $E$.
\end{defn}
Thus the set of partitions in a Tjur block structure forms a join-semi\-lattice.
The following definition is more restrictive, but is widely used by
statisticians, based on the work of many people, including
Nelder \cite{JAN:OBS},
Throckmorton \cite{Thr61} and Zyskind \cite{Zy62}.
\begin{defn}
A set of partitions of a finite set $\Omega$ is an \emph{orthogonal
block structure} if it is a Tjur block structure, all of its partitions
are uniform, it is closed under taking infima, and it contains $U$.
\end{defn}
The set of partitions in an orthogonal block structure forms a lattice.
These notions have been used by combinatorialists and group theorists as
well as statisticians. For example, as explained in Section~\ref{sec:LS},
a Latin square can be regarded as an orthogonal block structure with the
partition lattice shown in Figure~\ref{f:ls}.
The following theorem shows how subgroups of a group can give rise to a Tjur
block structure: see \cite[Section 8.6]{rab:as} and
Proposition~\ref{prop:coset}(c).
\begin{theorem}
Given two subgroups $H$, $K$ of a finite group $G$, the partitions
$P_H$ and $P_K$ into right
cosets of $H$ and $K$ are orthogonal if and only if $HK=KH$ (that is, if and
only if $HK$ is a subgroup of $G$). If this happens, then the join of these
two partitions is the partition $P_{HK}$ into right cosets of $HK$.
\end{theorem}
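For example, in $G=S_3$, take $H=\langle(1\,2)\rangle$ and
$K=\langle(1\,3)\rangle$. Then
\[HK=\{1,(1\,2),(1\,3),(1\,2)(1\,3)\}\ne\{1,(1\,2),(1\,3),(1\,3)(1\,2)\}=KH,\]
since $(1\,2)(1\,3)\ne(1\,3)(1\,2)$; equally, $|HK|=4$ does not divide
$|G|=6$, so $HK$ is not a subgroup. Hence the partitions of $S_3$ into right
cosets of $H$ and of $K$ are not orthogonal.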
An orthogonal block structure is called a \textit{distributive block structure}
or a \textit{poset block structure} if each of $\wedge$ and $\vee$ is
distributive over the other.
The following definition is taken from \cite{rab:as}.
\begin{defn}
\label{def:weak}
An \textit{automorphism} of a set of
partitions is a permutation of the underlying set that preserves the set of
partitions. Such an automorphism is a \textit{strong automorphism} if it
preserves each of the partitions.
\end{defn}
The group of strong automorphisms of a poset block structure
is a \textit{generalised wreath product} of symmetric groups: see
\cite{GWP,tulliobook}. One of the aims of the present paper is to
describe the automorphism group of the set of partitions defined by a
diagonal semilattice.
In \cite{CSCPWT}, Cheng and Tsai state that the desirable properties of
a collection
of partitions of a finite set are that it is a Tjur block structure,
all the partitions are uniform, and it contains $U$. This sits between Tjur
block structures and orthogonal block structures but does not seem to have been
named.
Of course, this theory needs a notion of inner product. If the set is
infinite we
would have to consider the vector space whose vectors have all but finitely
many entries zero. But if $V_i$ is the set of vectors which are constant on
each part of partition $P_i$ and if each part of $P_i$ is infinite then $V_i$
is the zero subspace. So we need to find a different definition that will
cover the infinite case.
We noted in Section~\ref{sec:part} that each partition is defined by its
underlying equivalence relation. If $R_1$ and $R_2$ are two equivalence
relations on $\Omega$ then their composition $R_1 \circ R_2$ is the relation
defined by
\[
\omega _1 (R_1 \circ R_2) \omega_2\mbox{ if and only if }
\exists \omega_3\in\Omega\mbox{ such that } \omega_1 R_1 \omega_3\mbox{ and }\omega_3 R_2 \omega_2.
\]
\begin{prop}
\label{prop:commeq}
Let $P_1$ and $P_2$ be partitions of $\Omega$ with underlying equivalence
relations $R_1$ and $R_2$ respectively. For each part $\Gamma$ of $P_1$,
denote by $\mathcal{B}_\Gamma$ the set of parts of $P_2$ whose intersection
with $\Gamma$ is not empty.
The following are equivalent.
(Recall that $P[\omega]$ is the part of $P$ containing $\omega$.)
\begin{enumerate}
\item
The equivalence relations $R_1$ and $R_2$ commute with each other
in the sense that
$R_1 \circ R_2 = R_2 \circ R_1$.
\item The relation $R_1 \circ R_2$ is an equivalence relation.
\item For all $\omega_1$ and $\omega_2$ in $\Omega$, the set
$P_1[\omega_1] \cap P_2[\omega_2]$ is non-empty if and only if the set
$P_2[\omega_1]\cap P_1[\omega_2]$ is non-empty.
\item
Modulo the parts of $P_1 \wedge P_2$, the restrictions of $P_1$
and $P_2$ to any part of $P_1 \vee P_2$ form a grid.
In other words, if $\Gamma$ and $\Xi$ are parts of $P_1$ and $P_2$
respectively, both contained in the same part of $P_1\vee P_2$, then
$\Gamma \cap \Xi \ne \emptyset$.
\item For all parts $\Gamma$ and $\Delta$ of $P_1$, the sets
$\mathcal{B}_\Gamma$ and $\mathcal{B}_\Delta$ are either equal or disjoint.
\item If $\Gamma$ is a part of $P_1$ contained in a part $\Theta$
of $P_1\vee P_2$ then $\Theta$ is the union of the parts of $P_2$
in $\mathcal{B}_\Gamma$.
\end{enumerate}
\end{prop}
In part (d), ``modulo the parts of $P_1\wedge P_2$'' means that, if each of
these parts is contracted to a point, the result is a grid as defined earlier.
In the finite case, if $P_1$ is orthogonal to $P_2$ then their underlying
equivalence relations $R_1$ and $R_2$ commute.
We need a concept that is the same as orthogonality in the
finite case (at least, in the Cheng--Tsai case).
\begin{defn}
\label{def:compatible}
Two uniform partitions $P$ and $Q$ of a set $\Omega$ (which may be finite or
infinite) are \emph{compatible} if
\begin{enumerate}
\item their underlying equivalence relations commute, and
\item their infimum $P\wedge Q$ is uniform.
\end{enumerate}
\end{defn}
If the partitions $P$, $Q$ and $R$ of a set $\Omega$ are pairwise
compatible then the equivalence of statements (a) and (f) of
Proposition~\ref{prop:commeq}
shows that
$P\vee Q$ and $R$ satisfy condition~(a) in
the definition of compatibility. Unfortunately, they may not satisfy
condition~(b), as the following example shows,
so the analogue of Theorem~\ref{thm:addon} for compatibility is not true in
general. However, it is true if we restrict attention to join-semilattices
of partitions where all infima are uniform. This is the case for
Cartesian lattices and for semilattices defined
by diagonal structures (whose definitions follow in
Sections~\ref{sec:firstcd} and \ref{sec:diag1} respectively).
It is also true for group semilattices: if $P_H$ and $P_K$ are the
partitions of a group $G$ into right cosets of subgroups $H$ and $K$
respectively, then $P_H\wedge P_K = P_{H \cap K}$,
as remarked in Proposition~\ref{prop:coset}.
\begin{eg}
\label{eg:badeg}
Let $\Omega$ consist of the $12$ cells in the three $2 \times 2$ squares
shown in Figure~\ref{fig:badeg}. Let $P$ be the partition of $\Omega$
into six rows, $Q$ the partition into six columns, and $R$ the partition
into six letters.
\begin{figure}
\[
\begin{array}{c@{\qquad}c@{\qquad}c}
\begin{array}{|c|c|}
\hline
A & B\\
\hline
B & A\\
\hline
\end{array}
&
\begin{array}{|c|c|}
\hline
C & D\\
\hline
E & F\\
\hline
\end{array}
&
\begin{array}{|c|c|}
\hline
C & D\\
\hline
E & F\\
\hline
\end{array}
\end{array}
\]
\caption{Partitions in Example~\ref{eg:badeg}}
\label{fig:badeg}
\end{figure}
Then $P\wedge Q = P\wedge R = Q \wedge R=E$, so each infimum is uniform.
The squares are the parts of the supremum $P\vee Q$.
For each pair of $P$, $Q$ and~$R$, their
underlying equivalence relations commute. However, the parts
of $(P\vee Q)\wedge R$ in the first square have size two, while all of the
others have size one.
\end{eg}
\section{Cartesian structures}
\label{sec:Cart}
We remarked just before Proposition~\ref{p:order} that three partitions of
$\Omega$ form a Latin square if and only if any two form a grid. The main
theorem of this paper is a generalisation of this fact to higher-dimensional
objects, which can be regarded as Latin hypercubes. Before
we get there, we need to consider the higher-dimensional analogue of grids.
\subsection{Cartesian decompositions and Cartesian lattices}
\label{sec:firstcd}
Cartesian decompositions are defined on \cite[p.~4]{ps:cartesian}. Since we
shall be taking a slightly different approach, we introduce these objects
rather briefly; we show that they are equivalent to those in our approach,
in the sense that each can be constructed from the other in a standard way,
and the automorphism groups of corresponding objects are the same.
\begin{defn}
\label{def:cart}
A \emph{Cartesian decomposition} of a set~$\Omega$, of dimension~$n$, is a
set~$\mathcal{E}$ of $n$ partitions $P_1,\ldots,P_n$ of $\Omega$ such that
$|P_i|\geqslant2$ for all $i$, and for all $p_i\in P_i$ for $i=1,\ldots,n$,
\[|p_1\cap\cdots\cap p_n|=1.\]
A Cartesian decomposition is \emph{trivial} if $n=1$; in this case $P_1$ is
the partition of $\Omega$ into singletons.
\end{defn}
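Note that a Cartesian decomposition of dimension~$2$ is precisely a grid, as
defined in Section~\ref{sec:LS}: the defining condition says that each part
of $P_1$ meets each part of $P_2$ in a unique point.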
For the rest of this subsection, $P_1,\ldots,P_n$ form a Cartesian decomposition
of $\Omega$.
\begin{prop}\label{prop:CDbij}
There is a well-defined bijection between $\Omega$ and
$P_1\times\cdots\times P_n$, given by
\[\omega\mapsto(p_1,\ldots,p_n)\]
if and only if $\omega\in p_i$ for $i=1,\ldots,n$.
\end{prop}
For simplicity, we adapt the notation in Section~\ref{sec:part} by
writing $\equiv_i$ for the equivalence relation $\equiv_{P_i}$ underlying
the partition~$P_i$.
For any subset $J$ of the index set $\{1,\ldots,n\}$, define a partition
$P_J$ of $\Omega$ corresponding to the following equivalence relation
$\equiv_{P_J}$ written as $\equiv_J$:
\[\omega_1\equiv_J\omega_2 \Leftrightarrow (\forall i\in J)\
\omega_1\equiv_i\omega_2.\]
In other words, $P_J=\bigwedge_{i\in J}P_i$.
\begin{prop}
\label{p:antiiso}
For all $J,K\subseteq \{1,\ldots,n\}$, we have
\[P_{J\cup K}=P_J\wedge P_K,\quad\hbox{and}\quad P_{J\cap K}=P_J\vee P_K.\]
Moreover, the equivalence relations $\equiv_J$ and $\equiv_K$ commute with
each other.
\end{prop}
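For example, $P_\emptyset=U$ and $P_{\{1,\ldots,n\}}=E$; and, when $n=3$,
\[P_{\{1,2\}}\wedge P_{\{2,3\}}=P_{\{1,2,3\}}=E
\quad\hbox{and}\quad
P_{\{1,2\}}\vee P_{\{2,3\}}=P_{\{2\}}=P_2.\]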
It follows from this proposition that the partitions $P_J$, for
$J\subseteq\{1,\ldots,n\}$, form a lattice (a sublattice of the partition
lattice on $\Omega$), which is anti-isomorphic to the Boolean lattice of
subsets of $\{1,\ldots,n\}$ by the map $J\mapsto P_J$. We call this lattice
the \emph{Cartesian lattice} defined by the Cartesian decomposition.
For more details we refer to the book~\cite{ps:cartesian}.
Following \cite{JAN:OBS},
most statisticians would call such a lattice a \textit{completely crossed
orthogonal block structure}: see \cite{rab:DCC}.
It is called a \textit{complete factorial structure} in \cite{RAB:LAA}.
(Warning: a different common meaning of \textit{Cartesian lattice} is
$\mathbb{Z}^n$: for example, see \cite{Rand:CL}.)
The $P_i$ are the maximal non-trivial elements of this lattice. Our approach is
based on considering the dual description, the minimal non-trivial elements of
the lattice; these are the partitions $Q_1,\ldots,Q_n$, where
\[Q_i=P_{\{1,\ldots,n\}\setminus\{i\}}=\bigwedge_{j\ne i}P_j\]
and $Q_1,\ldots,Q_n$ generate the Cartesian lattice by repeatedly forming
joins (see Proposition~\ref{p:antiiso}).
\subsection{Hamming graphs and Cartesian decompositions}
\label{sec:HGCD}
The Hamming graph is so-called because of its use in coding theory. The
vertex set is the set of all $n$-tuples over an alphabet $A$; more briefly,
the vertex set is $A^n$. Elements of $A^n$ will be written as
${a}=(a_1,\ldots,a_n)$. Two vertices $a$ and $b$ are joined if
they agree in all but one coordinate, that is, if
there exists~$i$ such that $a_i\ne b_i$ but $a_j=b_j$ for $j\ne i$.
We denote this graph by $\operatorname{Ham}(n,A)$.
The alphabet $A$ may be finite or infinite, but we restrict the number~$n$
to be finite. There is a more general form, involving alphabets
$A_1,\ldots,A_n$; here the $n$-tuples $a$ are required to satisfy $a_i\in A_i$
for $i=1,\ldots,n$ (that is, the vertex set is $A_1\times\cdots\times A_n$);
the adjacency rule is the same. We will call this a \emph{mixed-alphabet
Hamming graph}, denoted $\operatorname{Ham}(A_1,\ldots,A_n)$.
A Hamming graph is connected, and the graph distance between two vertices
$a$ and $b$ is the number of coordinates where they differ:
\[d({a},{b})=|\{i\mid a_i\ne b_i\}|.\]
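For example, $\operatorname{Ham}(2,A)$ is the square lattice graph on vertex
set $A\times A$: two distinct vertices are adjacent if and only if they agree
in exactly one coordinate, and any two vertices are at distance at most~$2$.
For $|A|=4$ this is the graph $L_2(4)=L(K_{4,4})$ met at the end of
Section~\ref{sec:LS}.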
\begin{theorem}\label{th:cdham}
\begin{enumerate}
\item Given a Cartesian decomposition of~$\Omega$, a unique mixed-alpha\-bet
Hamming graph can be constructed from it.
\item Given a mixed-alphabet Hamming graph on $\Omega$, a unique Cartesian
decomposition of~$\Omega$ can be constructed from it.
\item The Cartesian decomposition and the Hamming graph referred to above
have the same automorphism group.
\end{enumerate}
\end{theorem}
The constructions from Cartesian decomposition to Hamming graph and back are
specified in the proof below.
\begin{proof}
Note that the trivial Cartesian decomposition of $\Omega$ corresponds to the complete
graph and the automorphism group of both is the symmetric group $\operatorname{Sym}(\Omega)$.
Thus in the rest of the proof we assume that the Cartesian decomposition
in item~(a) is non-trivial and the Hamming graph in item~(b) is constructed with
$n\geqslant 2$.
\begin{enumerate}
\item
Let $\mathcal{E}=\{P_1,\ldots,P_n\}$ be a Cartesian decomposition
of $\Omega$ of dimension~$n$:
each $P_i$ is a partition of $\Omega$. By Proposition~\ref{prop:CDbij},
there is a bijection $\phi$ from $\Omega$ to
$P_1\times\cdots\times P_n$: a point $a$ in $\Omega$ corresponds to
$(p_1,\ldots,p_n)$, where $p_i$ is the part of $P_i$ containing~$a$.
Also, by Proposition~\ref{p:antiiso} and the subsequent discussion,
the minimal partitions in the
Cartesian lattice generated by $P_1,\ldots,P_n$ have the form
\[Q_i=\bigwedge_{j\ne i}P_j\]
for $i=1,\ldots,n$; so $a$ and $b$ in $\Omega$ lie in the same part
of $Q_i$ if their
images under $\phi$ agree in all coordinates except the $i$th. So, if we define
$a$ and $b$ to be adjacent if they are in the same part of $Q_i$ for some
$i$, the resultant graph is isomorphic (by $\phi$) to the mixed-alphabet
Hamming graph on $P_1\times\cdots\times P_n$.
\item
Let $\Gamma$ be a mixed-alphabet Hamming graph on
$A_1\times\cdots\times A_n$. Without loss of generality, $|A_i|>1$ for all $i$
(we can discard any coordinate where this fails). We establish various facts
about $\Gamma$; these facts correspond to the claims on pages 271--276 of~\cite{ps:cartesian}.
Any maximal clique in $\Gamma$ has the form
\[C({a},i)=\{{b}\in A_1\times\cdots\times A_n\mid b_j=a_j\hbox{ for }j\ne i\},\]
for some ${a}\in\Omega$, $i\in\{1,\ldots,n\}$. Clearly all vertices in
$C({a},i)$ are adjacent in~$\Gamma$. If ${b},{c}$ are distinct vertices in
$C({a},i)$, then
$b_i\ne c_i$, so no vertex outside $C({a},i)$ can be joined to both.
Moreover, if any two vertices are joined, they differ in a unique coordinate
$i$, and so there is some $a$ in $\Omega$ such that
they both lie in $C({a},i)$ for that value of~$i$.
Let $C=C({a},i)$ and $C'=C({b},j)$ be two maximal cliques.
Put $\delta = \min\{d({ x},{ y})\mid { x}\in C,{ y}\in C'\}$.
\begin{itemize}
\item
If $i=j$, then there is a bijection $\theta\colon C\to C'$ such
that $d({ v},\theta({ v}))=\delta$ and
$d({v},{ w})=\delta +1$ for ${ v}$ in $C$, ${ w}$ in $C'$ and
${ w}\ne\theta({v})$.
(Here $\theta$ maps a vertex in $C$ to the unique vertex
in $C'$ with the same $i$th coordinate.)
\item If $i\ne j$, then there are unique ${ v}$ in $C$ and ${ w}$ in $C'$ with
$d({ v},{ w})= \delta$;
and distances between vertices in
$C$ and $C'$ are $\delta$, $\delta+1$ and $\delta+2$,
with all values realised. (Here ${ v}$ and ${ w}$ are
the vertices which agree in both the $i$th and $j$th coordinates; if two
vertices agree in just one of these, their distance is $\delta+1$, otherwise it
is $\delta+2$.)
\end{itemize}
See also claims 3--4 on pages 273--274 of~\cite{ps:cartesian}.
It is a consequence of the above that the partition of the maximal cliques into \emph{types}, where
$C({a},i)$ has type $i$, is invariant under graph automorphisms; each type forms a
partition $Q_i$ of $\Omega$.
By Proposition~\ref{p:antiiso} and the discussion following it, the maximal non-trivial partitions in the
sublattice generated by $Q_1,\ldots,Q_n$ form a Cartesian decomposition
of~$\Omega$.
\item This is clear, since no arbitrary choices were made in either construction.
\end{enumerate}
\end{proof}
We can describe this automorphism group precisely. Details will be given
in the case where all alphabets are the same; we deal briefly with the
mixed-alphabet case at the end.
Given a set $\Omega=A^n$, the wreath product $\operatorname{Sym}(A)\wr S_n$ acts on
$\Omega$: the $i$th factor of the base group $\operatorname{Sym}(A)^n$ acts on the entries
in the $i$th coordinate of points of $\Omega$, while $S_n$ permutes the
coordinates. (Here $S_n$ denotes $\operatorname{Sym}(\{1,\ldots,n\})$.)
\begin{cor}
The automorphism group of the Hamming graph $\operatorname{Ham}(n,A)$ is the wreath product
$\operatorname{Sym}(A)\wr S_n$ just described.
\end{cor}
\begin{proof}
By Theorem~\ref{th:cdham}(c), the automorphism group of $\operatorname{Ham}(n,A)$ coincides
with the stabiliser in $\operatorname{Sym}(A^n)$ of the natural Cartesian decomposition $\mathcal{E}$
of the set $A^n$. By~\cite[Lemma~5.1]{ps:cartesian},
the stabiliser of $\mathcal{E}$ in $\operatorname{Sym}(A^n)$ is $\operatorname{Sym}(A)\wr S_n$.
\end{proof}
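In particular, if $A$ is finite with $|A|=q$, then
\[|\operatorname{Aut}(\operatorname{Ham}(n,A))|=(q!)^n\,n!.\]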
In the mixed alphabet case, only one change needs to be made. Permutations
of the coordinates must preserve the cardinality of the alphabets associated
with the coordinate: that is, $g\in S_n$ induces an automorphism of the
Hamming graph if and only if $ig=j$ implies $|A_i|=|A_j|$ for all $i,j$.
(This condition is clearly necessary. For sufficiency, if $|A_i|=|A_j|$,
then we may actually identify $A_i$ and $A_j$.)
So if $\{1,\ldots,n\}=I_1\cup\cdots\cup I_r$, where $I_k$ is the non-empty set
of those indices for which the corresponding alphabet has some given cardinality,
then the group $\operatorname{Aut}(\operatorname{Ham}(A_1,\ldots,A_n))$ is the direct product of $r$ groups, each
a wreath product $\operatorname{Sym}(A_{i_k})\wr\operatorname{Sym}(I_k)$, acting in its product action,
where $i_k$ is a member of $I_k$.
Part~(c) of Theorem~\ref{th:cdham} was also proved in~\cite[Theorem~12.3]{ps:cartesian}.
Our proof is a simplified version of the proof presented in~\cite{ps:cartesian}
and is included here as a nice application of the lattice theoretical framework
developed in Section~\ref{sec:prelim}. The automorphism group of the mixed-alphabet Hamming graph can also be determined
using the characterisation of the automorphism groups of Cartesian products of graphs.
The first such characterisations were given by Sabidussi~\cite{Sabidussi} and
Vizing~\cite{Vizing}; see also~\cite[Theorem~6.6]{grhandbook}.
The recent preprint~\cite{MZ} gives a self-contained elementary proof in the case of
finite Hamming graphs.
\section{Latin cubes}
\label{sec:LC}
\subsection{What is a Latin cube?}
\label{sec:whatis}
As pointed out in \cite{dap75oz,dap83enc,dap89jas,DAcube},
there have been many different definitions of
a Latin cube (that is, a three-dimensional generalisation of a Latin square)
and of a Latin hypercube (a higher-dimensional generalisation).
Typically, the underlying set $\Omega$ is a Cartesian product
$\Omega_1 \times \Omega_2 \times\cdots \times \Omega_m$
where $\left|\Omega_1\right| = \left|\Omega_2\right| = \cdots =
\left|\Omega_m\right|$. As for Latin squares in Section~\ref{sec:LS}, we often
seek to relabel the elements of $\Omega_1$, \ldots, $\Omega_m$ so that
$\Omega = T^m$ for some set~$T$. The possible
conditions are concisely summarised in \cite{CRC}. The alphabet is
a set of letters of cardinality $\left|T\right|^a$ with
$1\leqslant a\leqslant m-1$, and the \emph{type} is $b$ with
$1\leqslant b\leqslant m-a$. The definition is that if the values of any $b$
coordinates are fixed then all letters in the given alphabet occur
equally often on the subset of $\Omega$ so defined (which can be regarded
as an $(m-b)$-dimensional array, so that the $|T|^b$ arrays of this form
partition $T^m$; these are parallel lines or planes in a cubical array
according as $b=2$ or $b=1$).
One extreme case has $a=1$ and $b=m-1$.
This definition is certainly in current use
when $m \in \{3,4\}$: for example, see \cite{MWcube,MulWeb}.
The hypercubes in \cite{LMW}
have $a=1$ but allow smaller values of $b$.
The other extreme has $a=m-1$ and $b=1$,
which is what we have here.
(Unfortunately, the meaning of the phrase ``Latin hypercube design'' in
Statistics has completely changed in the last thirty years. For example,
see \cite{tang2009,tang93}.)
Fortunately, it suffices for us to consider Latin cubes, where $m=3$.
Let $P_1$, $P_2$ and $P_3$ be the partitions which give the standard Cartesian
decomposition of the cube $\Omega_1 \times \Omega_2 \times \Omega_3$.
Following~\cite{DAcube}, we call the parts of
$P_1$, $P_2$ and $P_3$ \textit{layers}, and the parts of $P_1\wedge P_2$,
$P_1\wedge P_3$ and $P_2\wedge P_3$ \textit{lines}. Thus a layer is a slice
of the cube parallel to one of the faces.
Two lines $\ell_1$ and
$\ell_2$ are said to be \textit{parallel} if there is some
$\{i,j\}\subset \{1,2,3\}$ with $i\ne j$ such that $\ell_1$ and $\ell_2$
are both parts of $P_i \wedge P_j$.
The definitions in \cite{CRC,DAcube} give us the following three possibilities
for the case that $|\Omega_i|=n$ for $i$ in $\{1,2,3\}$.
\begin{itemize}
\item[(LC0)]
There are $n$ letters, each of which occurs once per line.
\item[(LC1)]
There are $n$ letters, each of which occurs $n$ times per layer.
\item[(LC2)]
There are $n^2$ letters, each of which occurs once per layer.
\end{itemize}
Because of the meaning of \textit{type} given in the first
paragraph of this section, we shall call
these possibilities \textit{sorts} of Latin cube.
Thus Latin cubes of sort (LC0) are a special case of Latin cubes of
sort (LC1), but Latin cubes of sort (LC2) are quite different.
Sort (LC0) is the definition of Latin cube used in
\cite{rab:as,ball,dscube,gupta,MWcube,MulWeb}, among many others in
Combinatorics and Statistics.
Fisher used sort (LC1) in \cite{RAF42}, where he gave constructions using
abelian groups. Kishen called this a Latin cube
\textit{of first order}, and those of sort (LC2) Latin cubes \textit{of
second order}, in \cite{kish42,kish50}.
Two of these sorts have alternative descriptions using the language of this
paper. Let $L$ be the partition into letters. Then a Latin cube has sort
(LC0) if and only if $\{L,P_i,P_j\}$ is a Cartesian decomposition of the cube
whenever $i\ne j$ and $\{i,j\} \subset \{1,2,3\}$.
A Latin cube has sort (LC2) if and only if $\{L,P_i\}$
is a Cartesian decomposition of the cube for $i=1$, $2$, $3$.
The following definition is taken from \cite{DAcube}.
\begin{defn}
\label{def:reg}
A Latin cube of sort (LC2) is \textit{regular} if, whenever $\ell_1$ and
$\ell_2$ are parallel lines in the cube, the set of letters occurring in
$\ell_1$ is either exactly the same as the set of letters occurring
in $\ell_2$ or disjoint from it.
\end{defn}
(Warning: the word \textit{regular} is used by some authors with quite
a different meaning for some Latin cubes of sorts (LC0) and (LC1).)
\subsection{Some examples of Latin cubes of sort (LC2)}
In these examples, the cube is coordinatised by functions $f_1$, $f_2$ and
$f_3$ from $\Omega$ to $\Omega_1$, $\Omega_2$ and $\Omega_3$
whose kernels are the partitions $P_1$, $P_2$ and $P_3$.
For example, in Figure~\ref{fig:2}, one part of $P_1$ is $f_1^{-1}(2)$.
A statistician would typically write this as ``$f_1=2$''.
For ease of reading, we adopt the statisticians' notation.
\begin{eg}
\label{eg:2}
When $n=2$, the definition of Latin cube of sort (LC2)
forces the two occurrences of each of the four letters to be in
diagonally opposite cells
of the cube. Thus, up to permutation of the letters, the only possibility
is that shown in Figure~\ref{fig:2}.
\begin{figure}
\[
\begin{array}{c@{\qquad}c}
\begin{array}{c|c|c|}
\multicolumn{1}{c}{} & \multicolumn{1}{c}{f_2=1} &
\multicolumn{1}{c}{f_2=2}\\
\cline{2-3}
f_1=1 & A & B\\
\cline{2-3}
f_1=2 & C & D\\
\cline{2-3}
\end{array}
&
\begin{array}{c|c|c|}
\multicolumn{1}{c}{} & \multicolumn{1}{c}{f_2=1} & \multicolumn{1}{c}{f_2=2}\\
\cline{2-3}
f_1=1 & D & C\\
\cline{2-3}
f_1=2 & B & A\\
\cline{2-3}
\end{array}
\\[10\jot]
\quad f_3=1 &\quad f_3=2
\end{array}
\]
\caption{The unique (up to isomorphism)
Latin cube of sort (LC2) and order~$2$}
\label{fig:2}
\end{figure}
This Latin cube of sort (LC2) is regular.
The set of letters on each line of $P_1\wedge P_2$ is either $\{A,D\}$ or
$\{B,C\}$; the set of letters on each line of $P_1\wedge P_3$ is either
$\{A,B\}$ or $\{C,D\}$; and the set of letters on each line of $P_2\wedge P_3$
is either $\{A,C\}$ or $\{B,D\}$.
\end{eg}
\begin{eg}
\label{eg:nice}
Here $\Omega=T^3$, where $T$~is the additive group of $\mathbb{Z}_3$.
For $i=1$, $2$ and~$3$, the function $f_i$ picks out the $i$th coordinate
of $(t_1,t_2,t_3)$. The column headed~$L$ in Table~\ref{tab:cube2}
shows how the nine letters are allocated to the cells of the cube.
The $P_3$-layer of the cube with $f_3=0$ is as follows.
\[
\begin{array}{c|c|c|c|}
\multicolumn{1}{c}{} & \multicolumn{1}{c}{f_2=0} & \multicolumn{1}{c}{f_2=1}
& \multicolumn{1}{c}{f_2=2}\\
\cline{2-4}
f_1=0 & A & D & G\\
\cline{2-4}
f_1=1 & I & C & F\\
\cline{2-4}
f_1=2 & E & H & B\\
\cline{2-4}
\end{array}
\ .
\]
It has each letter just once.
Similarly, the $P_3$-layer of the cube with $f_3=1$ is
\[
\begin{array}{c|c|c|c|}
\multicolumn{1}{c}{} & \multicolumn{1}{c}{f_2=0} & \multicolumn{1}{c}{f_2=1}
& \multicolumn{1}{c}{f_2=2}\\
\cline{2-4}
f_1=0 & B & E & H\\
\cline{2-4}
f_1=1 & G & A & D\\
\cline{2-4}
f_1=2 & F & I & C\\
\cline{2-4}
\end{array}
\]
and
the $P_3$-layer of the cube with $f_3=2$ is
\[
\begin{array}{c|c|c|c|}
\multicolumn{1}{c}{} & \multicolumn{1}{c}{f_2=0} & \multicolumn{1}{c}{f_2=1}
& \multicolumn{1}{c}{f_2=2}\\
\cline{2-4}
f_1=0 & C & F & I\\
\cline{2-4}
f_1=1 & H & B & E\\
\cline{2-4}
f_1=2 & D & G & A\\
\cline{2-4}
\end{array}
\ .
\]
Similarly, one can check that each $2$-dimensional $P_1$-layer, defined by
fixing the value of $f_1$, contains every letter exactly once; the same
holds for~$P_2$.
\begin{table}[htbp]
\[
\begin{array}{cccccccc}
\mbox{partition}& P_1 & P_2 & P_3 & Q & R & S & L\\
\mbox{function}& f_1 & f_2 & f_3 & -f_1+f_2 &-f_3+f_1 & -f_2+f_3\\
\mbox{value} & t_1 & t_2 & t_3 & -t_1+t_2 & -t_3+t_1 & -t_2+t_3 & \\
\hline
& 0 & 0 & 0 & 0 & 0 & 0 & A \\
& 0 & 0 & 1 & 0 & 2 & 1 & B \\
& 0 & 0 & 2 & 0 & 1 & 2 & C \\
& 0 & 1 & 0 & 1 & 0 & 2 &D \\
& 0 & 1 & 1 & 1 & 2 & 0 & E \\
& 0 & 1 & 2 & 1 & 1 & 1 & F \\
& 0 & 2 & 0 & 2 & 0 & 1 & G \\
& 0 & 2 & 1 & 2 & 2 & 2 & H \\
& 0 & 2 & 2 & 2 & 1 & 0 & I \\
& 1 & 0 & 0 & 2 & 1 & 0 & I \\
& 1 & 0 & 1 & 2 & 0 & 1 & G \\
& 1 & 0 & 2 & 2 & 2 & 2 & H \\
& 1 & 1 & 0 & 0 & 1 & 2 & C \\
& 1 & 1 & 1 & 0 & 0 & 0 & A \\
& 1 & 1 & 2 & 0 & 2 & 1 & B \\
& 1 & 2 & 0 & 1 & 1 & 1 & F \\
& 1 & 2 & 1 & 1 & 0 & 2 & D \\
& 1 & 2 & 2 & 1 & 2 & 0 & E \\
& 2 & 0 & 0 & 1 & 2 & 0 & E \\
& 2 & 0 & 1 & 1 & 1 & 1 & F \\
& 2 & 0 & 2 & 1 & 0 & 2 & D \\
& 2 & 1 & 0 & 2 & 2 & 2 & H \\
& 2 & 1 & 1 & 2 & 1 & 0 & I \\
& 2 & 1 & 2 & 2 & 0 & 1 & G \\
& 2 & 2 & 0 & 0 & 2 & 1 & B \\
& 2 & 2 & 1 & 0 & 1 & 2 & C \\
& 2 & 2 & 2 & 0 & 0 & 0 & A \\
\end{array}
\]
\caption{Some functions and partitions on the cells of the cube
in Example~\ref{eg:nice}}
\label{tab:cube2}
\end{table}
In addition to satisfying the property of being a Latin cube of sort (LC2),
this combinatorial structure has three other good properties.
\begin{itemize}
\item
It is regular in the sense of Definition~\ref{def:reg}.
The set of letters in any
$P_1\wedge P_2$-line is $\{A,B,C\}$ or $\{D,E,F\}$ or $\{G,H,I\}$.
For $P_1\wedge P_3$ the letter sets are $\{A,D,G\}$, $\{B,E,H\}$ and
$\{C,F,I\}$; for $P_2\wedge P_3$ they are $\{A,E,I\}$, $\{B,F,G\}$ and
$\{C,D,H\}$.
\item
The supremum of $L$ and $P_1\wedge P_2$ is the partition $Q$ shown in
Table~\ref{tab:cube2}. This is the kernel of the function which maps
$(t_1,t_2,t_3)$ to $-t_1+t_2 = 2t_1+t_2$.
Statisticians normally write this partition
as $P_1^2P_2$. Likewise, the supremum of $L$ and $P_1\wedge P_3$ is $R$,
which statisticians might write as $P_3^2P_1$,
and the supremum of $L$ and $P_2\wedge P_3$ is $S$, written by statisticians
as $P_2^2P_3$. The partitions $P_1$, $P_2$, $P_3$,
$Q$, $R$, $S$, $P_1\wedge P_2$, $P_1\wedge P_3$, $P_2\wedge P_3$ and $L$
are pairwise compatible, in the sense of Definition~\ref{def:compatible}.
Moreover, each of them is a coset partition defined by a subgroup of $T^3$.
\item
In anticipation of the notation used in Section~\ref{sec:dag},
it seems fairly natural to rename $P_1$, $P_2$, $P_3$, $Q$, $R$ and $S$
as $P_{01}$, $P_{02}$, $P_{03}$, $P_{12}$, $P_{13}$ and $P_{23}$, in order.
For each $i$ in $\{0,1,2,3\}$, the three partitions $P_{jk}$ which have
$i$ as one of the subscripts, that is, $i\in \{j,k\}$,
form a Cartesian decomposition of the underlying set.
\end{itemize}
However, the set of ten partitions that we have named is not closed under
infima, so they do not form an orthogonal block structure.
For example, the set does not contain the infimum $P_3\wedge Q$.
This partition has nine parts of size three, one of
which consists of the cells $(0,0,0)$, $(1,1,0)$ and $(2,2,0)$,
as can be seen from Table~\ref{tab:cube2}.
\begin{figure}
\begin{center}
\setlength{\unitlength}{2mm}
\begin{picture}(60,40)
\put(5,15){\line(0,1){10}}
\put(5,15){\line(1,1){10}}
\put(5,15){\line(3,1){30}}
\put(15,15){\line(-1,1){10}}
\put(15,15){\line(1,1){10}}
\put(15,15){\line(3,1){30}}
\put(25,15){\line(-1,1){10}}
\put(25,15){\line(0,1){10}}
\put(25,15){\line(3,1){30}}
\put(45,15){\line(-1,1){10}}
\put(45,15){\line(0,1){10}}
\put(45,15){\line(1,1){10}}
\put(30,5){\line(-1,2){5}}
\put(30,5){\line(-3,2){15}}
\put(30,5){\line(3,2){15}}
\curve(30,5,5,15)
\put(30,35){\line(-1,-2){5}}
\put(30,35){\line(1,-2){5}}
\put(30,35){\line(-3,-2){15}}
\put(30,35){\line(3,-2){15}}
\curve(30,35,5,25)
\curve(30,35,55,25)
\put(5,15){\circle*{1}}
\put(4,15){\makebox(0,0)[r]{$P_1\wedge P_2$}}
\put(15,15){\circle*{1}}
\put(14,15){\makebox(0,0)[r]{$P_1\wedge P_3$}}
\put(25,15){\circle*{1}}
\put(24,15){\makebox(0,0)[r]{$P_2\wedge P_3$}}
\put(45,15){\circle*{1}}
\put(47,15){\makebox(0,0){$L$}}
\put(30,5){\circle*{1}}
\put(30,3){\makebox(0,0){$E$}}
\put(5,25){\circle*{1}}
\put(3,25){\makebox(0,0){$P_1$}}
\put(15,25){\circle*{1}}
\put(13,25){\makebox(0,0){$P_2$}}
\put(25,25){\circle*{1}}
\put(23,25){\makebox(0,0){$P_3$}}
\put(35,25){\circle*{1}}
\put(36,25){\makebox(0,0)[l]{$Q$}}
\put(45,25){\circle*{1}}
\put(46,25){\makebox(0,0)[l]{$R$}}
\put(55,25){\circle*{1}}
\put(56,25){\makebox(0,0)[l]{$S$}}
\put(30,35){\circle*{1}}
\put(30,37){\makebox(0,0){$U$}}
\end{picture}
\end{center}
\caption{Hasse diagram of the join-semilattice formed by the pairwise
compatible partitions in Example~\ref{eg:nice}}
\label{fig:nice}
\end{figure}
Figure~\ref{fig:nice} shows the Hasse diagram of the join-semilattice formed
by these ten named partitions, along with the two trivial partitions $E$
and $U$.
This diagram, along with the knowledge of compatibility, makes it clear that
any three of the minimal partitions $P_1 \wedge P_2$, $P_1 \wedge P_3$,
$P_2\wedge P_3$ and $L$ give the minimal
partitions of the orthogonal block structure defined by
a Cartesian decomposition of dimension three of the underlying set $T^3$.
Note that, although the partition $E$ is the highest point in the diagram
which is below both $P_3$ and $Q$, it is not their infimum: their infimum,
taken in the lattice of all partitions of this set, is the partition
$P_3\wedge Q$ described above, which does not appear in the diagram.
\end{eg}
\begin{figure}
\[
\begin{array}{c@{\qquad}c@{\qquad}c}
\begin{array}{|c|c|c|}
\hline
A & E & F\\
\hline
H & I & D\\
\hline
C & G & B\\
\hline
\end{array}
&
\begin{array}{|c|c|c|}
\hline
D & B & I\\
\hline
E & C & G\\
\hline
F & A & H\\
\hline
\end{array}
&
\begin{array}{|c|c|c|}
\hline
G & H & C\\
\hline
B & F & A\\
\hline
I & D & E\\
\hline
\end{array}
\end{array}
\]
\caption{A Latin cube of sort (LC2) which is not regular}
\label{fig:sax}
\end{figure}
\begin{eg}
\label{eg:sax}
Figure~\ref{fig:sax} shows an example which is not regular. This was originally
given in \cite{saxena}. To save space, the three $P_3$-layers are shown
side by side.
For example, there is one $P_1\wedge P_3$-line whose set of letters is
$\{A,E,F\}$ and another whose set of letters is $\{A,F,H\}$.
These are neither the same nor disjoint.
\end{eg}
If we write the group operation in Example~\ref{eg:nice} multiplicatively,
then the cells
$(t_1,t_2,t_3)$ and $(u_1,u_2,u_3)$ have the same letter if and only if
$t_1^{-1}t_2 = u_1^{-1}u_2$ and $t_1^{-1}t_3 = u_1^{-1}u_3$. This means that
$(u_1,u_2,u_3) = (x,x,x)(t_1,t_2,t_3)$ where $x=u_1t_1^{-1}$, so that
$(t_1,t_2,t_3)$ and $(u_1,u_2,u_3)$ are in the same right coset of the
diagonal subgroup $\delta(T,3)$ introduced in Section~\ref{sect:diaggroups}.
The next theorem shows that this construction can be generalised to any group,
abelian or not, finite or infinite.
\begin{theorem}
\label{th:upfront}
Let $T$ be a non-trivial group. Identify the elements of $T^3$ with the cells of a cube
in the natural way. Let $\delta(T,3)$ be the diagonal subgroup
$\{(t,t,t) \mid t \in T\}$. Then the parts of the right coset partition
$P_{\delta(T,3)}$ form the letters of a regular Latin cube of sort
(LC2).
\end{theorem}
\begin{proof}
Let $H_1$ be the subgroup $\{(1,t_2,t_3) \mid t_2 \in T, \ t_3 \in T\}$
of $T^3$. Define subgroups $H_2$ and $H_3$ similarly. Let $i \in \{1,2,3\}$.
Then $H_i \cap \delta(T,3) = \{1\}$ and $H_i\delta(T,3) = \delta(T,3)H_i = T^3$.
Proposition~\ref{prop:coset} shows that $P_{H_i} \wedge P_{\delta(T,3)} = E$ and
$P_{H_i} \vee P_{\delta(T,3)} = U$. Because $H_i\delta(T,3) = \delta(T,3)H_i$,
Proposition~\ref{prop:commeq} (considering statements (a) and~(d)) shows that $\{P_{H_i}, P_{\delta(T,3)}\}$ is a
Cartesian decomposition of $T^3$ of dimension two. Hence the parts
of $P_{\delta(T,3)}$ form the letters of a Latin cube $\Lambda$ of sort~(LC2).
Put $G_{12} = H_1 \cap H_2$ and
$K_{12} = \{(t_1,t_1,t_3) \mid t_1 \in T,\ t_3 \in T\}$.
Then the parts of $P_{G_{12}}$ are the $P_1\wedge P_2$-lines of the cube,
that is, the lines parallel to the third coordinate axis.
Also, $G_{12} \cap \delta(T,3)=\{1\}$ and $G_{12}\delta(T,3) = \delta(T,3)G_{12}
= K_{12}$, so Propositions~\ref{prop:coset} and~\ref{prop:commeq} show that
$P_{G_{12}} \wedge P_{\delta(T,3)} = E$, $P_{G_{12}} \vee P_{\delta(T,3)} = P_{K_{12}}$,
and the restrictions of $P_{G_{12}}$ and $P_{\delta(T,3)}$ to any part
of $P_{K_{12}}$ form a grid. Therefore, within each coset of~$K_{12}$,
all lines have the same subset of letters. By the definition of supremum,
no line in any other coset of $K_{12}$ has any letters in common
with these.
Similar arguments apply to lines in each of the other two directions.
Hence $\Lambda$ is regular.
\end{proof}
The converse of this theorem is proved at the end of this section.
The set of partitions in Theorem~\ref{th:upfront} forms a join-semilattice whose
Hasse diagram is the same as the one shown in Figure~\ref{fig:nice}, apart from
the naming of the partitions. We call this a \textit{diagonal semilattice
of dimension three}. The generalisation to arbitrary dimensions is given
in Section~\ref{sec:diag}.
\subsection{Results for Latin cubes}
As we hinted in Section~\ref{sec:LS},
the vast majority of Latin squares of order at least $5$
are not isotopic to Cayley tables of groups. For $m\geqslant 3$, the situation
changes dramatically as soon as we impose some more, purely combinatorial,
constraints. We continue to use the notation $\Omega$, $P_1$, $P_2$, $P_3$
and $L$ as in Section~\ref{sec:whatis}.
A Latin cube of sort (LC0) is called an \textit{extended Cayley table} of
the group~$T$ if $\Omega=T^3$ and the letter in cell $(t_1,t_2,t_3)$ is
$t_1t_2t_3$. Theorem~8.21 of \cite{rab:as} shows that, in the finite case,
for a Latin cube of sort (LC0), the set $\{P_1,P_2,P_3,L\}$ is contained in
the set of partitions of an orthogonal block structure if and only if the
cube is isomorphic to the extended Cayley table of an abelian group.
Now we will prove something similar for Latin cubes of sort (LC2), by
specifying a property of the set
\[\{ P_1, P_2, P_3, (P_1\wedge P_2)\vee L,
(P_1\wedge P_3)\vee L, (P_2\wedge P_3)\vee L\}\]
of six partitions. We do not restrict this
to finite sets. Also, because we do not insist on closure under infima,
it turns out that the group does not need to be abelian.
In Lemmas~\ref{lem:lc0} and~\ref{lem:lc3},
the assumption is that we have a Latin cube of sort~(LC2),
and that $\{i,j,k\} = \{1,2,3\}$. Write
\[
L^{ij}= L\vee(P_i\wedge P_j).
\]
To clarify the proofs, we shall use the following refinement of
Definition~\ref{def:reg}. Recall that we refer to the parts of $P_i\wedge P_j$
as $P_i\wedge P_j$-lines.
\begin{defn}
\label{def:refine}
A Latin cube of sort (LC2) is \textit{$\{i,j\}$-regular} if,
whenever $\ell_1$ and $\ell_2$ are distinct $P_i\wedge P_j$-lines,
the set of letters occurring in
$\ell_1$ is either exactly the same as the set of letters occurring
in $\ell_2$ or disjoint from it.
\end{defn}
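For example, the $\mathbb{Z}_2$ cube displayed after the proof of Theorem~\ref{th:upfront} is $\{1,2\}$-regular: its $P_1\wedge P_2$-lines are the lines parallel to the $z$-axis, and their letter sets $\{A,D\}$, $\{B,C\}$, $\{B,C\}$ and $\{A,D\}$ are pairwise equal or disjoint.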
\begin{lem}
\label{lem:lc0}
The following conditions are equivalent.
\begin{enumerate}
\item
The partition $L$ is compatible with $P_i\wedge P_j$.
\item
The Latin cube is $\{i,j\}$-regular.
\item The restrictions of $P_i\wedge P_j$, $P_k$ and $L$ to any
part of $L^{ij}$ form a Latin square.
\item Every pair of distinct $P_i\wedge P_j$-lines in the same
part of $L^{ij}$ lie in distinct parts of $P_i$.
\item The restrictions of $P_i$, $P_k$ and $L$ to any
part of $L^{ij}$ form a Latin square.
\item The set
$\{P_i,P_k,L^{ij}\}$ is a Cartesian decomposition of $\Omega$ of
dimension three.
\item Each part of $P_i\wedge P_k\wedge L^{ij}$ has size one.
\end{enumerate}
\end{lem}
\begin{proof} We prove this result without loss of generality for
$i=1$, $j=2$, $k=3$.
\begin{itemize}
\item[(a)$\Leftrightarrow$(b)]
By the definition of a Latin cube of sort (LC2),
each part of $P_1\wedge P_2$ has either zero or one cells in common
with each part of~$L$. Therefore ${P_1\wedge P_2 \wedge L}=E$,
which is uniform, so Definition~\ref{def:compatible} shows that
compatibility is the same as commutativity of the equivalence relations
underlying $P_1\wedge P_2$ and~$L$.
Consider Proposition~\ref{prop:commeq} with $P_1\wedge P_2$ and $L$ in place
of $P_1$ and $P_2$. Condition~(a) of Proposition~\ref{prop:commeq}
is the same as condition~(a) here; and condition~(e) of
Proposition~\ref{prop:commeq} is the same as condition~(b) here. Thus
Proposition~\ref{prop:commeq} gives us the result.
\item[(a)$\Rightarrow$(c)]
Let $\Delta$ be a part of $L^{12}$. If $L$ is compatible with $P_1\wedge P_2$
then, because ${P_1\wedge P_2 \wedge L}=E$,
Proposition~\ref{prop:commeq} shows that
the restrictions of $P_1\wedge P_2$ and $L$ to $\Delta$ form a Cartesian
decomposition of $\Delta$. Each part of $P_3$ has precisely one cell in
common with each part of $P_1\wedge P_2$,
because $\{P_1,P_2,P_3\}$ is a Cartesian decomposition of $\Omega$,
and precisely one cell in common with each part of $L$,
because the Latin cube has sort (LC2).
Hence the restrictions of $P_1\wedge P_2$, $P_3$ and $L$ to $\Delta$
form a Latin square. (Note that every part of $P_3$ meets $\Delta$,
whereas only some parts of $P_1\wedge P_2$ and of $L$ do.)
\item[(c)$\Rightarrow$(d)]
Let $\ell_1$ and $\ell_2$ be distinct $P_1\wedge P_2$-lines
that are contained in the same part $\Delta$ of $L^{12}$. Every letter
which occurs in $\Delta$ occurs in both of these lines. If $\ell_1$ and
$\ell_2$ are contained in the same part of $P_1$, then that $P_1$-layer
contains at least two occurrences of some letters, which contradicts the
fact that $L\wedge P_1=E$ for a Latin cube of sort (LC2).
\item[(d)$\Rightarrow$(e)]
Let $\Delta$ be a part of $L^{12}$ and let $\lambda$ be a part of~$L$
inside~$\Delta$. Let $p_1$ and $p_3$ be parts of $P_1$ and $P_3$.
Then $\left| p_1 \cap \lambda \right| = \left| p_3 \cap \lambda \right|=1$
by definition of a Latin cube of sort (LC2). Condition (d) specifies that
$p_1 \cap \Delta$ is a part of $P_1 \wedge P_2$. Therefore
$(p_1 \cap \Delta) \cap p_3$ is a part of ${P_1 \wedge P_2 \wedge P_3}$, so
$ \left |(p_1 \cap \Delta) \cap (p_3 \cap \Delta)\right |=
\left |(p_1 \cap \Delta) \cap p_3\right| =1$.
Thus the restrictions of $P_1$, $P_3$, and $L$ to $\Delta$ form a Latin
square.
\item[(e)$\Rightarrow$(f)]
Let $\Delta$, $p_1$ and $p_3$ be parts of $L^{12}$, $P_1$ and $P_3$
respectively. By the definition of a Latin cube of sort (LC2),
$p_1 \cap \Delta$ and $p_3 \cap \Delta$ are both non-empty. Thus
condition (e) implies that $\left | p_1 \cap p_3 \cap \Delta \right|=1$.
Hence $\{P_1, P_3, L^{12}\}$ is a Cartesian
decomposition of dimension three.
\item[(f)$\Rightarrow$(g)] This follows immediately
from the definition of a Cartesian decomposition (Definition~\ref{def:cart}).
\item[(g)$\Rightarrow$(d)]
If (d) is false then there is a part~$\Delta$ of $L^{12}$ which
contains distinct
$P_1\wedge P_2$-lines $\ell_1$ and $\ell_2$ in the same part~$p_1$ of~$P_1$.
Let $p_3$ be any part of $P_3$. Then, since $\{P_1,P_2,P_3\}$ is a
Cartesian decomposition, $\left |p_3\cap \ell_1\right | =
\left | p_3\cap \ell_2\right | =1$ and so
$\left| p_1\cap p_3 \cap \Delta \right | \geqslant 2$. This contradicts~(g).
\item[(d)$\Rightarrow$(b)]
If (b) is false, there are distinct $P_1\wedge P_2$-lines $\ell_1$
and $\ell_2$
whose sets of letters $\Lambda_1$ and $\Lambda_2$ are neither the same nor
disjoint. Because $\Lambda_1 \cap \Lambda_2 \ne \emptyset$, $\ell_1$
and $\ell_2$ are contained in the same part of $L^{12}$.
Let $\lambda \in \Lambda_2 \setminus \Lambda_1$. By definition of a Latin
cube of sort (LC2),
$\lambda$ occurs on precisely one cell~$\omega$
in the $P_1$-layer which contains $\ell_1$. By assumption, $\omega \notin
\ell_1$. Let $\ell_3$ be the $P_1\wedge P_2$-line containing~$\omega$.
Then $\ell_3$ and $\ell_2$ are in the same part of $L^{12}$, as are
$\ell_1$ and $\ell_2$. Hence $\ell_1$ and $\ell_3$ are in the
same part of $L^{12}$ and the same part of $P_1$. This contradicts~(d).
\end{itemize}
\end{proof}
\begin{lem}
\label{lem:lc3}
The set $\{P_i,L^{ik},L^{ij}\}$ is a Cartesian decomposition of $\Omega$ if
and only if $L$ is compatible with both $P_i\wedge P_j$ and $P_i \wedge P_k$.
\end{lem}
\begin{proof}
If $L$ is not compatible with $P_i\wedge P_j$, then
Lemma~\ref{lem:lc0} shows that there is a part of
${P_i \wedge P_k \wedge L^{ij}}$ of size at least two.
This is contained in a part of $P_i\wedge P_k$. Since $P_i \wedge P_k
\preccurlyeq L^{ik}$, it is also contained in a part of~$L^{ik}$. Hence
$\{P_i, L^{ij}, L^{ik}\}$ is not a Cartesian decomposition of~$\Omega$.
Similarly, if $L$ is not compatible with $P_i\wedge P_k$ then
$\{P_i, L^{ij}, L^{ik}\}$ is not a Cartesian decomposition of~$\Omega$.
For the converse, Lemma~\ref{lem:lc0} shows
that if $L$ is compatible with
$P_i\wedge P_j$ then $\{P_i, P_k, L^{ij}\}$ is a Cartesian decomposition of
$\Omega$. Let $\Delta$ be a part of $L^{ij}$, and let $L^*$ be the
restriction of $L$ to $\Delta$. Lemma~\ref{lem:lc0} shows that
$P_i$, $P_k$ and $L^*$ form a Latin square on~$\Delta$. Thus distinct
letters in~$L^*$ occur only in distinct parts of $P_i \wedge P_k$.
If $L$ is also compatible with $P_i\wedge P_k$, then Lemma~\ref{lem:lc0}
shows that each part of $L^{ik}$ is a union of parts of $P_i\wedge P_k$,
any two of which are in different parts of $P_i$ and different parts of~$P_k$,
and all of which have the same letters.
Hence any two different letters in $L^*$
are in different parts of~$L^{ik}$. Since $\{P_i,P_k,L^{ij}\}$ is a Cartesian
decomposition of~$\Omega$,
every part of $P_i\wedge P_k$ has a non-empty intersection with~$\Delta$, and
so every part of $L^{ik}$ has a non-empty intersection with~$\Delta$.
Since $L\prec L^{ik}$, such an intersection consists of one or more
parts of $L^*$ in $\Delta$. We have already noted that distinct
letters in $L^*$ are in different parts of $L^{ik}$, and so it follows that the
restriction of $L^{ik}$ to $\Delta$ is the same as~$L^*$.
Hence the restrictions of $P_i$, $P_k$ and $L^{ik}$ to $\Delta$ form a Latin
square on $\Delta$, and so the restrictions of $P_i$ and $L^{ik}$ to $\Delta$
give a Cartesian decomposition of~$\Delta$.
This is true for every part $\Delta$ of $L^{ij}$, and so it follows that
$\{P_i, L^{ij}, L^{ik}\}$ is a Cartesian decomposition of~$\Omega$.
\end{proof}
\begin{lem}
\label{lem:lc4}
The set $\{P_i, L^{ij},L^{ik}\}$ is a Cartesian decomposition of $\Omega$
if and only if the set $\{P_i \wedge P_j, P_i\wedge P_k,L\}$
generates a Cartesian lattice under taking suprema.
\end{lem}
\begin{proof}
If $\{P_i \wedge P_j, P_i\wedge P_k,L\}$ generates a Cartesian lattice under
taking suprema then the maximal partitions in the Cartesian lattice are
$(P_i \wedge P_j) \vee (P_i\wedge P_k)$, $(P_i \wedge P_j) \vee L$ and
$(P_i \wedge P_k) \vee L$. They form a Cartesian decomposition, and
are equal to $P_i$, $L^{ij}$ and $L^{ik}$
respectively.
Conversely, suppose that $\{P_i, L^{ij},L^{ik}\}$ is a Cartesian decomposition
of~$\Omega$. The minimal partitions in the corresponding Cartesian lattice
are $P_i \wedge L^{ij}$, $P_i \wedge L^{ik}$ and $L^{ij} \wedge L^{ik}$. Now,
$L \preccurlyeq L^{ij}$ and $L \preccurlyeq L^{ik}$, so
$L \preccurlyeq L^{ij} \wedge L^{ik}$.
Because the Latin cube has sort~(LC2), $\{P_i,L\}$ and
$\{P_i,L^{ij}\wedge L^{ik}\}$ are both Cartesian decompositions of~$\Omega$.
Since
$L \preccurlyeq L^{ij} \wedge L^{ik}$, this forces $L=L^{ij}\wedge L^{ik}$.
The identities of the other two infima are confirmed by a similar argument.
We have $P_i \wedge P_j \preccurlyeq P_i$, and
$P_i \wedge P_j \preccurlyeq L^{ij}$, by definition of~$L^{ij}$. Therefore
$P_i \wedge P_j \preccurlyeq P_i \wedge L^{ij}$.
Lemmas~\ref{lem:lc0} and~\ref{lem:lc3} show that $\{P_i,P_k,L^{ij}\}$ is a
Cartesian decomposition of~$\Omega$. Therefore $\{P_k,P_i \wedge L^{ij}\}$
and $\{P_k, P_i \wedge P_j\}$ are both Cartesian decompositions of~$\Omega$.
Since $P_i \wedge P_j \preccurlyeq P_i \wedge L^{ij}$, this forces
$P_i \wedge P_j = P_i \wedge L^{ij}$.
Likewise, $P_i \wedge P_k = P_i \wedge L^{ik}$.
\end{proof}
The following theorem is a direct consequence of
Definitions~\ref{def:reg} and~\ref{def:refine} and
Lemmas~\ref{lem:lc0}, \ref{lem:lc3} and~\ref{lem:lc4}.
\begin{theorem}
\label{thm:regnice}
For a Latin cube of sort~(LC2), the following conditions are equivalent.
\begin{enumerate}
\item
The Latin cube is regular.
\item
The Latin cube is $\{1,2\}$-regular, $\{1,3\}$-regular and $\{2,3\}$-regular.
\item
The partition $L$ is compatible with each of $P_1\wedge P_2$, $P_1\wedge P_3$
and $P_2\wedge P_3$.
\item Each of $\{P_1,P_2,P_3\}$,
$\{P_1,L^{12},L^{13}\}$, $\{P_2,L^{12},L^{23}\}$ and $\{P_3, L^{13}, L^{23}\}$
is a Cartesian decomposition.
\item
Each of the sets ${\{P_1\wedge P_2, P_1 \wedge P_3, P_2\wedge P_3\}}$,
${\{P_1\wedge P_2, P_1 \wedge P_3, L\}}$, \linebreak
${\{P_1\wedge P_2, P_2\wedge P_3, L\}}$ and
${\{P_1 \wedge P_3, P_2\wedge P_3, L\}}$
generates a Cartesian lattice under taking suprema.
\end{enumerate}
\end{theorem}
The condition that $\{P_1,P_2,P_3\}$ is a Cartesian decomposition
is a part of the definition of a Latin cube. This condition is
explicitly included in item~(d) of Theorem~\ref{thm:regnice} for clarity.
The final result in this section gives us the stepping stone for the proof of
Theorem~\ref{thm:main}.
The proof is quite detailed, and makes frequent use of the
relabelling techniques that we already saw in Sections~\ref{sec:LS}
and~\ref{sesc:quasi}.
\begin{theorem}
\label{thm:bingo}
Consider a Latin cube of sort~(LC2) on an underlying set~$\Omega$,
with coordinate partitions $P_1$, $P_2$ and $P_3$, and letter partition~$L$.
If every three of $P_1 \wedge P_2$, $P_1 \wedge P_3$, $P_2\wedge P_3$ and $L$
are the minimal partitions in a Cartesian lattice on~$\Omega$
then there is a group~$T$ such that, up to relabelling the letters
and the three sets of coordinates,
$\Omega=T^3$ and $L$ is the coset partition defined
by the diagonal subgroup $\{(t,t,t) \mid t \in T\}$.
Moreover, $T$~is unique up to group isomorphism.
\end{theorem}
\begin{proof}
Theorem~\ref{thm:regnice} shows that a Latin cube satisfying this condition
must be regular.
As $\{P_1,P_2,P_3\}$ is a Cartesian decomposition of $\Omega$ and,
by Lemma~\ref{lem:lc0}, $\{P_i,P_j,L^{ik}\}$ is also a Cartesian
decomposition of~$\Omega$ whenever $\{i,j,k\} = \{1,2,3\}$,
the cardinalities of $P_1$, $P_2$, $P_3$, $L^{12}$, $L^{13}$ and $L^{23}$
must all be equal
(using the argument in the proof of Proposition~\ref{p:order}).
Thus we may label the parts of each by the same set~$T$.
We start by labelling the parts of $P_1$, $P_2$ and $P_3$. This identifies
$\Omega$ with $T^3$. At first, these three labellings are arbitrary, but
they are made more specific as the proof progresses.
Let $(a,b,c)$ be a cell of the cube. Because
$P_1\wedge P_2 \preccurlyeq L^{12}$, the part of $L^{12}$ which contains
cell $(a,b,c)$ does not depend on the value of~$c$. Thus
there is a binary operation $\circ$ from $T \times T$ to $T$ such that
$a \circ b$ is the label of the part of $L^{12}$ containing
$\{(a,b,c)\mid c \in T\}$; in other words, $(a,b,c)$ is in part
$a \circ b$ of $L^{12}$, irrespective of the value of $c$.
Lemma~\ref{lem:lc0} and Proposition~\ref{p:order} show that,
for each $a$ in $T$, the function $b \mapsto a \circ b$ is a bijection
from $T$ to~$T$. Similarly, for each $b$ in~$T$, the function
$a \mapsto a \circ b$ is a bijection.
Therefore $(T,\circ)$ is a quasigroup.
Similarly, there are binary operations $\star$ and $\diamond$ on $T$
such that the labels of the parts of $L^{13}$ and $L^{23}$ containing
cell $(a,b,c)$ are $c \star a$ and $b \diamond c$ respectively.
Moreover, $(T,\star)$ and $(T,\diamond)$ are both quasigroups.
Now we start the process of making explicit bijections between some pairs
of the six partitions.
Choose any part of $P_1$ and label it $e$. Then the labels of the parts
of $L^{12}$ can be aligned with those of $P_2$ so that $e \circ b= b$ for
all values of~$b$.
In the quasigroup $(T, \star)$, we may use the column headed $e$ to give
a permutation $\sigma$ of $T$ to align the labels of the parts of~$P_3$
and those of~$L^{13}$ so that $c\star e = c\sigma$ for all values of~$c$.
Let $(a,b,c)$ be a cell of the cube. Because $\{L,P_1\}$ is a Cartesian
decomposition of the cube, there is a unique cell $(e,b',c')$
in the same part of $L$ as $(a,b,c)$. Then
\begin{eqnarray*}
a \circ b & = & e \circ b' = b',\\
c \star a & = & c' \star e = c'\sigma, \quad \mbox{and}\\
b \diamond c& =& b'\diamond c'.
\end{eqnarray*}
Hence
\begin{equation}
b\diamond c = (a\circ b) \diamond ((c \star a)\sigma^{-1})
\label{eq:threeops}
\end{equation}
for all values of $a$, $b$ and $c$ in~$T$.
The quasigroup $(T,\diamond)$ can be viewed as a Latin square with rows
labelled by parts of $P_2$ and columns labelled by parts of $P_3$.
Consider the $2 \times 2$ subsquare shown in Figure~\ref{fig:subsq}. It has
$b_1 \diamond c_1 = \lambda$, $b_1 \diamond c_2 = \mu$,
$b_2 \diamond c_1 = \nu$ and $b_2 \diamond c_2 = \phi$.
\begin{figure}
\[
\begin{array}{c|c|c|}
\multicolumn{1}{c}{} & \multicolumn{1}{c}{c_1} & \multicolumn{1}{c}{c_2}\\
\cline{2-3}
b_1 & \lambda & \mu\\
\cline{2-3}
b_2 & \nu & \phi\\
\cline{2-3}
\end{array}
\]
\caption{A $2 \times 2$ subsquare of the Latin square defined by
$(T,\diamond)$}
\label{fig:subsq}
\end{figure}
Let $b_3$ be any row of this Latin square.
Then there is a unique $a$ in $T$ such
that $a \circ b_1=b_3$. By Equation~(\ref{eq:threeops}),
\begin{eqnarray*}
b_3 \diamond ((c_1 \star a)\sigma^{-1}) & = &
(a \circ b_1) \diamond ((c_1 \star a)\sigma^{-1}) = b_1 \diamond c_1
= \lambda, \quad \mbox{and}\\
b_3 \diamond ((c_2 \star a)\sigma^{-1}) & = &
(a \circ b_1) \diamond ((c_2 \star a)\sigma^{-1}) = b_1 \diamond c_2
= \mu.
\end{eqnarray*}
The unique occurrence of letter $\nu$ in column $(c_1\star a)\sigma^{-1}$ of
this Latin square is in row~$b_4$, where $b_4= a \circ b_2$, because
\[
b_4 \diamond ((c_1 \star a)\sigma^{-1}) =
(a \circ b_2) \diamond ((c_1 \star a)\sigma^{-1}) = b_2 \diamond c_1
= \nu.
\]
Now
\[
b_4 \diamond ((c_2 \star a)\sigma^{-1}) =
(a \circ b_2) \diamond ((c_2 \star a)\sigma^{-1}) = b_2 \diamond c_2
= \phi.
\]
This shows that whenever the letters in three cells of a $2 \times 2$
subsquare are known then the letter in the remaining cell is forced.
That is, the Latin square $(T,\diamond)$
satisfies the quadrangle criterion (Definition~\ref{def:quad}).
By Theorem~\ref{thm:frolov}, this property proves that $(T,\diamond)$ is
isotopic to the Cayley table of a group. By \cite[Theorem~2]{albert},
this group is unique up to group isomorphism.
As remarked at the end of Section~\ref{sesc:quasi}, we can now relabel the
parts of $P_2$, $P_3$ and $L^{23}$ so that $b \diamond c = b^{-1}c$ for
all $b$, $c$ in $T$. Then Equation~(\ref{eq:threeops}) becomes
$b^{-1} c = (a\circ b)^{-1} ((c \star a)\sigma^{-1})$, so that
\begin{equation}
(a\circ b) b^{-1} c = (c \star a)\sigma^{-1}
\label{eq:plod}
\end{equation}
for all $a$, $b$, $c$ in $T$.
Putting $b=c$ in Equation~(\ref{eq:plod}) gives
\begin{equation}
(a \circ c)\sigma = c \star a
\label{eq:plodonon}
\end{equation}
for all $a$, $c$ in $T$, while putting $b=1$ gives
\[
((a \circ 1) c)\sigma = c\star a
\]
for all $a$, $c$ in $T$.
Combining these gives
\begin{equation}
\label{eq:plodon}
a \circ c = (a \circ 1)c = (c\star a)\sigma^{-1}
\end{equation}
for all $a,c\in T$.
We have not yet made any explicit use of the labelling of the parts
of $P_1$ other than $e$, with $e \circ 1=1$.
The map $a \mapsto a \circ 1$ is a bijection
from $T$ to $T$, so we may label the parts of $P_1$ in such a way
that $e=1$ and $a \circ 1 = a^{-1}$ for all $a$ in $T$.
Then Equation~(\ref{eq:plodon}) shows that $a \circ b = a^{-1}b$
for all $a$, $b$ in $T$.
Now that we have fixed the labelling of the parts of $P_1$, $P_2$ and $P_3$,
it is clear that they are the partitions of $T^3$
into right cosets of the subgroups as shown in the first three rows of
Table~\ref{tab:coset}.
Consider the partition $L^{23}$. For $\alpha =(a_1,b_1,c_1)$ and
$ \beta =(a_2,b_2,c_2)$ in~$T^3$, we have (using the notation in Section~\ref{sec:part})
\begin{eqnarray*}
L^{23}[\alpha] = L^{23}[\beta]
& \iff & b_1 \diamond c_1 = b_2 \diamond c_2\\
& \iff & b_1^{-1}c_1 = b_2^{-1}c_2\\
& \iff & \mbox{$\alpha$ and $\beta$ are in the same right coset of $K_{23}$,}
\end{eqnarray*}
where $K_{23} = \{(t_1,t_2,t_2) \mid t_1 \in T,\ t_2 \in T\}$. In other words,
$L^{23}$ is the coset partition of $T^3$ defined by $K_{23}$.
Since $a \circ b = a^{-1}b$, a similar argument shows that $L^{12}$ is the
coset partition of $T^3$ defined by $K_{12}$, where
$K_{12} = \{(t_1,t_1,t_2) \mid t_1 \in T,\ t_2 \in T\}$.
Equation~(\ref{eq:plodonon}) shows that the kernel of the function
$(c,a) \mapsto c \star a$ is the same as the kernel of the function
$(c,a) \mapsto a^{-1}c$, which is in turn the same as the kernel of the function
$(c,a) \mapsto c^{-1}a$. It follows that $L^{13}$ is the
coset partition of $T^3$ defined by $K_{13}$, where
$K_{13} = \{(t_1,t_2,t_1) \mid t_1 \in T,\ t_2 \in T\}$.
Thus the partitions $P_i$ and $L^{ij}$ are the partitions of $T^3$
into right cosets of the subgroups as shown in Table~\ref{tab:coset}.
Lemma~\ref{lem:lc4} shows that the letter partition~$L$ is equal to
$L^{ij} \wedge L^{ik}$ whenever $\{i,j,k\} = \{1,2,3\}$.
Consequently, $L$ is the partition into right cosets of the diagonal
subgroup $\{(t,t,t) \mid t \in T\}$.
\end{proof}
\begin{table}[htbp]
\[
\begin{array}{crcl}
\mbox{Partition} & \multicolumn{3}{c}{\mbox{Subgroup of $T^3$}}\\
\hline
P_1 & & & \{(1,t_2,t_3)\mid t_2 \in T, \ t_3 \in T\}\\
P_2 & & & \{(t_1,1,t_3) \mid t_1 \in T, \ t_3 \in T\}\\
P_3 & & & \{(t_1,t_2,1)\mid t_1 \in T, \ t_2 \in T\}\\
L^{12} & K_{12} & = & \{(t_1,t_1,t_3) \mid t_1 \in T, \ t_3 \in T\}\\
L^{13} & K_{13} & = & \{(t_1,t_2,t_1) \mid t_1 \in T, \ t_2 \in T\}\\
L^{23} & K_{23} & = & \{(t_1,t_2,t_2) \mid t_1 \in T, \ t_2 \in T\}\\
\hline
P_1\wedge P_2 & & & \{(1,1,t)\mid t\in T\}\\
P_1\wedge P_3 & & & \{(1,t,1)\mid t\in T\}\\
P_2\wedge P_3 & & & \{(t,1,1)\mid t\in T\}\\
L & \delta(T,3) & = & \{(t,t,t)\mid t\in T\}
\end{array}
\]
\caption{Coset partitions at the end of the proof of Theorem~\ref{thm:bingo}
and some infima}
\label{tab:coset}
\end{table}
The converse of Theorem~\ref{thm:bingo} was given in Theorem~\ref{th:upfront}.
For $\{i,j,k\}= \{1,2,3\}$, let $H_i$ be the intersection of the subgroups of
$T^3$ corresponding to partitions $P_i$ and $L^{jk}$ in Table~\ref{tab:coset},
so that the parts of $P_i \wedge L^{jk}$ are the right cosets of $H_i$.
Then $H_1 = \{(1,t,t)\mid t \in T\}$ and $H_2 = \{(u,1,u)\mid u \in T\}$. If
$T$ is abelian then $H_1H_2=H_2H_1$ and so the right-coset partitions
of $H_1$ and $H_2$ are compatible. If $T$ is not abelian then $H_1H_2 \ne
H_2H_1$ and so these coset partitions are not compatible. Because we do not
want to restrict our theory to abelian groups, we do not require our collection
of partitions to be closed under infima. Thus we require a join-semilattice
rather than a lattice.
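The computation behind the abelian/non-abelian dichotomy above is short:
\[
H_1H_2=\{(1,t,t)(u,1,u)\mid t,u\in T\}=\{(u,t,tu)\mid t,u\in T\},
\]
while $H_2H_1=\{(u,t,ut)\mid t,u\in T\}$. In each of these two sets the first two coordinates determine the third, so $H_1H_2=H_2H_1$ if and only if $tu=ut$ for all $t$, $u$ in~$T$.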
\subsection{Automorphism groups}
\begin{theorem}
Suppose that a regular Latin cube $M$ of sort (LC2) arises from a group $T$
by the construction of Theorem~\ref{th:upfront}. Then the group of
automorphisms of $M$ is equal to the diagonal group $D(T,3)$.
\label{t:autDT3}
\end{theorem}
\begin{proof}[Proof (sketch)]
It is clear from the proof of Theorem~\ref{th:upfront} that $D(T,3)$ is
a subgroup of $\operatorname{Aut}(M)$, and we have to prove equality.
Just as in the proof of Theorem~\ref{t:autDT2}, if $G$~denotes the
automorphism group of~$M$, then it suffices to prove that the group of strong
automorphisms of~$M$ fixing the cell $(1,1,1)$ is equal to $\operatorname{Aut}(T)$.
In the proof of Theorem~\ref{thm:bingo}, we choose a part of the partition
$P_1$ which will play the role of the identity of $T$, and using the partitions
we find bijections between the parts of the maximal partitions and show that
each naturally carries the structure of the group $T$. It is clear that
any automorphism of the Latin cube which fixes $(1,1,1)$ will preserve these
bijections, and hence will be an automorphism of $T$. So we have equality.
\end{proof}
\begin{remark}
We will give an alternative proof of this theorem in the next section, in
Theorem~\ref{t:autDTm}.
\end{remark}
\section{Diagonal groups and diagonal semilattices}
\label{sec:diag}
\subsection{Diagonal semilattices}\label{sec:diag1}
Let $T$ be a group, and $m$ be an integer with $m\geqslant2$. Take $\Omega$ to
be the group~$T^m$. Following our convention in Section~\ref{sect:diaggroups},
we will now denote elements of $\Omega$ by $m$-tuples in square brackets.
Consider the following subgroups of $\Omega$:
\begin{itemize}
\item for $1\leqslant i\leqslant m$, $T_i$ is the $i$th coordinate subgroup, the set
of $m$-tuples with $j$th entry $1$ for $j\ne i$;
\item $T_0$ is the diagonal subgroup $\delta(T,m)$ of $T^m$, the set
$\{[t,t,\ldots,t] \mid t\in T\}$.
\end{itemize}
Let $Q_i$ be the partition of $\Omega$ into right cosets of $T_i$ for
$i=0,1,\ldots,m$.
Observe that, by Theorem~\ref{thm:bingo}, the partitions $P_2\wedge P_3$,
$P_1\wedge P_3$, $P_1\wedge P_2$ and $L$ arising from a regular Latin cube
of sort (LC2) are the coset partitions defined by the four subgroups $T_1$,
$T_2$, $T_3$, $T_0$ of $T^3$ just described in the case $m=3$ (see the last
four rows of Table~\ref{tab:coset}).
\begin{prop}
\label{p:diagsemi}
\begin{enumerate}
\item The set $\{Q_0,\ldots,Q_m\}$ is invariant under the diagonal
group $D(T,m)$.
\item Any $m$ of the partitions $Q_0,\ldots,Q_m$ generate a
Cartesian lattice on $\Omega$ by taking suprema.
\end{enumerate}
\end{prop}
\begin{proof}
\begin{enumerate}
\item It is clear that the set of partitions is invariant under
right translations by elements of $T^m$ and left translations by elements of
the diagonal subgroup $T_0$, by automorphisms of $T$ (acting in the same
way on all coordinates), and under the symmetric group $S_m$ permuting the
coordinates. Moreover, it can be checked that the map
\[[t_1,t_2,\ldots,t_m]\mapsto[t_1^{-1},t_1^{-1}t_2,\ldots,t_1^{-1}t_m]\]
interchanges $Q_0$ and $Q_1$ and fixes the other partitions. So we have
the symmetric group $S_{m+1}$ acting on the whole set
$\{Q_0,\ldots,Q_m\}$. These transformations generate the diagonal group
$D(T,m)$; see Remark~\ref{rem:diaggens}.
\item The set $T^m$ naturally has the structure of an $m$-dimensional
hypercube, and $Q_1,\ldots,Q_m$ are the minimal partitions in the
corresponding Cartesian lattice. For any other set of $m$ partitions,
the assertion follows because the symmetric group $S_{m+1}$ preserves
the set of $m+1$ partitions.
\end{enumerate}
\end{proof}
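The interchange of $Q_0$ and $Q_1$ in part~(a) can be checked directly: the map displayed there sends the part $\{[ta_1,ta_2,\ldots,ta_m]\mid t\in T\}$ of $Q_0$ to $\{[s,a_1^{-1}a_2,\ldots,a_1^{-1}a_m]\mid s\in T\}$, a part of $Q_1$; it sends the part $\{[s,a_2,\ldots,a_m]\mid s\in T\}$ of $Q_1$ to $\{[u,ua_2,\ldots,ua_m]\mid u\in T\}$, a part of $Q_0$; and it sends each part of $Q_i$ to a part of $Q_i$ when $i\geqslant2$.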
\begin{defn}
Given a group~$T$ and an integer~$m$ with $m\geqslant 2$, define the partitions
$Q_0$, $Q_1$, \ldots, $Q_m$ as above.
For each subset $I$ of $\{0, \ldots, m\}$, put $Q_I = \bigvee_{i\in I}Q_i$.
The \emph{diagonal semilattice} $\mathfrak{D}(T,m)$ is the set
$\{Q_I \mid I \subseteq \{0,1,\ldots,m\}\}$ of partitions of the set $T^m$.
\end{defn}
Thus the diagonal semilattice $\mathfrak{D}(T,m)$ is the set-theoretic union
of the ${m+1}$ Cartesian lattices in Proposition~\ref{p:diagsemi}(b).
Clearly it admits the diagonal group $D(T,m)$ as a group of automorphisms.
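For example, when $m=2$ (reading $Q_\emptyset$ as $E$), Proposition~\ref{p:diagsemi}(b) shows that $Q_I=U$ whenever $|I|\geqslant2$, so $\mathfrak{D}(T,2)=\{E,Q_0,Q_1,Q_2,U\}$; this is the set of partitions of a Latin square isotopic to the Cayley table of $T$, the letter in cell $[t_1,t_2]$ being $t_1^{-1}t_2$. When $m=3$, the distinct members of $\mathfrak{D}(T,3)$ are $E$, the four minimal partitions $Q_i$, the six joins $Q_i\vee Q_j$, and $U$: twelve partitions in all.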
\begin{prop}
\label{p:dsjs}
$\mathfrak{D}(T,m)$ is a join-semilattice, that is, closed under taking
joins. For $m>2$ it is not closed under taking meets.
\end{prop}
\begin{proof}
For each proper subset $I$ of $\{0, \ldots, m\}$, the partition~$Q_I$ occurs
in the Cartesian lattice generated by $\{Q_i \mid i \in K\}$
for every subset $K$ of $\{0,\ldots,m\}$ which contains $I$ and has
cardinality~$m$.
Let $I$ and $J$ be two proper subsets of $\{0,\ldots,m\}$. If
$\left |I \cup J\right| \leqslant m$ then there is a subset~$K$ of
$\{0, \ldots, m\}$ with $\left|K\right|=m$ and $I\cup J \subseteq K$.
Then $Q_I\vee Q_J = Q_{I\cup J}$ in the Cartesian lattice defined by $K$,
and this supremum does not depend on the choice of $K$. Therefore
$Q_I\vee Q_J \in \mathfrak{D}(T,m)$.
On the other hand, if $I\cup J=\{0,\ldots,m\}$, then
\[Q_I\vee Q_J = Q_0 \vee Q_1 \vee \cdots \vee Q_m \succcurlyeq
Q_1 \vee Q_2 \vee \cdots \vee Q_m = U.
\]
Hence $Q_I\vee Q_J=U$, and so $Q_I\vee Q_J\in \mathfrak{D}(T,m)$.
If $m=3$, consider the subgroups
\[H=T_0T_1=\{[x,y,y] \mid x,y\in T\}\quad\mbox{ and }
\quad K=T_2T_3=\{[1,z,w] \mid z,w\in T\}.\]
If $P_H$ and $P_K$ are the corresponding coset partitions, then
\[P_H=Q_{\{0,1\}}\quad \mbox{ and } \quad P_K=Q_{\{2,3\}},\]
which are both in $\mathfrak{D}(T,3)$. Now, by Proposition~\ref{prop:coset},
\[P_H\wedge P_K=P_{H\cap K},\]
where $H\cap K=\{[1,y,y] \mid y\in T\}$; this is a subgroup of $T^3$, but the
coset partition $P_{H\cap K}$ does not belong to $\mathfrak{D}(T,3)$. This example is
easily generalised to larger values of $m$.
\end{proof}
When $T$~is finite, Propositions~\ref{p:diagsemi}(b) and~\ref{p:dsjs}
show that $\mathfrak{D}(T,m)$ is a Tjur block structure but is not an
orthogonal block structure when $m>2$ (see Section~\ref{sect:moreparts}).
We will see in the next section that the property in
Proposition~\ref{p:diagsemi}(b) is exactly what is required for the
characterisation of diagonal semilattices. First, we extend
Definition~\ref{def:weak}.
\begin{defn}\label{def:isomsl}
For $i=1$, $2$, let $\mathcal{P}_i$ be a finite set of partitions of a
set $\Omega_i$. Then $\mathcal{P}_1$ is \textit{isomorphic} to
$\mathcal{P}_2$ if there is a bijection $\phi$ from $\Omega_1$ to $\Omega_2$
which induces a bijection from $\mathcal{P}_1$ to $\mathcal{P}_2$ which
preserves the relation $\preccurlyeq$.
\end{defn}
As we saw in Section~\ref{sec:LS}, this notion of isomorphism
is called \textit{paratopism} in the context of Latin squares.
\medskip
The remark before Proposition~\ref{p:diagsemi} shows that a regular Latin
cube of sort (LC2) ``generates'' a diagonal semilattice $\mathfrak{D}(T,3)$
for a group $T$, unique up to isomorphism. The next step is to consider larger
values of $m$.
\subsection{The theorem}\label{sect:mt}
We repeat our axiomatisation of diagonal structures from the introduction.
We emphasise to the reader that we do not assume a Cartesian decomposition on
the set $\Omega$ at the start; the $m+1$ Cartesian decompositions are imposed by
the hypotheses of the theorem, and none is privileged.
\begin{theorem}\label{th:main}
Let $\Omega$ be a set with $|\Omega|>1$, and $m$ an integer at least $2$. Let $Q_0,\ldots,Q_m$
be $m+1$ partitions of $\Omega$ satisfying the following property: any $m$
of them are the minimal non-trivial partitions in a Cartesian lattice on
$\Omega$.
\begin{enumerate}
\item If $m=2$, then the three partitions are the row, column, and letter
partitions of a Latin square on $\Omega$, unique up to paratopism.
\item If $m>2$, then there is a group $T$, unique up to group isomorphism,
such that $Q_0,\ldots,Q_m$ are the minimal non-trivial partitions in a diagonal
semilattice $\mathfrak{D}(T,m)$ on $\Omega$.
\end{enumerate}
\end{theorem}
Note that the converse of the theorem is true: Latin squares (with ${m=2}$)
and diagonal semilattices have the property that their minimal non-trivial
partitions do satisfy our hypotheses.
The general proof for $m\geqslant 3$ is by induction, the base case being $m=3$.
The base case follows from Theorem~\ref{thm:bingo}, as discussed in the
preceding subsection, while the induction step
is given in Subsection~\ref{s:mtinduction}.
\subsection{Setting up}
First, we give some notation.
Let $\mathcal{P}$ be a set of partitions of $\Omega$,
and $Q$ a partition of~$\Omega$. We denote by $\mathcal{P}/\!\!/ Q$ the
following object: take all partitions $P\in\mathcal{P}$ which satisfy
$Q\preccurlyeq P$; then regard each such $P$ as a partition, not of~$\Omega$, but
of~$Q$ (that is, of the set of parts of $Q$).
Then $\mathcal P/\!\!/ Q$ is the set of these partitions of~$Q$.
(We do not write this as $\mathcal{P}/Q$, because this notation has almost the
opposite meaning in the statistical literature cited in
Section~\ref{sec:prelim}.)
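For example, if $\Omega$ is an $n\times n$ grid, $R$ and $C$ are its partitions into rows and columns, and $\mathcal{P}=\{E,R,C,U\}$, then the only members of $\mathcal{P}$ above $R$ are $R$ and $U$, so $\mathcal{P}/\!\!/ R$ consists of the equality and universal partitions of the $n$-element set of rows.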
The next result is routine, but should help the reader become familiar
with this concept.
Furthermore, we will temporarily call a set $\{Q_0,\ldots,Q_m\}$ of partitions
of~$\Omega$ satisfying the hypotheses of
Theorem~\ref{th:main}
a \emph{special set of dimension $m$}.
\begin{prop}\label{p:quots}
Let $\mathcal{P}$ be a set of partitions of $\Omega$, and $Q$ a minimal
non-trivial element of $\mathcal{P}$.
\begin{enumerate}
\item If $\mathcal{P}$ is an $m$-dimensional Cartesian lattice, then
$\mathcal{P}/\!\!/ Q$ is an $(m-1)$-dimensional Cartesian lattice.
\item If $\mathcal{P}$ is the join-semilattice generated by an $m$-dimensional
special set $\mathcal{Q}$, and $Q\in\mathcal{Q}$, then $\mathcal{P}/\!\!/ Q$
is generated by a special set of dimension $m-1$.
\item If $\mathcal{P}\cong\mathfrak{D}(T,m)$ is a diagonal semilattice, then
$\mathcal{P}/\!\!/ Q\cong\mathfrak{D}(T,m-1)$.
\end{enumerate}
\end{prop}
\begin{proof}
\begin{enumerate}
\item This follows from Proposition~\ref{p:antiiso}, because if $Q=P_I$
where $I = \{1, \ldots, m\} \setminus \{i\}$
then we are effectively just limiting the set of indices to~$I$.
\item
This follows from part~(a).
\item Assume that $\mathcal P=\mathfrak D(T,m)$. Then, since $\operatorname{Aut}(\mathcal P)$
contains $D(T,m)$, which is transitive on $\{Q_0,\ldots,Q_m\}$, we may assume that $Q=Q_m$.
Thus $\mathcal P/\!\!/ Q$ is a set of partitions of $Q_m$.
In the group $T^{m+1}\rtimes\operatorname{Aut}(T)$ generated by elements of types
(I)--(III) in Remark~\ref{rem:diaggens}, the subgroup $T_m$ generated
by right multiplication of the last coordinate by elements of $T$ is normal,
and the quotient is $T^m\rtimes\operatorname{Aut}(T)$. Moreover, the subgroups $T_i$ commute
pairwise, so the parts of $Q_i\vee Q_m$ are the orbits of $T_iT_m$ (for
$i<m$) and give rise to a minimal partition in $\mathfrak{D}(T,m-1)$.
\end{enumerate}
\end{proof}
\subsection{Automorphism groups}
\label{sec:dag}
In the cases $m=2$ and $m=3$, we showed that the automorphism group of the
diagonal semilattice $\mathfrak{D}(T,m)$ is the diagonal group $D(T,m)$. The
same result holds for arbitrary $m$; but this time, we prove this result first,
since it is needed in the proof of the main theorem. The proof below also
handles the case $m=3$.
\begin{theorem}
For $m\geqslant2$, and any non-trivial group $T$, the automorphism group of the
diagonal semilattice $\mathfrak{D}(T,m)$ is the diagonal group $D(T,m)$.
\label{t:autDTm}
\end{theorem}
\begin{proof}
Our proof will be by induction on $m$. The cases $m=2$ and $m=3$ are given by
Theorems~\ref{t:autDT2} and~\ref{t:autDT3}. However, we base the induction at
$m=2$, so we provide an alternative proof for Theorem~\ref{t:autDT3}. So in
this proof we assume that $m>2$ and that the result holds with $m-1$
replacing~$m$.
Recall from Section~\ref{sect:diaggroups} that $\widehat D(T,m)$ denotes the
pre-diagonal group, so that
$D(T,m)\cong \widehat D(T,m)/ \widehat K$, with $ \widehat K$
as in~\eqref{eq:K}.
Suppose that $\sigma:\widehat D(T,m)\to D(T,m)$ is the natural projection
with $\ker\sigma=\widehat K$.
By Proposition~\ref{p:diagsemi}, we know that $D(T,m)$ is a subgroup of $\operatorname{Aut}(\mathfrak{D}(T,m))$, and we have
to show that equality holds. Using the principle of Proposition~\ref{p:subgp},
it suffices to show that the group $\operatorname{SAut}(\mathfrak{D}(T,m))$ of strong
automorphisms of $\mathfrak{D}(T,m)$ is the group $\sigma(T^{m+1}\rtimes\operatorname{Aut}(T))$
generated by
the images of the elements of the pre-diagonal group of types (I)--(III), as given in
Remark~\ref{rem:diaggens}.
Consider $Q_m$, one of the minimal partitions in $\mathfrak{D}(T,m)$, and let
$\overline\Omega$ be the set of parts of $Q_m$. For $i<m$, the collection of
subsets of $\overline\Omega$ which are the parts of $Q_m$ inside a part of
$Q_i\vee Q_m$ is a partition $\overline Q_i$ of $\overline\Omega$.
Proposition~\ref{p:quots}(c) shows that the $\overline Q_i$ are the minimal
partitions of $\mathfrak{D}(T,m-1)$, a diagonal semilattice
on~$\overline\Omega$.
Moreover, the group $\sigma(T_m)$ is the
kernel of the action of $\sigma(T^{m+1}\rtimes\operatorname{Aut}(T))$ on~$\overline\Omega$.
Further, since $T_m\cap \widehat K=1$, $\sigma(T_m)\cong T_m\cong T$.
As in Section~\ref{sect:diaggroups}, let $\widehat H$ be the
stabiliser in $\widehat D(T,m)$ of the element $[1,\ldots,1]$:
then $T_m\cap \widehat H=1$ and so $T_m$ acts faithfully and
regularly on each part of $Q_m$.
So it suffices to show that the same is true of $\operatorname{SAut}(\mathfrak{D}(T,m))$;
in other words, it is enough to show that the subgroup $H$ of $\operatorname{SAut}(\mathfrak{D}(T,m))$
fixing setwise all parts of $Q_m$
and any given point $\alpha$ of $\Omega$ is trivial.
Any $m$ of the partitions $Q_0,\ldots,Q_m$ are the minimal partitions
in a Cartesian lattice of partitions of $\Omega$. Let $P_{ij}$ denote the supremum of the partitions
$Q_k$ for $k\notin\{i,j\}$. Then, for fixed $i$, the partitions $P_{ij}$
(as $j$ runs over $\{0,\ldots,m\}\setminus\{i\}$) are the maximal partitions
of the Cartesian lattice
generated by $\{ Q_j \mid 0\leqslant j\leqslant~m \mbox{ and } j\ne i\}$
and form a Cartesian decomposition of~$\Omega$.
Hence each point of $\Omega$ is uniquely determined
by the parts of these partitions which contain it
(see Definition~\ref{def:cart}).
For distinct $i,j<m$, all parts of $P_{ij}$ are fixed by $H$, since each is a union of
parts of $Q_m$. Also, for $i<m$, the part of $P_{im}$ containing $\alpha$ is
fixed by $H$. By the defining property of the Cartesian decomposition
$\{P_{ij}\mid 0\leqslant j\leqslant m\mbox{ and }j\neq i\}$, we conclude that $H$ fixes every point lying in
the same part of $P_{im}$ as $\alpha$ and this holds for all $i<m$.
Taking $\alpha=[1,\ldots,1]$, the argument in the last two paragraphs shows
in particular that
$H$ fixes pointwise the part $P_{0m}[\alpha]$ of $P_{0m}$ and the part
$P_{1m}[\alpha]$ of $P_{1m}$ containing
$\alpha$. In other words, $H$ fixes pointwise the sets
\begin{align*}
P_{0m}[\alpha]&=\{[t_1,\ldots,t_{m-1},1]\mid t_1,\ldots,t_{m-1}\in T\}\mbox{ and}\\
P_{1m}[\alpha]&=\{[t_1,\ldots,t_{m-1},t_1]\mid t_1,\ldots,t_{m-1}\in T\}.
\end{align*}
Applying, for a given $t\in T$, the same argument to the element $\alpha'=[t,1,\ldots,1,t]$
of $P_{1m}[\alpha]$, we obtain that $H$ fixes pointwise the set
\[
P_{0m}[\alpha']=\{[t_1,\ldots,t_{m-1},t]\mid t_1,\ldots,t_{m-1}\in T\}.
\]
Letting $t$ run through the elements of $T$, the union of the
parts $P_{0m}[\alpha']$ is $\Omega$, and
this implies that $H$ fixes all elements of $\Omega$ and we are done.
\end{proof}
The particular consequence of Theorem~\ref{t:autDTm} that we require in the proof of the
main theorem is the following.
\begin{cor}\label{c:forinduction}
Suppose that $m\geqslant3$. Let $\mathcal P$ and $\mathcal P'$ be
diagonal semilattices isomorphic to $\mathfrak D(T,m)$, and let $Q$ and
$Q'$ be minimal partitions in
$\mathcal P$ and $\mathcal P'$, respectively.
Then each isomorphism $\psi:\mathcal P/\!\!/ Q\to \mathcal P'/\!\!/ Q'$
is induced by an isomorphism $\overline{\psi}: \mathcal P\to \mathcal P'$
mapping $Q$ to $Q'$.
\end{cor}
\begin{proof}
We may assume without loss of generality that $\mathcal P=\mathcal P'=\mathfrak D(T,m)$ and,
since $\operatorname{Aut}(\mathfrak D(T,m))$ induces $S_{m+1}$ on the minimal partitions
$Q_0,\ldots,Q_m$
of $\mathfrak D(T,m)$, we can also suppose that $Q=Q'=Q_m$.
Thus $\mathcal P/\!\!/ Q= \mathcal P'/\!\!/ Q'\cong \mathfrak D(T,m-1)$.
Let $\sigma:\widehat D(T,m)\to D(T,m)$ be the natural projection map,
as in the proof of
Theorem~\ref{t:autDTm}.
The subgroup of $\operatorname{Aut}(\mathfrak D(T,m))$ fixing $Q_m$ is the image
$X=\sigma(T^{m+1}\rtimes (\operatorname{Aut}(T)\times S_m))$ where the subgroup $S_m$ of $S_{m+1}$
is
the stabiliser of the point $m$ in the action on $\{0,\ldots,m\}$.
Moreover, the subgroup $X$ contains $\sigma(T_m)$, the copy of $T$ acting on the last
coordinate of the $m$-tuples, which is regular on each part
of $Q_m$. Put $Y=\sigma(T_m)$. Then $Y$~is the kernel of the induced action
of $X$ on $\mathcal P/\!\!/ Q_m$, which is isomorphic to $
\mathfrak D(T,m-1)$, and so $X/Y\cong D(T,m-1)$. Moreover since $m\geqslant 3$,
it follows from Theorem~\ref{t:autDTm} that
$X/Y = \operatorname{Aut}(\mathfrak D(T,m-1))$. Thus the given map $\psi$ in
$\operatorname{Aut}(\mathfrak D(T,m-1))$
lies in $X/Y$, and we may choose $\overline{\psi}$ as any pre-image of $\psi$ in $X$.
\end{proof}
\subsection{Proof of the main theorem}\label{s:mtinduction}
Now we begin the proof of Theorem~\ref{th:main}. The proof is by induction
on $m$. As we remarked in
Section~\ref{sect:mt},
there is nothing to prove for $m=2$, and the case $m=3$ follows from
Theorem~\ref{thm:bingo}. Thus we assume that $m\geqslant4$. The induction
hypothesis yields that the main theorem is true for dimensions~$m-1$ and~$m-2$.
Given a special set $\{Q_0,\ldots,Q_m\}$ generating a semilattice $\mathcal{P}$,
we know, by Proposition~\ref{p:quots}, that, for each $i$, $\mathcal{P}/\!\!/ Q_i$
is generated by a special
set of dimension $m-1$, and so is isomorphic to $\mathfrak{D}(T,m-1)$ for
some group $T$. Now, $T$ is independent of the choice of $i$; for, if
$\mathcal{P}/\!\!/ Q_i\cong\mathfrak{D}(T_i,m-1)$, and
$\mathcal{P}/\!\!/ Q_j\cong\mathfrak{D}(T_j,m-1)$, then,
by Proposition~\ref{p:quots}(c),
\[
\mathfrak{D}(T_i,m-2)\cong\mathcal{P} /\!\!/ (Q_i\vee Q_j)
\cong\mathfrak{D}(T_j,m-2),
\]
so by induction $T_i\cong T_j$.
(This proof works even when $m=4$, because it is the reduction to $m=3$ that
gives the groups $T_i$ and $T_j$, so that the Latin squares
$\mathfrak{D}(T_i,2)$ and $\mathfrak{D}(T_j,2)$ are both Cayley tables of groups,
and so Theorem~\ref{thm:albert} implies that $T_i\cong T_j$.)
We call $T$ the \emph{underlying group} of the special set.
\begin{theorem}
\label{th:QQ}
Let $\mathcal{Q}$ and $\mathcal{Q}'$ be special sets of dimension $m\geqslant4$
on sets $\Omega$ and $\Omega'$ with the same underlying group $T$.
Then $\mathcal{Q}$ and $\mathcal{Q'}$ are isomorphic in the sense of
Definition~\ref{def:isomsl}.
\end{theorem}
\begin{proof}
Let $\mathcal{P}$ and $\mathcal{P}'$ be the join-semilattices
generated by $\mathcal{Q}$ and $\mathcal{Q}'$ respectively,
where $\mathcal{Q} = \{Q_0, \ldots, Q_m\}$ and
$\mathcal{Q}' = \{Q'_0, \ldots, Q'_m\}$.
We consider the three partitions $Q_1$, $Q_2$, and
$Q_1\vee Q_2$. Each part of $Q_1\vee Q_2$ is partitioned by $Q_1$ and $Q_2$;
these form a $|T|\times|T|$ grid, where the parts of $Q_1$ are the rows and
the parts of $Q_2$ are the columns. We claim that
\begin{itemize}
\item There is a bijection $F_1$ from the set of parts of $Q_1$ to the set of
parts of $Q_1'$ which induces an isomorphism from $\mathcal{P} /\!\!/ Q_1$ to
$\mathcal{P}' /\!\!/ Q_1'$.
\item There is a bijection $F_2$ from the set of parts of $Q_2$ to the set of
parts of $Q_2'$ which induces an isomorphism from $\mathcal{P} /\!\!/ Q_2$ to
$\mathcal{P}' /\!\!/ Q_2'$.
\item There is a bijection $F_{12}$ from the set of parts of $Q_1\vee Q_2$ to
the set of parts of $Q_1'\vee Q_2'$ which induces an isomorphism from
$\mathcal{P} /\!\!/ (Q_1\vee Q_2)$ to $\mathcal{P}' /\!\!/ (Q_1'\vee Q_2')$;
moreover, each of $F_1$ and $F_2$, restricted to the partitions of
$\mathcal{P}/\!\!/ (Q_1\vee Q_2)$, agrees with $F_{12}$.
\end{itemize}
The proof of these assertions is as follows.
As each part of $Q_1 \vee Q_2$ is a union of parts of $Q_1$,
the partition $Q_1 \vee Q_2$ determines a partition $R_1$ of
$Q_1$ which is a minimal partition of $\mathcal P/\!\!/ Q_1$.
Similarly $Q'_1 \vee Q'_2$ determines a minimal partition $R_1'$ of $\mathcal P'/\!\!/
Q_1'$.
Then since $\mathcal P/\!\!/ Q_1\cong \mathcal P'/\!\!/ Q_1'\cong \mathfrak D(T,m-1)$,
by the induction hypothesis, as discussed above,
we may choose an isomorphism
$F_1: \mathcal P/\!\!/ Q_1\to \mathcal P'/\!\!/ Q_1'$
in the first bullet point such that $R_1$ is mapped to $R_1'$.
Now $F_1$ induces an isomorphism
$(\mathcal P/\!\!/ Q_1)/\!\!/ R_1 \to (\mathcal P'/\!\!/ Q'_1)/\!\!/ R_1'$,
and since there are natural isomorphisms from
$(\mathcal P/\!\!/ Q_1)/\!\!/ R_1$ to
$\mathcal P/\!\!/ (Q_1 \vee Q_2)$ and from
$(\mathcal P'/\!\!/ Q'_1)/\!\!/ R_1'$ to
$\mathcal P'/\!\!/ (Q'_1 \vee Q'_2)$,
$F_1$ induces an isomorphism
\[F_{12}: \mathcal P/\!\!/ (Q_1 \vee Q_2) \to
\mathcal P'/\!\!/ (Q'_1 \vee Q'_2).
\]
The join $Q_1 \vee Q_2$ determines a partition
$R_2$ of $Q_2$ which is a minimal partition of $\mathcal P/\!\!/ Q_2$, and
$Q'_1 \vee Q'_2$ determines a minimal partition $R'_2$ of
$\mathcal P'/\!\!/ Q_2'$. Further, we have natural isomorphisms from
$(\mathcal P/\!\!/ Q_2)/\!\!/ R_2$ to $\mathcal P/\!\!/ (Q_1 \vee Q_2)$ and from
$(\mathcal P'/\!\!/ Q'_2)/\!\!/ R'_2$ to $\mathcal P'/\!\!/ (Q'_1 \vee Q'_2)$,
so we may view $F_{12}$ as an isomorphism from
$(\mathcal P/\!\!/ Q_2)/\!\!/ R_2$ to $(\mathcal P'/\!\!/ Q'_2)/\!\!/ R'_2$.
By Corollary~\ref{c:forinduction}, the isomorphism $F_{12}$ is induced by an
isomorphism from $\mathcal{P} /\!\!/ Q_2$ to $\mathcal{P}' /\!\!/ Q_2'$,
and we take $F_2$ to be this isomorphism.
Thus, $F_{12}$ maps each part $\Delta$ of $Q_1\vee Q_2$ to a part $\Delta'$ of
$Q_1'\vee Q_2'$, and $F_1$ maps the rows of the grid on $\Delta$ described above to the rows of
the grid on $\Delta'$, and similarly $F_2$ maps the columns.
Now the key observation is that there is a unique bijection~$F$ from the points
of $\Delta$ to the points of $\Delta'$ which maps rows to rows (inducing~$F_1$)
and columns to columns (inducing~$F_2$). For each point of $\Delta$ is the
intersection of a row and a column, and can be mapped to the
intersection of the image row and column in $\Delta'$.
Thus, taking these maps on each part of $Q_1\vee Q_2$ and combining them,
we see that there is a unique bijection $F\colon\Omega\to\Omega'$ which induces $F_1$
on the parts of~$Q_1$ and $F_2$ on the parts of~$Q_2$. Since $F_1$ is an
isomorphism from $\mathcal{P} /\!\!/ Q_1$ to $\mathcal{P}' /\!\!/ Q_1'$,
and similarly for $F_2$, we see that
\begin{quote}
$F$ maps every element of $\mathcal{P}$ which is above \emph{either}
$Q_1$ or $Q_2$ to the corresponding element of $\mathcal{P}'$.
\end{quote}
To complete the proof, we have to deal with the remaining partitions of $\mathcal P$
and $\mathcal P'$.
We note that every partition in $\mathcal{P}$ has the form
\[Q_I=\bigvee_{i\in I}Q_i\]
for some $I\subseteq\{0,\ldots,m\}$. By the statement proved in the previous paragraph,
we may assume that $I\cap\{1,2\}=\emptyset$ and in particular that
$|I|\leqslant m-1$.
Suppose first that $|I|\leqslant m-2$. Then there is some $k\in\{0,3,\ldots,m\}$
such that $k\not\in I$. Without loss of generality we may assume that
$0\not\in I$.
Since $\{Q_1,\ldots,Q_m\}$ generates a Cartesian lattice, which is closed
under meet, we have
\[Q_I=Q_{I\cup\{1\}}\wedge Q_{I\cup\{2\}},\]
and since the partitions on the right are mapped by $F$ to $Q'_{I\cup\{1\}}$ and
$Q'_{I\cup\{2\}}$, it follows that $F$ maps $Q_I$ to $Q'_I$.
Consider finally the case when $|I|=m-1$; that is, $I=\{0,3,4,\ldots,m\}$.
As $m\geqslant 4$, we have $0, 3\in I$ and may put
$J = I\setminus \{0,3\}=\{4,\ldots,m\}$.
Then, for $i\in\{0,3\}$, $\left| J \cup \{i\} \right|= m-2$, so
the argument in the previous paragraph shows that $F$ maps $Q_{J \cup \{i\}}$
to $Q'_{J \cup \{i\}}$.
Since $Q_I = Q_{J \cup \{0\}} \vee Q_{J \cup \{3\}}$, it follows
that $F$ maps $Q_I$ to $Q'_I$.
\end{proof}
Now the proof of the main theorem follows. For let $\mathcal{Q}$ be a special
set of partitions of $\Omega$ with underlying group $T$.
By Proposition~\ref{p:diagsemi},
the set of minimal partitions in $\mathfrak{D}(T,m)$ has the same property.
By Theorem~\ref{th:QQ}, $\mathcal{Q}$~is isomorphic to this special set,
so the
join-semilattice it generates is isomorphic to~$\mathfrak{D}(T,m)$.
\section{Primitivity and quasiprimitivity}\label{s:pqp}
A permutation group is said to be \emph{quasiprimitive} if all its non-trivial
normal subgroups are transitive. In particular, primitive groups are
quasiprimitive, but a quasiprimitive group may be imprimitive. If $T$ is a
(not necessarily finite) simple group and $m\geqslant 2$, then the diagonal group
$D(T,m)$ is a primitive permutation group of simple diagonal type;
see~\cite{aschsc}, \cite{kov:sd}, or~\cite[Section~7.4]{ps:cartesian}.
In this section, we investigate the primitivity and quasiprimitivity of diagonal
groups for an arbitrary~$T$; our conclusions are in Theorem~\ref{th:primaut} in
the introduction.
The proof requires some preliminary lemmas.
A subgroup of a group~$G$ is \emph{characteristic} if it is
invariant under $\operatorname{Aut}(G)$. We say that $G$~is \emph{characteristically simple}
if its only characteristic subgroups are itself and $1$. We require some
results about abelian characteristically simple groups.
An abelian group $(T,+)$ is said to be \emph{divisible}
if, for every positive integer~$n$ and every $a\in T$,
there exists $b\in T$ such that $nb=a$. The group $T$ is
\emph{uniquely divisible} if,
for all $a\in T$ and $n\in\mathbb{N}$, the element $b\in T$ with $nb=a$ is unique. Equivalently,
an abelian group $T$ is divisible if and only if
the map $T\to T$, $x\mapsto n x$ is surjective for all $n\in\mathbb{N}$, while
$T$ is uniquely divisible if and only if the same map is bijective
for all $n\in\mathbb{N}$. Uniquely divisible groups are also referred to as
\emph{$\mathbb{Q}$-groups}. If $T$ is a uniquely divisible group,
$p\in\mathbb{Z}$, $q\in \mathbb{Z}\setminus\{0\}$ and $a\in T$, then there is
a unique $b\in T$ such that $qb=a$ and we define $(p/q)a=pb$.
This defines a $\mathbb{Q}$-vector space
structure on~$T$. Also note that any non-trivial uniquely divisible group is
torsion-free.
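For example, the additive group of $\mathbb{Q}$, and more generally any $\mathbb{Q}$-vector space, is uniquely divisible; the Prüfer group $\mathbb{Z}(p^\infty)$ is divisible but, having torsion, is not uniquely divisible; and a non-trivial elementary abelian $p$-group is not divisible at all, since multiplication by $p$ is the zero map.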
In the following lemma, elements of $T^{m+1}$ are written as
$(t_0,\ldots,t_m)$ with $t_i\in T$,
and $S_{m+1}$ is considered as the symmetric group
acting on the set $\{0,\ldots,m\}$. Moreover, we let $H$ denote
the group $\operatorname{Aut}(T)\times S_{m+1}$; then $H$ acts on $T^{m+1}$ by
\begin{equation}\label{eq:Gomegaact}
(t_0,\ldots,t_m)(\varphi,\pi)=(t_{0\pi^{-1}}\varphi,\ldots,t_{m\pi^{-1}}\varphi)
\end{equation}
for all $(t_0,\ldots,t_m)$ in $T^{m+1}$, $\varphi$ in $\operatorname{Aut}(T)$,
and $\pi$ in $S_{m+1}$.
The proof of statements (b)--(c) depends on the
assertion that bases exist in an arbitrary vector space, which is a well-known
consequence of the Axiom
of Choice. Of course, in special cases, for instance when $T$ is finite-dimensional
over $\mathbb{F}_p$ or over $\mathbb{Q}$, then the use of the Axiom of Choice can be avoided.
\begin{lem}\label{lem:charab}
The following statements hold for any non-trivial abelian
characteristically simple group~$T$.
\begin{enumerate}
\item Either $T$ is an elementary abelian $p$-group or
$T$ is a uniquely divisible group. Moreover, $T$ can be considered
as an $\mathbb{F}$-vector space, where $\mathbb{F}=\mathbb{F}_p$ in the first
case, while $\mathbb F=\mathbb{Q}$ in the second case.
\item $\operatorname{Aut} (T)$ is transitive on the set $T\setminus\{0\}$.
\item Suppose that $m\geqslant 1$ and put
\begin{align*}
\Delta&=\delta(T,m+1)=\{(t,\ldots,t)\in T^{m+1}\mid t\in T\}\mbox{ and }\\
\Gamma&=\left\{(t_0,\ldots,t_m)\in T^{m+1}\mid \sum_{i=0}^mt_i=0\right\}.
\end{align*}
Then $\Delta$ and $\Gamma$ are $H$-invariant subgroups of $T^{m+1}$.
Furthermore, precisely one of the following holds.
\begin{enumerate}
\item $T$ is an elementary abelian $p$-group where $p\mid(m+1)$,
so that $\Delta\leqslant \Gamma$. In particular, $\Gamma/\Delta$ is an
$H$-invariant subgroup of $T^{m+1}/\Delta$, which is non-trivial and proper if $m\geqslant2$.
\item Either $T$ is uniquely divisible or $T$ is an elementary
abelian $p$-group with $p\nmid (m+1)$. Further, in this case,
$T^{m+1}=\Gamma\oplus \Delta$ and $\Gamma$ has no proper, non-trivial
$H$-invariant subgroup.
\end{enumerate}
\end{enumerate}
\end{lem}
\begin{proof}
\begin{enumerate}
\item
First note that, for $n\in\mathbb{N}$, both the image $nT$
and the kernel $\{t\in T\mid nt=0\}$ of the map $t\mapsto nt$ are
characteristic subgroups of $T$.
If $T$ is not a divisible group, then there exist $n\in\mathbb{N}$ and
$a\in T$ such that $a \notin nT$. Thus
$nT\neq T$, and hence, since $T$ is characteristically simple, $nT=0$.
In particular, $T$ contains a non-zero element of finite order,
and hence $T$ also contains an element of order $p$ for some prime~$p$.
Since $T$ is abelian, the set $Y=\{t\in T\mid pt=0\}$ is a non-trivial
characteristic subgroup, and so $Y=T$; that is, $T$ is an
elementary abelian $p$-group and it can be
regarded as an $\mathbb F_p$-vector space.
Hence we may assume that $T$ is a non-trivial divisible group. That is,
$nT=T$ for all $n\in\mathbb{N}$, but also, as $T$ is characteristically simple,
$\{t\in T\mid nt=0\}=\{0\}$
for all $n\in \mathbb{N}$. Hence $T$ is uniquely divisible. In this case, $T$ can be viewed
as a $\mathbb{Q}$-vector space, as explained before the statement of this lemma.
\item
By part~(a), $T$ can be considered as a vector space over some field
$\mathbb F$. If $a,b\in T\setminus\{0\}$, then, by extending the sets $\{a\}$ and
$\{b\}$ into $\mathbb F$-bases, we can construct an $\mathbb F$-linear
transformation that takes $a$ to $b$.
\item
The definition of $\Delta$ and $\Gamma$ implies that they are
$H$-invariant, and also that, if $T$ is an elementary abelian $p$-group
such that $p$ divides $m+1$, then $\Delta<\Gamma$, and so $\Gamma/\Delta$ is a
proper $H$-invariant subgroup of $T^{m+1}/\Delta$.
Assume now that
either $T$ is uniquely divisible or $T$ is a $p$-group with $p\nmid(m+1)$.
Then $T^{m+1}=\Delta\oplus \Gamma$ where the decomposition is into the direct
sum of $H$-modules. It suffices to show that,
if $\mathbf{a}=(a_0,\ldots,a_m)$ is a non-trivial element of $\Gamma$,
then the smallest
$H$-invariant subgroup $X$ that contains
$\mathbf{a}$ is equal to $\Gamma$.
The non-zero element $\mathbf a$ of $\Gamma$ cannot be of the form $(b,\ldots,b)$
for $b\in T\setminus\{0\}$,
because $(m+1)b\neq 0$ whether $T$ is uniquely divisible or $T$ is a $p$-group
with $p\nmid(m+1)$. In
particular there exist distinct $i,j$ in $\{0,\ldots,m\}$
such that $a_i\neq a_j$.
Applying an element $\pi$ in $S_{m+1}$,
we may assume without loss of generality
that $a_0\neq a_1$. Applying the transposition $(0,1)\in S_{m+1}$,
we have that
$(a_1,a_0,a_2,\ldots,a_m)\in X$, and so
\[
(a_0,a_1,a_2,\ldots,a_m)-(a_1,a_0,a_2,\ldots,a_m)=(a_0-a_1,a_1-a_0,0,\ldots,0)\in X.
\]
Hence there is a non-zero element $a\in T$ such that $(a,-a,0,\ldots,0)\in X$.
By part~(b), $\operatorname{Aut}(T)$ is transitive on non-zero
elements of $T$ and hence $(a,-a,0,\ldots,0)\in X$ for
all $a\in T$. As $S_{m+1}$ is transitive on pairs of indices $i,j\in\{0,\ldots,m\}$ with
$i\neq j$, this implies that
all elements of the form $(0,\ldots,0,a,0,\ldots,0,-a,0,\ldots,0)\in T^{m+1}$ belong
to $X$, but these elements generate $\Gamma$, and so $X=\Gamma$, as required.
\end{enumerate}
\end{proof}
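To illustrate the dichotomy in part~(c), take $T=\mathbb{Z}_2$ and $m=3$: then $p=2$ divides $m+1=4$, every diagonal element $(t,t,t,t)$ has coordinate sum $4t=0$, and indeed $\Delta\leqslant\Gamma$. By contrast, for $T=\mathbb{Z}_3$ and $m=3$ we have $p\nmid m+1$; the coordinate sum of $(t,t,t,t)$ is $4t=t$, so $\Delta\cap\Gamma=\{0\}$ and $T^4=\Gamma\oplus\Delta$.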
Non-abelian characteristically simple groups are harder to describe.
A direct product of pairwise isomorphic non-abelian simple groups is
characteristically simple.
Every finite characteristically simple group is of this form, but in the
infinite case this is not true; the first example of a
characteristically simple group not of this form was published by
McLain~\cite{mclain} in 1954, see also Robinson~\cite[(12.1.9)]{djsr}.
\medskip
Now we work towards the main result of this section, the classification of
primitive or quasiprimitive diagonal groups. First we do the case where $T$ is
abelian.
\begin{lem}\label{lem:prabreg}
Let $G$ be a permutation group on a set $\Omega$ and let $M$ be an
abelian regular normal subgroup of $G$. If $\omega\in\Omega$, then
$G=M\rtimes G_\omega$ and the following are
equivalent:
\begin{enumerate}
\item $G$ is primitive;
\item $G$ is quasiprimitive;
\item $M$ has no proper non-trivial subgroup which is invariant under
conjugation by elements of $G_\omega$.
\end{enumerate}
\end{lem}
\begin{proof}
The product decomposition $G=MG_\omega$ follows from the transitivity of $M$, while
$M\cap G_\omega=1$ follows from the regularity of $M$. Hence $G=M\rtimes G_\omega$.
Assertion~(a) clearly implies assertion~(b). The fact that (b) implies (c) follows
from~\cite[Theorem~3.12(ii)]{ps:cartesian} by noting that $M$, being abelian, has no non-trivial inner automorphisms.
Finally, that (c) implies (a) follows directly from~\cite[Theorem~3.12(ii)]{ps:cartesian}.
\end{proof}
To handle the case where $T$ is non-abelian, we need the following definition
and lemma.
A group $X$ is said to be \emph{perfect} if $X'=X$,
where $X'$ denotes the commutator subgroup.
The following lemma is Lemma 2.3 in \cite{charfact}, where the proof can be
found. For $X=X_1\times\cdots\times X_k$ a direct product of groups and
$S\subseteq\{1,\ldots,k\}$, we denote by $\pi_S$ the projection
from $X$ onto $\prod_{i\in S}X_i$.
\begin{lem}\label{comminside}
Let $k$ be a positive integer, let $X_1,\ldots,X_k$ be groups, and suppose, for
$i\in \{1,\ldots,k\}$, that $N_i$ is a perfect subgroup of $X_i$.
Let $X=X_1\times\cdots\times X_k$ and let $K$ be a subgroup of $X$ such that for
all $i$, $j$ with $1\leqslant i<j\leqslant k$, we have
$N_i\times N_j\leqslant \pi_{\{i,j\}}(K)$. Then $N_1\times\cdots\times N_k\leqslant K$.
\end{lem}
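The hypothesis that the subgroups $N_i$ are perfect cannot be dropped. For example, take $k=3$ and $X_i=N_i=\mathbb{Z}_p$, and let $K=\{(a,b,c)\in\mathbb{Z}_p^3\mid a+b+c=0\}$: then $\pi_{\{i,j\}}(K)=N_i\times N_j$ for all $i<j$, but $K$ does not contain $N_1\times N_2\times N_3$.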
Now we are ready to prove Theorem~\ref{th:primaut}. In this proof, $G$ denotes
the group $D(T,m)$ with $m\geqslant2$. As defined earlier in this section, we let
$H=A\times S$, where $A=\operatorname{Aut}(T)$ and $S=S_{m+1}$.
Various properties of diagonal groups whose proofs are straightforward are
used without further comment.
\begin{proof}[Proof of Theorem~\ref{th:primaut}]
We prove (a)~$\Rightarrow$~(b)~$\Rightarrow$~(c)~$\Rightarrow$~(a).
\begin{itemize}
\item[(a)$\Rightarrow$(b)] Clear.
\item[(b)$\Rightarrow$(c)] We show that $T$ is characteristically simple
by proving the contrapositive. Suppose that $N$ is
a non-trivial proper characteristic subgroup of $T$.
Then $N^{m+1}$ is a normal subgroup of $G$, as is readily
checked. We claim that the orbit of the point $[1,1,\ldots,1]\in\Omega$
under $N^{m+1}$ is $N^m$. We have to check that this set is fixed by right
multiplication by $N^m$ (this is clear, and it is also clear that it is a
single orbit), and that
left multiplication of every coordinate by a fixed element
of $N$ fixes $N^m$ (this is also clear). So $D(T,m)$ has an intransitive
normal subgroup, and is not quasiprimitive.
If $T$ is abelian, then it is either an elementary abelian $p$-group or
uniquely divisible. In the former case, if $p\mid(m+1)$, the subgroup
$\Gamma$ from Lemma~\ref{lem:charab} acts intransitively
on $\Omega$, and is normalised by
$H$; so $G$ is not
quasiprimitive, by Lemma~\ref{lem:prabreg}. (The image of $[0,\ldots,0]$
under the element $(t_0,\ldots,t_m)\in\Gamma$ is
$[t_1-t_0,t_2-t_0,\ldots,t_m-t_0]$, which has coordinate sum zero since
$-mt_0=t_0$. So the orbit of $\Gamma$ consists of $m$-tuples with coordinate
sum zero.)
\item[(c)$\Rightarrow$(a)] Assume that $T$ is characteristically simple, and
not an elementary abelian $p$-group for which $p\mid(m+1)$.
If $T$ is abelian, then it is either uniquely divisible or an elementary
abelian $p$-group with $p\nmid(m+1)$. Then
Lemma~\ref{lem:charab}(c) applies; $T^{m+1}=\Gamma\oplus\Delta$, where
$\Delta$ is the kernel of the action of $T^{m+1}$ on $\Omega$, and $\Gamma$ contains no
proper non-trivial $H$-invariant subgroup; so by Lemma~\ref{lem:prabreg},
$G$ is primitive.
So we may suppose that $T$ is non-abelian and characteristically simple.
Then $Z(T)=1$, and so $T^{m+1}$ acts faithfully on $\Omega$,
and its subgroup $R=T^m$ (the set of elements of $T^{m+1}$ of the form
$(1,t_1,\ldots,t_m)$) acts regularly.
Let $L=\{(t_0,1,\ldots,1) \mid t_0\in T\}$.
Put $N=T^{m+1}$. Then $RL=LR=N \cong L\times R$.
We identify $L$ with $T_0$ and $R$ with $T_1 \times \cdots \times T_m$.
Then $N$ is normal in $G$, and $G=NH$.
Let $\omega=[1,\ldots,1]\in\Omega$ be fixed. Then
$G_\omega=H$ and $N_\omega=I$, where $I$ is the subgroup of $A$
consisting of inner automorphisms of~$T$.
To show that $G$ is primitive on $\Omega$, we show that $G_\omega$ is a
maximal subgroup of $G$. So let $X$ be a subgroup of $G$ that properly
contains $G_\omega$. We will show that $X=G$.
Since $S\leqslant X$, we have that $X=(X\cap (NA))S$.
Similarly, as $N_\omega A \leqslant X \cap (NA)$, we find that
$X \cap (N A) = (X \cap N) A$.
So $X = (X \cap N) (A S) = (X \cap N) G_\omega$.
Then, since $G_\omega$ is a proper subgroup of $X$ and $G_\omega \cap N = N_\omega$,
it follows that $X \cap N$ properly contains $N_\omega$.
Set $X_0=X\cap N$.
Thus there exists a pair $(i,j)$ of distinct indices
and an element $(u_0,u_1,\ldots,u_m)$ in $X_0$ such that $u_i\neq u_j$. Since
$(u_i^{-1},\ldots,u_i^{-1}) \in X_0$, it follows that there exists an
element $(t_0,t_1,\ldots,t_m)\in X_0$ such that $t_i=1$ and $t_j\neq~1$.
Since $S\cong S_{m+1}$ normalises $N_\omega A$ and permutes the
direct factors of $N=T_0\times T_1\times \cdots \times T_m$ naturally,
we may assume without loss of generality that $i=0$ and $j=1$, and hence that
there exists an
element $(1,t_1,\ldots,t_m)\in X_0$ with $t_1\neq 1$; that is,
$T_1\cap\pi_{0,1}(X_0)\neq 1$,
where $\pi_{0,1}$ is the projection from $N$ onto $T_0\times T_1$.
If $\psi\in A$, then $\psi$ normalises $X_0$ and acts
coordinatewise on $T^{m+1}$; so $(1,t_1^\psi,\ldots,t_m^\psi)\in X_0$, so that
$t_1^\psi\in T_1\cap \pi_{0,1}(X_0)$. Now,
$\{t_1^\psi \mid \psi \in A\}$ generates a characteristic subgroup of~$T_1$.
Since $T_1$ is characteristically simple, $T_1\leqslant\pi_{0,1}(X_0)$. A
similar argument shows that $T_0\leqslant \pi_{0,1}(X_0)$. Hence
$T_0\times T_1=\pi_{0,1}(X_0)$. Since the group $S\cong S_{m+1}$ acts
$2$-transitively on the direct factors of $N$, and since $S$ normalises $X_0$
(as $S< G_\omega<X$), we
obtain, for all distinct $i,\ j\in\{0,1,\ldots,m\}$, that
$\pi_{i,j}(X_0)=T_i\times T_j$ (where $\pi_{i,j}$ is the projection onto
$T_i\times T_j$).
Since the $T_i$ are non-abelian characteristically simple groups, they are
perfect. Therefore Lemma~\ref{comminside} implies that $X_0=N$, and hence
$X=(X_0A)S=G$. Thus $G_\omega$ is a maximal subgroup of $G$, and $G$ is
primitive, as required.
\end{itemize}
\end{proof}
In the case $m=1$, diagonal groups behave a little differently. If $T$ is
abelian, then the diagonal group is simply the holomorph of $T$, which is
primitive (and hence quasiprimitive) if and only if $T$ is characteristically
simple. The theorem is true as stated if $T$ is non-abelian, in which case
the diagonal group is the permutation group on $T$ generated by left and right
multiplication, inversion, and automorphisms of~$T$.
\section{The diagonal graph}\label{s:diaggraph}
The diagonal graph is a graph which stands in a similar relation to the
diagonal semilattice as the Hamming graph does to the Cartesian lattice.
In this section, we define it, show that apart from a few small cases its
automorphism group is the diagonal group, and investigate some of its
properties, including its connection with the permutation group property
of \emph{synchronization}.
We believe that this is an interesting class of graphs, worthy of study by
algebraic graph theorists. The graph $\Gamma_D(T,m)$ has appeared in some
cases: when $m=2$ it is the Latin-square graph associated with the Cayley
table of~$T$, and when $T=C_2$ it is the \emph{folded cube}, a
distance-transitive graph.
\subsection{Diagonal graph and diagonal semilattice}
\label{sec:dgds}
In this subsection we define the \emph{diagonal graph} $\Gamma_D(T,m)$ associated
with a diagonal semilattice $\mathfrak{D}(T,m)$. We show that, except for five
small cases (four of which we already met in the context of Latin-square graphs
in Section~\ref{sect:lsautgp}), the
diagonal semilattice and diagonal graph determine each other, and so they have
the same automorphism group, namely $D(T,m)$.
Let $\Omega$ be the underlying set of a diagonal semilattice
$\mathfrak{D}(T,m)$, for $m\geqslant2$ and for a not necessarily finite group $T$. Let $Q_0,\ldots,Q_m$ be the minimal partitions
of the semilattice (as in Section~\ref{sec:diag1}). We define the diagonal graph as follows.
The vertex set is $\Omega$; two vertices are joined if they lie in the same
part of $Q_i$ for some $i$ with $0\leqslant i\leqslant m$. Since parts of distinct $Q_j$, $Q_{j'}$ intersect in at most one point, the value of $i$ is unique. Clearly
the graph is regular with valency $(m+1)(|T|-1)$ (if $T$ is finite).
We represent the vertex set by $T^m$, with $m$-tuples in square brackets.
Then $[t_1,\ldots,t_m]$ is joined to all vertices obtained by changing one
of the coordinates, and to all vertices $[xt_1,\ldots,xt_m]$ for $x\in T$,
$x\ne1$. We say that the adjacency of two vertices differing in the $i$th
coordinate is of \emph{type $i$}, and that of two vertices differing by a
constant left factor is of \emph{type $0$}.
The semilattice clearly determines the graph. So, in particular, the group
$D(T,m)$ acts as a group of graph automorphisms.
If we discard one of the partitions $Q_i$, the remaining partitions form the
minimal partitions in a Cartesian lattice; so the corresponding edges
(those of all types other than~$i$) form a
Hamming graph (Section~\ref{sec:HGCD}). So the diagonal graph is the
edge-union of $m+1$ Hamming graphs $\operatorname{Ham}(T,m)$ on the same set of vertices.
Moreover, two vertices lying in a part of $Q_i$ lie at
maximal distance~$m$ in the Hamming graph obtained by removing $Q_i$.
\begin{theorem}
If $(T,m)$ is not $(C_2,2)$, $(C_3,2)$, $(C_4,2)$, $(C_2\times C_2,2)$, or
$(C_2,3)$, then the diagonal graph determines uniquely the diagonal semilattice.
\label{t:autdiaggraph}
\end{theorem}
\begin{proof}
We handled the case $m=2$ in Proposition~\ref{p:lsgraphaut} and the following
comments, so we can assume that $m\geqslant3$.
The assumption that $m\geqslant3$ has as a consequence that the parts of the
partitions $Q_i$ are the maximal cliques of the graph. For clearly they are
cliques. Since any clique of size $2$ or $3$ is contained in a Hamming graph,
we see that any clique of size greater than~$1$ is contained in a
maximal clique, which has this form; and it is the unique maximal clique
containing the given clique. (See the discussion of cliques in Hamming
graphs in the proof of Theorem~\ref{th:cdham}.)
So all the parts of the partitions $Q_i$ are determined by the graph; we
need to show how to decide when two cliques are parts of the same partition.
We call each maximal clique a \emph{line}; we say it is an \emph{$i$-line},
or has \emph{type~$i$}, if it is a part of $Q_i$. (So an $i$-line is a maximal
set any two of whose vertices are type-$i$ adjacent.) We have to show that the
partition of lines into types is determined by the graph structure. This
involves a closer study of the graph.
Since the graph admits $D(T,m)$, which induces the symmetric group $S_{m+1}$
on the set of types of line, we can assume (for example) that if we have
three types involved in an argument, they are types $1$, $2$ and $3$.
Call lines $L$ and $M$ \emph{adjacent} if they are disjoint but there are
vertices $x\in L$ and $y\in M$ which are adjacent. Now the following holds:
\begin{quote}
Let $L$ and $M$ be two lines.
\begin{itemize}\itemsep0pt
\item If $L$ and $M$ are adjacent $i$-lines, then every vertex in $L$ is
adjacent to a vertex in $M$.
\item If $L$ is an $i$-line and $M$ a $j$-line adjacent to $L$, with $i\ne j$,
then there are at most two vertices in $L$ adjacent to a vertex in $M$, and
exactly one such vertex if $m>3$.
\end{itemize}
\end{quote}
For suppose that two lines $L$ and $M$ are adjacent, and suppose first that
they have the same type, say type $1$, and that $x\in L$ and $y\in M$ are
on a line of type~$2$. Then $L=\{[*,a_2,a_3,\ldots,a_m]\}$ and
$M=\{[*,b_2,b_3,\ldots,b_m]\}$, where $*$ denotes an arbitrary element of $T$.
We have $a_2\ne b_2$ but $a_i=b_i$ for
$i=3,\ldots,m$. The common neighbours on the two lines
are obtained by taking the entries $*$ to be equal in the two lines.
(The conditions show that there cannot be an adjacency of type $i\ne 2$ between
them.)
Now suppose that $L$ has type~$1$ and $M$ has type~$2$, with a line of
type~$3$ joining vertices on these lines. Then we have $L=\{[*,a_2,a_3,\ldots,a_m]\}$ and
$M=\{[b_1,*,b_3,\ldots,b_m]\}$, where $a_3\ne b_3$ but $a_i=b_i$ for $i>3$;
the adjacent vertices are obtained
by putting ${*}=b_1$ in $L$ and ${*}=a_2$ in $M$.
If $m>3$, there is no adjacency of any other type between the lines.
If $m=3$, things are a little different. There is one type~$3$ adjacency between
the lines $L=\{[*,a_2,a_3]\}$ and $M=\{[b_1,*,b_3]\}$ with $a_3\ne b_3$, namely
$[b_1,a_2,a_3]$ is adjacent to $[b_1,a_2,b_3]$. There is also one type-$0$
adjacency, corresponding to multiplying $L$ on the left by $b_3a_3^{-1}$:
this makes $[x,a_2,a_3]$ adjacent to $[b_1,y,b_3]$ if and only if
$b_3a_3^{-1}x=b_1$ and $b_3a_3^{-1}a_2=y$, determining $x$ and $y$ uniquely.
So we can split adjacency of lines into two kinds: the first kind when the
edges between the two lines form a perfect matching
(so there are $|T|$ such edges); the second kind where
there are at most two such edges (and, if $m>3$, exactly one). Now two
adjacent lines have the same type if and only if the adjacency is of the first
kind. So, if either $m>3$ or $|T|>2$, the two kinds of adjacency are
determined by the graph.
Make a new graph whose vertices are the lines, two lines adjacent if their
adjacency in the preceding sense is of the first kind. Then lines in the
same connected component of this graph have the same type. The converse is
also true, as can be seen within a Hamming subgraph of the diagonal graph.
Thus the partition of lines into types is indeed determined by the graph
structure, and is preserved by automorphisms of the graph.
Finally we have to consider the case where $m=3$ and $T=C_2$. In general,
for $T=C_2$, the Hamming graph is the $m$-dimensional cube, and has a unique
vertex at distance $m$ from any given vertex; in the diagonal graph, these
pairs of antipodal vertices are joined. This is the graph known as the
\emph{folded cube} (see \cite[p.~264]{bcn}). The arguments given earlier apply
if $m\geqslant4$; but, if $m=3$, the graph is the complete bipartite graph $K_{4,4}$,
and any two disjoint edges are contained in a $4$-cycle.
\end{proof}
\begin{cor}\label{c:sameag}
Except for the cases $(T,m)=(C_2,2)$, $(C_3,2)$, $(C_2\times C_2,2)$, and
$(C_2,3)$, the diagonal semilattice $\mathfrak{D}(T,m)$ and the
diagonal graph $\Gamma_D(T,m)$ have the same automorphism group, namely
the diagonal group $D(T,m)$.
\end{cor}
\begin{proof}
This follows from Theorem~\ref{t:autdiaggraph} and the fact that
$\Gamma_D(C_4,2)$ is the complement of the Shrikhande graph, whose automorphism group is
$D(C_4,2)$: see Section~\ref{sect:lsautgp}.
\end{proof}
\subsection{Properties of finite diagonal graphs}
We have seen some graph-theoretic properties of $\Gamma_D(T,m)$ above.
In this subsection we assume that $T$ is finite and $m\geqslant2$, though we often have to exclude
the case $m=|T|=2$ (where, as we have seen, the diagonal graph is the complete
graph $K_4$).
The \emph{clique number} $\omega(\Gamma)$ of a graph~$\Gamma$
is the number of vertices in its largest clique; the
\emph{clique cover number} $\theta(\Gamma)$ is the smallest number of cliques
whose union contains every vertex; and the \emph{chromatic number}
$\chi(\Gamma)$ is the smallest number of colours required to colour the
vertices so that adjacent vertices receive different colours.
The following properties are consequences of Section~\ref{sec:dgds},
especially the proof of Theorem~\ref{t:autdiaggraph}. We give brief
explanations or pointers to each claim.
\begin{itemize}
\item There are $|T|^m$ vertices, and the valency is $(m+1)(|T|-1)$. (The
number of vertices is clear; each point $v$ lies in a unique part of size
$|T|$ in each of the $m+1$ minimal partitions of the diagonal semilattice.
Each of these parts is a maximal clique, the parts pairwise intersect
only in $v$, and the union of the parts contains all the neighbours of $v$.)
\item Except for the case $m=|T|=2$, the clique number is $|T|$, and the
clique cover number is $|T|^{m-1}$. (The parts of each minimal partition
carry maximal cliques, and thus each minimal partition realises a minimal-size
partition of the vertex set into cliques.)
\item $\Gamma_D(T,m)$ is isomorphic to $\Gamma_D(T',m')$ if and only if
$m=m'$ and $T\cong T'$. (The graph is constructed from the semilattice; and
if $m>2$, or $m=2$ and $|T|>4$, the semilattice is recovered from the graph as
in Theorem~\ref{t:autdiaggraph}; for the remaining cases, see the discussion
after Proposition~\ref{p:lsgraphaut}.)
\end{itemize}
Distances and diameter can be calculated as follows. We define two sorts of
adjacency: (A1) is $i$-adjacency for $i\ne0$, while (A2) is $0$-adjacency.
\subsubsection*{Distances in $\Gamma_D(T,m)$} We observe first that, in any
shortest path, adjacencies of fixed type occur
at most once. This is because different factors of $T^{m+1}$ commute, so
we can group those in each factor together.
We also note that distances cannot exceed $m$, since any two vertices are
joined by a path of length at most $m$ using only edges of sort (A1) (which
form a Hamming graph). So a path of smallest length is contained within a
Hamming graph.
Hence, for any two vertices $t=[t_1,\ldots,t_m]$ and $u=[u_1,\ldots,u_m]$, we
compute the distance in the graph by the following procedure:
\begin{itemize}
\item[(D1)] Let $d_1=d_1(t,u)$ be the Hamming distance between the vertices
$[t_1,\ldots,t_m]$
and $[u_1,\ldots,u_m]$. (This is the length of the shortest path not using a
$0$-adjacency.)
\item[(D2)] Calculate the quotients $u_it_i^{-1}$ for $i=1,\ldots,m$. Let
$\ell$ be the maximum number of times that a non-identity element of $T$ occurs
as one of these quotients, and set $d_2=m-\ell+1$. (We can apply left
multiplication by this common quotient to find a vertex at distance one from
$t$; then use right multiplication by $m-\ell$ appropriate elements to make the
remaining elements agree. This is the length of the shortest path using a
$0$-adjacency.)
\item[(D3)] Now the graph distance $d(t,u)=\min\{d_1,d_2\}$.
\end{itemize}
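As an illustration, the procedure is short enough to state as a program.
The following sketch (in Python; the function names are ours, and the
finite group $T$ is assumed to be supplied through explicit multiplication
and inversion maps) computes $d(t,u)$ exactly as in (D1)--(D3):
\begin{verbatim}
# Sketch of the distance procedure (D1)-(D3).  'mul', 'inv' and
# 'identity' describe the group T; t and u are m-tuples over T.
from collections import Counter

def diagonal_distance(t, u, mul, inv, identity):
    m = len(t)
    # (D1): Hamming distance, i.e. best path with no 0-adjacency.
    d1 = sum(1 for ti, ui in zip(t, u) if ti != ui)
    # (D2): most frequent non-identity quotient u_i t_i^{-1}.
    counts = Counter(mul(ui, inv(ti)) for ti, ui in zip(t, u))
    counts.pop(identity, None)
    ell = max(counts.values()) if counts else 0
    d2 = m - ell + 1
    # (D3): the graph distance is the smaller of the two.
    return min(d1, d2)
\end{verbatim}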
\subsubsection*{Diameter of $\Gamma_D(T,m)$} An easy argument shows that the diameter of the graph is
$m+1-\lceil (m+1)/|T|\rceil$ which is at most
$m$, with equality if and only if $|T|\geqslant m+1$. The bound $m$ also follows
directly from the fact that, in the previous procedure, both $d_1$
and $d_2$ are at most $m$.
If $|T|\geqslant m+1$, let $1,t_1,t_2,\ldots,t_m$ be pairwise distinct elements
of~$T$. It is easily
checked that $d([1,\ldots,1],[t_1,\ldots,t_m])=m$. For clearly $d_1=m$;
and for $d_2$ we note that all the quotients are distinct, so $\ell=1$ and
hence $d_2=m$.
\subsubsection*{Chromatic number}
This has been investigated in two special cases: the case $m=2$ (Latin-square
graphs) in \cite{ghm}, and the case where $T$ is a non-abelian finite simple
group in \cite{bccsz} in connection with synchronization.
We have not been able to compute the chromatic number in all cases;
this section describes
what we have been able to prove.
The argument in~\cite{bccsz} uses the truth of the
\emph{Hall--Paige conjecture}, established by
Wilcox~\cite{wilcox}, Evans~\cite{evans}
and Bray et al.~\cite{bccsz},
which we briefly discuss.
(See \cite{bccsz} for the history of the proof of this conjecture.)
\begin{defn}
A \emph{complete mapping} on a group $G$ is a bijection $\phi:G\to G$ for
which the map $\psi:G\to G$ given by $\psi(x)=x\phi(x)$ is also a bijection.
The map $\psi$ is the \emph{orthomorphism} associated with $\phi$.
\end{defn}
In a Latin square, a \emph{transversal} is a set of cells, one in each row,
one in each column, and one containing each letter; an \emph{orthogonal mate}
is a partition of the cells into transversals.
It is well known
(see also~\cite[Theorems~1.4.1 and 1.4.2]{DK:book})
that the following three conditions on a finite group $G$ are
equivalent. (The original proof is in \cite[Theorem~7]{paige}.)
\begin{itemize}\itemsep0pt
\item $G$ has a complete mapping;
\item the Cayley table of $G$ has a transversal;
\item the Cayley table of $G$ has an orthogonal mate.
\end{itemize}
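For very small groups the first of these conditions can be tested by
brute force; the following sketch (ours, in Python, with the group given
by an explicit operation) simply runs through all bijections $\phi$.  Its
two sample outputs are consistent with the Hall--Paige criterion stated
below.
\begin{verbatim}
# Brute-force test for a complete mapping.  Feasible only for very
# small groups, since all |G|! candidate bijections are examined.
from itertools import permutations

def has_complete_mapping(elements, mul):
    for phi in permutations(elements):
        phi_map = dict(zip(elements, phi))
        psi_image = {mul(x, phi_map[x]) for x in elements}
        if len(psi_image) == len(elements):   # x -> x*phi(x) bijective
            return True
    return False

add_mod = lambda n: (lambda a, b: (a + b) % n)
print(has_complete_mapping(range(3), add_mod(3)))  # True  (odd order)
print(has_complete_mapping(range(4), add_mod(4)))  # False (cyclic C_4)
\end{verbatim}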
The \emph{Hall--Paige conjecture} \cite{hp} (now, as noted, a theorem),
asserts the following:
\begin{theorem}\label{th:hp}
The finite group $G$ has a complete mapping if and only if either $G$ has odd
order or the Sylow $2$-subgroups of $G$ are non-cyclic.
\end{theorem}
Now let $T$~be a finite group and let $m$~be an integer greater
than~$1$, and consider the diagonal graph $\Gamma_D(T,m)$. The chromatic
number of a graph cannot be smaller than its clique number. We saw at the
start of this section that the clique number is $|T|$ unless $m=2$ and $|T|=2$.
\begin{itemize}
\item Suppose first that $m$ is odd. We give the vertex $[t_1,\ldots,t_m]$
the colour
$u_1u_2 \cdots u_m$ in $T$, where $u_i=t_i$ if $i$~is odd
and $u_i=t_i^{-1}$ if $i$~is even.
If two vertices lie in a part
of $Q_i$ with $i>0$, they differ only in the $i$th coordinate, and clearly
their colours differ. Suppose that $[t_1,\ldots,t_m]$ and $[s_1,\ldots,s_m]$
lie in the same part of $Q_0$, so that $s_i=xt_i$ for $i=1,\ldots,m$,
where $x\ne1$. Put $v_i=s_i$ if $i$ is odd and $v_i=s_i^{-1}$ if $i$~is even.
Then $v_iv_{i+1} = u_iu_{i+1}$ whenever $i$ is even, so
the colour of the second vertex is
\[v_1v_2 \cdots v_m = v_1 u_2 \cdots u_m =xu_1 u_2 \cdots u_m,\]
which is different from that of the first vertex since $x\ne1$.
\item Now suppose that $m$ is even and assume in this case that the Sylow
$2$-subgroups of $T$ are trivial or non-cyclic. Then, by
Theorem~\ref{th:hp}, $T$~has a complete mapping~$\phi$. Let $\psi$ be
the corresponding orthomorphism. We define the colour of the vertex
$[t_1,\ldots,t_m]$ to be
\[t_1^{-1}t_2t_3^{-1}t_4\cdots t_{m-3}^{-1}t_{m-2}t_{m-1}^{-1}\psi(t_m).\]
An argument similar to but a little more elaborate than in the other case
shows that this is a proper colouring. We refer to \cite{bccsz} for details.
\end{itemize}
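For odd $m$, the colouring just described can be checked mechanically on
small examples.  The sketch below (ours, in Python) works with
$T=C_n$ written additively, so that inversion becomes negation; it tests
every edge of $\Gamma_D(T,m)$:
\begin{verbatim}
# Verify that the colouring for odd m is proper on the diagonal
# graph of T = Z_n (additive notation, inverses are negatives).
from itertools import product

def colour(v, n):
    # u_i = t_i for odd i, -t_i for even i (positions from 1)
    return sum(t if i % 2 == 0 else -t
               for i, t in enumerate(v)) % n

def colouring_is_proper(n, m):
    for v in product(range(n), repeat=m):
        c = colour(v, n)
        for i in range(m):               # type-i adjacencies, i > 0
            for x in range(1, n):
                w = list(v); w[i] = (w[i] + x) % n
                if colour(w, n) == c:
                    return False
        for x in range(1, n):            # type-0 adjacencies
            if colour([(x + t) % n for t in v], n) == c:
                return False
    return True

print(colouring_is_proper(3, 3))   # True: |T| colours suffice
\end{verbatim}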
With a little more work we get the following theorem, a
contribution to the general question concerning the chromatic number of
the diagonal graphs. Let $\chi(T,m)$ denote the chromatic
number of $\Gamma_D(T,m)$.
\begin{theorem}\label{thm:chrom}
\begin{enumerate}
\item If $m$ is odd, or if $|T|$ is odd, or if the Sylow $2$-subgroups of
$T$ are non-cyclic, then $\chi(T,m)=|T|$.
\item If $m$ is even, then $\chi(T,m)\leqslant\chi(T,2)$.
\end{enumerate}
\end{theorem}
All cases in (a) were settled above; we turn to~(b).
A \emph{graph homomorphism} from $\Gamma$ to $\Delta$ is a map from the
vertex set of $\Gamma$ to that of $\Delta$ which maps edges to edges.
A proper $r$-colouring of a graph $\Gamma$ is a homomorphism from $\Gamma$
to the complete graph $K_r$. Since the composition of homomorphisms is a
homomorphism, we see that if there is a homomorphism from $\Gamma$ to
$\Delta$ then there is a colouring of $\Gamma$ with $\chi(\Delta)$ colours,
so $\chi(\Gamma)\leqslant\chi(\Delta)$.
\begin{theorem}\label{thm:diagepi}
For any $m\geqslant 3$ and non-trivial finite group $T$, there is a homomorphism from $\Gamma_D(T,m)$
to $\Gamma_D(T,m-2)$.
\end{theorem}
\begin{proof} We define a map by mapping a vertex $[t_1,t_2,\ldots,t_m]$ of
$\Gamma_D(T,m)$ to the vertex $[t_1t_2^{-1}t_3,t_4,\ldots,t_m]$ of
$\Gamma_D(T,m-2)$, and show that this map is a homomorphism. If
two vertices of $\Gamma_D(T,m)$ agree in all but position~$j$, then their
images agree in all but position $1$ (if $j\leqslant 3$) or $j-2$ (if $j>3$).
Suppose that $t_i=xs_i$ for $i=1,\ldots,m$. Then
$t_1t_2^{-1}t_3=xs_1s_2^{-1}s_3$, so the images of $[t_1,\ldots,t_m]$ and
$[s_1,\ldots,s_m]$ are joined. This completes the proof.
\end{proof}
This also completes the proof of Theorem~\ref{thm:chrom}.
\medskip
The paper \cite{ghm} reports new results on the chromatic number of a
Latin-square graph; in particular, if $|T|\geqslant 3$ then
$\chi(T,2)\leqslant 3|T|/2$. Its authors also report a conjecture of Cavenagh,
which asserts that $\chi(T,2)\leqslant |T|+2$,
and they prove this conjecture in the case where $T$ is abelian.
Payan~\cite{Payan} showed that graphs in a class he called ``cube-like''
cannot have chromatic number~$3$. Now $\Gamma_D(C_2,2)$,
which is the complete graph~$K_4$, has chromatic number~$4$; and the
folded cubes $\Gamma_D(C_2,m)$ are ``cube-like'' in Payan's sense.
It follows from Theorems~\ref{thm:chrom} and~\ref{thm:diagepi} that the
chromatic number of the folded cube $\Gamma_D(C_2,m)$ is $2$ if $m$~is odd and
$4$ if $m$~is even. So the bound in Theorem~\ref{thm:chrom}(b) is attained if
$T\cong C_2$.
\subsection{Synchronization}\label{sec:Synch}
A permutation group $G$ on a finite set $\Omega$ is said to be
\emph{synchronizing} if, for any map $f:\Omega\to\Omega$ which is not a
permutation, the transformation monoid $\langle G,f\rangle$ on $\Omega$
generated by $G$ and $f$ contains a map of rank~$1$ (that is, one which maps
$\Omega$ to a single point). For the background of this notion in automata
theory, we refer to \cite{acs:synch}.
The most important tool in the study of synchronizing groups is the following
theorem \cite[Corollary 4.5]{acs:synch}. A graph is \emph{trivial} if it
is complete or null.
\begin{theorem}\label{th:nonsynch}
A permutation group $G$ is synchronizing if and only if no non-trivial
$G$-invariant graph has clique number equal to chromatic number.
\end{theorem}
From this it immediately follows that a synchronizing group is transitive
(if $G$ is intransitive, take a complete graph on one orbit of~$G$), and
primitive (take the disjoint union of complete graphs on the blocks in a
system of imprimitivity for~$G$). Now, by the O'Nan--Scott theorem
(Theorem~\ref{thm:ons}), a
primitive permutation group preserves a Cartesian or diagonal semilattice or
an affine space, or else is almost simple.
\begin{theorem}
If a group $G$ preserves a Cartesian decomposition, then it is non-synchro\-nizing.
\end{theorem}
This holds because the Hamming graph has clique number equal to chromatic
number. (We saw in the proof of Theorem~\ref{th:cdham} that the clique number of
the Hamming graph is equal to the
cardinality of the alphabet. Take the alphabet $A$ to be an abelian group;
also use $A$ for the set of colours, and give the $n$-tuple
$(a_1,\ldots,a_n)$ the colour $a_1+\cdots+a_n$. If two $n$-tuples are
adjacent in the Hamming graph, they differ in just one coordinate, and so
get different colours.)
In \cite{bccsz}, it is shown that a primitive diagonal group whose socle
contains $m+1$ simple factors with $m>1$ is non-synchronizing.
In fact, considering Theorem~\ref{th:primaut}, the following more general result
is valid.
\begin{theorem}
If $G$ preserves a diagonal semilattice $\mathfrak{D}(T,m)$ with $m>1$ and $T$
a finite group of order greater than~$2$, then $G$ is non-synchronizing.
\end{theorem}
\begin{proof}
If $T$ is not characteristically simple then Theorem~\ref{th:primaut} implies
that $G$~is imprimitive and so it is non-synchronizing. Suppose that $T$ is
characteristically simple and let $\Gamma$ be the diagonal graph
$\Gamma_D(T,m)$. Since we have excluded the case $|T|=2$, the clique number of
$\Gamma$ is $|T|$, as we showed in the preceding subsection. Also, either $T$
is an elementary abelian group of odd order or the Sylow 2-subgroups of $T$
are non-cyclic. (This is clear unless $T$ is simple, in which case it follows
from Burnside's Transfer Theorem, see \cite[(39.2)]{asch}.) So, by
Theorem~\ref{thm:chrom}, $\chi(\Gamma)=|T|$. Now Theorem~\ref{th:nonsynch}
implies that $D(T,m)$ is non-synchronizing;
since $G\leqslant D(T,m)$, also $G$~is non-synchronizing.
\end{proof}
\begin{remark}
It follows from the above that a synchronizing permutation group must be of one
of the following types: affine (with the point stabiliser a primitive linear
group); simple diagonal with socle the product of two copies of a non-abelian
simple group; or almost simple. In the first and third cases, some but not all
such groups are synchronizing; in the second case, no synchronizing example
is known.
\end{remark}
\section{Open problems}\label{s:problems}
Here are a few problems that might warrant further investigation.
For $m\geqslant 3$, Theorem~\ref{th:main} characterised $m$-dimensional special sets of
partitions as minimal partitions in join-semilattices $\mathfrak D(T,m)$ for a
group $T$. However, for $m=2$, such special sets arise from an arbitrary quasigroup $T$.
The automorphism group of the join-semilattice generated by a 2-dimensional special
set is the autoparatopism group of the quasigroup $T$ and, for $|T|>4$,
it also coincides with
the automorphism group of the corresponding Latin-square graph
(Proposition~\ref{p:autlsg}).
Since we wrote the first draft of the paper, Michael Kinyon has pointed out to
us that the Paige loops~\cite{paige:loops} (which were shown by
Liebeck~\cite{liebeck} to be the only finite simple Moufang loops which are not
groups) have vertex-primitive autoparatopism groups.
\begin{problem}
Determine whether there exists a quasigroup $T$, not isotopic to a group or
a Paige loop, whose
autoparatopism group is primitive.
This is equivalent to requiring that the automorphism group of the corresponding
Latin-square graph is vertex-primitive; see Proposition~\ref{p:autlsg}.
\end{problem}
If $T$ is a non-abelian finite simple group and $m\geqslant 3$,
then the diagonal group $D(T,m)$ is a maximal subgroup of the
symmetric or alternating group~\cite{LPS}. What happens in the infinite
case?
\begin{problem}
Find a maximal subgroup of $\operatorname{Sym}(\Omega)$ that contains
the diagonal group $D(T,m)$ if $T$ is an infinite simple group. If $\Omega$
is countably infinite, then by~\cite[Theorem~1.1]{macpr},
such a maximal subgroup exists.
(For a countable set, \cite{covmpmek} describes maximal subgroups
that stabilise a Cartesian lattice.)
\end{problem}
\begin{problem}
Investigate the chromatic number $\chi(T,m)$ of the
diagonal graph $\Gamma_D(T,m)$ if $m$ is even and $T$ has no complete mapping.
In particular, either show that the bound in Theorem~\ref{thm:chrom}(b)
is always attained (as we noted, this is true for $T=C_2$) or improve this bound.
\end{problem}
For the next case where the Hall--Paige conditions fail, namely $T=C_4$,
the graph $\Gamma_D(T,2)$ is the complement of the Shrikhande graph, and has
chromatic number $6$; so, for any even $m$, the chromatic number of
$\Gamma_D(T,m)$ is $4$, $5$ or $6$, and the sequence of chromatic numbers is
non-increasing.
If $T$ is a direct product of $m$ pairwise isomorphic non-abelian simple groups,
with $m$ an integer and $m>1$, then $D(T,m)$ preserves a Cartesian lattice
by \cite[Lemma~7.10(ii)]{ps:cartesian}. Here $T$ is not necessarily finite,
and groups with this property are called FCR (finitely completely reducible) groups.
However there are other infinite characteristically simple groups,
for example the McLain group~\cite{mclain}.
\begin{problem}
Determine whether there exist characteristically simple (but not simple) groups $T$
which are not FCR-groups, and integers $m>1$, such that $D(T,m)$ preserves a
Cartesian lattice.
It is perhaps the case that $D(T,m)$ does not preserve a Cartesian lattice
for these groups $T$; and we ask further whether $D(T,m)$ might still preserve some
kind of structure that has more automorphisms than the diagonal semilattice.
\end{problem}
\begin{problem}\label{p2}
Describe sets of more than $m+1$ partitions of
$\Omega$, any $m$ of which are the minimal elements in a Cartesian lattice.
\end{problem}
For $m=2$, these are equivalent to sets of mutually orthogonal Latin squares.
For $m>2$, any $m+1$ of the partitions are the minimal elements in a
diagonal semilattice $\mathfrak{D}(T,m)$. Examples are known when $T$ is
abelian. One such family is given as follows. Let $T$ be the additive group
of a field $F$ of order $q$, where $q>m+1$; let $F=\{a_1,a_2,\ldots,a_q\}$.
Then let $W=F^m$. For $i=1,\ldots,q$, let $W_i$ be the subspace
spanned by $(1,a_i,a_i^2,\ldots,a_i^{m-1})$, and let $W_0$ be the subspace
spanned by $(0,0,\ldots,0,1)$. The coset partitions of $W$ given by these
$q+1$ subspaces have the property that any $m$ of them are the minimal elements
in a Cartesian lattice of dimension $m$ (since any $m$ of the given vectors
form a basis of $W$). Note the connection with MDS codes and geometry: the
$1$-dimensional subspaces are the points of a normal rational curve in
$\mathrm{PG}(m-1,F)$. See~\cite{btb}.
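The construction is easy to verify computationally when $q$ is prime, so
that arithmetic modulo $q$ realizes $F$.  The following sketch (ours, in
Python) checks that every $m$-subset of the $q+1$ generating vectors is a
basis of $W$:
\begin{verbatim}
# Verify: any m of the q+1 generators span W = F^m (q prime here).
from itertools import combinations

def det_mod(rows, p):
    # Gaussian elimination over GF(p); returns the determinant mod p.
    a = [list(r) for r in rows]; n = len(a); d = 1
    for c in range(n):
        piv = next((r for r in range(c, n) if a[r][c] % p), None)
        if piv is None:
            return 0
        if piv != c:
            a[c], a[piv] = a[piv], a[c]; d = -d
        inv = pow(a[c][c], p - 2, p)
        for r in range(c + 1, n):
            f = a[r][c] * inv % p
            a[r] = [(x - f * y) % p for x, y in zip(a[r], a[c])]
        d = d * a[c][c] % p
    return d % p

q, m = 7, 3                               # need q > m + 1
vecs = [[pow(t, k, q) for k in range(m)] for t in range(q)]
vecs.append([0] * (m - 1) + [1])          # generator of W_0
assert all(det_mod(c, q) for c in combinations(vecs, m))
\end{verbatim}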
For which non-abelian groups $T$ do examples with $m>2$ exist?
\begin{problem}
With the hypotheses of Problem~\ref{p2}, find a good upper bound
for the number of partitions, in terms of $m$ and $T$.
\end{problem}
We note one trivial bound: the number of such partitions cannot exceed
$m+|T|-1$. This is well-known when $m=2$ (there cannot be more than $|T|-1$
mutually orthogonal Latin squares of order $|T|$). Now arguing inductively
as in the proof of Proposition~\ref{p:quots}, we see that increasing $m$ by
one can increase the number of partitions by at most one.
\medskip
Since the first draft of this paper was written, three of the authors and
Michael Kinyon have written a paper \cite{bckp} addressing (but by no means
solving) the last two problems above.
\section*{Acknowledgements} Part of the work was done while the authors were visiting
the South China University of Science and Technology (SUSTech), Shenzhen, in 2018, and
we are grateful (in particular to Professor Cai Heng Li)
for the hospitality that we received.
The authors would like to thank the
Isaac Newton Institute for Mathematical Sciences, Cambridge,
for support and hospitality during the programme
\textit{Groups, representations and applications: new perspectives}
(supported by \mbox{EPSRC} grant no.\ EP/R014604/1),
where further work on this paper was undertaken.
In particular we acknowledge a Simons Fellowship (Cameron) and a Kirk Distinguished
Visiting Fellowship (Praeger) during this programme. Schneider thanks the Centre for the Mathematics of Symmetry
and Computation
of The University of Western Australia and
Australian Research Council Discovery Grant DP160102323
for hosting his visit
in 2017 and
acknowledges the
support of the CNPq projects \textit{Produtividade em Pesquisa}
(project no.: 308212/2019-3)
and \textit{Universal} (project no.: 421624/2018-3).
We are grateful to Michael Kinyon for comments on an earlier version of the
paper and to the anonymous referee for his or her careful reading of the manuscript.
| 2024-02-18T23:39:40.993Z | 2021-05-07T02:21:39.000Z | algebraic_stack_train_0000 | 58 | 31,462 |
|
proofpile-arXiv_065-337 | \section{Introduction}
First-principles calculations based on density functional
theory~\cite{hohenberg:kohn,kohn:sham} (DFT)
and the pseudo\-potential method are widely used for studying
the energetics, structure and dynamics of solids and
liquids~\cite{gillan,galli:pasquarello,payne}. In the
standard approach, the occupied Kohn-Sham orbitals are expanded
in terms of plane waves, and the ground state is found by minimizing
the total energy with respect to the plane-wave
coefficients~\cite{car:parrinello}.
Calculations on systems of over a hundred atoms with this approach
are now quite common. However, it has proved difficult to go to very
much larger systems, because the computational effort in this approach
depends on the number of atoms $N$ at least as $N^2$, and asymptotically
as $N^3$. Because of this limitation, there has been a vigorous effort
in the past few years to develop linear-scaling
methods~\cite{yang1,yang2,yang3,yang4,yang5,baroni,galli,mauri1,mauri2,mauri3,ordejon1,ordejon2,li,nunes,hierse,hernandez:gillan,hernandez:gillan:goringe,ordejon:artacho:soler}
-- methods
in which the effort depends only linearly on the number of atoms.
We have recently described a general theoretical framework for developing
linear-scaling self-consistent DFT
schemes~\cite{hernandez:gillan,hernandez:gillan:goringe}.
We presented one practical way of implementing
such a scheme, and investigated its performance for crystalline silicon.
Closely related ideas have been reported by other
authors~\cite{hierse,ordejon:artacho:soler} -- an overview
of work on linear-scaling methods was given in the Introduction of
our previous paper~\cite{hernandez:gillan:goringe}.
The practical feasibility of linear-scaling DFT techniques is thus
well established. However, there are still technical problems to be solved
before the techniques can be routinely applied. Our aim here is to
study the problem of representing the localized orbitals that appear
in linear-scaling methods (support functions in our terminology) --
in other words, the problem of basis functions.
To put this in context, we recall briefly the main
ideas of our linear-scaling DFT method.
Standard DFT can be expressed in terms of the Kohn-Sham density matrix
$\rho ( {\bf r}, {\bf r}^\prime )$. The total-energy functional
can be written in terms of $\rho$, and the ground state is obtained
by minimization with respect to $\rho$ subject to two constraints:
$\rho$ is idempotent (it is a projector, so that its eigenvalues are
0 or 1), and its trace is equal to half the number of electrons.
Linear-scaling behavior is obtained by imposing a limitation on the
spatial range of $\rho$:
\begin{equation}
\rho ( {\bf r}, {\bf r}^\prime ) = 0 \; , \; \; \; | {\bf r} -
{\bf r}^\prime | > R_c \; .
\end{equation}
By the variational principle, we then get an upper bound $E ( R_c )$
to the true ground-state energy $E_0$. Since the true ground-state
density matrix decays to zero as $| {\bf r} - {\bf r}^\prime |
\rightarrow \infty$, we expect that $E ( R_c \rightarrow \infty ) =
E_0$. To make the scheme practicable, we introduced the further
condition that $\rho$ be separable:
\begin{equation}
\rho ( {\bf r}, {\bf r}^\prime ) = \sum_{\alpha \beta}
\phi_\alpha ( {\bf r} ) \, K_{\alpha
\beta} \, \phi_\beta ( {\bf r}^\prime ) \; ,
\end{equation}
where the number of support functions $\phi_\alpha ( {\bf r} )$ is finite.
The limitation on the spatial range of $\rho$ is imposed by requiring
that the $\phi_\alpha ( {\bf r} )$ are non-zero only in localized
regions (``support regions'') and that the spatial range of
$K_{\alpha \beta}$ is limited. In our method, the support regions are
centered on the atoms and move with them.
We have shown~\cite{hernandez:gillan,hernandez:gillan:goringe}
that the condition on the eigenvalues of $\rho$ can be
satisfied by the method of Li, Nunes and Vanderbilt~\cite{li}
(LNV): instead
of directly varying $\rho$, we express it as:
\begin{equation}
\rho = 3 \sigma * \sigma - 2 \sigma * \sigma * \sigma \; ,
\end{equation}
where the asterisk indicates the continuum analog of matrix multiplication.
As shown by LNV, this representation of $\rho$ not only ensures
that its eigenvalues lie in the range $[ 0 , 1 ]$, but it drives
them towards the values 0 and 1. In our scheme, the auxiliary
matrix $\sigma ( {\bf r}, {\bf r}^\prime )$ has the same type of
separability as $\rho$:
\begin{equation}
\sigma ( {\bf r}, {\bf r}^\prime ) = \sum_{\alpha \beta}
\phi_\alpha ( {\bf r} ) \, L_{\alpha \beta} \,
\phi_\beta ( {\bf r}^\prime ) \; .
\end{equation}
This means that $K$ is given by the matrix equation:
\begin{equation}
K = 3 L S L - 2 L S L S L \; ,
\end{equation}
where $S_{\alpha \beta}$ is the overlap matrix of support functions:
\begin{equation}
S_{\alpha \beta} = \int \! \! \mbox{d} {\bf r} \, \phi_\alpha \phi_\beta \; .
\label{eq:overlap}
\end{equation}
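The effect of the cubic map on the spectrum is easily seen numerically.
The following small sketch (our illustration only, in Python with NumPy)
applies $x \mapsto 3x^2-2x^3$ repeatedly to a few trial eigenvalues and
shows them being driven to the idempotent values $0$ and $1$:
\begin{verbatim}
# The purification map x -> 3x^2 - 2x^3 underlying
# rho = 3*sigma*sigma - 2*sigma*sigma*sigma drives eigenvalues
# in (-1/2, 3/2) towards 0 and 1.
import numpy as np

evals = np.array([-0.10, 0.20, 0.80, 1.05])
for _ in range(5):
    evals = 3*evals**2 - 2*evals**3
print(evals)   # approximately [0, 0, 1, 1]
\end{verbatim}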
We can therefore summarize the overall scheme as follows. The total
energy is expressed in terms of $\rho$, which depends on
the separable quantity $\sigma$. The ground-state energy is obtained
by minimization with respect to the support functions
$\phi_\alpha ( {\bf r} )$ and the matrix elements $L_{\alpha \beta}$,
with the $\phi_\alpha$ confined to localized regions
centered on the atoms, and the
$L_{\alpha \beta}$ subject to a spatial cut-off.
The $\phi_\alpha ( {\bf r} )$ must be allowed to vary freely
in the minimization process, just like the Kohn-Sham orbitals
in conventional DFT, and we must consider how to represent them. As always,
there is a choice: we can represent them either by their values on
a grid~\cite{chelikowsky1,chelikowsky2,chelikowsky3,gygi1,gygi2,gygi3,seitsonen,hamann1,hamann2,bernholc},
or in terms of some set of basis functions.
In our previous work~\cite{hernandez:gillan,hernandez:gillan:goringe},
we used a grid representation. This was satisfactory for
discussing the feasibility of linear-scaling schemes, but seems to us to
suffer from significant drawbacks. Since the
support regions are centered on the ions in our method, this means
that when the ions move, the boundaries of the regions will
cross the grid points. In any simple grid-based scheme, this will
cause troublesome discontinuities. In addition, the finite-difference
representation of the kinetic-energy operator in a grid
representation causes problems at the boundaries of the regions.
A further point is that in a purely grid-based method we are almost certainly
using more variables than are really necessary.
These problems have led us to consider
basis-function methods.
We describe in this paper a practical basis-function scheme for
linear-scaling DFT, and we study its performance in numerical
calculations. The basis consists of an array of localized functions
-- we call them ``blip functions''.
There is an array of blip functions for each support region, and the
array moves with the region. The use of such arrays of localized
functions as a basis for quantum calculations is not
new~\cite{cho:arias:joannopoulos:lam,chen:chang:hsue,modisette,wei:chou}.
However, to our knowledge it
has not been discussed before in the context of linear-scaling calculations.
The plan of the paper is as follows. In Sec.~2, we emphasize
the importance of considering the relation between blip-function
and plane-wave basis sets, and we use this relation to analyze
how the calculated ground-state energy will depend on the width
and spacing of the blip functions.
We note some advantages of using
B-splines as blip functions, and we then present some practical
tests which illustrate the convergence of the ground-state energy
with respect to blip width and spacing.
We then go on (Sec.~3) to discuss the
technical problems of using blip-function basis sets in linear-scaling
DFT. We report the results of
practical tests, which show explicitly how the ground-state
energy in linear-scaling DFT converges to the value
obtained in a standard plane-wave calculation. Section~4 gives
a discussion of the results, and presents our conclusions. Some
mathematical derivations are given in an appendix.
\section{Blip functions and plane waves}
\subsection{General considerations}
Before we focus on linear-scaling problems, we need to set down some
elementary ideas about basis functions. To start with, we therefore
ignore the linear-scaling aspects, and we discuss the general problem
of solving Schr\"{o}dinger's equation using basis functions. It is
enough to discuss this in one dimension, and we assume a periodically
repeating system, so that the potential $V(x)$ acting on the
electrons is periodic: $V(x+t) = V(x)$, where $t$ is any
translation vector. Self-consistency questions are irrelevant at this
stage, so that $V(x)$ is given. The generalization to
three-dimensional self-consistent calculations will be straightforward.
In a plane-wave basis, the wavefunctions $\psi_i (x)$ are expanded as
\begin{equation}
\psi_i (x) = L^{- 1/2} \sum_G c_{i G} \,
\exp ( i G x ) \; ,
\end{equation}
where the reciprocal lattice vectors of the repeating geometry
are given by $G = 2 \pi n / L$ ($n$ is an integer), and we include
all $G$ up to some cut-off $G_{\rm max}$.
We obtain the ground-state energy
$E( G_{\rm max} )$ in the given basis by minimization with respect to
the $c_{i G}$, subject to the constraints of orthonormality.
For the usual variational reasons, $E ( G_{\rm max} )$ is a monotonically
decreasing function of $G_{\rm max}$ which tends to the exact value $E_0$ as
$G_{\rm max} \rightarrow \infty$.
Now instead of plane waves we want to use an array of spatially localized
basis functions (blip functions).
Let $f_0 (x)$ be some localized function, and denote by
$f_\ell (x)$ the translated
function $f_0 (x - \ell a )$, where $\ell$ is an integer
and $a$ is a spacing, which is chosen so that $L$ is an exact multiple
of $a$: $L= M a$. We use the array of blip functions $f_\ell (x)$ as a basis.
Equivalently, we can use any independent linear combinations of the
$f_\ell (x)$. In considering the relation between blip functions and
plane waves, it is particularly convenient to work with
``blip waves'', $\chi_G (x)$, defined as:
\begin{equation}
\chi_G (x) = A_G \sum_{\ell = 0}^{M - 1} f_\ell (x) \exp ( i G R_\ell ) \; ,
\end{equation}
where $R_\ell = \ell a$, and $A_G$ is some normalization constant.
The relation between blip waves and plane waves can be analyzed
by considering the Fourier representation of $\chi_G (x)$. It is
straightforward to show that $\chi_G (x)$ has Fourier components only
at wavevectors $G + \Gamma$, where $\Gamma$ is a reciprocal
lattice vector of the blip grid: $\Gamma = 2 \pi m / a$ ($m$ is an integer).
In fact:
\begin{equation}
\chi_G (x) = ( A_G / a) \sum_{\Gamma} \hat{f} (G + \Gamma )
\exp \left( i ( G + \Gamma ) x \right) \; ,
\label{eq:blipwave}
\end{equation}
where $\hat{f} (q)$ is the Fourier transform of $f_0 (x)$:
\begin{equation}
\hat{f} (q) = \int_{- \infty}^{\infty} \mbox{d}x \, f_0 (x) \, e^{i q x} \; .
\label{eq:fourier}
\end{equation}
At this point, it is useful to note that for some
choices of $f_0 (x)$ the blip-function basis set is exactly equivalent
to a plane-wave basis set. For this to happen, $\hat{f} (q)$
must be exactly zero beyond some cut-off wavevector $q_{\rm cut}$.
Then provided $q_{\rm cut} \geq G_{\rm max}$ and provided
$q_{\rm cut} + G_{\rm max} < 2 \pi / a$, all the $\Gamma \neq 0$
terms in Eq.~(\ref{eq:blipwave}) will vanish and all
blip waves for $- G_{\rm max}
\leq G \leq G_{\rm max}$ will be identical to plane-waves. (Of course, we
must also require that $\hat{f} (q) \neq 0$ for $| q | \leq G_{\rm max}$.)
Our main aim in this Section is to determine how the total energy
converges to the exact value as the width and spacing of the
blip functions are varied. The spacing is controlled by varying $a$,
and the width is controlled by scaling each blip function: $f_0 (x)
\rightarrow f_0 (sx)$, where $s$ is a scaling factor. In the case
of blip functions for which $\hat{f} (q)$ cuts off in the way just
described, the convergence of the total energy is easy to describe.
Suppose we take a fixed blip width, and hence a fixed wavevector
cut-off $q_{\rm cut}$. If the blip spacing $a$ is small enough so that
$q_{\rm cut} < \pi / a$, then it follows from what we have said
that the blip basis set is exactly equivalent to a plane-wave basis set
having $G_{\rm max} = q_{\rm cut}$. This means that the total
energy is equal to $E( q_{\rm cut} )$ and is completely independent
of $a$ when the latter falls below the threshold value
$a_{\rm th} = \pi / q_{\rm cut}$. This is connected with the fact
that the blip basis set becomes over-complete when $a < a_{\rm th}$: there
are linear dependences between the $M$ blip functions
$f_\ell (x)$.
It follows from this that the behavior of the total energy as a function
of blip spacing and blip width is as shown schematically in Fig.~1.
As the width is reduced, the cut-off $q_{\rm cut}$ increases in
proportion to the scaling factor $s$, so that the threshold spacing
$a_{\rm th}$ is proportional to the width. The energy
value $E( q_{\rm cut} )$ obtained for $a < a_{\rm th}$ decreases
monotonically with the width, as follows from the monotonic decrease
of $E( G_{\rm max} )$ with $G_{\rm max}$ for a plane-wave basis set.
Note that in Fig.~1 we have shown $E$ at fixed width as decreasing
monotonically as $a$ is reduced, for $a > a_{\rm th}$. In fact, this may not always
happen. Decrease of $a$ does not correspond simply to addition of
basis functions and hence to increase of variational freedom: it also involves
relocation of the basis functions. However, what is true is that
$E$ for $a > a_{\rm th}$ is always greater than $E$ for $a < a_{\rm th}$,
as can be proved from the over-completeness of the blip basis set
for $a < a_{\rm th}$. At large spacings, the large-width blip basis is
expected to give the lower energy, since in this region the poorer
representation of long waves should be the dominant source of error.
\begin{figure}[tbh]
\begin{center}
\leavevmode
\epsfxsize=8cm
\epsfbox{idealised_blips.eps}
\end{center}
\caption{Expected schematic form for the total ground-state energy
as a function of the blip-grid spacing for two different
blip widths, in the case where the Fourier components of the
blip functions vanish beyond some cut-off. The horizontal dotted
line shows the exact ground-state energy $E_0$. The vertical dashed
lines mark the threshold values $a$ of blip grid spacing (see
text).}
\end{figure}
Up to now, we have considered only the rather artificial case where
the Fourier components of the blip function are strictly zero beyond
a cut-off. This means that the blip function must extend to infinity
in real space, and this is clearly no use if we wish to do all
calculations in real space. We actually want $f_0 (x)$ to be strictly
zero beyond some real-space cut-off $b_0$: $f_0 (x) = 0$ if
$| x | > b_0$. This means that $\hat{f} (q)$ will extend to
infinity in reciprocal space. However, we can expect that with a
judicious choice for the form of $f_0 (x)$ the Fourier components
$\hat{f} (q)$ will still fall off very rapidly, so that the
behavior of the total energy is still essentially as shown in Fig.~1.
If the choice is not judicious, we shall need a considerably
greater effort to bring $E$ within a specified tolerance of the
exact value than if we were using a plane-wave basis set. With
a plane-wave basis, a certain cut-off $G_{\rm max}$ is needed in
order to achieve a specified tolerance in $E$. With blip functions
whose Fourier components cut off at $G_{\rm max}$, we should need
a blip spacing of $\pi / G_{\rm max}$ to achieve the same tolerance. Our
requirement on the actual choice of blip function is that the spacing
needed to achieve the given tolerance should be not much less than
$\pi / G_{\rm max}$.
\subsection{B-splines as blip functions}
Given that the blip function cuts off in real space at some
distance $b_0$, it is helpful if the function and some of its
derivatives go smoothly to zero at this distance. If $f_0 (x)$
and all its derivatives up to and including the $n$th vanish
at $| x | = b_0$, then $\hat{f} (q)$ falls off asymptotically
as $1 / | q |^{n+2}$ as $| q | \rightarrow \infty$. One way of
making a given set of derivatives vanish is to build $f_0 (x)$
piecewise out of suitable polynomials. As an example, we examine here
the choice of $f_0 (x)$ as a B-spline.
B-splines are localized polynomial basis functions that are equivalent to
a representation of functions in terms of cubic splines. A single
B-spline $B (x)$ centered at the origin and covering the region
$| x | \leq 2$ is built out of third-degree polynomials in the four
intervals $-2 \leq x \leq -1$, $-1 \leq x \leq 0$, $0 \leq x \leq 1$
and $1 \leq x \leq 2$, and is defined as:
\begin{eqnarray}
B (x) = \left\{
\begin{array}{ccr}
1 - \frac{3}{2} x^2 + \frac{3}{4} | x |^3 \; & \mbox{ if } &
0 < | x | < 1 \\
\frac{1}{4} ( 2 - | x | )^3 \; & \mbox{ if } &
1 < | x | < 2 \\
0 \; & \mbox{ if } & 2 < | x |
\end{array}
\right.
\end{eqnarray}
The function and its first two derivatives are continuous everywhere.
The Fourier transform of $B (x)$, defined as in Eq.~(\ref{eq:fourier}),
is:
\begin{equation}
\hat{B} (q) = \frac{3}{q^4} ( 3 - 4 \cos q + \cos 2q ) \; ,
\end{equation}
which falls off asymptotically as $1/q^4$, as expected. Our choice of blip
function is thus $f_0 (x) = B (2x / b_0 )$, so that
$\hat{f} (q) = \hat{B} ( \frac{1}{2} b_0 q )$.
The transform $\hat{B} (q)$ falls rapidly to small values in the
region $| q | \simeq \pi$. It is exactly zero at the set of wavevectors
$q_n = 2 \pi n$ ($n$ is a non-zero integer), and is very small in a
rather broad region around each $q_n$, because the lowest
non-vanishing term in a polynomial expansion of $3 - 4 \cos q + \cos 2q$
is of degree $q^4$. This suggests that this choice of blip function
will behave rather similarly to one having a Fourier cut-off
$q_{\rm cut} = 2 \pi / b_0$. In other words, if we keep
$b_0$ fixed and reduce the blip spacing $a$, the energy should
approach the value obtained in a plane-wave calculation having
$G_{\rm max} = q_{\rm cut}$ when $a \simeq \frac{1}{2} b_0$. The
practical tests that now follow will confirm this. (We note that
B-splines are usually employed with a blip spacing equal to
$\frac{1}{2} b_0$;
here, however, we are allowing the spacing to vary).
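These statements are easily checked numerically.  The sketch below
(ours, in Python with NumPy) compares the closed form for $\hat{B}(q)$
with direct quadrature of the transform, and exhibits the zeros at
$q_n = 2\pi n$:
\begin{verbatim}
# Numerical check of B(x) and its transform; the zeros at q = 2*pi*n
# and the rapid 1/q^4 fall-off are visible directly.
import numpy as np

def B(x):
    ax = np.abs(x)
    return np.where(ax < 1, 1 - 1.5*ax**2 + 0.75*ax**3,
                    np.where(ax < 2, 0.25*(2 - ax)**3, 0.0))

def B_hat(q):
    return 3.0*(3 - 4*np.cos(q) + np.cos(2*q))/q**4

x = np.linspace(-2, 2, 8001)
dx = x[1] - x[0]
for q in (0.5, np.pi, 2*np.pi, 4*np.pi):
    numeric = np.sum(B(x)*np.cos(q*x))*dx   # B is even
    print(q, numeric, B_hat(q))             # the columns agree
\end{verbatim}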
\subsection{Practical tests}
Up to now, it was convenient to work in one dimension, but for practical
tests we clearly want to go to real three-dimensional systems.
To do this, we simply take
the blip function $f_0 ( {\bf r} )$ to be the product of factors
depending on the three Cartesian components of ${\bf r}$:
\begin{equation}
f_0 ( {\bf r} ) = p_0 ( x )\, p_0 ( y )\, p_0 ( z ) \; .
\label{eq:factor}
\end{equation}
All the considerations outlined above for a blip basis in one
dimension apply unchanged to the individual factors $p_0 ( x )$ etc,
which are taken here to be B-splines.
Corresponding to the blip grid of spacing $a$ in one dimension, the
blip functions $f_0 ( {\bf r} )$ now sit on the points of a
three-dimensional grid which we assume here to be simple cubic. The
statements made above about the properties of the B-spline
basis are expected to remain true in this three-dimensional form.
We present here some tests on the performance of a B-spline basis
for crystalline Si. At first
sight, it might appear necessary to write a new code in order
to perform such tests. However, it turns out that rather minor
modifications to an existing plane-wave code allow one to produce
results that are identical to those that would be obtained
with a B-spline basis. For the purpose of the present tests, this
is sufficient. The notion behind this device is that blip functions
can be expanded in terms of plane waves, so that the function-space
spanned by a blip basis is contained within the space spanned by a
plane-wave basis, provided the latter has a large enough $G_{\rm max}$.
Then all we have to do to get the blip basis is to project
from the large plane-wave space into the blip space. Mathematical
details of how to do this projection in practice are given in
the Appendix. The practical tests have been done with the
CASTEP code~\cite{payne},
which we have modified to perform the necessary projections.
Our tests have been done on the diamond-structure Si crystal, using the
Appelbaum-Hamann~\cite{appelbaum:hamann}
local pseudopotential; this is an empirical
pseudopotential, but suffices to illustrate the points of
principle at issue here.
The choice of
$k$-point sampling is not expected to make much difference
to the performance of blip functions, and we have done the calculations
with a $k$-point set corresponding to the lowest-order 4 $k$-point
Monkhorst-Pack~\cite{monkhorst:pack} sampling for an 8-atom cubic cell.
If we
go to the next-order set of 32 $k$-points, the total energy per
atom changes by less than 0.1~eV. For reference purposes, we
have first used CASTEP in its normal unmodified plane-wave form
to examine the convergence of total energy with respect to plane-wave
cut-off for the Appelbaum-Hamann potential. We find that for
plane-wave cut-off energies $E_{\rm pw} = \hbar^2 G_{\rm max}^2 / 2 m$
equal to 150, 250 and 350~eV, the total energies per atom are --115.52,
--115.64 and --115.65~eV. This means that to obtain an accuracy of
0.1~eV/atom, a cut-off of 150~eV (corresponding to $G_{\rm max} =
6.31$~\AA$^{-1}$) is adequate. According to the discussion of Sec.~2.2,
the properties of this plane-wave basis should be quite well
reproduced by a blip-function basis of B-splines having half-width
$b_0 = 2 \pi / G_{\rm max} = 1.0$~\AA, and
the total energy calculated with this basis should converge
rapidly to the plane-wave result when the blip
spacing falls below $a \simeq \frac{1}{2} b_0 = 0.5$~\AA.
\begin{figure}[tbh]
\begin{center}
\leavevmode
\epsfxsize=8cm
\epsfbox{blips.eps}
\end{center}
\caption{Convergence of the total energy for two different blip
half-widths
as a function of the blip grid spacing. The calculations were
performed with
the Appelbaum-Hamann pseudopotential. $E(plw)$ is
the plane-wave result, obtained with a cutoff of 250~eV.}
\end{figure}
We have done the tests with two widths of B-splines: $b_0 =$
1.25 and 1.0~\AA, and in each case we have calculated $E$
as a function of the blip spacing $a$. In all cases, we have used a
plane-wave cut-off large enough to ensure that errors in
representing the blip-function basis are negligible compared
with the errors attributable to the basis itself. Our results for
$E$ as a function of $a$ (Fig.~2) fully confirm our expectations.
First, they have the general form indicated in Fig.~1. The difference
is that since we have no sharp cut-off in reciprocal space, $E$ does
not become constant when $a$ falls below a threshold, but instead continues to
decrease towards the exact value. Second, for $b_0 = 1.0$~\AA, $E$ does
indeed converge rapidly to the plane-wave result when $a$ falls
below $\sim 0.5$~\AA. Third, the larger blip width gives the lower
energy at larger spacings.
\section{The blip-function basis in linear-scaling calculations}
\subsection{Technical matters}
In our linear-scaling scheme, the support functions $\phi_\alpha ({\bf r})$
must be varied within support regions, which are centered on the
atoms. These regions, taken to be spherical with radius $R_{\rm reg}$
in our previous work, move with the atoms.
In the present work, the $\phi_\alpha$
are represented in terms of blip functions. Each atom has a blip grid attached
to it, and this grid moves rigidly with the atom. The blip functions
sit on the points of this moving grid. To make the region localized,
the set of blip-grid points is restricted to those for which the
associated blip function is wholly contained within the region of
radius $R_{\rm reg}$.
If we denote by $f_{\alpha \ell} ({\bf r})$
the $\ell$th blip function in the region supporting
$\phi_\alpha$, then the representation is:
\begin{equation}
\phi_\alpha ({\bf r}) = \sum_\ell b_{\alpha \ell} \,
f_{\alpha \ell} ({\bf r}) \; ,
\end{equation}
and the blip coefficients $b_{\alpha \ell}$ have to be varied
to minimize the total energy.
The $\phi_\alpha$ enter the calculation through their overlap matrix elements
and the matrix elements of kinetic and potential energies. The overlap
matrix $S_{\alpha \beta}$ [see Eq.~(\ref{eq:overlap})] can be expressed
analytically
in terms of the blip coefficients:
\begin{equation}
S_{\alpha \beta} = \sum_{\ell \ell^\prime} b_{\alpha \ell} \,
b_{\beta \ell^\prime}\, s_{\alpha \ell , \beta \ell^\prime} \; ,
\end{equation}
where $s_{\alpha \ell , \beta \ell^\prime}$ is the overlap matrix between
blip functions:
\begin{equation}
s_{\alpha \ell , \beta \ell^\prime} = \int \! \! \mbox{d} {\bf r} \,
f_{\alpha \ell} \, f_{\beta \ell^\prime} \; ,
\end{equation}
which is known analytically. Similarly, the kinetic energy matrix elements:
\begin{equation}
T_{\alpha \beta} = - \frac{\hbar^2}{2 m} \int \! \! \mbox{d} {\bf r} \,
\phi_\alpha \nabla^2 \phi_\beta
\end{equation}
can be calculated analytically by writing:
\begin{equation}
T_{\alpha \beta} = \sum_{\ell \ell^\prime} b_{\alpha \ell} \,
b_{\beta \ell^\prime} \, t_{\alpha \ell , \beta \ell^\prime} \; ,
\end{equation}
where:
\begin{equation}
t_{\alpha \ell , \beta \ell^\prime} = - \frac{\hbar^2}{2 m}
\int \! \! \mbox{d} {\bf r} \, f_{\alpha \ell}
\nabla^2 f_{\beta \ell^\prime} \; .
\end{equation}
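A minimal sketch of this assembly (assuming, for simplicity, a single
common blip grid, with random stand-ins for the analytic integrals
$s_{\ell \ell^{\prime}}$ and $t_{\ell \ell^{\prime}}$) reads:
\begin{verbatim}
# Sketch: assemble S_ab and T_ab from blip coefficients.
# Hypothetical shapes; s and t stand in for the analytic
# blip-blip overlap and kinetic integrals.
import numpy as np

n_blip, n_supp = 64, 4
rng = np.random.default_rng(0)
b = rng.standard_normal((n_supp, n_blip))   # b_{alpha,l}
s = rng.standard_normal((n_blip, n_blip)); s = 0.5*(s + s.T)
t = rng.standard_normal((n_blip, n_blip)); t = 0.5*(t + t.T)

# S_{ab} = sum_{l,l'} b_{al} b_{bl'} s_{ll'}, likewise for T
S = np.einsum('al,lm,bm->ab', b, s, b)
T = np.einsum('al,lm,bm->ab', b, t, b)
assert np.allclose(S, S.T) and np.allclose(T, T.T)
\end{verbatim}
The symmetry of $S$ and $T$ is inherited automatically from that of
the blip-blip integrals.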
However, matrix elements of the potential energy cannot be
treated analytically, and their integrations must be approximated
by summation on a grid. This `integration grid' is, of course,
completely distinct from the blip grids. It does not move with
the atoms, but is a single grid fixed in space. If the position
of the $m$th point on the integration grid is called ${\bf r}_m$,
then the matrix elements of the local potential (pseudopotential plus
Hartree and exchange-correlation potentials) are approximated by:
\begin{equation}
V_{\alpha \beta} = \int \! \! \mbox{d} {\bf r} \,
\phi_\alpha V \phi_\beta \simeq
\delta \omega_{\rm int} \sum_m \phi_\alpha ( {\bf r}_m )
V( {\bf r}_m ) \phi_\beta ( {\bf r}_m ) \; ,
\end{equation}
where $\delta \omega_{\rm int}$ is the volume per grid point. For a non-local
pseudopotential, we assume the real-space version of the
Kleinman-Bylander~\cite{kleinman:bylander}
representation, and the terms in this are also calculated as a sum over
points of the integration grid. We note that the approximate equivalence
of the B-spline and plane-wave bases discussed above gives us an
expectation for the required integration-grid spacing {\em h\/}. In a
plane-wave calculation, {\em h\/} should in principle be less than
$\pi / (2G_{\rm max})$. But the blip-grid spacing is approximately
$a = \pi / G_{\rm max}$. We therefore expect to need
$h \approx \frac{1}{2} a$.
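A quick numerical check of these estimates, taking
$\hbar^2 / 2m_e \simeq 3.81$~eV~\AA$^2$ and the 250~eV cutoff used
above:
\begin{verbatim}
# Spacing estimates from a plane-wave cutoff: a = pi/G_max, h ~ a/2.
import math

E_cut = 250.0                 # eV
hbar2_over_2m = 3.81          # eV * Angstrom^2, so E = 3.81 * G^2
G_max = math.sqrt(E_cut / hbar2_over_2m)    # in 1/Angstrom
a = math.pi / G_max
h = 0.5 * a
print(f"G_max = {G_max:.2f}/A, a = {a:.3f} A, h = {h:.3f} A")
# -> a ~ 0.39 A and h ~ 0.19 A
\end{verbatim}
The resulting $h \simeq 0.19$~\AA\ is indeed the integration-grid
spacing found adequate in the tests reported below.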
In order to calculate $V_{\alpha \beta}$ like this, we have to know all the
values $\phi_\alpha ( {\bf r}_m )$ on the integration grid:
\begin{equation}
\phi_\alpha ( {\bf r}_m ) = \sum_\ell b_{\alpha \ell} \,
f_{\alpha \ell} ( {\bf r}_m ) \; .
\label{eq:support}
\end{equation}
At first sight, it would seem that each point ${\bf r}_m$ would be
within range of a large number of blip functions, so that many
terms would have to be summed over for each ${\bf r}_m$
in Eq.~(\ref{eq:support}). In fact, this is
not so, provided the blip functions factorize into Cartesian components
in the way shown in Eq.~(\ref{eq:factor}).
To see this, assume that the blip grid and integration grid are cubic,
let the blip-grid index $\ell$ correspond to the triplet
$( \ell_x , \ell_y , \ell_z )$, and let the factorization of
$f_\ell ( {\bf r} )$ be written as:
\begin{equation}
f_\ell ( {\bf r}_m ) = p_{\ell_x} ( x_m ) \, p_{\ell_y} ( y_m ) \,
p_{\ell_z} (z_m ) \; ,
\end{equation}
where $x_m$, $y_m$ and $z_m$ are the Cartesian components of ${\bf r}_m$
(we suppress the index $\alpha$ for brevity). The sum
over $\ell$ in Eq.~(\ref{eq:support}) can then be performed as a
sequence of three summations,
the first of which is:
\begin{equation}
\theta_{\ell_y \ell_z} ( x_m ) = \sum_{\ell_x}
b_{\ell_x \ell_y \ell_z} \, p_{\ell_x} ( x_m ) \; .
\end{equation}
The number of operations needed to calculate all these quantities
$\theta_{\ell_y \ell_z} ( x_m )$ is just the number of points
$( \ell_x , \ell_y , \ell_z )$ on the blip grid times the number
$\nu_{\rm int}$ of points $x_m$ for which $p_{\ell_x} ( x_m )$ is
non-zero for a given $\ell_x$. This number $\nu_{\rm int}$ will generally
be rather moderate, but the crucial point is that the number of
operations involved is proportional only to $\nu_{\rm int}$ and not to
$\nu_{\rm int}^3$. Similar considerations will apply to the sums
over $\ell_y$ and $\ell_z$.
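The separable summation just described is compact enough to sketch
explicitly. In the following illustration dense arrays are used for
clarity; in an actual implementation $p_{\ell}(x_m)$ is non-zero for
only $\nu_{\rm int}$ points per $\ell$, which is what produces the
favourable operation count:
\begin{verbatim}
# Three-sweep separable evaluation of phi on the integration grid,
# followed by the grid-summed potential matrix element.
import numpy as np

nb, ng = 8, 16                             # blip / grid points per axis
rng = np.random.default_rng(1)
b = rng.standard_normal((nb, nb, nb))      # b_{lx,ly,lz}
P = rng.standard_normal((nb, ng))          # P[l, m] = p_l(x_m)

theta = np.einsum('xyz,xm->yzm', b, P)     # sum over lx
chi   = np.einsum('yzm,yn->zmn', theta, P) # sum over ly
phi   = np.einsum('zmn,zk->mnk', chi, P)   # sum over lz -> phi(r_m)

ref = np.einsum('xyz,xm,yn,zk->mnk', b, P, P, P)  # brute force
assert np.allclose(phi, ref)

V  = rng.standard_normal((ng, ng, ng))     # local potential on grid
dw = 0.19**3                               # volume per grid point
V_aa = dw * np.sum(phi * V * phi)          # here alpha = beta for brevity
\end{verbatim}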
It is worth remarking that since we have to calculate
$\phi_\alpha ( {\bf r}_m )$ anyway, we have the option of calculating
$S_{\alpha \beta}$ by direct summation on the grid as well. In fact,
$T_{\alpha \beta}$ can also be treated this way, though here
one must be more careful, since it is essential that its symmetry
($T_{\alpha \beta} = T_{\beta \alpha}$) be preserved by whatever scheme
we use. This can be achieved by, for example, calculating the gradient
$\nabla \phi_\alpha ( {\bf r}_m )$ analytically on the integration
grid and then using integration by parts to express $T_{\alpha \beta}$
as an integral over $\nabla \phi_\alpha \cdot \nabla \phi_\beta$.
In the present work, we use the analytic forms for
$s_{\alpha \ell , \beta \ell^{\prime}}$ and
$t_{\alpha \ell , \beta \ell^{\prime}}$.
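For completeness, the symmetry-preserving grid evaluation of
$T_{\alpha \beta}$ mentioned above may be sketched as follows (the
gradients, which would in practice be evaluated analytically from the
blip expansion, are stand-in arrays here):
\begin{verbatim}
# T_ab = (hbar^2/2m) dw * sum_m grad(phi_a).grad(phi_b),
# which is manifestly symmetric under alpha <-> beta.
import numpy as np

ng, dw, pref = 16, 0.19**3, 3.81          # grid, volume, eV*A^2
rng = np.random.default_rng(2)
grad_a = rng.standard_normal((3, ng, ng, ng))
grad_b = rng.standard_normal((3, ng, ng, ng))

T_ab = pref * dw * np.sum(grad_a * grad_b)
T_ba = pref * dw * np.sum(grad_b * grad_a)
assert np.isclose(T_ab, T_ba)
\end{verbatim}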
A full linear-scaling calculation requires minimization of the
total energy with respect to the quantities $\phi_{\alpha}$ and
$L_{\alpha \beta}$. However, at present we are concerned solely
with the representation of $\phi_{\alpha}$, and the cut-off
applied to $L_{\alpha \beta}$ is irrelevant. For our practical
tests of the blip-function basis, we have therefore taken the
$L_{\alpha \beta}$ cut-off to infinity, which is equivalent to exact
diagonalization of the Kohn-Sham equation. Apart from this, the
procedure we use for determining the ground state, i.e. minimizing
$E$ with respect to the $\phi_{\alpha}$ functions, is essentially
the same as in our previous
work~\cite{hernandez:gillan,hernandez:gillan:goringe}. We use
conjugate-gradients~\cite{numerical:recipes} minimization
with respect to the blip-function coefficients $b_{\alpha \ell}$.
Expressions for the required derivatives are straightforward to derive
using the methods outlined earlier~\cite{hernandez:gillan:goringe}.
\subsection{Practical tests}
We present here numerical tests both for the
Appelbaum-Hamann~\cite{appelbaum:hamann} local
pseudopotential for Si used in Sec.~2 and for a standard
Kerker~\cite{kerker}
non-local pseudopotential for Si. The aims of the tests are: first,
to show that the B-spline basis gives the accuracy in support-function
calculations to be expected from our plane-wave calculations; and second
to examine the convergence of $E$ towards the exact plane-wave
results as the region radius $R_{\rm reg}$ is increased. For present
purposes, it is not particularly relevant to perform the tests on
large systems. The tests have been done on perfect-crystal Si at
the equilibrium lattice parameter, as in Sec.~2.3.
\begin{table}[tbh]
\begin{center}
\begin{tabular}{cc}
$h$ (\AA) & $E$ (eV/atom) \\
\hline
0.4525 & -0.25565 \\
0.3394 & 0.04880 \\
0.2715 & 0.00818 \\
0.2263 & -0.01485 \\
0.1940 & 0.00002 \\
0.1697 & -0.00270 \\
0.1508 & 0.00000 \\
\end{tabular}
\end{center}
\caption{Total energy $E$ as a function of integration grid spacing
$h$ using
a region radius $R_{\rm reg} = 2.715$~\AA. The blip half-width
$b_0$ was set to
0.905~\AA\ and the blip grid spacing used was 0.4525~\AA. The zero of
energy is set equal to the result obtained with the finest grid.}
\end{table}
We have shown in Sec.~2.3 that a basis of B-splines having a
half-width $b_0 = 1.0$~\AA\ gives an error of $\sim 0.1$~eV/atom
if the blip spacing is $\sim 0.45$~\AA.
For the present tests we have used the
similar values $b_0 = 0.905$~\AA\ and $a = 0.4525$~\AA, in the
expectation of getting this level of agreement with CASTEP plane-wave
calculations in the limit $R_{\rm reg} \rightarrow \infty$.
To check the influence of integration grid spacing $h$, we have made
a set of
calculations at different $h$ using $R_{\rm reg} = 2.715$~\AA, which is
large enough to be representative (see below). The results (Table 1)
show that $E$ converges rapidly with decreasing $h$, and
ceases to vary for present purposes when $h = 0.194$~\AA.
This confirms our expectation that $h \approx \frac{1}{2} a$. We have then
used this grid spacing to study the variation of $E$ with $R_{\rm reg}$,
the results for which are given in Table~2, where we also
compare with the plane-wave results. The extremely rapid convergence
of $E$ when $R_{\rm reg}$ exceeds $\sim 3.2$~\AA\ is very striking,
and our results show that $R_{\rm reg}$ values yielding an accuracy
of $10^{-3}$~eV/atom are easily attainable. The close agreement with
the plane-wave result fully confirms the effectiveness of the blip-function
basis. As expected from the variational principle, $E$ from the blip-function
calculations in the $R_{\rm reg} \rightarrow \infty$ limit lies
slightly above the plane-wave value, and the discrepancy of $\sim 0.1$~eV
is of the size expected from the tests of Sec.~2.3 for (nearly) the
present blip-function width and spacing. (We also remark in
parenthesis that the absolute agreement between results obtained
with two entirely different codes is useful
evidence for the technical correctness of our codes.)
\begin{table}[tbh]
\begin{center}
\begin{tabular}{ccc}
$R_{\rm reg}$ (\AA) & \multicolumn{2}{c}{$E$ (eV/atom)} \\
& local pseudopotential & non-local pseudopotential \\
\hline
2.2625 & 1.8659 & 1.9653 \\
2.7150 & 0.1554 & 0.1507 \\
3.1675 & 0.0559 & 0.0396 \\
3.6200 & 0.0558 & 0.0396 \\
4.0725 & 0.0558 & 0.0396 \\
\end{tabular}
\end{center}
\caption{Convergence of the total energy $E$ as a function of the
region radius $R_{\rm reg}$ for silicon with a local and a non-local
pseudopotential. The calculations were performed with a blip grid
spacing of 0.4525~\AA\ and a blip half-width of 0.905~\AA\ in both
cases.
The zero of energy was taken to be the plane wave result obtained with
each pseudopotential, with plane wave cutoffs
of 250 and 200 eV respectively.}
\end{table}
The results obtained in our very similar tests using the
Kleinman-Bylander form of the Kerker pseudopotential for Si are also
shown in Table~2. In plane-wave calculations, the plane-wave cut-off
needed for the Kerker potential to obtain a given accuracy is very similar
to that needed in the Appelbaum-Hamann potential, and we have therefore
used the same B-spline parameters. Tests on the integration-grid spacing
show that we can use the value $h = 0.226$~\AA, which is close to what
we have used with the local pseudopotential.
The total
energy converges in the same rapid manner for $R_{\rm reg} > 3.2$~\AA,
and the agreement of the converged result with the CASTEP value
is also similar to what we saw with the Appelbaum-Hamann pseudopotential.
\section{Discussion}
In exploring the question of basis sets for linear-scaling
calculations, we have laid great stress on the relation with
plane-wave basis sets. One reason for doing this is that the plane-wave
technique is the canonical method for pseudopotential calculations,
and provides the easiest way of generating definitive results by
going to basis-set convergence. We have shown that within the linear-scaling
approach the total energy can be taken to convergence by systematically
reducing the width and spacing of a blip-function basis set, just as
it can be taken to convergence by increasing the plane-wave cut-off in
the canonical method. By analyzing the relation between the plane-wave and
blip-function bases, we have also given simple formulas for estimating
the blip width and spacing needed to achieve the same accuracy as
a given plane-wave cut-off. In addition, we have shown that the density
of integration-grid points relates to the number of blip functions
in the same way as it relates to the number of plane waves. Finally,
we have seen that the blip-function basis provides a practical way of
representing support functions in linear-scaling calculations, and that
the total energy converges to the plane-wave result as the region
radius is increased.
These results give useful insight into what can be expected of linear-scaling
DFT calculations. For large systems, the plane-wave method requires
a massive redundancy of information: it describes the space of occupied
states using a number of variables of order $N \times M$ ($N$ the number of
occupied orbitals, $M$ the number of plane waves), whereas the number of
variables in a linear-scaling method is only of order $N \times m$
($m$ the number of basis functions for each support function). This means
that the linear-scaling method needs fewer variables than the plane-wave
method by a factor $m / M$. But we have demonstrated that to achieve
a given accuracy the number of blip functions per unit volume is not
much greater than the number of plane waves per unit volume.
Then the factor $m / M$ is
roughly the ratio between the volume of a support region and the
volume of the entire system. The support volume must clearly depend on
the nature of the system. But for the Si system, we have seen that
convergence is extremely rapid once the region radius
exceeds $\sim 3.2$~\AA, corresponding to a region volume of 137~\AA$^3$,
which is about 7 times greater than the volume per atom of 20~\AA$^3$.
In this example, then, the plane-wave method needs more variables
than the linear-scaling method when the number of atoms $N_{\rm atom}$
is greater than $\sim 7$, and for larger systems it needs more
variables by a `redundancy factor' of $\sim N_{\rm atom} / 7$. (For
a system of 700 atoms, e.g., the plane-wave redundancy factor would
be $\sim 100$.) In this sense, plane-wave calculations on large systems
are grossly inefficient. However, one should be aware that there are
other factors in the situation, like the number of iterations needed
to reach the ground state in the two methods. We are not yet in a position
to say anything useful about this, but we plan to return to it.
Finally, we note an interesting question. The impressive rate of convergence
of ground-state energy with increase of region radius shown in Table~2
raises the question of what governs this convergence rate, and whether it
will be found in other systems, including metals. We remark that
this is not the same as the well-known question about the rate of
decay of the density matrix $\rho ( {\bf r} , {\bf r}^{\prime} )$
as $| {\bf r} - {\bf r}^{\prime} | \rightarrow \infty$, because in our
formulation both $\phi_\alpha ( {\bf r} )$ and $L_{\alpha \beta}$
play a role in the decay. Our intuition is that this decay is controlled
by $L_{\alpha \beta}$. We hope soon to report results on the support
functions for different materials, which will shed light on this.
\section*{Acknowledgements}
This project was performed in the framework of the U.K. Car-Parrinello
Consortium, and the work of CMG is funded by the
High Performance Computing Initiative (HPCI) through grant GR/K41649.
The work of EH is supported by EPSRC grant GR/J01967.
The use of B-splines
in the work arose from discussions with James Annett.
\section{Why should we study rare charm decays?}
At HERA recent measurements of the charm production cross section
in $e p$ collisions at an
energy $\sqrt{s_{ep}} \approx 300$ GeV yielded a value of about
$1 \mu$b \cite{dstar-gp}.
For an integrated luminosity of 250 pb$^{-1}$,
one expects therefore about $25 \cdot 10^7$ produced c$\overline{\mbox{c}}$ \ pairs,
mainly through the boson-gluon fusion process.
This corresponds to a total of about
$30 \cdot 10^7$ neutral $D^{o}$,
$10 \cdot 10^7$ charged $D^{\pm}$,
some $5 \cdot 10^7$ $D_S$,
and about $5 \cdot 10^7$ charmed baryons.
A sizable fraction of this large number of $D$'s is accessible
via decays within a HERA detector, and thus
should be used to improve substantially our knowledge on
charmed particles.
There are several physics issues of great interest.
This report will cover however only aspects related
to the decay of charmed mesons in rare decay channels, and
in this sense provides an update of the discussion
presented in an earlier workshop on HERA physics \cite{hera-91a}.
In the following we shall discuss these aspects, and
point out the theoretical expectations.
Based on experiences made at HERA with charm studies,
we shall present an estimate on the sensitivity
for the detailed case study of the search for the
rare decay $D^0 \rightarrow \mu^+ \mu^- $.
Other challenging aspects such as the production mechanism
and detailed comparisons with QCD calculations, or the use
of charmed particles in the extraction of proton and photon
parton densities, will not be covered here.
Possibly the most competitive future source of $D$-mesons is
the proposed tau-charm factory.
The continuing efforts
at Fermilab (photoproduction and hadroproduction experiments),
at CERN (LEP) and at Cornell(CESR),
which are presently providing the highest
sensitivities, are compared with the situation at HERA.
In addition, all these different approaches
provide useful and complementary information
on various properties in the charm system.
\section{Decay processes of interest}
\subsection{Leading decays }
The charm quark is the only heavy quark besides the b quark and can be used
to test the heavy quark symmetry \cite{rf-isgurw}
by measuring form factors or decay constants.
Hence, the $D$-meson containing a charmed quark is heavy as well
and disintegrates through a large number of decay channels.
The leading decays
$c \rightarrow s + q{\bar q}$ or
$c \rightarrow s + {\bar l} \nu$
occur with branching ratios of order a few \%
and allow studies of QCD mechanisms
in a transition range between high and very low energies.
Although experimentally very challenging, the search for
the purely leptonic decays
$D^{\pm} \rightarrow \mu^{\pm} \nu$ and an improved
measurement of $D_S^{\pm} \rightarrow \mu^{\pm} \nu$
should be eagerly pursued further,
since these decays
offer direct access to the meson decay constants $f_D$ and $f_{D_S}$,
quantities that can possibly be calculated accurately by lattice
gauge theory methods
\cite{rf-marti},\cite{rf-wittig}.
\subsection{ Singly Cabibbo suppressed decays (SCSD)}
Decays suppressed by a factor $\sin{\theta_C}$, the so-called
singly Cabibbo suppressed decays (SCSD),
are of the form
$c \rightarrow d u {\bar d}$ or
$c \rightarrow s {\bar s} u$.
Examples of SCSD, such as
$D \rightarrow \pi \pi$ or $ K \bar{K}$, have been observed
at a level of $10^{-3}$ branching ratio
(1.5 and 4.3 $\cdot 10^{-3}$, respectively)
\cite{rf-partbook}.
They provide information about the
CKM-matrix, and also are background
processes to be worried about in the search for rare decays.
\subsection{ Doubly Cabibbo suppressed decays and
$D^0 \longleftrightarrow {\bar D^0}$ mixing}
Doubly Cabibbo suppressed decays (DCSD) of the form
$c \rightarrow d {\bar s} u$ have
not been observed up to now\cite{rf-partbook},
with the exception
of the mode $D^0 \to K^+ \pi^-$, which has a branching
ratio of $(2.9 \pm 1.4) \cdot 10^{-4}$.
The existing upper bounds are at the level of a few $10^{-4}$,
with branching ratios expected at the level of $10^{-5}$.
These DCSD are particularly interesting from the QCD-point of view,
and quite a few predictions have been made\cite{rf-bigi}.
DCSD also act as one of the main background processes
to the $D^0 \leftrightarrow \bar{D^0} $ \ mixing and therefore must be well understood,
before the problem of mixing itself can be successfully attacked.
As in the neutral Kaon and B-meson system, mixing between the
$D^0$ and the $\bar{D^0}$ is expected to occur (with $\Delta C = 2$).
The main contribution is expected due to long distance effects, estimated
to be as large as about
$r_D \sim 5 \cdot 10^{-3}$
\cite{rf-wolf},
while the standard box diagram yields $r_D \sim 10^{-5}$
\cite{rf-chau}.
Here $r_D$ is the mixing parameter
$ r_D \simeq (1/2) \cdot ( \Delta M / \Gamma)^2 $, with contributions by the
DCSD neglected.
Recall that the DCSD poses a serious background source in case
only the
time-integrated spectra are studied. The two sources can however be
better separated,
if the decay time dependence of the events is recorded separately
(see e.g. \cite{rf-anjos}). More details on the prospect of
measuring mixing at HERA are given in \cite{yt-mixing}.
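To illustrate the separation, consider the standard parametrization of
the wrong-sign decay-time distribution (a sketch with purely
illustrative numbers, not specific to HERA): the DCSD contribution
follows the pure exponential, while the mixing contribution grows
quadratically with the proper time underneath it.
\begin{verbatim}
# Wrong-sign D0 -> K+ pi- rate vs proper time (illustrative numbers):
# r(t) ~ exp(-t) * [ R_D + sqrt(R_D)*y'*t + (x'^2+y'^2)/4 * t^2 ]
import numpy as np

R_D = 3e-3            # DCSD rate relative to right-sign (illustrative)
xp, yp = 5e-3, 5e-3   # illustrative mixing parameters x', y'

for t in np.linspace(0.0, 5.0, 6):     # t in units of the D0 lifetime
    dcsd = R_D
    mix  = np.sqrt(R_D)*yp*t + (xp**2 + yp**2)/4 * t**2
    print(f"t={t:.0f}: DCSD={dcsd:.1e}, mixing term={mix:.1e}")
\end{verbatim}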
\subsection{ Flavour Changing Neutral Currents (FCNC)}
An important feature of the standard model is that {\it flavour
changing neutral currents (FCNC with $\Delta C =1$)}
only occur at the one loop level in the SM
{\it i.e.} through short distance contributions,
such as e.g. in penguin and box diagrams
as shown in figs.\ref{feyn-loop} and
\ref{feyn-penguin}.
These are transitions of the form
$s \rightarrow d + N$ or
$c \rightarrow u + N$, where $N$
is a non-hadronic neutral state such as $\gamma \ $ or $\ l{\bar l}$, and give
rise to the decays
$D \rightarrow \rho \gamma $, $D^0 \rightarrow \mu^+ \mu^- $, $D^+ \rightarrow \pi^+ \mu^+ \mu^- $ \ etc.
Although the relevant couplings are the same as those of leading decays,
their rates are very small as they are suppressed by
the GIM mechanism \cite{gim} and the unfavourable quark masses
within the loops.
The SM predictions for the branching ratios are
of order $10^{-9}$ for $D^0 \to X l^+ l^-$ and
of $O(10^{-15})$ for $D^0 \to l^+ l^-$, due to additional
helicity suppression.
A summary of the expected branching ratios obtained from
calculations of the loop integrals (\cite{rf-willey}, \cite{rf-bigi},
\cite{hera-91a}, \cite{long-range})
using also the QCD- short distance
corrections available \cite{rf-cella} is given in
table \ref{tab-exp}.
However, FCNC are sensitive to new, heavy particles in the loops, and
above all, to new physics in general.
In addition to these short distance loop diagrams, there are contributions
from long distance effects, which might be even larger by several
orders of magnitude\cite{long-range}.
To mention are photon pole amplitudes
($\gamma$-pole)
and vector meson dominance (VMD) induced processes.
The $\gamma$-pole model (see fig.\ref{feyn-gpole})
in essence is a W-exchange decay with a
virtual photon radiating from one of the quark lines. The behaviour
of the amplitude depends on the spin state of the final state
particle (vector V or pseudoscalar P).
The dilepton mass distribution for
$D \to V l^+ l^-$ modes peaks at zero (small $Q^2$) since the photon
prefers to be nearly real. On the other hand, the pole amplitude
for $D \to P l^+ l^-$ decays vanishes for small dilepton masses
because $D \to P \gamma$ is forbidden by angular momentum
conservation.
The VMD model (see fig.\ref{feyn-gpole}b) proceeds through the
decay $D \to X V^0 \to X l^+ l^-$.
The intermediate vector meson $V^0$ ($\rho, \omega, \phi$)
mixes with a virtual photon which then couples to the lepton pair.
The dilepton mass spectrum therefore will exhibit poles at the
corresponding vector meson masses due to real $V^0$ mesons decaying.
Observation of FCNC processes at rates that exceed the
long distance contributions hence opens a window
into physics beyond the standard model.
Possible scenarios include leptoquarks or heavy neutral leptons
with sizable couplings to $e$ and $\mu$.
A measurement of such long distance contributions in the
charm sector is inherently
of interest, as it can be used to estimate similar effects
in the bottom sector \cite{long-d},
e.g. for the decay $b \to s \gamma$,
which was seen at the level of $2.3 \cdot 10^{-4}$.
A separation of short and long range contributions would allow
e.g. a determination of $\mid V_{td}/V_{ts} \mid$
from the ratio
$BR(B \to \rho \gamma) / BR(B \to K^* \gamma)$
and bears as such a very high potential.
\begin{figure}[ht]
\epsfig{file=feyn-loop.eps,width=9cm}
\caption{\it Example of an FCNC process in the standard model
at the loop level: $D^0 \rightarrow \mu^+ \mu^- $\ . }
\label{feyn-loop}
\end{figure}
\begin{figure}[ht]
\epsfig{file=feyn-box.eps,width=7.5cm}
\epsfig{file=feyn-penguin.eps,width=7.5 cm}
\caption{\it FCNC processes: short range contributions due to
box diagrams (a) or penguin diagrams (b).}
\label{feyn-penguin}
\end{figure}
\begin{figure}[ht]
\epsfig{file=feyn-gpole.eps,width=7.5cm}
\epsfig{file=feyn-vdm.eps,width=7.5cm}
\caption{\it FCNC processes : long range contributions due to
$\gamma$-pole amplitude (a) and vector meson dominance (b).}
\label{feyn-gpole}
\end{figure}
\begin{table}
\begin{center}
\begin{tabular}{|l|c|}
\hline
Decay mode & Expected branching ratio \\
\hline
\hline
$ c \to u \gamma $ & $10^{-15} - 10^{-14}$ \\
$ D \to \rho \gamma $ & $10^{-7}$ \\
$ D \to \gamma \gamma $ & $10^{-10}$ \\
\hline
$ c \to u l {\bar l} $ & $5 \cdot 10^{-8}$ \\
$ D^+ \to \pi^+ e^+ e^- $ & $10^{-8}$ \\
$ D^0 \to \mu^+ \mu^- $ & $10^{-19}$ \\
\hline
\hline
\end{tabular}
\caption[Expectations for loop processes.]
{Expectations for branching ratios of loop processes
based on SM calculations, hereby assuming the
BR of both $D \to \rho \rho $ and
$D \to \rho \pi$ to be below $10^{-3}$.}
\label{tab-exp}
\end{center}
\end{table}
\subsection{ Forbidden decays }
Decays which are not allowed
to all orders in the standard model, the {\it forbidden} decays,
are exciting signals of new physics.
Without claim of completeness, we shall list
here some of the more important ones:
\begin{itemize}
\item Lepton number (L) or lepton family (LF) number violation (LFNV)
in decays such as $D^0 \to \mu e$, $D^0 \to \tau e$.
It should be strongly emphasized that decays of $D$-mesons test
couplings complementary to those effective in K- or B-meson decays.
Furthermore, the charmed quark is the only possible charge 2/3
quark which allows
detailed investigations of unusual couplings.
These are often predicted to occur in models with
i) technicolour \cite{rf-masiero};
ii) compositeness \cite{rf-lyons};
iii) leptoquarks \cite{rf-buchmu} \cite{rf-campb};
(see e.g. fig.\ref{feyn-x-s}a and b); this can include
among others non-SU(5) symmetric flavour-dependent
couplings (u to $l^{\pm}$, and d to $\nu$), which
would forbid decays of the sort $K_L \to \mu \mu, \ \mu e $, while
still allowing for charm decays;
iv) massive neutrinos (at the loop level) in an extended standard model;
v) superstring inspired phenomenological models
e.g. MSSM models with a Higgs doublet;
vi) scalar exchange particles that would manifest
themselves e.g. in decays of the form $D^0 \to \nu {\bar \nu}$.
\item Further models feature {\it horizontal} interactions,
mediated by particles connecting u and c or d and s quarks
(see e.g. fig.\ref{feyn-x-s}a).
They appear with similar
signatures as the doubly Cabibbo suppressed decays.
\item Baryon number violating decays, such as
$D^0 \to p e^-$ or $D^+ \to n e^-$. They
are presumably very much suppressed,
although they are not directly related to proton decay.
\item The decay
$ D \rightarrow \pi \gamma $ is absolutely forbidden by gauge invariance
and is listed here only for completeness.
\end{itemize}
\vspace{-1.cm}
\begin{figure}[ht]
\epsfig{file=feyn-x-s.eps,width=9.2cm}
\epsfig{file=feyn-x-t.eps,width=7.6cm}
\caption{\it FCNC processes or LFNV decays, mediated by
the exchange of a scalar particle X
or a particle H mediating ``horizontal interactions'', or a leptoquark LQ.}
\label{feyn-x-s}
\end{figure}
The clean leptonic decays make it possible to search for leptoquarks.
If they do not couple also to quark-(anti)quark pairs, they cannot cause
proton decay but yield decays such as
$K \rightarrow \bar{l_1} l_2 $ or
$D \rightarrow \bar{l_1} l_2 $.
In the case of scalar leptoquarks
there is no helicity suppression and consequently the
experimental sensitivity to such decays is enhanced.
Let us emphasize here again, that
decays of $D$-mesons are complementary to those of Kaons, since they probe
different leptoquark types.
To estimate the sensitivity we write the effective four-fermion coupling as
$(g^2_{eff}/M^2_{LQ})$, and obtain
\begin{equation}
\frac{ (M_{LQ}\ /\ 1.8\ TeV)}{g_{eff}}
\geq
\sqrt[4] {\frac{10^{-5}}{BR(D^0 \rightarrow \mu^+\mu^-) }}.
\label{mlq}
\end{equation}
Here $g_{eff}$ is an effective coupling and includes possible mixing effects.
Similarly, the decays $D^+ \rightarrow e^+ \nu$, $D^+ \rightarrow \pi^+ e^+ e^- $ \ can be used to set bounds
on $M_{LQ}$. With the expected sensitivity, one can probe heavy exchange particles
with masses in the $1 \ (TeV / g_{eff})$ range.
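For orientation, the mass scale probed for a given limit is readily
evaluated from the relation above:
\begin{verbatim}
# M_LQ/g_eff implied by an upper limit on BR(D0 -> mu+ mu-),
# normalized so that BR < 1e-5 corresponds to 1.8 TeV.
def m_lq_over_g(br_limit):
    return 1.8 * (1e-5 / br_limit) ** 0.25   # in TeV

for br in (1e-5, 2.5e-6, 5e-8):
    print(f"BR < {br:.1e} -> M_LQ/g_eff > {m_lq_over_g(br):.1f} TeV")
\end{verbatim}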
Any theory attempting to explain the hadron-lepton symmetry or the
``generation'' aspects of the standard model will give rise to new phenomena
connected to the issues mentioned here. Background problems make it quite
difficult to search for signs
of them at high energies; therefore precision experiments
at low energies (like the highly successful $\mu$-, $\pi$- or K-decay experiments)
are very suitable to probe for any non-standard phenomena.
\section{Sensitivity estimate for HERA}
In this section we present
an estimate on the sensitivity to detect the
decay mode $D^0 \rightarrow \mu^+ \mu^- $.
As was pointed out earlier, this is among the
cleanest manifestation of FCNC or LFNV processes
\cite{rf-willey}.
We base the numbers on our experience
gained in the analysis of the 1994 data, published in \cite{dstar-gp}.
There the $D$-meson decay is measured in the decay mode
$ D^{*+} \rightarrow D^{0} \pi^{+}_s\ ; \
D^0 \to K^{-}\pi^{+}$, exploiting
the well established $D^{*+}(2010)$ tagging
technique\cite{rf-feldma}.
In analogy, we assume
for the decay chain $ D^{*+} \rightarrow D^{0} \pi^{+}_s ;
D^{0} \rightarrow \mu^+ \mu^- $,
a similar resolution of $\sigma \approx 1.1$ MeV in
the mass difference
$ \Delta M = M(\mu^+ \mu^- \pi^+_s) - M(\mu^+ \mu^- ) $
as in \cite{dstar-gp}.
\noindent
In order to calculate a sensitivity for the $D^0 \rightarrow \mu^+ \mu^- $
decay branching fraction we make the following
assumptions:
\noindent
i) luminosity $L = 250\ pb^{-1} $;
ii) cross section
$\sigma (e p \to c {\bar c} X) \mid_{\sqrt s_{ep} \approx 300; Q^2< 0.01}
= 940\ nb $;
iii) reconstruction efficiency $\epsilon_{reconstruction} = 0.5 $;
iv) trigger efficiency
$\epsilon_{trigger} = 0.6 $; this is based
on electron-tagged events, and hence applies to
photoproduction processes only.
v) The geometrical acceptance $A$ has been properly calculated
by means of Monte Carlo simulation for both
decay modes $D^0 \rightarrow K^- \pi^+ $\ and $D^0 \rightarrow \mu^+ \mu^- $\ for a rapidity interval of
$\mid \eta \mid < 1.5 $. For the parton density functions
the GRV parametrizations were employed, and the
charm quark mass was assumed to be $m_c = 1.5$~GeV. We obtained \\
$A = 6 \%$ for $p_T(D^*) > 2.5$~GeV/c (for $K^{-}\pi^{+}$ ) \\
$A = 18 \%$ for $p_T(D^*) > 1.5$~GeV/c (for $K^{-}\pi^{+}$ ) \\
$A = 21 \%$ for $p_T(D^*) > 1.5$~GeV/c (for $\mu^+ \mu^- $)
\noindent
A direct comparison with the measured decays $N_{K \pi}$
into $ K^{-}\pi^{+}$ \cite{dstar-gp} then yields the expected
number of events $N_{\mu \mu}$ and determines the branching ratio to
\vspace*{-0.5cm}
$$ BR(D^0 \to \mu^+ \mu^-) = BR(D^0 \to K^{-}\pi^{+}) \cdot
\frac{N_{\mu \mu}}{L_{\mu \mu}} \cdot \frac{L_{K \pi}}{N_{K \pi}}
\cdot \frac{A(p_T>2.5)}{A(p_T>1.5)} $$
Taking the numbers from \cite{dstar-gp}
$N_{K \pi} = 119$ corresponding to an integrated
luminosity of $L_{K \pi} = 2.77 \ pb^{-1}$,
one obtains
\vspace*{0.5cm}
\fbox{ $BR(D^0 \to \mu^+ \mu^-) = 1.1 \cdot 10^{-6} \cdot N_{\mu \mu}$ }
\noindent
In the case of {\it NO} events observed, an upper limit
on the branching ratio calculated by means of Poisson statistics
$(N_{\mu \mu} = 2.3)$ yields a value of
$BR(D^0 \to \mu^+ \mu^-) < 2.5 \cdot 10^{-6}$ at 90\% c.l.
In the case of an observation of
a handful events e.g. of $O(N_{\mu \mu} \approx 10$), one obtains
$BR(D^0 \to \mu^+ \mu^-) \approx 10^{-5}$.
This can be turned into an estimate for the mass of a potential
leptoquark mediating this decay according to eqn.\ref{mlq},
and yields a value of
$M_{LQ}/g_{eff} \approx 1.8$ TeV.
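The arithmetic of this estimate is easily reproduced in a few lines
(with $BR(D^0 \to K^{-}\pi^{+}) \approx 4\%$ inserted by hand):
\begin{verbatim}
# Sensitivity scaling from the measured K pi sample (numbers as above).
BR_Kpi = 0.04                # BR(D0 -> K- pi+), approximate
N_Kpi, L_Kpi = 119, 2.77     # events / luminosity (pb^-1), 1994 data
L_mumu = 250.0               # assumed integrated luminosity (pb^-1)
A_Kpi, A_mumu = 0.06, 0.21   # acceptances, pT > 2.5 / pT > 1.5 GeV/c

def br_mumu(N_mumu):
    return BR_Kpi*(N_mumu/L_mumu)*(L_Kpi/N_Kpi)*(A_Kpi/A_mumu)

print(br_mumu(1.0))   # ~1.1e-6 per observed event
print(br_mumu(2.3))   # ~2.5e-6, the 90% c.l. limit for no signal
\end{verbatim}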
\begin{table}[tb]
\begin{center}
\begin{tabular}{|l|c|c|c|}
\hline
Mode & BR (90\% C.L.) & Interest & Reference \\
\hline
\hline
$ r= {({\bar D^0} \to \mu^+X)} \over {({\bar D^0} \to \mu^-X)} \ $ &
$1.2*10^{-2}$ &$\ $ $\Delta C = 2$, Mix $\ $ & BCDMS 85 \\
$ \ $ & $5.6*10^{-3}$ & $\Delta C = 2$, Mix & E615 86 \\
\hline
${(D^0 \to {\bar D^0} \to K^+\pi^-)} \over
{(D^0 \to K^+ \pi^- + K^- \pi^+ )}$ &
$4*10^{-2}$ &$\ $ $\Delta C = 2$, Mix $\ $ & HRS 86 \\
$\ $ & $ = 0.01^*$ & $\Delta C = 2$, Mix & MarkIII 86 \\
$\ $ & $1.4*10^{-2}$ & $\Delta C = 2$, Mix & ARGUS 87 \\
$ \ $ & $3.7*10^{-3}$ &$\ $ $\Delta C = 2$, Mix $\ $ & E691 88 \\
\hline
$D^0 \to \mu^+ \mu^-$ & $7.0*10^{-5}$ & FCNC & ARGUS 88 \\
$D^0 \to \mu^+ \mu^-$ & $3.4*10^{-5}$ & FCNC & CLEO 96 \\
$D^0 \to \mu^+ \mu^-$ & $1.1*10^{-5}$ & FCNC & E615 86 \\
\hline
$D^0 \to e^+ e^-$ & $1.3*10^{-4}$ & FCNC & MarkIII 88 \\
$D^0 \to e^+ e^-$ & $1.3*10^{-5}$ & FCNC & CLEO 96 \\
\hline
$D^0 \to \mu^{\pm} e^{\mp}$ & $1.2*10^{-4}$ & FCNC, LF & MarkIII 87 \\
$D^0 \to \mu^{\pm} e^{\mp}$ & $1.0*10^{-4}$ & FCNC, LF & ARGUS 88 \\
$D^0 \to \mu^{\pm} e^{\mp}$ & $(1.9*10^{-5})$ & FCNC, LF & CLEO 96 \\
\hline
$D^0 \to {\bar K}^0 e^+ e^-$ & $ 1.7*10^{-3} $ & \ & MarkIII 88 \\
$D^0 \to {\bar K}^0 e^+ e^-/\mu^+ \mu^-/\mu^{\pm} e^{\mp}$
& $ 1.1/6.7/1.*10^{-4} $ & FCNC, LF & CLEO 96 \\
$D^0 \to {\bar K}^{*0} e^+ e^-/\mu^+ \mu^-/\mu^{\pm} e^{\mp}$
& $ 1.4/11.8/1.*10^{-4} $ & FCNC, LF & CLEO 96 \\
$D^0 \to \pi^0 e^+ e^-/\mu^+ \mu^-/\mu^{\pm} e^{\mp}$
& $ 0.5/5.4/.9*10^{-4} $ & FCNC, LF & CLEO 96 \\
$D^0 \to \eta e^+ e^-/\mu^+ \mu^-/\mu^{\pm} e^{\mp}$
& $ 1.1/5.3/1.*10^{-4} $ & FCNC, LF & CLEO 96 \\
$D^0 \to \rho^0 e^+ e^-/\mu^+ \mu^-/\mu^{\pm} e^{\mp}$
& $ 1./4.9/0.5*10^{-4} $ & FCNC, LF & CLEO 96 \\
$D^0 \to \omega e^+ e^-/\mu^+ \mu^-/\mu^{\pm} e^{\mp}$
& $ 1.8/8.3/1.2*10^{-4} $ & FCNC, LF & CLEO 96 \\
$D^0 \to \phi e^+ e^-/\mu^+ \mu^-/\mu^{\pm} e^{\mp}$
& $ 5.2/4.1/0.4*10^{-4} $ & FCNC, LF & CLEO 96 \\
\hline
$D^0 \to K^+ \pi^-\pi^+\pi^-$ & $< 0.0015$ & DC & CLEO 94 \\
$D^0 \to K^+ \pi^-\pi^+\pi^-$ & $< 0.0015$ & DC & E691 88 \\
$D^0 \to K^+ \pi^-$ & $=0.00029$ & DC & CLEO 94 \\
$D^0 \to K^+ \pi^-$ & $< 0.0006$ & DC & E691 88 \\
\hline
\end{tabular}
\caption[Experimental limits on rare $D^0$-meson decays.]
{Experimental limits at 90\% c.l. on rare $D^0$-meson decays
(except where indicated by =).
Here L, LF, FCNC, DC and Mix denote
lepton number and lepton family number violation, flavour changing
neutral currents, doubly Cabibbo suppressed decays and mixing,
respectively.}
\label{tab-d}
\end{center}
\end{table}
\section{Background considerations}
\subsection{Background sources and rejection methods}
The most prominent sources of background originate from
i) genuine leptons from semileptonic B- and D-decays,
and decay muons from $K, \pi$ decaying in the detector;
ii) misidentified hadrons, {\it i.e.} $\pi, K$, from other
decays, notably leading decays and SCSD; and
iii) combinatorial background from light quark processes.
The background can be considerably suppressed by applying various
combinations of the following techniques:
\begin{itemize}
\item $D^*$-tagging technique \cite{rf-feldma}: \\
A tight window on the mass difference $\Delta M$ is the most
powerful criterium.
\item Tight kinematical constraints (\cite{rf-grab2},\cite{hera-91a}): \\
Misidentification of hadronic $D^0$ 2-body decays such as
$D^0 \rightarrow K^- \pi^+ $ ($3.8\%$ BR), $D^0 \rightarrow \pi^+ \pi^- $ ($0.1\%$ BR) and $D^0 \rightarrow K^+ K^- $ ($0.5\%$ BR)
are suppressed by more than an order of magnitude
by a combination of tight windows
on both $\Delta M$ and $M^{inv}_D$.
Final states containing Kaons can be very efficiently discriminated, because
the reflected $M^{inv}$ is sufficiently separated from the true signal
peak. However, this is not true for a pion-muon or pion-electron
misidentification.
The separation is slightly better between $D^0 \rightarrow e^+ e^- $\ and $D^0 \rightarrow \pi^+ \pi^- $.
\item Vertex separation requirements for secondary vertices: \\
Background from light quark production,
and of muons from K- and $\pi$- decays within the detector are
further rejected by exploiting the information of secondary vertices (e.g.
decay length separation, pointing back to primary vertex etc.).
\item Lepton identification (example H1) :\\
Electron identification is possible by using $dE/dx$ measurements
in the drift chambers, the shower shape analysis in the calorimeter
(and possibly the transition radiator information).
Muons are identified with the instrumented
iron equipped with limited streamer tubes, with the
forward muon system, and in combination with
the calorimeter information.
The momentum has to be above $\sim 1.5$ to
$2 \ $ GeV/c to allow the $\mu$ to reach the instrumented iron.
Thus, the decay $D^0 \to \mu^+ \mu^-$ suffers from background contributions
by the SCSD mode $D^0 \to \pi^+ \pi^-$, albeit with a known
$BR = 1.6 \cdot 10^{-3} $; here
$\mu$-identification helps extremely well.
An example of background suppression using the particle ID
has been shown in ref.\cite{hera-91a},
where a suppression factor of order $O(100)$ has been achieved.
\item Particle ordering methods exploit the fact that
the decay products of the charmed mesons tend to
be the {\it leading} particles in the event (see e.g. \cite{dstar-dis}).
In the case of observed jets, the charmed mesons are
expected to carry a large fraction of the jet energy.
\item Event variables such as e.g. the total transverse energy
$E_{transverse}$ tend to reflect the difference in event topology
between heavy and light quark production processes, and hence
lend themselves for suppression of light quark background.
\end{itemize}
\subsection{Additional experimental considerations}
\begin{itemize}
\item Further possibilities to enhance overall statistics are the
usage of inclusive decays (no tagging), where the gain
in statistics is expected to be about
$\frac{ N({\rm all}\ D^0)}{ N(D^0\ {\rm from}\ D^*)} = 0.61 / 0.21 \approx 3$,
however at the cost of higher background contributions.
\item In the decays $D^0 \to e e$ or $D^0 \to \mu e$ one expects
factors of 2 to 5 times better background rejection efficiency.
\item Trigger :
A point to mention separately is the trigger. To be able to
measure a BR at the level of $10^{-5}$, the event filtering
process has to start at the earliest possible stage.
This should happen preferably at the
first level of the hardware trigger, because it will
not be feasible to store some $10^{+7}$ events on permanent
storage to dig out the few rare decay candidates.
This point, however, has up to now not yet been thoroughly
studied, let alone been implemented at the
hardware trigger level.
\end{itemize}
\begin{table}[tb]
\begin{center}
\begin{tabular}{|l|c|c|c|}
\hline
Mode & BR (90\% C.L.) & Interest & Reference \\
\hline
\hline
$D^+ \to \pi^+ e^+ e^-$ & $6.6*10^{-5}$ & FCNC & E791 96 \\
$D^+ \to \pi^+ \mu^+ \mu^-$ & $1.8*10^{-5}$ & FCNC & E791 96 \\
$D^+ \to \pi^+ \mu^+ e^-$ & $3.3*10^{-3}$ & LF& MarkII 90 \\
$D^+ \to \pi^+ \mu^- e^+$ & $3.3*10^{-3}$ & LF& MarkII 90 \\
\hline
$D^+ \to \pi^- e^+ e^+$ & $4.8*10^{-3}$ & L& MarkII 90 \\
$D^+ \to \pi^- \mu^+ \mu^+$ & $2.2*10^{-4}$ & L& E653 95 \\
$D^+ \to \pi^- \mu^+ e^+$ & $3.7*10^{-3}$ & L+LF& MarkII 90 \\
$D^+ \to K l l $ & similar & L+LF& MarkII 90 \\
\hline
$c \to X \mu^+ \mu^-$ & $1.8*10^{-2}$ & FCNC & CLEO 88 \\
$c \to X e^+ e^-$ & $2.2*10^{-3}$ & FCNC & CLEO 88 \\
$c \to X \mu^+ e^-$ & $3.7*10^{-3}$ & FCNC & CLEO 88 \\
\hline
$D^+ \to \phi K^+ $ & $1.3*10^{-4}$ & DC & E687 95 \\
$D^+ \to K^+ \pi^+ \pi^- $ & $=6.5*10^{-4}$ & DC & E687 95 \\
$D^+ \to K^+ K^+ K^- $ & $1.5*10^{-4}$ & DC & E687 95 \\
\hline
$D^+ \to \mu^+ \nu_{\mu}$ & $7.2*10^{-4}$ & $f_D$ & MarkIII 88 \\
\hline
\hline
$D_S\to \pi^- \mu^+ \mu^+$ & $4.3*10^{-4}$ & L& E653 95 \\
$D_S\to K^- \mu^+ \mu^+$ & $5.9*10^{-4}$ & L& E653 95 \\
$D_S \to \mu^+ \nu_{\mu}$ & $=9 *10^{-4}$ & $f_{D_S}=430$~MeV & BES 95 \\
\hline
\end{tabular}
\caption[Experimental limits on rare $D^+$- and $D_s$-meson decays.]
{Selection of experimental limits at 90\% c.l.
on rare $D^+$- and $D_s$-meson decays\cite{rf-partbook}
(except where indicated by =).}
\label{tab-ds}
\end{center}
\end{table}
\section{Status of sensitivity in rare charm decays}
Some of the current experimental upper limits
at 90\% c.l. on the branching ratios of
rare $D$ decays are summarised in
tables ~\ref{tab-d} and \ref{tab-ds}
according to \cite{rf-partbook}.
Taking the two-body decay $D^0 \to \mu^+ \mu^-$ to be the
sample case, a comparison of the achievable sensitivity on
the upper limit on branching fraction
$B_{D^0 \to \mu^+ \mu^-}$ at 90\% c.l. is summarized
in table \ref{tab-comp} for different experiments,
assuming that NO signal
events are being detected (see \cite{rf-grab1}
and \cite{rf-partbook}).
Note that the sensitivity reachable at HERA is comparable with
that of the other facilities, provided the above assumed luminosity is
actually delivered. This does not hold for a
proposed $\tau$-charm factory, which - if ever built and performing
as designed - would exceed all other facilities by at least
two orders of magnitude (\cite{rf-rafetc}).
\noindent
The status of competing experiments at other facilities
is the following :
\noindent
\begin{itemize}
\item SLAC : $e^+e^-$ experiments : Mark-III, MARK-II, DELCO : stopped.
\item CERN : fixed target experiments : ACCMOR, E615, BCDMS, CCFRC : stopped. \\
LEP-experiments : previously ran at the $Z^0$-peak;
now they continue
with increased $\sqrt{s}$, but at a {\it reduced} $\sigma$ for such
processes;
\item Fermilab (FNAL) : the photoproduction experiments E691/TPS and
hadroproduction experiments E791 and E653 are
stopped, with some analyses being finished based on about
$O(10^5)$ reconstructed events. In the near
future highly competitive results are to be expected from
the $\gamma p$ experiments E687 and
its successor E831 (FOCUS), based on an statistics
of about $O(10^5)$ and an estimated $10^6$ reconstructed
charm events, respectively. But also the hadroproduction
experiment E781 (SELEX) is anticipated to reconstruct some $10^6$
charm events within a few years.
\item DESY : ARGUS $e^+e^-$ : stopped, final papers emerging now. \\
HERA-B : With a very high cross section of
$\sigma(p N \to c {\bar c}) \approx 30 \mu$b at
$\sqrt{s} = 39 $ GeV and an extremely high luminosity,
a total of up to $10^{12} \ \ c {\bar c}$-events may be
produced. Although no detailed studies exist so far,
a sensitivity of order $10^{-5}$ to $10^{-7}$ might be expected,
depending on the background rates.
\item CESR : CLEO is continuing steadily to collect data, and above all
is the present leader in
sensitivity for many processes (see table \ref{tab-d}).
\item BEPC : BES has collected data at $\sqrt{s}=4.03$ GeV (and 4.14 GeV),
and is continuing to do so; BES will become competitive as soon as
enough statistics is available, because
the background conditions are very favourable.
\item $\tau$-charm factory : The prospects for a facility
being built in China (Beijing) are uncertain.
If realized, this is going to be the
most sensitive place to search for rare charm decays.
Both, kinematical constraints (e.g. running at the $\psi''(3700)$)
and the missing background from non-charm induced processes
will enhance its capabilities.
\end{itemize}
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|c|c|}
\hline
$\ $ & SPEAR & BEPC & E691 & LEP & $\tau-c F$ & CLEO & HERA \\
\hline
\hline
$\sigma_{cc}$ (nb) & 5.8 & 8.6 & 500 & 4.5 & 5.8 & 1.3 & $\sigma_{ep}=940$ \\
\hline
$L (pb^{-1}$) & 9.6 & 30 & 0.5 & 150 & $10^4$ & 3850 & 250\\
\hline
$N_D$ & $6*10^4$ & $3*10^5$ & $2.5*10^5$ & $10^6$
& $6*10^7$ & $5 * 10^6$ & $2.4*10^8$\\
\hline
$\epsilon \cdot A$ & 0.4 & 0.5 & 0.25 & 0.05 & 0.5 & 0.1 & 0.06 \\
\hline
$N_{BGND}$ & O(0) & O(1) & O(10) & O(10) & O(1) & O(1) & S/N$\approx$1 \\
\hline
$\sigma_{charm} \over \sigma_{total} $
& 1 & 0.4 & 0.01 & 0.1 & 1 & 0.1 & 0.1 \\
\hline
\hline
$B_{D^0 \to \mu^+ \mu^-}$ &
$1.2*10^{-4}$ & $2*10^{-5}$ & $4*10^{-5}$ &
$5*10^{-5}$ & $5*10^{-8}$ & $3.4 *10^{-5}$ & $2.5*10^{-6}$ \\
\hline
\end{tabular}
\caption[Comparison of sensitivity]
{Comparison of estimated sensitivity to the sample decay mode
$D^0 \to \mu^+ \mu^-$ for different facilities or experiments.}
\label{tab-comp}
\end{center}
\end{table}
\section{Summary}
$D$-meson decays offer a rich spectrum of interesting physics; their rare
decays may provide
information on new physics, which is complementary to the
knowledge stemming from $K$-meson and $B$-decays.
With the prospect of order a few times $10^8$
produced charmed mesons per year,
HERA has the potential to contribute substantially to this field.
Further competitive results can be anticipated from the fixed target
experiments at Fermilab or from a possible $\tau$-charm factory.
For the rare decay $D^0 \rightarrow \mu^+ \mu^- $ investigated here we
expect at least an order of magnitude improvement in sensitivity
over current results (see table given above) for a total integrated luminosity of
$\int L dt $ = 250 pb$^{-1}$, the limitation here being statistical.
An extrapolation to even higher luminosity is rather difficult
without a very detailed numerical simulation, because
at some (yet unknown) level the background processes will
become the main limiting factor for the sensitivity, rendering
sheer statistics useless.
For this, a good tracking resolution, excellent particle
identification (e, $\mu,\ \pi,\ K,\ {\rm p}$) and a high resolution for
secondary vertices are required
to keep the systematics under control, and either to
unambiguously identify a signal of new physics, or to
reach the ultimate limit
in sensitivity.
\section{Introduction}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
It is well known that the renormalization group (RG) equations \cite{rg}
have a peculiar power to improve the global nature of functions obtained
in the perturbation theory in quantum field theory (QFT)\cite{JZ}:
The RG equations may be interpreted as representing the fact that
the physical quantities ${\cal O}(p,\alpha, \mu)$
should not depend on the renormalization point $\mu$
having any arbitrary value,
\begin{eqnarray}
\frac{\partial {\cal O}(p, \alpha;\mu)}{\d \mu}=0.
\end{eqnarray}
Such a floating renormalization
point was first introduced by Gell-Mann and Low in the celebrated
paper\cite{rg}.
It is Goldenfeld, Oono and their collaborators ( to be abbreviated
to GO)
\cite{goldenfeld1,goldenfeld2} who first showed that the RG
equation can be used for purely mathematical problems as
to improving the global nature of the solutions of differential
equations obtained in the perturbation theory.
One might say, however, that their presentation of the method is
rather heuristic, relying heavily on the RG prescription
in QFT and statistical physics; it seems that they were not so
eager to give a mathematical reasoning for the method so that it
may be understandable even for those who are not familiar with the
RG.\footnote{In Appendix A, we give a brief account of
Goldenfeld et al's prescription.}
In fact, the reason why the RG equations, even in QFT, ``improve''
the naive perturbation theory had not been elucidated. One may say
that when GO successfully applied the
RG method to purely mathematical problems such as solving differential
equations, it shaped a clear problem: to reveal the mathematical
reason for the powerfulness of the RG method, at least
a la Stuckelberg-Peterman and Gell-Mann-Low.
Quite recently, the present author has formulated the method and
given the reasoning of GO's method on the basis of the classical theory of
envelopes\cite{kunihiro,kunihiro2}:
It was demonstrated that owing to the very RG equation,
the functions constructed from the solutions in the perturbation theory
certainly satisfy the differential equation in question uniformly up
to the order with which the local solutions around $t=t_0$ are constructed.
It was also indicated in a generic way that
the RG equation may be regarded as the envelope equation.
In fact, if a family of curves $\{{\rm C}_{\mu}\}_{\mu}$
in the $x$-$y$ plane is represented by $y=f(x; \mu)$,
the function $g(x)$ representing the envelope E is given
by eliminating the parameter $\mu$ from the equation
\begin{eqnarray}
\frac{\partial f(x; \mu)}{\partial \mu}=0.
\end{eqnarray}
One can readily recognize the similarity of the envelope equation
Eq.(1.2) with the RG equation Eq.(1.1).
In Ref.'s\cite{kunihiro,kunihiro2},
a simplified prescription of the RG method is also presented.
For instance, the perturbative expansion is made with
respect to a small parameter and independent functions\footnote{Such an
asmptotic series is called {\em generalizes asymptotic series}.
The author is gratefull to T. Hatusda for telling him this fact and
making him recognize its
significance.}, and
the procedure of the ''renormalization" has been
shown unnecessary.
However, the work given in \cite{kunihiro,kunihiro2} may be said to be
incomplete
in the following sense:
To give the proof mentioned above, the scalar equation in question
was converted to a
system of {\em first order} equations, which describe a vetor field.
But the theory of envelopes for vetor fields, i.e.,envelopes of
trajectories, has not been presented in \cite{kunihiro,kunihiro2}.
The theory should have
been formulated for vector equations to make the discussion
self-contained and complete.
One of the purposes of the present paper is therefore
to reformulate geometrically the RG
method for vector equations, i.e., systems of ODE's and PDE's and
to complete the discussion given in \cite{kunihiro,kunihiro2}.
Another drawback of the previous work is that
the reasoning given for the procedure of setting $t_0=t$
in the RG method\footnote{See Appendix A.} of Goldenfeld et al
was not fully persuasive.\footnote{The author is grateful to Y. Oono
for his criticism on this point.} In this paper, we present a more
convincing reasoning for the procedure.
Once the RG method is formulated for vector
fields,
the applicability of the
RG method developed by Goldenfeld, Oono and their collaborators
is found to be wider than one might have imagined:
The RG method is applicable also to,
say, $n$-dimensional vector equations
that are not simply converted to a scalar equation of the $n$-th
order; needless to say, it is not necessarily possible to convert
a system of ordinary differential equations (or dynamical system)
to a scalar equation of a high order with a simple structure,
although the converse is always possible
trivially.
For partial differential equations,
it is not always possible to convert a system to a scalar equation of a
high order\cite{curant}.
Moreover, interesting equations in science including physics and
applied mathematics
are often given as a system. Therefore, it is of interest and
importance to show that the RG method can be extended and applied to
vector equations. To demonstrate the powerfulness of the method,
we shall work out some specific examples of vector equations.
We shall emphasize that the RG method provides
a general method for the reduction of the dynamics as the
reductive perturbation method (abbreviated to RP method)\cite{kuramoto}
does.
It should be mentioned that Chen, Goldenfeld and Oono\cite{goldenfeld2}
already indicated that it is a rule that the
RG equation gives equations for slow motions which the RP method may
also describe.
In this paper, we shall confirm their observation
in a most general setting for vector equations.
Furthermore, one can show \cite{kunihiro3} that the natural extension of
the RG method also applies to {\em difference} equations or maps, and
an extended envelope equation leads to a reduction of the dynamics
even for discrete maps.
Thus one sees that the RG method is truly
a most promising candidate for a general theory of
the reduction of the dynamics, although actual computation is often
tedious in such a general and mechanical method.
This paper is organized as follows:
In the next section, we describe the theory of envelopes for curves (or
trajectories) in
parameter representation.
In section 3, we show how to construct envelope surfaces when
a family of surfaces in three-dimensional space is parametrized
by two parameters.
In section 4, we give the
basic mathematical theorem for the RG method applied to
vector fields.
This section is partially a recapitulation of
a part of Ref.\cite{kunihiro}, although some
clarifications are made here.
In section 5, some examples are examined in this method,
such as
the forced Duffing\cite{holmes}, the Lotka-Volterra\cite{lotka}
and the Lorenz\cite{lorenz,holmes} equations.
The Duffing equation is also an example of a non-autonomous equation,
containing an external force.
In section 6, we treat generic equations with a bifurcation;
the Landau-Stuart\cite{stuart} and
the Ginzburg-Landau equation will be derived in the RG method.
The final section is devoted to a brief summary and concluding remarks.
In Appendix A, a critical review of the Goldenfeld et al's method is
given.
In Appendix B, the Duffing equation is solved as a scalar equation
in the RG method.
\section{Envelopes of trajectories}
\setcounter{equation}{0}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
To give a geometrical meaning to the RG equation for systems, one
needs to formulate a theory of envelopes of curves which are
given in a parameter representation: For example,
if the equation is for
${\bf u} (t)=\ ^t(x(t), y(t))$, the solution forms a trajectory or curve in the
$x$-$y$ plane with $t$ being a parameter.
In this section, we give a brief account of the classical theory
of envelopes for curves in the $n$-dimensional space,
given in a parameter representation.
Let a family of curves $\{{\rm C}_{\alpha}\}_{\alpha}$ in an
$n$-dimensional space be given by
\begin{eqnarray}
{\bf X} (t; \alpha)=\ ^t(X_1(t; \alpha), X_2(t; \alpha), ... ,
X_n(t;\alpha)),
\end{eqnarray}
where the point $(X_1, X_2,... X_n)$ moves in the $n$-dimensional space
when $t$ is varied. Curves in the family is parametrized by $\alpha$.
We suppose that the family of curves $\{{\rm C}_{\alpha}\}_{\alpha}$
has the envelope E:
\begin{eqnarray}
{\bf X} _E(t)=\ ^t(X_{E1}(t), X_{E2}(t), \ldots , X_{En}(t)).
\end{eqnarray}
The functions ${\bf X} _E(t)$ may be obtained from ${\bf X}(t;\alpha)$ as
follows. If the contact point of C$_{\alpha}$ and E is given by
$t=t_{\alpha}$, we have
\begin{eqnarray}
{\bf X}(t_\alpha; \alpha)={\bf X}_E(t_{\alpha}).
\end{eqnarray}
For each point in E,
there exists a parameter $\alpha=\alpha(t)$: Thus the envelope
function is given by
\begin{eqnarray}
{\bf X}_E(t_{\alpha})={\bf X}(t_\alpha; \alpha(t_{\alpha})).
\end{eqnarray}
Then the problem is to get the function $\alpha(t)$, which is achieved
as follows.
The condition that E and C$_{\alpha}$ have a common tangent line at
${\bf X}(t_\alpha; \alpha)={\bf X}_E(t_{\alpha})$ reads
\begin{eqnarray}
\frac{d{\bf X}}{dt}\biggl\vert_{t=t_{\alpha}}=
\frac{d{\bf X}_E}{dt}\biggl\vert_{t=t_{\alpha}}.
\end{eqnarray}
On the other hand, differentiating Eq.(2.4), one has
\begin{eqnarray}
\frac{d{\bf X}_E}{dt}\biggl\vert_{t=t_{\alpha}}=\frac{\d {\bf X}}{\d t}
\biggl\vert_{t=t_{\alpha}}+ \frac{\d {\bf X}}{\d \alpha}\frac{d \alpha}{dt}
\biggl\vert_{t=t_{\alpha}} .
\end{eqnarray}
From the last two equations, we get
\begin{eqnarray}
\frac{\d {\bf X}}{\d \alpha}={\bf 0}.
\end{eqnarray}
From this equation, the function $\alpha=\alpha(t)$ is obtained.
This is of the same form as the RG equation. Thus one may call the
envelope equation the RG/E equation, too.
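A simple example may help to fix the idea: for the family of tangent
lines to the parabola $y=x^2$, written parametrically as
${\bf X}(t; \alpha)=\ ^t(t,\ 2\alpha t - \alpha^2)$, Eq.(2.7) gives
$\alpha(t)=t$, and hence the envelope
${\bf X}_E(t)=\ ^t(t,\ t^2)$, i.e., the parabola itself. The
elimination is easily checked symbolically:
\begin{verbatim}
# Envelope of X(t; a) = (t, 2*a*t - a**2): recover y = x^2.
import sympy as sp

t, a = sp.symbols('t a', real=True)
X = sp.Matrix([t, 2*a*t - a**2])
cond = sp.diff(X, a)              # dX/da = (0, 2t - 2a)
sol = sp.solve(cond[1], a)        # -> [t]
print(X.subs(a, sol[0]).T)        # Matrix([[t, t**2]])
\end{verbatim}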
In the application of the envelope theory for constructing global
solutions of differential equations, the parameter is the initial time
$t_0$, i.e., $\alpha =t_0$.
Actually,
apart from $t_0$, we have unknown functions given as initial values
in the applications. We use the above
condition to determine the $t_0$ dependence of the initial values
by imposing that $t_0=t$. In section 4, we shall show that
the resultant function obtained as the envelope of the
local solutions in the perturbation theory becomes an approximate but
uniformly valid solution.
\section{Envelope Surfaces}
\setcounter{equation}{0}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
This section is devoted to giving the condition for constructing
the envelope surface of a family of surfaces with two parameters
in the three-dimensional space. The generalization to the
$n$-dimensional case is straightforward.
Let $\{ {\rm S}_{\tau_1 \tau_2}\}_{\tau_1\tau_2}$ be a family of surfaces
given by
\begin{eqnarray}
F({\bf r}; \tau_1, \tau_2)=0,
\end{eqnarray}
and E the envelope surface of it given by
\begin{eqnarray}
G({\bf r})=0,
\end{eqnarray}
with ${\bf r}=(x, y, z)$.
The fact that E is in contact with S$_{\tau_1 \tau_2}$ at $(x, y, z)$ implies
\begin{eqnarray}
G({\bf r})=F({\bf r};\tau_1({\bf r}), \tau_2({\bf r}))=0.
\end{eqnarray}
Let $({\bf r}+d{\bf r}, \tau_1+d\tau_1, \tau_2+d\tau_2)$ give another
point in E; then
\begin{eqnarray}
G({\bf r}+d{\bf r})=F({\bf r}+d{\bf r};\tau_1+d\tau_1, \tau_2+d\tau_2)=0.
\end{eqnarray}
Taking the difference of the two equations, we have
\begin{eqnarray}
\nabla F\cdot d{\bf r}+\frac{\d F}{\d \tau_1}d\tau_1+
\frac{\d F}{\d \tau_2}d\tau_2=0.
\end{eqnarray}
On the other hand, the fact that E and S$_{\tau_1\tau_2}$ have a
common tangent plane at ${\bf r}$ implies that
\begin{eqnarray}
\nabla F\cdot d{\bf r}=0.
\end{eqnarray}
Combining the last two equations, one has
\begin{eqnarray}
\frac{\d F}{\d \tau_1}d\tau_1+\frac{\d F}{\d \tau_2}d\tau_2=0.
\end{eqnarray}
Since $d\tau_1$ and $d\tau_2$ may be varied independently, we have
\begin{eqnarray}
\frac{\d F}{\d \tau_1}=0,\ \ \ \ \frac{\d F}{\d \tau_2}=0.
\end{eqnarray}
From these equations, we get $\tau_i$ as a function of ${\bf r}$;
$\tau _i=\tau_i({\bf r})$.
As an example, let
\begin{eqnarray}
F(x, y, z; \tau_1, \tau_2)={\rm e} ^{-\tau_1y}\{1-y(x-\tau_1)\}+{\rm e} ^{-\tau_2x}
\{1-x(y-\tau_2)\}-z.
\end{eqnarray}
The conditions ${\d F}/{\d \tau_1}=0$ and ${\d F}/{\d \tau_2}=0$
give
\begin{eqnarray}
\tau_1=x,\ \ \ \tau_2=y,
\end{eqnarray}
respectively. Hence one finds that
the envelope is given by
\begin{eqnarray}
G(x, y, z)=F(x, y, z; \tau_1=x, \tau_2=y)=2{\rm e} ^{-xy}-z=0,
\end{eqnarray}
or $z=2{\rm exp}(-xy)$.
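This example is simple enough to be verified by computer algebra. The
following small script (a sketch in Python, assuming the sympy package is
available) checks both conditions $\d F/\d \tau_i=0$ and the resulting
envelope:
\begin{verbatim}
import sympy as sp

x, y, z, t1, t2 = sp.symbols('x y z tau1 tau2')

# family of surfaces F(x, y, z; tau1, tau2) = 0 of the example
F = (sp.exp(-t1*y)*(1 - y*(x - t1))
     + sp.exp(-t2*x)*(1 - x*(y - t2)) - z)

# dF/dtau1 factors as y^2 (x - tau1) e^{-tau1 y}, so tau1 = x;
# similarly dF/dtau2 gives tau2 = y
print(sp.factor(sp.diff(F, t1)))
print(sp.factor(sp.diff(F, t2)))

# substituting the contact values reproduces G = 2 exp(-xy) - z
print(sp.simplify(F.subs({t1: x, t2: y})))
\end{verbatim}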
It is obvious that the discussion can be extended to higher dimensional
cases.
In Ref.\cite{kunihiro2}, envelope surfaces were constructed in multiple
steps when the RG method was applied to PDE's. However, as
has been shown in this section, the construction can be performed in a single
step.
\setcounter{section}{3}
\setcounter{equation}{0}
\section{The basis of the RG method for systems }
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\subsection{ODE's}
Let ${\bf X}=\, ^t(X_1, X_2, \cdots , X_n)$ and
${\bf F}({\bf X}, t; \epsilon) =\, ^t(F _1({\bf X}, t; \epsilon)$,
$F _2({\bf X}, t; \epsilon),\cdots , F _n({\bf X}, t; \epsilon))$,
and ${\bf X}$ satisfy the equation
\begin{eqnarray}
\frac{d{\bf X}}{dt} = {\bf F}({\bf X} , t; \epsilon).
\end{eqnarray}
Let us construct the perturbative solution of
Eq.(4.1) around $t=t_0$ by expanding
\begin{eqnarray}
{\bf X} (t; t_0)= {\bf X} _0(t; t_0) + \epsilon {\bf X} _1(t; t_0)
+ \epsilon^2{\bf X} _2(t; t_0) + \cdots.
\end{eqnarray}
We suppose that an approximate solution
$\tilde{{\bf X}}=\tilde{\bf X} (t; t_0, {\bf W}(t_0))$
to the equation up to $O(\epsilon ^p)$ is obtained,
\begin{eqnarray}
\frac{d\tilde{\bf X} (t; t_0, {\bf W}(t_0))}{dt}=
{\bf F} (\tilde{\bf X} (t), t; \epsilon) + O(\epsilon^p),
\end{eqnarray}
where the $n$-dimensional vector
${\bf W}(t_0)$ denotes the initial values assigned at the initial
time $t=t_0$. Here notice that $t_0$ is arbitrary.
Let us construct the envelope function ${\bf X} _E(t)$ of the family of
trajectories given
by the functions $\tilde{\bf X}(t; t_0, {\bf W}(t_0))$ with $t_0$ parameterizing the
trajectories. The construction is performed as follows: First we impose the
RG/E equation, which now reads
\begin{eqnarray}
\frac{d\tilde{\bf X}}{d t_0}={\bf 0}.
\end{eqnarray}
Notice that $\tilde{\bf X}$ contains the unknown function ${\bf W}(t_0)$
of $t_0$.\footnote{
This means that Eq.(4.4) is a
total derivative w.r.t. $t_0$;
\begin{eqnarray}
\frac{d\tilde{\bf X}}{d t_0}=\frac{\d\tilde{\bf X}}{\d t_0}+
\frac{d{\bf W}}{d t_0}\cdot\frac{\d\tilde{\bf X}}{\d {\bf W}}={\bf 0}.\nonumber
\end{eqnarray}
}
In the usual theory of envelopes, as given in section 2,
this equation gives $t_0$ as a function
of $t$. However, since we are now constructing the perturbation solution
that is as close as possible to the exact one around $t=t_0$, we demand
that the RG/E equation should give the solution $t_0=t$, i.e.,
the parameter
should coincide with the point of tangency. It means that the RG/E equation
should determine the $n$-components of the initial
vector ${\bf W}(t_0)$ so that $t_0=t$. In fact,
Eq.(4.4) may give as many as $n$ equations which are independent
of each other.\footnote{In the applications given below, the equation
is, however, reduced to a scalar equation.}
Thus the envelope function is given by
\begin{eqnarray}
{\bf X} _E(t)=\tilde{\bf X} ( t; t, {\bf W}(t)).
\end{eqnarray}
Then the fundamental theorem for the RG method is the following:\\
{\bf Theorem:}\ \ {\em ${\bf X}_E(t)$ satisfies the original
equation uniformly up to $O(\epsilon ^p)$.}
{\bf Proof} \ \ The proof is already given in Eq.(3.21)
of Ref.\cite{kunihiro}. Here we recapitulate it for completeness.
$\forall t_0$, owing to the
RG/E equation one has
\begin{eqnarray}
\frac{d{\bf X}_E}{dt}\Biggl\vert _{t=t_0} &=&
\frac{d\tilde{\bf X}(t; t_0, {\bf W}(t_0))}{d t}\Biggl\vert _{t=t_0}+
\frac{d\tilde{\bf X}(t; t_0, {\bf W}(t_0))}{d t_0}\Biggl\vert _{t=t_0},
\nonumber \\
\ \ &=& \frac{d\tilde{\bf X}(t; t_0, {\bf W}(t_0))}{d t}\Biggl\vert _{t=t_0},
\nonumber \\
\ \ &=& {\bf F} ({\bf X} _E(t_0), t_0; \epsilon) + O(\epsilon^p),
\end{eqnarray}
where Eq.(4.4) has been used in the last equality. This concludes the
proof.
\subsection{PDE's}
It is desirable to develop a general theory for systems of PDE's as
has been done for ODE's. But such a general theorem is not available
yet.
Nevertheless it {\em is} known that the simple generalization
of Eq. (4.4) to envelope surfaces works.
Let $\tilde{\bf X} (t, {\bf x} ; t_0, {\bf x} _0; {\bf W} (t_0, {\bf x} _0))$
be an approximate solution obtained in the perturbation theory up to
$O(\epsilon^p)$ of
a system of PDE's with respect to $t$ and ${\bf x} =(x_1, x_2, \dots , x_n)$.
Here we have made explicit that the solution has an initial and boundary
value ${\bf W} (t_0, {\bf x} _0)$ dependent on $t_0$ and
${\bf x} _0= (x_{10}, x_{20}, \dots , x_{n0})$.
As has been shown in section 3, the RG/E equation now reads
\begin{eqnarray}
\frac{d \tilde{\bf X}}{d t_0}={\bf 0}, \ \ \
\frac{d \tilde{\bf X}}{d x_{i0}}={\bf 0}, \ \ (i=1, 2, \dots , n).
\end{eqnarray}
Notice again that $\tilde{\bf X}$ contains the unknown function
${\bf W}(t_0, {{\bf x} _0})$ dependent on $t_0$ and ${\bf x} _0$, hence
the derivatives are total derivatives.
As the generalization of the case for ODE's, we demand that the RG/E
equation should be compatible with the condition that the
coordinate of the point of tangency becomes the parameter of the
family of the surfaces; i.e.,
\begin{eqnarray}
t_0=t, \ \ \ {\bf x} _0 ={\bf x}.
\end{eqnarray}
Then the RG/E equation is now reduced to equations for the
unknown function ${\bf W}$, which will be shown to be
the amplitude equations such as time-dependent Ginzburg-Landau
equation. Here we remark that although Eq.(4.7) is a vector equation,
the equation to appear below will be reduced to a
scalar one; see subsection 6.2.
It can be shown, at least for equations treated so far and here,
that the resultant envelope
functions satisfy the original equations uniformly up to
$O(\epsilon^p)$; see also Ref.\cite{kunihiro2}.
\section{Simple examples}
\setcounter{equation}{0}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
In this section, we treat a few simple examples of systems
of ODE's to show how the RG method works.
The examples are the Duffing\cite{holmes} equation of non-autonomous
nature, the Lotka-Volterra\cite{lotka} and the Lorenz\cite{lorenz}
equations. The first one may be treated as
a scalar equation. Actually, the equation is easier to calculate when
treated as a scalar one. We give such a treatment in Appendix B.
We shall explicitly derive the time dependence of the
solution to the Lotka-Volterra equation.
The last one is an example with three degrees of freedom, which shows
a bifurcation\cite{holmes}. We shall give the center manifolds to this
equation around the first bifurcation of the Lorenz model.
A general treatment for
equations with a bifurcation will be treated in section 6.
\subsection{Forced Duffing equation}
The forced Duffing equations are reduced to
\begin{eqnarray}
\ddot {x}+ 2\epsilon \gamma \dot{x}+ (1+\epsilon \sigma)x + \epsilon hx^3&=&
\epsilon f\cos t, \nonumber \\
\ddot {y}+ 2\epsilon \gamma \dot{y}+ (1+\epsilon \sigma)y + \epsilon hy^3 &=&
\epsilon f\sin t.
\end{eqnarray}
Defining a complex variable $z=x+i y$, one has
\begin{eqnarray}
\ddot {z}+ 2\epsilon \gamma \dot{z}+ (1+\epsilon \sigma)z +
\frac{\epsilon h}{2}(3\vert z\vert^2z +{z^{\ast}}^3)= \epsilon f{\rm e}^{it}.
\end{eqnarray}
We suppose that $\epsilon$ is small.
We convert the equation to the system
\begin{eqnarray}
\biggl(\frac{d}{dt} -L_0\biggl){\bf u} = -\epsilon F(\xi, \eta; t)
\pmatrix{0\cr 1},
\end{eqnarray}
where
\begin{eqnarray}
{\bf u} &=& \pmatrix{\xi \cr \eta}, \ \ \
\xi= z, \ \ \eta = \dot{z},\nonumber \\
L_0 &=& \pmatrix{\ 0 & 1\cr
-1& 0},
\end{eqnarray}
and
\begin{eqnarray}
F(\xi, \eta; t)=\sigma \xi + 2\gamma \eta + \frac{h}{2}(3\vert \xi\vert^2 \xi
+ {\xi ^{\ast}}^3) - f{\rm e}^{it}.
\end{eqnarray}
Let us first solve the equation in the perturbation theory by expanding
\begin{eqnarray}
{\bf u} = {\bf u} _0 + \epsilon {\bf u} _1 + \dots,
\end{eqnarray}
with ${\bf u} _i=\ ^t(\xi _i, \eta _i)$\, $(i=0, 1, \dots)$.
We only have to solve the following equations successively;
\begin{eqnarray}
\biggl(\frac{d}{dt} -L_0\biggl){\bf u} _0&=&{\bf 0}, \nonumber \\
\biggl(\frac{d}{dt} -L_0\biggl){\bf u} _1&=&
- F(\xi _0, \eta _0; t)\pmatrix{0\cr 1},
\end{eqnarray}
and so on.
The solution of the zero-th order equation is found to be
\begin{eqnarray}
{\bf u} _0(t; t_0)= W(t_0){\bf U} {\rm e}^{it},
\end{eqnarray}
where ${\bf U} $ is an eigenvector belonging to an eigenvalue $i$ of $L_0$,
\begin{eqnarray}
L_0{\bf U} = i{\bf U}, \ \ \ {\bf U}=\pmatrix{1\cr i}.
\end{eqnarray}
The other eigenvector is given by the complex conjugate ${\bf U}^{\ast}$,
which belongs to the other eigenvalue $-i$. We have made it explicit
that the constant $W$ may be dependent on the initial time $t_0$.
In terms of the component,
\begin{eqnarray}
\xi_0(t;t_0)= W(t_0){\rm e} ^{it}, \ \ \
\eta _0(t;t_0) = iW(t_0){\rm e}^{it}.
\end{eqnarray}
Inserting these into $F(\xi _0, \eta _0;t)$, one has
\begin{eqnarray}
F(\xi _0, \eta _0;t)={\cal W}(t_0){\rm e}^{it} +
\frac{h}{2}{W^{\ast}}^3{\rm e}^{-3it},
\end{eqnarray}
with
\begin{eqnarray}
{\cal W}(t_0)\equiv (\sigma +2i\gamma)W + \frac{3h}{2}\vert W\vert ^2W -f
\end{eqnarray}
We remark that the inhomogeneous term includes a term proportional to
the zero-th order solution. Thus ${\bf u} _1$ contains a resonance
or a secular term as follows;
\begin{eqnarray}
{\bf u} _1(t; t_0)&=& -\frac{1}{2i}{\cal W}{\rm e}^{it}\{(t-t_0 +\frac{1}{2i})
{\bf U} -\frac{1}{2i}{\bf U} ^{\ast}\}
-\frac{h}{16}{W^{\ast}}^3{\rm e}^{-3it}({\bf U} -2{\bf U} ^{\ast}).
\end{eqnarray}
In terms of the components
\begin{eqnarray}
\xi _1(t;t_0)&=& \frac{i}{2}{\cal W}{\rm e} ^{it}(t-t_0)+\frac{h}{16}{W^{\ast}}^3
{\rm e}^{-3it}, \nonumber \\
\eta _1(t; t_0)&=& -\frac{{\cal W}}{2}{\rm e}^{it}(t-t_0 - i)
-\frac{3i}{16}h{W^{\ast}}^3{\rm e}^{-3it}.
\end{eqnarray}
Adding the terms, we have
\begin{eqnarray}
{\bf u}(t)&\simeq& {\bf u}_0(t;t_0) + \epsilon {\bf u}_1(t;t_0), \nonumber \\
\ \ \ &=& W(t_0){\bf U} {\rm e}^{it}-
\epsilon \frac{1}{2i}{\cal W}{\rm e}^{it}\{(t-t_0 +\frac{1}{2i})
{\bf U} -\frac{1}{2i}{\bf U} ^{\ast}\}
-\epsilon \frac{h}{16}{W^{\ast}}^3{\rm e}^{-3it}({\bf U} -2{\bf U} ^{\ast}),
\nonumber \\
\ \ \ &\equiv& \tilde{{\bf u}}(t;t_0).
\end{eqnarray}
In terms of the components,
\begin{eqnarray}
\xi(t;t_0)&\simeq&W(t_0){\rm e}^{it} +\epsilon \frac{i}{2}{\cal W}(t_0){\rm e}^{it}(t-t_0)
+\epsilon \frac{h}{16}{W^{\ast}}^3{\rm e}^{-3it}\equiv \tilde{\xi},\nonumber \\
\eta(t;t_0)&\simeq&iW(t_0){\rm e}^{it}-\epsilon\frac{{\cal W}}{2}{\rm e}^{it}(t-t_0 -i)
-\epsilon \frac{3i}{16}h{W^{\ast}}^3{\rm e}^{-3it}\equiv\tilde{\eta}.
\end{eqnarray}
Now let us construct the envelope ${\bf u}_E(t)$ of the family of
trajectories or curves
$\tilde{{\bf u}}(t; t_0)=(\tilde{\xi}(t;t_0), \tilde{\eta}(t;t_0))$
which is parametrized with $t_0$; ${\bf u}_E(t)$ will be found to be an
approximate solution to Eq. (5.3) in the global domain. According
to section 2, the envelope
may be obtained from the equation
\begin{eqnarray}
\frac{d\tilde{{\bf u}}(t;t_0)}{d t_0}=0.
\end{eqnarray}
In the usual procedure for constructing the envelopes, the above equation
is used for obtaining $t_0$ as a function of $t$, and the resulting
$t_0=t_0(t)$ is inserted in $\tilde{{\bf u}}(t;t_0)$ to make the
envelope function ${\bf u} _E(t)=\tilde{{\bf u}}(t; t_0(t))$. In our case,
we are constructing the envelope around $t=t_0$, so we rather impose
that
\begin{eqnarray}
t_0=t,
\end{eqnarray}
and Eq.(5.17) is used to obtain the initial value
$W(t_0)$ as a function of $t_0$. That is, we have
\begin{eqnarray}
0&=&\frac{d\tilde{{\bf u}}(t;t_0)}{d t_0}\biggl\vert _{t_0=t},\nonumber \\
\ &=&
\frac{dW}{dt}{\bf U} {\rm e}^{it} +\epsilon\frac{{\cal W}}{2i}{\rm e}^{it}{\bf U} +
\epsilon\frac{i}{2}\frac{d{\cal W}}{dt}{\rm e}^{it}\frac{1}{2i}({\bf U} -{\bf U} ^{\ast})
-\frac{3\epsilon h}{16}{W^{\ast}}^{2}\frac{dW^{\ast}}{dt}{\rm e}^{-3it}({\bf U} -2{\bf U} ^{\ast}).
\end{eqnarray}
Noting that the equation is consistent with $dW/dt=O(\epsilon)$, one has
\begin{eqnarray}
\frac{dW}{dt}&=& i\frac{\epsilon}{2}{\cal W}(t),\nonumber \\
\ \ \ &= & i\frac{\epsilon}{2}\{(\sigma +2i\gamma)W(t)+ \frac{3h}{2}
\vert W(t)\vert ^2 W(t) -f\}.
\end{eqnarray}
This is the amplitude equation called the Landau-Stuart equation,
which may also be obtained by the
RP method\cite{kuramoto} as a reduction of
the dynamics.
With this equation, the envelope trajectory is given by
\begin{eqnarray}
\xi_E(t)&=& W(t){\rm e}^{it} + \epsilon \frac{h}{16}{W^{\ast}}^3{\rm e}^{-3it},
\nonumber \\
\eta _E(t)&=& i(W(t)+\epsilon \frac{1}{2}{\cal W}(t)){\rm e}^{it}
-\epsilon \frac{3i}{16}h{W^{\ast}}^3{\rm e}^{-3it}.
\end{eqnarray}
For completeness, let us examine the stationary solution of the
Landau-Stuart equation, briefly;
\begin{eqnarray}
{\cal W}=(\sigma +2i\gamma)W + \frac{3h}{2}\vert W\vert ^2W-f=0.
\end{eqnarray}
Writing $W$ as
\begin{eqnarray}
W=A{\rm e} ^{-i\theta},
\end{eqnarray}
we have
\begin{eqnarray}
A^2\biggl[(\frac{3}{2}hA^2+\sigma)^2+4\gamma^2\biggl]=f^2,
\end{eqnarray}
which describes the jump phenomenon of the Duffing oscillator.
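The response curve implied by Eq.(5.24) is easily traced numerically. The
following sketch (Python with numpy; the parameter values are arbitrary
illustrative choices) solves the cubic in $A^2$ for a range of forcing
amplitudes and exposes the interval where three branches coexist, which is
the origin of the jump:
\begin{verbatim}
import numpy as np

sigma, gamma, h = -1.0, 0.05, 1.0   # illustrative detuning, damping, h

def response_amplitudes(f):
    # real positive roots A of A^2[(3h A^2/2 + sigma)^2 + 4 gamma^2] = f^2,
    # i.e. of the cubic in u = A^2:
    coeffs = [2.25*h**2, 3.0*h*sigma, sigma**2 + 4.0*gamma**2, -f**2]
    u = np.roots(coeffs)
    u = u[np.isreal(u)].real
    return np.sort(np.sqrt(u[u > 0]))

for f in np.linspace(0.05, 0.8, 16):
    print(f, response_amplitudes(f))
# where three positive roots coexist, the middle branch is unstable, and
# sweeping f up and down makes A jump between the outer branches
\end{verbatim}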
\subsection{Lotka-Volterra equation}
As another simple example, we take
the Lotka-Volterra equation\cite{lotka};
\begin{eqnarray}
\dot{x}= ax -\epsilon xy, \ \ \ \ \dot{y}=-by+\epsilon'xy,
\end{eqnarray}
where the constants $a, b, \epsilon$ and $\epsilon'$ are assumed to be positive.
It is well known that the equation has the conserved quantity, i.e.,
\begin{eqnarray}
b\ln\vert x\vert + a\ln \vert y\vert -(\epsilon' x+\epsilon y)={\rm const.}.
\end{eqnarray}
The fixed points are given by $(x=0, y=0)$ and $(x=b/\epsilon', y=a/\epsilon)$.
Shifting and scaling the variables by
\begin{eqnarray}
x=(b+ \epsilon\xi)/\epsilon', \ \ \ \ y=a/\epsilon + \eta,
\end{eqnarray}
we get the reduced equation given by the system
\begin{eqnarray}
\biggl(\frac{d}{dt}- L_0\biggl){\bf u}= -\epsilon\xi\eta\pmatrix{\ 1\cr -1},
\ \ \ \
\end{eqnarray}
where
\begin{eqnarray}
{\bf u} = \pmatrix{\xi\cr \eta},\ \ \ \ L_0=\pmatrix{0 & -b\cr a & \ 0}.
\end{eqnarray}
The eigenvalue equation
\begin{eqnarray}
L_0{\bf U}=\lambda _0{\bf U}
\end{eqnarray}
has the solution
\begin{eqnarray}
\lambda _0=\pm i\sqrt{ab}\equiv \pm i\omega, \ \ \ \
{\bf U} =\pmatrix{\, 1\cr \mp i\frac{\omega}{b}}.
\end{eqnarray}
Let us try to apply the perturbation theory to solve the equation
by expanding the variable in a Taylor series of $\epsilon$;
\begin{eqnarray}
{\bf u}={\bf u}_0+\epsilon{\bf u}_1 +\epsilon^2{\bf u}_2+\cdots,
\end{eqnarray}
with ${\bf u} _i=\ ^t(\xi _i, \eta_i)$.
The lowest term satisfies the equation
\begin{eqnarray}
\biggl(\frac{d}{dt}- L_0\biggl){{\bf u}}_0={\bf 0},
\end{eqnarray}
which yields the solution
\begin{eqnarray}
{\bf u} _0(t;t_0)=W(t_0){{\rm e}}^{i\omega t}{\bf U} + {\rm c.c.},
\end{eqnarray}
or
\begin{eqnarray}
\xi _0= W(t_0){\rm e} ^{i\omega t} + {\rm c.c.}, \ \ \ \
\eta _0=-\frac{\omega}{b}\big(iW(t_0){\rm e} ^{i\omega t} + {\rm c.c.}\big).
\end{eqnarray}
Here we have supposed that the initial value
$W$ depends on the initial time $t_0$.
Noting that
\begin{eqnarray}
\pmatrix{\ 1\cr -1}=\alpha {\bf U} + {\rm c.c.},
\end{eqnarray}
with $\alpha=(1- ib/\omega)/2$, one finds that
the first order term satisfies the equation
\begin{eqnarray}
\biggl(\frac{d}{dt} - L_0\biggl){\bf u} _1=
\frac{\omega}{b}\biggl[iW^2 {\rm e} ^{2i\omega t}
(\alpha {\bf U} + {\rm c.c.}) + {\rm c. c.}\biggl],
\end{eqnarray}
the solution to which is found to be
\begin{eqnarray}
{\bf u} _1=\frac{1}{b}\biggl[W^2(\alpha {\bf U} + \frac{\alpha ^{\ast}}{3}
{\bf U} ^{\ast})
{\rm e}^{2i\omega t} + {\rm c.c.}\biggl],
\end{eqnarray}
or
\begin{eqnarray}
\xi _1 &=&\frac{1}{b}\bigl( \frac{2\omega - ib}{3\omega}W^2{\rm e} ^{2i\omega t}
+ {\rm c.c.}\bigl), \nonumber \\
\eta _1 &=& -\frac{\omega}{3b^2}
\bigl( \frac{2b+ i\omega }{\omega}W^2{\rm e} ^{2i\omega t}
+{\rm c.c.}\bigl).
\end{eqnarray}
The second order equation now reads
\begin{eqnarray}
\biggl(\frac{d}{dt} - L_0\biggl){\bf u} _2 =
\frac{1}{3b^2}\biggl[\{ (b-i\omega)\vert W\vert ^2W{\rm e}^{i\omega t}
+ 3(b+i\omega)W^3{\rm e} ^{3i\omega t}\} + {\rm c.c.}\biggl]\pmatrix{\ 1\cr -1}.
\end{eqnarray}
We remark that the inhomogeneous term has a part proportional to the
zero-th-order solution, which gives rise to a resonance. Hence the
solution necessarily includes secular terms as follows;
\begin{eqnarray}
{\bf u} _2&=& \Biggl[\frac{b-i\omega}{3b^2}\vert W\vert ^2W
\biggl\{ \alpha (t-t_0 +i\frac{\alpha^{\ast}}{2\omega})
{\bf U} + \frac{\alpha ^{\ast}}{2i\omega}{\bf U} ^{\ast}\biggl\}
{\rm e} ^{i\omega t } \nonumber \\
\ \ \ & & + \frac{b+i\omega}{4b^2i\omega}W^3(2\alpha {\bf U} +
\alpha^{\ast}{\bf U} ^{\ast}){\rm e}^{3i\omega t}\Biggl]
+ {\rm c.c.} .
\end{eqnarray}
In terms of the components, one finds
\begin{eqnarray}
\xi _2 &=&
\Biggl[ \frac{-i}{6\omega}\frac{b^2+\omega^2}{b^2}\vert W\vert ^2W(t-t_0)
{\rm e} ^{i\omega t} +
\frac{W^3}{8b^2\omega ^2}\{ (3\omega ^2 -b^2)
- 4ib\omega \}{\rm e} ^{3i\omega t}\Biggl] + {\rm c.c.} \nonumber \\
\eta _2 &=& \frac{\vert W\vert ^2W}{6b^3}
\Biggl[
-(b^2 +\omega^2)(t-t_0) +\frac{1}{\omega}
\{2b\omega +i (b^2 -\omega ^2)\}\Biggl]{\rm e}^{i\omega t}\nonumber \\
\ \ \ \ & \ &
+ \frac{W^3}{8b^3}\{ -4b + \frac{i}{\omega}(3b^2 -\omega ^2)\}
{\rm e}^{3i\omega t}
+ {\rm c.c.} .
\end{eqnarray}
The RG/E equation reads
\begin{eqnarray}
\frac{d {\bf u}}{d t_0}={\bf 0},
\end{eqnarray}
with $t_0=t$, which gives the equation for $W(t)$ as
\begin{eqnarray}
\frac{d W}{dt}= - i\epsilon^2 \frac{\omega ^2+b^2}{6\omega b^2}\vert W\vert ^2 W.
\end{eqnarray}
If we define $A(t)$ and $\theta (t)$ by
$W(t)=(A(t)/2i) {\rm exp} i\theta(t)$, the equation gives
\begin{eqnarray}
A(t)= {\rm const.}, \ \ \ \
\theta (t) = - \frac{\epsilon^2A^2}{24}(\frac{1}{\omega ^2}+ \frac{1}{b^2})\omega t
+ \bar{\theta }_0,
\end{eqnarray}
with $\bar{\theta }_0$ being a constant. Owing to the prefactor $i$
in r.h.s. of Eq. (5.44), the absolute value of the amplitude $A$ becomes
independent of $t$, while the phase $\theta$ has a $t$-dependence.
The envelope function is given by
\begin{eqnarray}
{\bf u} _E(t)=\pmatrix{\xi _E(t)\cr \eta _E(t)}=
{\bf u} (t, t_0)\Biggl\vert_{t_0=t, \d {\bf u}/\d t_0=0}.
\end{eqnarray}
In terms of the components, one has
\begin{eqnarray}
\xi _{_E}&= & A\sin \Theta (t) -
\epsilon \frac{A^2}{6\omega}(\sin 2\Theta (t)
+ \frac{2\omega }{b}\cos 2\Theta (t))\nonumber \\
\ \ \ & \ & -\frac{\epsilon^2 A^3}{32}\frac{3\omega ^2 -b^2}{\omega ^2b^2}
(\sin 3\Theta (t) - \frac{4\omega b}{3\omega ^2 -b^2}\cos 3\Theta (t) ),
\nonumber \\
\eta _{_E} &=& -\frac{\omega}{b}\Biggl[
\biggl(A - \frac{\epsilon^2A^3}{24}\frac{b^2-\omega ^2}{b^2\omega ^2}\biggl)
\cos \Theta (t) - \frac{\epsilon ^2 A^3}{12b\omega}\sin \Theta (t)
\nonumber \\
\ \ \ \ & \ & + \epsilon \frac{A^2}{2b}\biggl(\sin 2\Theta (t) -
\frac{2b}{3\omega}\cos 2\Theta (t)\biggl)
- \frac{\epsilon^2A^3}{8b\omega}\biggl( \sin 3\Theta (t)
- \frac{3b^2 -\omega ^2}{4b^2\omega ^2}\cos 3\Theta (t)\biggl)\Biggl],
\end{eqnarray}
where
\begin{eqnarray}
\Theta (t) \equiv \tilde {\omega} t + \bar{\theta}_0,
\ \ \ \ \tilde {\omega} \equiv \{
1- \frac{\epsilon^2A^2}{24}(\frac{1}{\omega ^2}+ \frac{1}{b^2})\}\omega .
\end{eqnarray}
One sees that the angular frequency is shifted.
We identify ${\bf u}_E(t)= (\xi _E(t), \eta _E(t))$ as an approximate
solution to Eq.(5.28).
According to the basic theorem presented in section 4, ${\bf u}_E(t)$
is an approximate but uniformly valid solution to the equation
up to $O(\epsilon^3)$. We remark that
the resultant trajectory is closed in conformity with the conservation law
given in Eq. (5.26).
``Explicit solutions'' of the two-species Lotka-Volterra equations were
considered by Frame \cite{frame}; however, his main concern was
extracting the period of the solutions by an averaging method.
Compared with Frame's method,
the RG method is simpler, more transparent and more explicit.
The present author is not aware of any other
work which gives an explicit form of the solution as given in Eqs. (5.47) and (5.48).
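The frequency shift of Eq.(5.48) can be checked against a direct numerical
integration of the system (5.28). The following sketch (Python, assuming
scipy is available; parameter values are illustrative) measures the period
of $\xi$ from its upward zero crossings and compares it with
$2\pi/\tilde{\omega}$:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

a, b, eps = 2.0, 1.0, 0.1
omega = np.sqrt(a*b)

def rhs(t, u):                      # Eq.(5.28) in components
    xi, eta = u
    return [-b*eta - eps*xi*eta, a*xi + eps*xi*eta]

A = 1.0                             # leading-order amplitude of xi
sol = solve_ivp(rhs, (0.0, 200.0), [A, 0.0],
                rtol=1e-10, atol=1e-12, dense_output=True)
t = np.linspace(0.0, 200.0, 400000)
xi = sol.sol(t)[0]
up = np.where((xi[:-1] < 0) & (xi[1:] >= 0))[0]   # upward zero crossings
T_num = np.mean(np.diff(t[up]))
omega_rg = omega*(1.0 - eps**2*A**2/24.0*(1.0/omega**2 + 1.0/b**2))
print(T_num, 2*np.pi/omega_rg, 2*np.pi/omega)
\end{verbatim}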
\subsection{The Lorenz model}
The Lorenz model\cite{lorenz} for the thermal convection is given by
\begin{eqnarray}
\dot{\xi}&=&\sigma(-\xi+\eta),\nonumber \\
\dot{\eta}&=& r\xi -\eta -\xi\zeta,\nonumber \\
\dot{\zeta}&=& \xi\eta - b \zeta.
\end{eqnarray}
The steady states are given by
\begin{eqnarray}
{\rm (A)}\ \ (\xi, \eta, \zeta)=(0, 0, 0),\ \ \
{\rm (B)}\ \ (\xi, \eta, \zeta)=
(\pm \sqrt{b(r-1)},\pm \sqrt{b(r-1)},r-1).
\end{eqnarray}
The linear stability analysis\cite{holmes} shows that the origin is stable for
$0<r<1$ but unstable for $r>1$, while the latter steady states (B) are
stable for $1<r<\sigma(\sigma+b+3)/(\sigma -b-1)\equiv r_c$ but
unstable for $r>r_c$.
In this paper, we examine the non-linear stability around the origin
for $r\sim 1$; we put
\begin{eqnarray}
r=1+\mu \ \ \ {\rm and}\ \ \
\mu =\chi \epsilon^2, \ \ \ \chi={\rm sgn}\mu.
\end{eqnarray}
We expand the quantities as Taylor series of $\epsilon$:
\begin{eqnarray}
{\bf u}\equiv \pmatrix{\xi\cr
\eta\cr
\zeta}
= \epsilon {\bf u}_1+\epsilon^2{\bf u}_2 + \epsilon ^3{\bf u}_3 + \cdots,
\end{eqnarray}
where ${\bf u} _i=\ ^t(\xi_i, \eta_i, \zeta_i) $\ $(i=1, 2, 3, \dots)$.
The first order equation reads
\begin{eqnarray}
\biggl(\frac{d}{dt} - L_0\biggl){\bf u}_1={\bf 0},
\end{eqnarray}
where
\begin{eqnarray}
L_0=\pmatrix{-\sigma & \sigma & 0\cr
1 & -1 & 0\cr
0 & 0 & -b},
\end{eqnarray}
the eigenvalues of which are found to be
\begin{eqnarray}
\lambda _1=0, \ \ \ \lambda _2= - \sigma -1,\ \ \ \lambda _3= -b.
\end{eqnarray}
The respective eigenvectors are
\begin{eqnarray}
{\bf U} _1=\pmatrix{1\cr
1\cr
0}, \ \ \
{\bf U} _2=\pmatrix{\sigma\cr
-1\cr
0}, \ \ \
{\bf U} _3=\pmatrix{0\cr
0\cr
1}.
\end{eqnarray}
Since we are interested in the asymptotic state as $t\rightarrow \infty$,
we may take the neutrally stable solution
\begin{eqnarray}
{\bf u} _1(t; t_0)=W(t_0){\bf U}_1,
\end{eqnarray}
where we have made it explicit that the solution may depend on the
initial time $t_0$, which is supposed to be close to $t$.
In terms of the components,
\begin{eqnarray}
\xi_1(t)=W(t_0), \ \ \ \eta_1(t)=W(t_0), \ \ \ \zeta _1(t) =0.
\end{eqnarray}
The second order equation now reads
\begin{eqnarray}
\biggl(\frac{d}{dt} - L_0\biggl){\bf u}_2=\pmatrix{\ \ 0\cr
-\xi_1\zeta_1\cr
\xi_1\eta_1}
= W^2{\bf U}_3,
\end{eqnarray}
which yields
\begin{eqnarray}
{\bf u}_2(t)=\frac{W^2}{b}{\bf U}_3,
\end{eqnarray}
or in terms of the components
\begin{eqnarray}
\xi_2=\eta_2=0, \ \ \ \zeta_2=\frac{W^2}{b}.
\end{eqnarray}
Then the third order equation is given by
\begin{eqnarray}
\biggl(\frac{d}{dt} - L_0\biggl){\bf u}_3=
\pmatrix{\ \ \ 0\cr
\chi\xi_1-\xi_2\zeta_1-\xi_1\zeta_2\cr
\xi_2\eta_1+\xi_1\eta_2}
= \frac{1}{1+\sigma}(\chi W-\frac{1}{b}W^3)(\sigma{\bf U}_1 -{\bf U}_2),
\end{eqnarray}
which yields
\begin{eqnarray}
{\bf u}_3=\frac{1}{1+\sigma}(\chi W-\frac{1}{b}W^3)
\{\sigma(t-t_0 + \frac{1}{1+\sigma}){\bf U}_1 -
\frac{1}{1+\sigma}{\bf U}_2\}.
\end{eqnarray}
Thus gathering all the terms, one has
\begin{eqnarray}
{\bf u} (t;t_0)&=&
\epsilon W(t_0){\bf U}_1 + \frac{\epsilon^2}{b}W(t_0)^2{\bf U}_3 \nonumber \\
\ \ \ \ &\ & \ \ \
+ \frac{\epsilon ^3}{1+\sigma}(\chi W(t_0) -\frac{1}{b}W(t_0)^3)
\{\sigma(t-t_0 + \frac{1}{1+\sigma}){\bf U}_1 -
\frac{1}{1+\sigma}{\bf U}_2\},
\end{eqnarray}
up to $O(\epsilon ^4)$.
The RG/E equation now reads
\begin{eqnarray}
{\bf 0}&=&\frac{d {\bf u}}{d t_0}\biggl\vert_{t_0=t},\nonumber \\
\ &=& \epsilon \frac{dW}{dt}{\bf U}_1+ 2 \frac{\epsilon^2}{b}W\frac{dW}{dt}{\bf U}_3
-\frac{\sigma}{1+\sigma}\epsilon^3(\chi W - \frac{1}{b}W^3){\bf U}_1,
\end{eqnarray}
up to $O(\epsilon^4)$. Noting that one may self-consistently assume that
$dW/dt=O(\epsilon^2)$, we have the amplitude equation
\begin{eqnarray}
\frac{dW}{dt}=\epsilon^2\frac{\sigma}{1+\sigma}(\chi W(t) - \frac{1}{b}W(t)^3).
\end{eqnarray}
With this $W(t)$, the envelope function is given by
\begin{eqnarray}
{\bf u}_E(t)&=&{\bf u} (t; t_0=t),\nonumber \\
\ \ \ &=&
\epsilon W(t){\bf U}_1 + \frac{\epsilon^2}{b}W(t)^2{\bf U}_3
+ \frac{\epsilon ^3}{(1+\sigma)^2}(\chi W(t) -\frac{1}{b}W(t)^3)
(\sigma {\bf U}_1 -{\bf U}_2),
\end{eqnarray}
or
\begin{eqnarray}
\xi_E(t)&=&\epsilon W(t),\nonumber \\
\eta_E(t)&=& \epsilon W(t) +\frac{\epsilon^3}{1+\sigma}
(\chi W(t)-\frac{1}{b}W(t)^3),\nonumber \\
\zeta_E(t)&=& \frac{\epsilon^2}{b}W(t)^2.
\end{eqnarray}
We may identify the envelope functions thus constructed as a global
solution to the Lorenz model; according to the general theorem
given in section 4,
the envelope functions satisfy Eq.(5.49) approximately but
uniformly for $\forall t$ up to $O(\epsilon ^4)$.
A remark is in order here; Eq.(5.68) shows that the slow manifold which
may be identified with a center manifold\cite{holmes} is given by
\begin{eqnarray}
\eta=(1+ \epsilon^2\frac{\chi}{1+\sigma})\xi - \frac{1}{b(1+\sigma)}\xi^3,
\ \ \ \zeta= \frac{1}{b}\xi^2.
\end{eqnarray}
Notice here that the RG method is also a powerful tool to extract center
manifolds in a concrete form. It is worth mentioning that since the
RG method utilizes
neutrally stable solutions as the unperturbed ones, it is rather natural
that the RG method can extract center manifolds when they exist.
The applicability of the RG method was discussed in
\cite{goldenfeld2} using a generic model having a center manifold,
although the relation between the existence of center manifolds and
neutrally stable solutions is not so transparent in their general
approach.
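The reduction can also be checked numerically. The sketch below (Python,
assuming scipy; parameter values are illustrative) integrates the full
Lorenz system slightly above $r=1$ and compares $\xi(t)$ with the solution
of the amplitude equation (5.66), rewritten for $\xi=\epsilon W$ and
$\mu=\chi\epsilon^2$ as $\dot{\xi}=\sigma(\mu\xi-\xi^3/b)/(1+\sigma)$:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

sigma, b, mu = 10.0, 8.0/3.0, 0.05    # r = 1 + mu, just above threshold
r = 1.0 + mu

def lorenz(t, u):
    x, y, z = u
    return [sigma*(-x + y), r*x - y - x*z, x*y - b*z]

def slow(t, u):                       # amplitude equation for xi
    xi = u[0]
    return [sigma/(1.0 + sigma)*(mu*xi - xi**3/b)]

x0, T = 0.01, 400.0
full = solve_ivp(lorenz, (0, T), [x0, x0, x0**2/b],
                 rtol=1e-9, atol=1e-12, dense_output=True)
red = solve_ivp(slow, (0, T), [x0],
                rtol=1e-9, atol=1e-12, dense_output=True)
t = np.linspace(0, T, 2000)
print(np.max(np.abs(full.sol(t)[0] - red.sol(t)[0])),
      np.sqrt(b*mu))   # both saturate at xi = sqrt(b(r-1))
\end{verbatim}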
\setcounter{equation}{0}
\section{Bifurcation Theory}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
In this section, we take generic equations
with a bifurcation. We shall derive the Landau-Stuart and Ginzburg-Landau
equations in the RG method.
In this section, we shall follow Kuramoto's monograph\cite{kuramoto}
for notations to clarify the correspondence between the RG method and
the reductive perturbation (RP) method.
\subsection{Landau-Stuart equation}
We start with the $n$-dimensional equation
\begin{eqnarray}
\frac{d{\bf X}}{dt} = {\bf F}({\bf X} ; \mu ).
\end{eqnarray}
Let ${\bf X}_0(\mu)$ be a steady solution,
\begin{eqnarray}
{\bf F}({\bf X}_0(\mu) ; \mu)=0.
\end{eqnarray}
Shifting the variable as ${\bf X} = {\bf X}_0 + {\bf u}$,
we have a Taylor series
\begin{eqnarray}
\frac{d{\bf u}}{dt} = L{\bf u} + M{\bf u} {\bf u} + N{\bf u} {\bf u} {\bf u}+\cdots ,
\end{eqnarray}
where we have used the dyadic and triadic notations\cite{kuramoto};
\begin{eqnarray}
L_{ij}&=&\frac{\d F_i}{\d X_j}\biggl\vert _{{\bf X} ={\bf X} _0}, \ \ \
(M{\bf u} {\bf u})_i=\sum _{j, k}
{1\over 2} \frac{\d ^2 F_i}{\d X_j\d X_k}\biggl\vert _{{\bf X} ={\bf X} _0}u_ju_k,
\nonumber \\
(N{\bf u} {\bf u} {\bf u})_i&=& \sum _{j, k, l}
\frac{1}{6}\frac{\d ^3 F_i}{\d X_j\d X_k\d X_l}
\biggl\vert _{{\bf X} ={\bf X} _0}u_ju_ku_l.
\end{eqnarray}
We suppose that when $\mu<0$, ${\bf X}_0$ is stable for sufficiently
small perturbations, while for $\mu >0$ it is unstable. We also
confine ourselves to the case where a Hopf bifurcation occurs.
We expand $L, M$ and $N$ as
\begin{eqnarray}
L=L_0 + \mu L_1 + \cdots , \ \ M=M_0 + \mu M_1 + \cdots ,
\ \ N=N_0 + \mu N_1 + \cdots .
\end{eqnarray}
The eigenvalues $\lambda^{\alpha}\, (\alpha=1, 2, \dots , n)$ of $L$
are also expanded as
\begin{eqnarray}
\lambda^{\alpha}=\lambda^{\alpha}_0 + \mu \lambda^{\alpha}_1 + \cdots,
\end{eqnarray}
with
\begin{eqnarray}
L_0{\bf U} _{\alpha}= \lambda^{\alpha}_0{\bf U} _{\alpha}.
\end{eqnarray}
We assume that $\lambda^{1}_0=-\lambda^{2}_0$ are
pure imaginary, i.e., $\lambda^{1} _0 = i\omega_0$, and
$\Re \lambda^{\alpha}_0<0$ for $\alpha=3, 4, \dots$.
Defining $\epsilon$ and $\chi$ by $\epsilon = \sqrt{\vert \mu\vert}$
and $\chi={\rm sgn}\mu$, we expand as
\begin{eqnarray}
{\bf u} = \epsilon {\bf u} _1 + \epsilon^2 {\bf u}_2 + \epsilon ^3{\bf u}_3 +\cdots.
\end{eqnarray}
The ${\bf u} _i$ $(i=1, 2, 3, ...)$ satisfy
\begin{eqnarray}
\biggl(\frac{d}{dt} - L_0\biggl){\bf u} _1&=& {\bf 0}, \nonumber \\
\biggl(\frac{d}{dt} - L_0\biggl){\bf u} _2&=& M_0{\bf u} _1{\bf u} _1, \nonumber \\
\biggl(\frac{d}{dt} - L_0\biggl){\bf u} _3&=& \chi L_1{\bf u} _1 + 2M_0{\bf u} _1
{\bf u} _2 + N_0 {\bf u} _1{\bf u} _1 {\bf u} _1,
\end{eqnarray}
etc.
To see the asymptotic behavior as $t\rightarrow \infty$,
we take the neutrally stable solution as the lowest one around
$t= t_0$;
\begin{eqnarray}
{\bf u} _1 (t; t_0)=W(t_0){\bf U} {\rm e}^{i\omega_0t} + {\rm c.c.},
\end{eqnarray}
where c.c. stands for the complex conjugate. With this choice, we have
only two degrees of freedom for the initial value $W(t_0)$.
The second order equation is solved easily to yield
\begin{eqnarray}
{\bf u} _2(t;t_0)= \bigl({\bf V}_{+}W(t_0)^2 {\rm e}^{2i\omega_0 t} +
{\rm c. c.} \bigl) +
{\bf V}_0\vert W(t_0)\vert ^2,
\end{eqnarray}
where
\begin{eqnarray}
{\bf V}_{+}= - (L_0-2i\omega _0)^{-1} M_0{\bf U} {\bf U},\ \ \
{\bf V}_{0}= - 2L_0^{-1} M_0{\bf U} \bar{{\bf U}},
\end{eqnarray}
with $\bar{{\bf U}}$ being the complex conjugate of ${\bf U}$.\footnote{
In other sections, we use the notation $a^{\ast}$ for the
complex conjugate of $a$. In this section, $^{\ast}$ is used for a
different meaning, following ref.\cite{kuramoto}; see Eq. (6.16).}
Inserting ${\bf u} _1$ and ${\bf u} _2$ into the r.h.s of Eq. (6.9),
we get
\begin{eqnarray}
\biggl(\frac{d}{dt} - L_0\biggl){\bf u} _3 &=&\bigl\{\chi L_1 W{\bf U} +
(2M_0\bar{{\bf U}}{\bf V}_{+} + 3N_0{\bf U} {\bf U}\bar{{\bf U}})
\vert W\vert ^2W\bigl\}{\rm e}^{i\omega_0t} + {\rm c.c.} + {\rm h.h.},
\nonumber \\
\ \ \ & \equiv & {\bf A}{\rm e}^{i\omega_0t}
+ {\rm c.c.} + {\rm h.h.},
\end{eqnarray}
where h.h. stands for higher harmonics.
So far, the discussion is a simple perturbation theory
and has proceeded in the same way
as given in the RP method except for not having introduced multiple times.
Now we expand ${\bf A}$ by the eigenvectors
${\bf U} _{\alpha}$ of $L_0$ as
\begin{eqnarray}
{\bf A}=\sum _{\alpha}A_{\alpha}{\bf U} _{\alpha},
\end{eqnarray}
where
\begin{eqnarray}
A_{\alpha}= {\bf U} ^{\ast}_{\alpha}{\bf A}.
\end{eqnarray}
Here ${\bf U} ^{\ast}_{\alpha}$ satisfies
\begin{eqnarray}
{\bf U} ^{\ast}_{\alpha}L_0=\lambda^{\alpha}_0{\bf U} ^{\ast}_{\alpha} ,
\end{eqnarray}
and
is normalized as ${\bf U}^{\ast}_{\alpha}{\bf U}_{\alpha} =1$.
Then we get for ${\bf u}_3$
\begin{eqnarray}
{\bf u}_3(t;t_0)=\{A_1(t-t_0+\delta){\bf U} +
\sum _{\alpha\not= 1}\frac{A_{\alpha}}{i\omega_0 - \lambda_0^{\alpha}}
{\bf U}_{\alpha}\}{\rm e}^{i\omega_0t}
+ {\rm c.c.} + {\rm h.h.}.
\end{eqnarray}
The constant $\delta$ is chosen so that the coefficient of the
secular term of the first component vanishes at $t=t_0$.
Note the appearance of the secular term which was
to be avoided in the RP method: The condition
for the secular terms to vanish is called the solvability condition
which plays the central role in the RP method\cite{kuramoto}.
Thus we finally get
\begin{eqnarray}
{\bf u}(t;t_0)=\{\epsilon W(t_0){\bf U}
+ \epsilon ^3 \bigl(A_1(t-t_0+ \delta){\bf U} +
\sum _{\alpha\not= 1}\frac{A_{\alpha}}{i\omega_0 - \lambda_0^{\alpha}}
{\bf U}_{\alpha}\bigl)\}{\rm e}^{i\omega_0t}+ {\rm c.c.} + {\rm h.h.}.
\end{eqnarray}
The RG/E equation
\begin{eqnarray}
\frac{d {{\bf u}}}{d t_0}\Biggl\vert_{t_0=t}={\bf 0},
\end{eqnarray}
yields
\begin{eqnarray}
\frac{dW}{dt}&=&\epsilon ^2A_1, \nonumber \\
\ \ &=& \epsilon^2\bigl[
\chi {\bf U} ^{\ast}L_1{\bf U} W+ \{ 2{\bf U}^{\ast}M_0\bar{{\bf U}}{\bf V}_{+}
+3{\bf U}^{\ast}N_0{\bf U}{\bf U}\bar{{\bf U}}\}\vert W\vert^2W\bigl] ,
\end{eqnarray}
up to $O(\epsilon^3)$. Here note that the terms coming from h.h.
do not contribute to this order because
$dW/dt_0$ is $O(\epsilon ^2)$.
The resultant equation is the so-called Landau-Stuart equation and
coincides with the result derived in the RP method\cite{kuramoto}.
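It is instructive to write Eq.(6.20) in the polar form $W=R{\rm e} ^{i\phi}$.
With $\Lambda \equiv {\bf U} ^{\ast}L_1{\bf U}$ and
$G\equiv 2{\bf U}^{\ast}M_0\bar{{\bf U}}{\bf V}_{+}
+3{\bf U}^{\ast}N_0{\bf U}{\bf U}\bar{{\bf U}}$, one has
\begin{eqnarray}
\frac{dR}{dt}=\epsilon^2(\chi \Re \Lambda + \Re G\, R^2)R, \ \ \ \
\frac{d\phi}{dt}=\epsilon^2(\chi \Im \Lambda + \Im G\, R^2),\nonumber
\end{eqnarray}
so that in the supercritical case, $\chi \Re \Lambda>0$ and $\Re G<0$,
the amplitude settles on the limit cycle with
$R^2=-\chi \Re \Lambda/\Re G$, the phase equation then giving the
corresponding shift of the frequency from $\omega_0$.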
\subsection{The Ginzburg-Landau equation}
We add the diffusion term to Eq.(6.1);
\begin{eqnarray}
\frac{d{\bf X}}{dt} = {\bf F}({\bf X} )+ D\nabla ^2 {\bf X},
\end{eqnarray}
where $D$ is a diagonal matrix. Let ${\bf X} _0$ be a uniform and steady
solution.
Shifting the variable ${\bf X} = {\bf X} _0 +{\bf u}$ as before, we have
\begin{eqnarray}
\frac{d{\bf u}}{dt} = \hat{L}{\bf u} + M{\bf u} {\bf u} + N{\bf u} {\bf u} {\bf u}+\cdots ,
\end{eqnarray}
with
\begin{eqnarray}
\hat{L} = L +D\nabla ^2.
\end{eqnarray}
Then using the same expansion as before, we have the same equation for
${\bf u} _1, {\bf u}_2$ and ${\bf u}_3$ as given in Eq.(6.9) with $L_0$
being replaced with $\hat{L}_0\equiv L_0 + D\nabla ^2$.
To see the asymptotic behavior as $t\rightarrow \infty$,
we take the neutrally stable uniform solution as the lowest one around
$t= t_0$ and ${\bf r}={\bf r}_0$;
\begin{eqnarray}
{\bf u} _1 (t, {\bf r}; t_0, {\bf r}_0) =
W(t_0, {\bf r}_0){\bf U} {\rm e}^{i\omega_0t} + {\rm c.c.}.
\end{eqnarray}
With this choice, we have
only two degrees of freedom for the initial value $W(t_0, {\bf r}_0)$.
The second order equation is solved easily to yield the same form as
that given in Eq.(6.11).
Inserting ${\bf u} _1$ and ${\bf u} _2$ into the r.h.s of Eq. (6.9) with
$L_0$ replaced with $\hat{L}_0$,
we have
\begin{eqnarray}
\biggl(\frac{\d}{\d t} - \hat{L}_0\biggl){\bf u} _3 &=&\bigl\{\chi L_1 W{\bf U} +
(2M_0\bar{{\bf U}}{\bf V}_{+} + 3N_0{\bf U} {\bf U}\bar{{\bf U}})
\vert W\vert ^2W\bigl\}{\rm e}^{i\omega_0t} + {\rm c.c.} + {\rm h.h.},
\nonumber \\
\ \ \ & \equiv & {\bf A}{\rm e}^{i\omega_0t}
+ {\rm c.c.} + {\rm h.h.} .
\end{eqnarray}
Then we get for ${\bf u}_3$ in the spatially 1-dimensional case,
\begin{eqnarray}
{\bf u}_3(t;t_0)&=&\biggl[A_1\{c_1(t-t_0+\delta)
-\frac{c_2}{2} D^{-1}(x^2 -x_0^2+\delta')\}{\bf U} +
\sum _{\alpha\not= 1}\frac{A_{\alpha}}{i\omega_0 - \lambda_0^{\alpha}}
{\bf U}_{\alpha}\biggl]{\rm e}^{i\omega_0t} \nonumber \\
\ \ \ &\ & + {\rm c.c.} + {\rm h.h.},
\end{eqnarray}
with $c_1+c_2=1$.
We have introduced constants $\delta$ and $\delta'$ so that the
secular terms of the first component of ${\bf u} _3$ vanish at $t=t_0$
and $x=x_0$.
Note the appearance of the secular terms both $t$- and
$x$-directions; these terms were
to be avoided in the RP method
with the use of the solvability condition.
Adding all the terms, we finally get
\begin{eqnarray}
{\bf u}(t;t_0)&=&\biggl[(\epsilon W(t_0,x_0){\bf U}
+ \epsilon ^3 \{A_1\Big(c_1(t-t_0+\delta)-
\frac{c_2}{2} D^{-1}(x^2 -x_0^2+\delta')\Big){\bf U} \nonumber \\
\ \ \ &\ & \ \ \
+ \sum _{\alpha\not=1}\frac{A_{\alpha}}{i\omega_0 - \lambda_0^{\alpha}}
{\bf U}_{\alpha}\}\biggl]{\rm e}^{i\omega_0t}
+ {\rm c.c.} + {\rm h.h.},
\end{eqnarray}
up to $O(\epsilon ^4)$.
The RG/E equation\footnote{See section 3.}
\begin{eqnarray}
\frac{d {{\bf u}}}{d t_0}\Biggl\vert_{t_0=t}={\bf 0}, \ \ \
\frac{d {{\bf u}}}{d x_0}\Biggl\vert_{x_0=x}={\bf 0}, \ \ \
\end{eqnarray}
yields
\begin{eqnarray}
\frac{\d W}{\d t}=\epsilon ^2c_1A_1 + O(\epsilon^3),
\ \ \ \ D\frac{\d W}{\d x}=-\epsilon ^2xc_2A_1 +O(\epsilon^3).
\end{eqnarray}
We remark that the seemingly vector equation is reduced to a scalar
one.
Differentiating the second equation once again, we have
\begin{eqnarray}
D\frac{\d ^2W}{\d x^2}=-\epsilon ^2c_2A_1 +O(\epsilon^3).
\end{eqnarray}
Here we have utilized the fact that $\d W/\d x= O(\epsilon^2)$.
Noting that $c_1+c_2=1$, we finally reach
\begin{eqnarray}
\frac{\d W}{\d t}- D\frac{\d ^2W}{\d x^2}&=&\epsilon ^2A_1, \nonumber \\
\ \ &=& \epsilon^2\bigl[
\chi {\bf U} ^{\ast}L_1{\bf U} W+ \{ 2{\bf U}^{\ast}M_0\bar{{\bf U}}{\bf V}_{+}
+3{\bf U}^{\ast}N_0{\bf U}{\bf U}\bar{{\bf U}}\}\vert W\vert^2W\bigl] ,
\end{eqnarray}
up to $O(\epsilon^3)$.
This is the so-called time-dependent Ginzburg-Landau (TDGL)
equation and
coincides with the amplitude equation derived in the RP method\cite{kuramoto}.
We have seen that the RG method can reduce the dynamics of
a class of non-linear equations as the RP method
can. Therefore it is needless to say that our method can be applied to
the Brusselators\cite{brussel}, for instance, and leads to the same amplitude
equations as the RP method\cite{kuramoto} does\cite{kunihiro4}.
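As an illustration of the dynamics contained in Eq.(6.31), the following
sketch integrates a TDGL equation of this form with scalar coefficients
(Python with numpy; the coefficients are arbitrary illustrative choices,
not derived from a particular system):
\begin{verbatim}
import numpy as np

# dW/dt = D d^2W/dx^2 + lam W + g |W|^2 W, periodic boundaries
D, lam, g = 1.0, 1.0 + 0.5j, -1.0 + 0.2j
L, N, dt, T = 50.0, 256, 0.01, 50.0
dx = L/N

rng = np.random.default_rng(0)
W = 0.01*(rng.standard_normal(N) + 1j*rng.standard_normal(N))

for _ in range(int(T/dt)):          # explicit finite differences
    lap = (np.roll(W, 1) - 2*W + np.roll(W, -1))/dx**2
    W = W + dt*(D*lap + lam*W + g*np.abs(W)**2*W)

# |W| saturates near sqrt(-Re lam / Re g); the phase carries the
# slow spatio-temporal modulations
print(np.abs(W).mean(), np.sqrt(-lam.real/g.real))
\end{verbatim}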
\section{A brief summary and concluding remarks}
In this paper, we have shown that the RG method of Goldenfeld, Oono and
their collaborators can be
equally applied to vector equations, i.e.,
systems of ODE's and PDE's, as to scalar
equations.\cite{goldenfeld1,goldenfeld2,kunihiro,kunihiro2}
We have formulated the method
on the basis of the classical theory of envelopes, thereby
completing the argument given in \cite{kunihiro,kunihiro2}.
We have worked out some examples of systems of ODE's, i.e.,
the forced Duffing, the Lotka-Volterra and the Lorenz equations.
It has been also shown in a generic way that the method applied to
equations with a bifurcation leads to the amplitude equations, such as
the Landau-Stuart and
the (time-dependent) Ginzburg-Landau equation.
Then what about the phase equations\cite{kuramoto}?
The phase equations describe another reduced dynamics.
The basis of the reduction of the dynamics by the phase equations lies in the
fact that when a symmetry is broken, there appears
a slow motion which is a classical counterpart of the Nambu-Goldstone boson
in quantum field theory.
We believe that if the phase equations are related to slow motions of the
system at all, the RG method should also lead to the phase equations.
It is an interesting task to show that this is the case.
There is another class of dynamical systems besides those described by
differential equations, i.e., difference equations or discrete maps.
It is interesting that a
natural extension of the RG/E equation to difference
equations leads to a reduction of the dynamics.\cite{kunihiro3}
This fact suggests that the RG method pioneered by Goldenfeld, Oono and
their collaborators provides one of the most promising candidates for
a general theory of
the reduction of dynamics, although it is certain that such a mechanical
and general method is often tedious in the actual calculations.\footnote{
It should be mentioned that there are other methods
\cite{other1,other2} for the dynamical reduction
as promising as the RG and RP method are.}
As an application of the reduction of difference equations,
it will be interesting to see whether the coupled map lattice
equations as systems of non-linear difference equations\cite{cml}
can be reduced to simpler equations by the RG method. We hope that we can
report about it in the near future.
\vspace{2.5cm}
\centerline{\large{\bf Acknowledgements}}
This work is partly a reply to some people who asked
if the RG method could be applied to vector equations.
The author acknowledges
M. Davis and Y. Kuramoto for questions and comments for the previous
papers\cite{kunihiro,kunihiro2}.
He also thanks J. Matsukidaira and T. Ikeda for indicating the
significance of examining vector equations.
J. Matsukidaira is gratefully acknowledged for useful comments
in the earliest stage of this work. He thanks M. Yamaguti and
Y. Yamagishi for discussions on difference equations.
He is indebted to R. Hirota, H. Matano,
Y. Nishiura, J. Satsuma and M. Yamaguti
for their interest in this work. He expresses his sincere thanks to M.
Yamaguti for his encouragement.
{\large {\bf Note added}}
After submitting the paper, the author was informed that S. Sasa applied
the RG method to derive phase equations in a formal way.
The author is grateful to S. Sasa
for sending me the TEX file({\tt patt-sol/9608008})
of the paper before its publication.
\newpage
\setcounter{equation}{0}
\centerline{\bf {\large Appendix A}}
\renewcommand{\theequation}{A.\arabic{equation}}
In this Appendix, we give a quick review of Goldenfeld, Oono and
their collaborators' prescription for the RG method. Then we summarize
the problems for which a mathematical reasoning is needed, in the author's
point of view.
We take the following simplest example to
show their prescription:
\begin{eqnarray}
\frac{d^2 x}{dt^2}\ +\ \epsilon \frac{dx}{dt}\ +\ x\ =\ 0,
\end{eqnarray}
where $\epsilon$ is supposed to be small. The exact solution reads
\begin{eqnarray}
x(t)= A \exp (-\frac{\epsilon}{2} t)\sin( \sqrt{1-\frac{\epsilon^2}{4}} t + \theta),
\end{eqnarray}
where $A$ and $\theta$ are constants to be determined by an initial
condition.
Now, let us blindly apply the perturbation theory expanding $x$ as
\begin{eqnarray}
x(t) = x_0(t) \ +\ \epsilon x_1(t)\ +\ \epsilon ^2 x_2(t)\ +\ ... .
\end{eqnarray}
The result is found to be\cite{kunihiro}
\begin{eqnarray}
x(t; t_0)&=& A_0\sin (t +\theta_0) -\epsilon\frac{A_0}{2} (t -t_0)\sin(t+\theta_0)
\nonumber \\
\ \ \ & \ \ \ & +\epsilon^2\frac{A_0}{8}
\{ (t-t_0)^2\sin(t +\theta_0) - (t-t_0)\cos(t+\theta_0)\}
+ O(\epsilon^3).
\end{eqnarray}
Now here come the crucial steps of Goldenfeld et al.'s prescription:
\begin{description}
\item{(i)}
First they introduce a dummy time $\tau$ which is close to $t$, and
``renormalize" $x(t; t_0)$ by writing
$t - t_0 = t-\tau +\tau - t_0$;
\begin{eqnarray}
x(t, \tau)&=& A(\tau)\sin (t +\theta(\tau)) -\epsilon\frac{A(\tau)}{2}
(t -\tau)\sin(t+\theta(\tau))
\nonumber \\
\ \ \ & \ \ \ & + \epsilon^2\frac{A(\tau)}{8}
\{ (t-\tau)^2\sin(t +\theta(\tau)) - (t-\tau)\cos(t+\theta(\tau))\}
+ O(\epsilon^3),
\end{eqnarray}
with
\begin{eqnarray}
x(\tau, \tau)= A(\tau)\sin (\tau +\theta(\tau)).
\end{eqnarray}
Here $A_0$ and $\theta_0$ have been multiplicatively renormalized to
$A(\tau)$ and $\theta(\tau)$.
\item{(ii)}
They observe that $\tau $ is an arbitrary constant introduced by hand,
thus they claim that
the solution $x(t, \tau)$ should not
depend on $\tau$; namely, $x(t, \tau)$ should satisfy the equation
\begin{eqnarray}
\frac{d x(t, \tau)}{d \tau}=0.
\end{eqnarray}
This is similar to the RG equation in the field theory
where $\tau$ corresponds to the
renormalization point; hence the name of the RG method.
\item{(iii)}
Finally they impose another important but mysterious condition that
\begin{eqnarray}
\tau=t.
\end{eqnarray}
\end{description}
From (ii) and (iii), one has
\begin{eqnarray}
\frac{dA}{d\tau} + \frac{\epsilon}{2} A=0, \ \ \
\frac{d\theta}{d\tau}+\frac{\epsilon^2}{8}=0,
\end{eqnarray}
which gives
\begin{eqnarray}
A(\tau)= \bar{A}{\rm e}^{-\epsilon\tau/2}, \ \ \
\theta (\tau)= -\frac{\epsilon^2}{8}\tau + \bar{\theta},
\end{eqnarray}
where $\bar{A}$ and $\bar{\theta}$ are constants.
Thus, setting $\tau=t$ in $x(t, \tau)$, one gets
\begin{eqnarray}
x(t,t)= \bar{A}\exp(-\frac{\epsilon}{2} t)\sin((1-\frac{\epsilon ^2}{8})t +
\bar{\theta}).
\end{eqnarray}
They identify $x(t,t)$ with the desired solution $x(t)$. Then one finds
that the resultant $x(t)$ is an approximate but uniformly valid
solution to Eq.(A.1).
In short, the solution obtained in the perturbation theory with
the local nature has been ``improved'' by the RG equation Eq.(A.7)
to become a global solution.
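The uniform validity of Eq.(A.11), in contrast to the secular divergence of
the naive expansion (A.4), is easily visualized. The following sketch
(Python, assuming scipy is available) compares both with a direct
integration of Eq.(A.1):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

eps, A0, th0, t0 = 0.2, 1.0, 0.0, 0.0

def naive(t):   # Eq.(A.4): secular terms grow with t - t0
    s, c = np.sin(t + th0), np.cos(t + th0)
    return (A0*s - eps*A0/2*(t - t0)*s
            + eps**2*A0/8*((t - t0)**2*s - (t - t0)*c))

def rg(t):      # Eq.(A.11): the envelope of the local solutions
    return A0*np.exp(-eps*t/2)*np.sin((1 - eps**2/8)*t + th0)

x0 = A0*np.sin(th0)
v0 = A0*(-eps/2*np.sin(th0) + (1 - eps**2/8)*np.cos(th0))
sol = solve_ivp(lambda t, u: [u[1], -eps*u[1] - u[0]],
                (0.0, 60.0), [x0, v0],
                rtol=1e-10, atol=1e-12, dense_output=True)
t = np.linspace(0.0, 60.0, 1000)
print(np.max(np.abs(sol.sol(t)[0] - rg(t))),     # stays small uniformly
      np.max(np.abs(sol.sol(t)[0] - naive(t))))  # grows with t
\end{verbatim}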
But what have we done mathematically?
What is the mathematical meaning of the ``renormalization''
replacing $t_0$ with the extra dummy time $\tau$?
Can't we avoid the ``renormalization''
procedure to solve a purely mathematical problem?
Why can we identify $x(t,t)$ with the desired solution?
With $\tau $ being a constant, $x(t, \tau)$ can be a(n) (approximate)
solution to Eq. (A.1), can't it? In other words, when the operator $d/dt$
hits the second argument of $x(t, t)$, what happens?
In Ref.\cite{kunihiro}, it was shown that
the ``renormalization" procedure to introduce the extra dummy
time $\tau$ is not necessary.
Furthermore, it was clarified that
the conditions (ii) and (iii) are the ones to construct
the {\em envelope} of the family of the local solutions
obtained in the perturbation theory;
$x(t; t)$ is the envelope function of the
family of curves given by $x(t; t_0)$ where $t_0$ parametrizes the
curves in the family.
Furthermore, it was shown that the envelope function $x(t,t)$ satisfies
the original equations approximately but uniformly; the hitting of $d/dt$
on the second argument of $x(t, t)$ does not harm anything.
In short, the prescription given by Goldenfeld, Oono and their collaborators
is not incorrect, but the reasoning for the prescription is given in
\cite{kunihiro,kunihiro2} and will be more refined in the present
paper.
In Ref.\cite{kunihiro2}, a simplification of the prescription and its
mathematical foundation is given for PDE's.
\newpage
\centerline{\bf {\large Appendix B}}
\setcounter{equation}{0}
\renewcommand{\theequation}{B.\arabic{equation}}
In this Appendix, we solve the forced Duffing equation without converting it
to a system. It is easier to solve it in this way than
in the way shown in the text.
We start with Eq. (5.2)
\begin{eqnarray}
\ddot {z}+ 2\epsilon \gamma \dot{z}+ (1+\epsilon \sigma)z +
\frac{\epsilon h}{2}(3\vert z\vert^2z +{z^{\ast}}^3)= \epsilon f{\rm e}^{it},
\end{eqnarray}
where $\epsilon$ is small.
Expanding $z$ as
\begin{eqnarray}
z=z_0+\epsilon z_1 +\epsilon ^2z_2 + \cdots,
\end{eqnarray}
one gets for $z$ in the perturbation theory
\begin{eqnarray}
z(t; t_0)= W(t_0){\rm e} ^{it}+
\epsilon \frac{i}{2}(t-t_0)\{(\sigma + 2i\gamma)W + \frac{3h}{2}\vert W\vert^2W -f\}
{\rm e} ^{it} + \epsilon \frac{h}{16}{W^{\ast}(t_0)}^3{\rm e}^{-3it} + O(\epsilon^2).
\end{eqnarray}
Note that there exists a secular term in the first order term.
The RG/E equation reads\cite{kunihiro}
\begin{eqnarray}
\frac{d z}{d t_0}=0
\end{eqnarray}
with $t_0=t$, which leads to
\begin{eqnarray}
\dot{W}=
\frac{i\epsilon}{2}\{(\sigma +2i\gamma)W + \frac{3h}{2}\vert W\vert ^2W - f\}
\end{eqnarray}
up to $O(\epsilon ^2)$. Here we have discarded terms such as $\epsilon dW/dt$,
which is $O(\epsilon ^2)$ because $dW/dt=O(\epsilon)$.
The resultant equation for the amplitude is the Landau-Stuart equation for the
Duffing equation, and coincides with Eq.(5.20) of the text. The envelope is given by
\begin{eqnarray}
z_E(t)=z(t; t_0=t)=
W(t){\rm e} ^{it} + \epsilon\frac{h}{16} {W^{\ast}}^3{\rm e}^{-3it} + O(\epsilon^2).
\end{eqnarray}
We identify $z_E(t)$ with a global solution of Eq.(B.1); then
$x(t)={\rm Re}[z_E]$ and $y(t)={\rm Im}[z_E]$ are solutions to
the original real equations (5.1). As shown in the text, $\forall t$,
$z_E(t)$ satisfies Eq.(B.1) uniformly up to $O(\epsilon ^2)$.
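As a consistency check, the amplitude equation (B.5) can be integrated side
by side with Eq.(B.1) itself; the following sketch (Python, assuming scipy;
parameter values are illustrative) compares $\vert z\vert$ with $\vert W\vert$:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

eps, gamma, sigma, h, f = 0.1, 0.2, 0.3, 1.0, 0.5

def duffing(t, u):            # Eq.(B.1), z and zdot split into re/im
    z = u[0] + 1j*u[1]
    zd = u[2] + 1j*u[3]
    zdd = (eps*f*np.exp(1j*t) - 2*eps*gamma*zd - (1 + eps*sigma)*z
           - eps*h/2*(3*abs(z)**2*z + np.conj(z)**3))
    return [zd.real, zd.imag, zdd.real, zdd.imag]

def amp(t, u):                # Eq.(B.5)
    W = u[0] + 1j*u[1]
    Wd = 0.5j*eps*((sigma + 2j*gamma)*W + 1.5*h*abs(W)**2*W - f)
    return [Wd.real, Wd.imag]

W0 = 0.3 + 0.0j               # z ~ W e^{it}, so z(0) = W0, zdot(0) = i W0
full = solve_ivp(duffing, (0, 300),
                 [W0.real, W0.imag, (1j*W0).real, (1j*W0).imag],
                 rtol=1e-9, atol=1e-11, dense_output=True)
red = solve_ivp(amp, (0, 300), [W0.real, W0.imag],
                rtol=1e-9, atol=1e-11, dense_output=True)
t = np.linspace(0, 300, 3000)
zmod = np.abs(full.sol(t)[0] + 1j*full.sol(t)[1])
Wmod = np.abs(red.sol(t)[0] + 1j*red.sol(t)[1])
print(np.max(np.abs(zmod - Wmod)))   # agreement up to O(eps) corrections
\end{verbatim}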
\newpage
\newcommand{N. \ Goldenfeld}{N. \ Goldenfeld}
\newcommand{Y.\ Oono}{Y.\ Oono}
|
\section*{Acknowledgments}
We are grateful to V.A. Kuzmin and M.A. Shifman for useful
discussions. We also wish to thank R. Ball for the clarification
of the current status of the QCD analysis of the HERA data.
It is a pleasure to thank the members of the
University of Minnesota for their hospitality during this interesting
meeting. The participation in the
DPF-96 Meeting of APS was partly supported by the Russian Fund for
Fundamental Research, Grant N 96-02-18897. The work on this report
was done within the framework of Grant N 96-01-01860, supported
by the Russian Fund for Fundamental Research.
\newpage
|
\section{Introduction}
Late-type Low Surface Brightness galaxies (LSBs) are considered to be very young
stellar systems, because of their rather blue colors (de Blok, van der Hulst \&
Bothun 1995, McGaugh \& Bothun 1996) and very low oxygen abundances
(McGaugh, 1994). Based on this observational evidence, there
have recently been theoretical suggestions that LSBs are formed
inside dark matter halos that collapsed very recently, at
$z\le 1$, from density fluctuations of small amplitude
(Dalcanton, Spergel, \& Summers 1996, Mo, McGaugh, \& Bothun
1994).
In this work we study the colors of LSBs from the point of view
of synthetic stellar populations (SSP), and show that LSBs
could not be as young as claimed in the quoted literature.
Recently one of us (PP) has obtained a stellar Initial Mass
Function (hereafter P-IMF) starting from high-resolution numerical
simulations of the supersonic random motions in the interstellar
medium (Nordlund \& Padoan, 1997; Padoan, Jones \&
Nordlund,1997).
Here we will plug this P-IMF into the latest version of our
synthetic
stellar population code which is based on Jimenez \& MacDonald (1997)
evolutionary tracks
and Kurucz atmospheric models (Kurucz 1992). With this we
compute synthetic colors and color gradients for LSBs
(section 2) and we show how these can be used to set tight
bounds on the ages of their stellar discs (section 3). We also
show that the color gradients are well fitted (section 4), and
we speculate on the cosmological implications of these results
in section 5.
\section{Synthetic stellar populations for LSBs}
In the following, when we refer to LSBs we will always mean
the sample of late-type disc galaxies observed by de Blok,
van der Hulst \& Bothun (1995). For each galaxy of their sample
the HI surface density and the surface brightness profiles in
several bands are published.
LSBs are found to be rather blue; the color tends to become bluer
in the outer regions of their discs. De Blok, van der Hulst
\& Bothun (1995) noted that it is difficult to understand the
colors of LSBs,
if their stellar population is old or forming at a declining rate.
McGaugh and Bothun (1996) from the analysis of their sample
concluded that the stellar
populations in LSBs must be very young, because of the very blue colors
and of the very low metallicity. In fact an IMF appropriate to the
solar neighbourhood, like the one by
Miller and Scalo (1979), has a shape very flat for ${\rm M}\leq 0.1 {\rm
M}_{\odot}$ and this results in too red V-I colors when B-V are properly
fitted.
Since the discs of LSBs are rather quiescent when compared with
HSB discs, we suppose that their colors are an excellent probe
of their stellar IMF. Although this can at most be taken as
first approximation, it gives an excellent fit to many
observed relations, as we will show. Moreover, it allows us to
probe to what extent our P-IMF can provide a
realistic interpretation of observed data. At variance with
other IMFs, the P-IMF has no free
parameters, and it is
based on a model for the structure and dynamics of molecular
clouds, which has
strong observational support (Padoan, Jones, \& Nordlund 1997,
Padoan \& Nordlund 1997).
The P-IMF is designed to model large scale star formation, and
contains a
dependence on mean density $n$, temperature $T$, and velocity
dispersion
$\sigma_{v}$ of the star forming gas. The mean stellar mass is
given by:
\begin{equation}
M_{*}=1\rm {M}_{\odot}\left(\frac{T}{10\,K}\right)^2\left(\frac{n}{10\, cm^{-3}}\right)^{-1/2}
\left(\frac{\sigma_{v}}{5\, km/s}\right)^{-1}
\label{eq1}
\end{equation}
As a significant example we apply the P-IMF to a simple
exponential disc model, with height-scale
equal to $100\rm\, {pc}$, length scale equal to $3\, \rm{Kpc}$,
and total mass
equal to $\rm{M_{D}}=3\times10^9 \rm {M}_{\odot}$, a set of
parameters chosen to be representative of the LSBs. Our
results
about colors depend only slightly on these
particular values, however.
As a measure of the gas velocity dispersion we use the disc vertical
velocity dispersion. We also assume that all stars are formed
in a cold gas phase, at $T=10\, K$.
Note that the same stellar mass would be obtained if the vertical velocity
dispersion, instead of the height-scale, were kept constant along the radius,
because of the dependence on velocity dispersion and density in
equation (1).
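To make the radial trend quantitative, the following sketch evaluates
equation (1) along the disc model just described (Python with numpy). The
closure relations used to convert the surface density profile into a
mid-plane number density and a vertical velocity dispersion (a
self-gravitating isothermal layer with $\sigma_{v}^2=\pi G \Sigma h$ and
$n\propto \Sigma/2h$) are our own reading of the assumptions stated above,
and the numbers are only indicative:
\begin{verbatim}
import numpy as np

G = 4.301e-3             # pc (km/s)^2 / Msun
MSUN_PC3_TO_CM3 = 40.5   # H atoms per cm^3 for 1 Msun/pc^3
M_D, R_d, h, T = 3e9, 3000.0, 100.0, 10.0   # Msun, pc, pc, K

Sigma0 = M_D/(2*np.pi*R_d**2)               # central surface density

for r in (0.0, 1000.0, 3000.0, 6000.0):     # pc
    Sigma = Sigma0*np.exp(-r/R_d)
    n = Sigma/(2*h)*MSUN_PC3_TO_CM3         # mid-plane density, cm^-3
    sigma_v = np.sqrt(np.pi*G*Sigma*h)      # km/s
    M_star = (T/10.0)**2*(n/10.0)**-0.5*(sigma_v/5.0)**-1.0
    print(r/1000.0, round(n, 2), round(sigma_v, 2), round(M_star, 2))
# the typical stellar mass grows outwards, from ~0.6 Msun at the
# center to a few Msun at 6 kpc
\end{verbatim}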
Fig.~1 shows the IMF predicted for such a disc at 1$ \rm {kpc}$
and
6$ \rm {kpc}$ from its center. The IMF is more massive
than the Miller-Scalo (dashed line), but also less broad. The
IMF at
6$ \rm {kpc}$ is also more massive than at 1$ \rm {kpc}$.
We then expect that with these properties the stellar populations
which will form will be rather blue, and will become bluer at
larger distances from the center, as is observed in LSBs.
To compute the synthetic colors we used the latest
version of our synthetic stellar population code (Jimenez et al. 1996).
The code uses the library of stellar tracks computed with
JMSTAR9
and the set of atmospheric models calculated
by Kurucz (Kurucz 1992). A careful treatment of {\em all}
evolutionary stages
has been done following the prescriptions in Jimenez et al. (1995), and Jimenez
et al. (1996). Different star formation rates and stellar IMF
are incorporated in the code, so a large parameter space can be investigated.
We find that the star formation in LSBs can be
adequately described with
an initial burst, followed by a quiescent evolution up to the
present time. It has already been remarked (van der Hulst et al.,
1993) that LSBs' gas surface
densities are too low to allow efficient star formation according to
the Kennicutt criterion (Kennicutt 1989). Therefore it is reasonable to
argue that significant star formation is limited to an initial
burst. The duration of the burst is almost irrelevant
to the colors, because of its rather old age, but
it cannot be much longer than a few $10^7$ yr, in order to be
consistent with the low metallicity of the synthetic stellar
population, and with the low oxygen abundance of the HII
regions observed by McGaugh (1994) in LSBs.
We find that the colors of LSBs are not difficult to reproduce,
as long as stars smaller than $1 \rm{M}_{\odot}$ are not
as numerous as in the solar-neighborhood population, which would
give a
too red V-I color, and as long as a low metallicity is used.
Indeed, one can easily see,
from the theoretical models by Kurucz (1992), that even a
{\it single} star with low
metallicity (Z=0.0002) can reproduce the colors of LSBs. As an
example, the colors
of a typical galaxy from the sample of de Blok, van der Hulst,
\& Bothun, namely F568-V1,
are: U-B=-0.16, B-V=0.57, B-R=0.91, V-I=0.77 (luminosity
weighted); the colors
of a Kurucz model with temperature T=5500 K, $\log$(g)=4.5,
Z=0.0002
are: U-B=-0.17, B-V=0.56, B-R=0.94, V-I=0.75. This model
corresponds to
a star of $0.94 \rm{M}_{\odot}$, having a lifetime of 11 Gyr.
Obviously, the reason for
such a good match does not lie in the fact that the stellar IMF does not
contain any star more
massive than $1 \rm{M}_{\odot}$, as suggested in the past
(Romanishin, Strom, \& Strom 1983, Schombert et al. 1990), but
simply in the fact that $0.94 \rm{M}_{\odot}$ is the mass
at the turn-off for the stellar population of F568-V1 in our
model, which gives an age for this galaxy's disc of about 11
Gyr.
\section{The age of LSBs}
In Fig.~2 we plot the time evolution of the colors for a very low
metallicity ($Z=0.0002$), and in Fig.~3 for a higher metallicity
($Z=0.0040$).
In order to compare the theoretical prediction with the observed colors,
we have used the mean values of the luminosity-weighted colors listed
in Table~4 of de Blok, van der Hulst, \& Bothun (1995).
Since the color of a stellar population is affected by age and
metallicity, we also plot in Fig.~2 the mean of the observed colors,
excluding the three galaxies for which U-B is observed and has a positive
value. The error bars represent the dispersion around the mean. It is
clear
that the fit is excellent for an age of $12 \,\rm{Gyr}$, and that an age
$\le 9 \,\rm{Gyr}$ is definitely inconsistent with the data.
In
Fig.~3 we plot the mean of the colors for the three galaxies with
positive U-B. These redder galaxies are better fitted by a higher
metallicity,
$Z=0.0040$, which is one fifth of the solar metallicity, and is one of
the highest metallicity estimated by McGaugh (1994) in LSB HII regions.
The best fit for the age is $9\, \rm{Gyr}$.
The effect of the metallicity on the colors is illustrated
in Figs.~4 and 5, where we show the trajectories of the time evolution of our
models in color-color diagrams. It is evident that we do not find LSBs
younger than $9\,{\rm Gyr}$, for any metallicity consistent with the
observations. The spread in colors is a result of the spread in
metallicity, as is shown by the remarkable agreement between the
trajectories and the observations. For instance, in the (B-V,U-B)
diagram, where the trajectories are well separated, also the observed
points show a similar spread in U-B. On the other hand, in the
(B-R,B-V) diagram, where the theoretical trajectories are almost coincident,
also the observational points are nicely aligned around the
trajectories.
Therefore {\it the ages of LSBs' discs rule out the possibility
that they
formed from primordial density fluctuations of low amplitude,
collapsed at $z\le1$}. Such old ages may seem difficult to
reconcile with those of the relatively young stellar populations in
normal late-type galaxies, that have U-B and B-V colors comparable
to those of LSBs, and B-R and V-I even redder. However, the very blue
U-B and B-V colors in LSBs are very well explained by the very low
metallicities, rather than by the young stellar ages, and the B-R and V-I
colors are explained by the lack of small stars (as the P-IMF predicts),
in comparison with a Miller-Scalo
IMF.
The diagram (B-V,U-B) shown in Fig.~5 is particularly important,
because it can be used to estimate the age of single galaxies, without
an independent determination of the metallicity of its stellar population.
In fact, in that diagram the time evolution is almost horizontal, along
B-V, while the metallicity variations are almost vertical, along U-B.
In other words, the age--metallicity degeneracy in the colors is broken
in such a diagram. We can therefore see that galaxies of different
metallicities all have about the same age (11-12 Gyr). The horizontal
dispersion of the observational points, along B-V, is approximately
0.1 mag, which is comparable to the observational uncertainty. Therefore,
the determination of the age of LSB discs with presently available
photometry, and without an independent estimate of the metallicity,
has an uncertainty of $\pm 2.0$ Gyr (0.1 mag in B-V).
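The arithmetic behind this estimate can be sketched as follows (a minimal illustration; the local slope is an assumed value implied by the numbers quoted above, not an independently derived quantity):
\begin{verbatim}
# Near 11-12 Gyr the tracks evolve almost horizontally in (B-V, U-B);
# assume a local rate of ~0.05 mag in B-V per Gyr (0.1 mag <-> 2 Gyr).
dBV_dt   = 0.05   # mag/Gyr, illustrative value consistent with the text
sigma_BV = 0.10   # horizontal dispersion in B-V, mag
print(sigma_BV / dBV_dt)   # -> 2.0 Gyr age uncertainty
\end{verbatim}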
\section{Color gradients}
An interesting feature of LSBs is their color gradient: LSBs are bluer in the
periphery than near the center of their disc (de Blok, van der Hulst, \&
Bothun 1995).
Our theoretical models predict a color gradient in agreement with the
observations. In fact, the exponential disc model has a volume density
that decreases with increasing radius, and equation (1) shows that
the typical stellar mass in the IMF grows with decreasing gas density,
producing increasingly bluer colors.
We have computed the color gradients per disc length-scale, for a
model with an age of 12 Gyr, and metallicity Z=0.0002. In Table~1
we show the results, compared with the observational data, which are
obtained from the mean of the values listed by de Blok, van der Hulst,
\& Bothun (1995), in their Table~3. Again, we have excluded from the mean the
three galaxies with U-B$>0$, since they require a metallicity significantly larger
than Z=0.0002. Together with the mean gradients, we give the mean of the
errors listed by the above-mentioned authors.
The agreement between observational data and theory is striking. Note
that the model is just the one that best fits the colors of LSBs, as shown
in Fig.~2, rather than being an ad hoc model which fits the
color gradients.
Therefore {\it the color gradient of LSBs indicates that the stellar IMF
is more massive towards the periphery of the discs than near the
center}, as predicted by our P-IMF.
\section{Conclusions}
In this work we have shown that the P-IMF, applied to a simple
exponential disc model, allows an excellent description of the colors
and color gradients of LSBs. This allows us to draw a few
interesting consequences:
\begin{itemize}
\item The Miller-Scalo IMF produces too red V-I colors, and therefore
cannot describe the stellar population of LSB galaxies;
\item The P-IMF, applied to a simple exponential disc model with an
initial burst of star formation, produces
excellent fits of the LSBs' colors and color gradients;
\item The metallicity of LSB stellar populations ranges from practically
zero to about one fifth solar.
\item Although most stars in LSBs are formed in an initial burst, a relation between
colors and surface brightness is not expected, because the colors are also strongly
affected by the metallicity.
\item The age of LSBs, inferred from the UBVRI colors, is between
$9$ and $13\, \rm{Gyr}$. These disc populations are therefore about as
old as the disc of our Galaxy.
\item Since LSB galaxies are old, they cannot be explained as late-collapsed objects (low-density fluctuations at $z\le1$); their origin therefore remains unexplained.
\end{itemize}
\acknowledgements
This work has been supported by the Danish National Research Foundation
through its establishment of the Theoretical Astrophysics Center.
RJ and VAD thank TAC for the kind hospitality and
support.
\section{Introduction}
Irrotational dust spacetimes have been widely studied, in
particular as models for the late universe, and as arenas for
the evolution of density perturbations and gravity wave
perturbations. In linearised theory, i.e. where the irrotational
dust spacetime is close to a Friedmann--Robertson--Walker dust
spacetime, gravity wave perturbations are usually
characterised by
transverse traceless tensor modes.
In terms of the covariant and gauge--invariant
perturbation formalism initiated by Hawking \cite{h}
and developed by Ellis and Bruni \cite{eb},
these perturbations are described by the
electric and magnetic Weyl tensors, given respectively by
\begin{equation}
E_{ab}=C_{acbd}u^c u^d\,,\quad H_{ab}={\textstyle{1\over2}}\eta_{acde}u^e
C^{cd}{}{}_{bf}u^f
\label{eh}
\end{equation}
where $C_{abcd}$ is the Weyl tensor, $\eta_{abcd}$ is the
spacetime permutation tensor, and $u^a$ is the dust four--velocity.
In the so--called `silent universe' case
$H_{ab}=0$, no information is exchanged between neighbouring
particles, even in the exact nonlinear case. Gravity wave
perturbations require nonzero $H_{ab}$, which is
divergence--free in the linearised case \cite{led}, \cite{he},
\cite{b}.
A crucial question for the analysis of gravity waves
interacting with matter is whether
the properties of the linearised perturbations are
in line with those of the exact nonlinear theory.
Lesame et al. \cite{led} used the covariant formalism
and then specialised to a shear tetrad, in order to
study this question. They concluded that in the nonlinear case,
the only solutions with $\mbox{div}\,H=0$ are those with $H_{ab}=0$
--- thus indicating a linearisation instability, with potentially
serious implications for standard analyses of gravity waves, as
pointed out in \cite{m}, \cite{ma}.
It is shown here that the argument of \cite{led}
does not in fact prove that
$\mbox{div}\,H=0$ implies
$H_{ab}=0$.
The error in \cite{led} is traced to an incorrect sign
in the Weyl tensor decomposition (see below).\footnote{The authors of
\cite{led} are in agreement about the error and its implication
(private communication).}
The same covariant formalism is used here, but with modifications
that
lead to simplification and greater clarity. This improved
covariant formalism renders the equations more transparent, and
together with the new identities derived via the formalism,
it facilitates a fully covariant analysis,
not requiring
lengthy tetrad calculations such as those
used in \cite{led}.
The improved formalism
is presented in Section II, and the identities that are crucial for
covariant analysis are given in the appendix.
In Section III, a covariant derivation is given to show
that {\em in the generic case of irrotational dust
spacetimes, the constraint equations are
preserved under evolution.}
A by--product of the
argument is the identification of the
error in \cite{led}.
In a companion paper \cite{mel},
we use the covariant formalism of Section III
to show that when $\mbox{div}\,H=0$,
no further conditions are generated. In particular, $H_{ab}$ {\em is
not forced to vanish, and there is no linearisation instability.}
A specific example is presented in
Section IV, where
it is shown that Bianchi type V spacetimes
include cases in which
$\mbox{div}\,H=0$ but $H_{ab}\neq0$.
\section{The covariant formalism for propagation and
constraint equations}
The notation and conventions are based on
those of \cite{led}, \cite{e1};
in particular $8\pi G=1=c$, round brackets enclosing indices
denote symmetrisation and square brackets denote
anti--symmetrisation. Curvature tensor conventions are given in
the appendix.
Considerable simplification and streamlining result from
the following definitions: the projected permutation tensor
(compare \cite{e3}, \cite{mes}),
\begin{equation}
\varepsilon_{abc}=\eta_{abcd}u^d
\label{d1}
\end{equation}
the projected, symmetric and trace--free part of a tensor,
\begin{equation}
S_{<ab>}=h_a{}^c h_b{}^d S_{(cd)}-
{\textstyle{1\over3}}S_{cd}h^{cd} h_{ab}
\label{d2}
\end{equation}
where $h_{ab}=g_{ab}+u_au_b$ is the spatial projector
and $g_{ab}$ is the metric,
the projected spatial covariant derivative (compare \cite{e2},
\cite{eb}, \cite{mes}),
\begin{equation}
\mbox{D}_a S^{c\cdots d}{}{}{}{}_{e\cdots f}=h_a{}^b h^c{}_p \cdots
h^d{}_q h_e{}^r \cdots h_f{}^s \nabla_b
S^{p\cdots q}{}{}{}_{r\cdots s}
\label{d3}
\end{equation}
and the covariant spatial curl of a tensor,
\begin{equation}
\mbox{curl}\, S_{ab}=\varepsilon_{cd(a}\mbox{D}^c S_{b)}{}^d
\label{d4}
\end{equation}
Note that
$$
S_{ab}=S_{(ab)}\quad\Rightarrow\quad\mbox{curl}\, S_{ab}=\mbox{curl}\, S_{<ab>}
$$
since $\mbox{curl}\,(fh_{ab})=0$ for any $f$.
The covariant spatial divergence of $S_{ab}$ is
$$(\mbox{div}\,S)_a=\mbox{D}^b S_{ab}$$
The covariant spatial curl of a vector is
$$
\mbox{curl}\, S_a=\varepsilon_{abc}\mbox{D}^bS^c
$$
Covariant analysis of propagation and constraint equations
involves frequent use of a number of algebraic and differential
identities governing the above quantities. In particular, one
requires commutation rules for spatial and time derivatives.
The necessary identities are collected for convenience in the
appendix, which includes a simplification of
known results and a number of new results.
The Einstein, Ricci and Bianchi equations may be covariantly split
into propagation and constraint equations \cite{e1}.
The propagation equations given in
\cite{led} for irrotational dust are simplified by the present
notation, and become
\begin{eqnarray}
\dot{\rho}+\Theta\rho &=& 0
\label{p1}\\
\dot{\Theta}+{\textstyle{1\over3}}\Theta^2 &=& -{\textstyle{1\over2}}\rho
-\sigma_{ab}\sigma^{ab}
\label{p2}\\
\dot{\sigma}_{ab}+{\textstyle{2\over3}}\Theta\sigma_{ab}+\sigma_{c<a}
\sigma_{b>}{}^c &=& -E_{ab}
\label{p3}\\
\dot{E}_{ab}+\Theta E_{ab}-3\sigma_{c<a}E_{b>}{}^c &=&
\mbox{curl}\, H_{ab}-{\textstyle{1\over2}}\rho\sigma_{ab}
\label{p4}\\
\dot{H}_{ab}+\Theta H_{ab}-3\sigma_{c<a}H_{b>}{}^c &=& -\mbox{curl}\, E_{ab}
\label{p5}
\end{eqnarray}
while the constraint equations become
\begin{eqnarray}
\mbox{D}^b\sigma_{ab} &=& {\textstyle{2\over3}}\mbox{D}_a \Theta
\label{c1}\\
\mbox{curl}\, \sigma_{ab}&=& H_{ab}
\label{c2}\\
\mbox{D}^b E_{ab} &=& {\textstyle{1\over3}}\mbox{D}_a \rho +
\varepsilon_{abc}\sigma^b{}_d H^{cd}
\label{c3}\\
\mbox{D}^b H_{ab} &=& -\varepsilon_{abc}\sigma^b{}_d E^{cd}
\label{c4}
\end{eqnarray}
A dot denotes a covariant derivative along $u^a$, $\rho$ is the
dust energy density, $\Theta$ its rate of
expansion, and $\sigma_{ab}$ its shear. Equations (\ref{p4}),
(\ref{p5}), (\ref{c3}) and (\ref{c4}) display the analogy with
Maxwell's theory. The FRW case is covariantly characterised by
$$
\mbox{D}_a\rho=0=\mbox{D}_a\Theta\,,\quad\sigma_{ab}=E_{ab}=H_{ab}=0
$$
and in the linearised case of an almost FRW spacetime, these gradients
and tensors are first order of smallness.
The dynamical fields in these equations are the scalars $\rho$ and
$\Theta$, and the
tensors $\sigma_{ab}$,
$E_{ab}$ and $H_{ab}$, which all satisfy $S_{ab}=S_{<ab>}$. The
metric $h_{ab}$ of the spatial
surfaces orthogonal to $u^a$ is implicitly
also involved in the equations as a dynamical field. Its propagation
equation is simply the identity $\dot{h}_{ab}=0$,
and its constraint equation is the identity $\mbox{D}_a h_{bc}=0$ --
see (\ref{a4}). The Gauss--Codacci equations for the Ricci curvature
of the spatial surfaces \cite{e1}
\begin{eqnarray}
R^*_{ab}-{\textstyle{1\over3}}R^*h_{ab} &=& -\dot{\sigma}_{ab}-\Theta
\sigma_{ab} \nonumber\\
R^* &=&-{\textstyle{2\over3}}\Theta^2+\sigma_{ab}\sigma^{ab}+2\rho \label{r1}
\end{eqnarray}
have not been included, since the curvature is algebraically
determined by the other fields,
as follows from (\ref{p3}):
\begin{equation}
R^*_{ab}=E_{ab}-{\textstyle{1\over3}}\Theta\sigma_{ab}+\sigma_{ca}
\sigma_b{}^c+{\textstyle{2\over3}}\left(\rho-{\textstyle{1\over3}}\Theta^2\right)
h_{ab}
\label{r2}\end{equation}
The contracted Bianchi identities for the 3--surfaces \cite{e1}
$$
\mbox{D}^b R^*_{ab}={\textstyle{1\over2}}\mbox{D}_a R^*
$$
reduce to the Bianchi constraint (\ref{c3}) on using (\ref{c1}),
(\ref{c2}) and the identity (\ref{a13}) in (\ref{r1}) and
(\ref{r2}). Consequently, these identities do not impose any new
constraints.
By the constraint (\ref{c2}), one can in principle eliminate $H_{ab}$.
However, this leads to second--order derivatives in the propagation
equations (\ref{p4}) and (\ref{p5}). It seems preferable to maintain
$H_{ab}$ as a basic field.
One interesting use of (\ref{c2}) is in
decoupling the shear from the Weyl tensor.
Taking the time derivative of
the shear propagation equation (\ref{p3}), using the propagation
equation (\ref{p4}) and the constraint (\ref{c2}), together with
the identity (\ref{a16}), one gets
\begin{eqnarray}
&&-\mbox{D}^2\sigma_{ab}+\ddot{\sigma}_{ab}+{\textstyle{5\over3}}\Theta
\dot{\sigma}_{ab}-{\textstyle{1\over3}}\dot{\Theta}\sigma_{ab}+
{\textstyle{3\over2}}\mbox{D}_{<a}\mbox{D}^c\sigma_{b>c} \nonumber\\
&&{}=4\Theta\sigma_{c<a}\sigma_{b>}{}^c+6\sigma^{cd}\sigma_{c<a}
\sigma_{b>d}-2\sigma^{de}\sigma_{de}h_{c<a}\sigma_{b>}{}^c+
4\sigma_{c<a}\dot{\sigma}_{b>}{}^c
\label{s}\end{eqnarray}
where $\mbox{D}^2=\mbox{D}^a \mbox{D}_a$ is the covariant Laplacian.
This is {\em the exact nonlinear generalisation of the linearised
wave equation for shear perturbations} derived in \cite{he}.
In the linearised
case, the right hand side of (\ref{s}) vanishes, leading to a
wave equation governing the propagation of shear perturbations in
an almost FRW dust spacetime:
$$
-\mbox{D}^2\sigma_{ab}+\ddot{\sigma}_{ab}+{\textstyle{5\over3}}\Theta
\dot{\sigma}_{ab}-{\textstyle{1\over3}}\dot{\Theta}\sigma_{ab}+
{\textstyle{3\over2}}\mbox{D}_{<a}\mbox{D}^c\sigma_{b>c} \approx 0
$$
As suggested by comparison of (\ref{c2}) and (\ref{c4}), and
confirmed by the identity (\ref{a14}), div~curl is {\em not} zero,
unlike its Euclidean vector counterpart. Indeed, the divergence of
(\ref{c2}) reproduces (\ref{c4}), on using the (vector) curl
of (\ref{c1}) and
the identities
(\ref{a2}), (\ref{a8}) and (\ref{a14}):
\begin{equation}
\mbox{div (\ref{c2}) and curl (\ref{c1})}\quad\rightarrow\quad
\mbox{(\ref{c4})}
\label{i1}\end{equation}
Further
differential relations amongst the propagation and constraint
equations are
\begin{eqnarray}
\mbox{curl (\ref{p3}) and (\ref{c1}) and (\ref{c2}) and
(\ref{c2})$^{\displaystyle{\cdot}}$}\quad
& \rightarrow & \quad\mbox{(\ref{p5})} \label{i2}\\
\mbox{grad (\ref{p2}) and div (\ref{p3}) and (\ref{c1}) and
(\ref{c1})$^{\displaystyle{\cdot}}$ and
(\ref{c2})}\quad & \rightarrow & \quad \mbox{(\ref{c3})} \label{i3}
\end{eqnarray}
where the identities (\ref{a7}), (\ref{a11.}), (\ref{a13}),
(\ref{a13.}) and (\ref{a15}) have been used.
Consistency
conditions may arise
to preserve the constraint equations under
propagation along $u^a$ \cite{led}, \cite{he}.
In the general
case, i.e. without imposing any assumptions about
$H_{ab}$ or other quantities, the constraints are
preserved under evolution.
This is shown in the next section, and forms the
basis for analysing special cases, such as
$\mbox{div}\,H=0$.
\section{Evolving the constraints: general case}
Denote the constraint equations (\ref{c1}) --- (\ref{c4}) by
${\cal C}^A=0$, where
$$
{\cal C}^A=\left(\mbox{D}^b\sigma_{ab}-{\textstyle{2\over3}}\mbox{D}_a\Theta\,,\,
\mbox{curl}\,\sigma_{ab}-H_{ab}\,,\,\cdots\right)
$$
and $A={\bf 1},\cdots, {\bf 4}$.
The evolution of ${\cal C}^A$ along $u^a$ leads to a
system of equations $\dot{{\cal C}}^A={\cal F}^A
({\cal C}^B)$, where ${\cal F}^A$ do not contain
time derivatives, since these are eliminated via the propagation
equations and suitable identities. Explicitly, one obtains after
lengthy calculations the following:
\begin{eqnarray}
\dot{{\cal C}}^{\bf 1}{}_a&=&-\Theta{\cal C}^{\bf 1}{}_a+2\varepsilon_a{}^{bc}
\sigma_b{}^d{\cal C}^{\bf 2}{}_{cd}-{\cal C}^{\bf 3}{}_a
\label{pc1}\\
\dot{{\cal C}}^{\bf 2}{}_{ab}&=&-\Theta{\cal C}^{\bf 2}{}_{ab}
-\varepsilon^{cd}{}{}_{(a}\sigma_{b)c}{\cal C}^{\bf 1}{}_d
\label{pc2}\\
\dot{{\cal C}}^{\bf 3}{}_a&=&-{\textstyle{4\over3}}\Theta{\cal C}^{\bf 3}{}_a
+{\textstyle{1\over2}}\sigma_a{}^b{\cal C}^{\bf 3}{}_b-{\textstyle{1\over2}}\rho
{\cal C}^{\bf 1}{}_a \nonumber\\
&&{}+{\textstyle{3\over2}}E_a{}^b{\cal C}^{\bf 1}{}_b
-\varepsilon_a{}^{bc}E_b{}^d{\cal C}^{\bf 2}
{}_{cd}+{\textstyle{1\over2}}\mbox{curl}\,{\cal C}^{\bf 4}{}_a
\label{pc3}\\
\dot{{\cal C}}^{\bf 4}{}_a&=&-{\textstyle{4\over3}}\Theta{\cal C}^{\bf 4}{}_a
+{\textstyle{1\over2}}\sigma_a{}^b{\cal C}^{\bf 4}{}_b
\nonumber\\
&&{}+{\textstyle{3\over2}}H_a{}^b{\cal C}^{\bf 1}{}_b
-\varepsilon_a{}^{bc}H_b{}^d{\cal C}^{\bf 2}
{}_{cd}-{\textstyle{1\over2}}\mbox{curl}\,{\cal C}^{\bf 3}{}_a
\label{pc4}
\end{eqnarray}
For completeness, the following list of equations used in the
derivation is given:\\
Equation
(\ref{pc1}) requires (\ref{a7}), (\ref{a11.}), (\ref{p2}), (\ref{p3}),
(\ref{c1}), (\ref{c2}), (\ref{c3}), (\ref{a13}) -- where (\ref{a13})
is needed to eliminate the following term from the right hand side
of (\ref{pc1}):
\begin{eqnarray*}
&&\varepsilon_{abc}\sigma^b{}_d\,\mbox{curl}\,\sigma^{cd}
-\sigma^{bc}\mbox{D}_a \sigma_{bc}\\
&&{}+\sigma^{bc}
\mbox{D}_c \sigma_{ab}+{\textstyle{1\over2}}\sigma_{ac}\mbox{D}_b\sigma^{bc} \equiv0
\end{eqnarray*}
Equation
(\ref{pc2}) requires (\ref{a15}), (\ref{p3}), (\ref{p5}), (\ref{c1}),
(\ref{c2}), (\ref{a3.}) -- where (\ref{a3.}) is needed to eliminate
the following term from the right hand side of (\ref{pc2}):
$$
\varepsilon_{cd(a}\left\{\mbox{D}^c\left[\sigma_{b)}{}^e\sigma^d{}_e\right]+
\mbox{D}^e\left[\sigma_{b)}{}^d\sigma^c{}_e\right]\right\}\equiv0
$$
Equation
(\ref{pc3}) requires (\ref{a11.}), (\ref{p1}), (\ref{p4}), (\ref{p5}),
(\ref{a14}), (\ref{a3}), (\ref{c1}), (\ref{c3}), (\ref{c4}),
(\ref{a13}) -- where (\ref{a13}) is needed to eliminate the
following term from the right hand side of (\ref{pc3}):
\begin{eqnarray*}
&& {\textstyle{1\over2}}\sigma_{ab}\mbox{D}_c E^{bc}
+\varepsilon_{abc}E^b{}_d\, \mbox{curl}\,\sigma^{cd}\\
& &{}+\varepsilon_{abc}\sigma^b{}_d
\,\mbox{curl}\, E^{cd}
+{\textstyle{1\over2}}E_{ab}\mbox{D}_c\sigma^{bc}+E^{bc}\mbox{D}_b\sigma_{ac}\\
& &{}+\sigma^{bc}\mbox{D}_b E_{ac}-
\mbox{D}_a\left(\sigma^{bc}E_{bc}\right)\equiv 0
\end{eqnarray*}
Equation
(\ref{pc4}) requires (\ref{a11.}), (\ref{p3}), (\ref{p4}), (\ref{p5}),
(\ref{a14}), (\ref{a13}), (\ref{c1}), (\ref{c2}), (\ref{c3}),
(\ref{c4}).
In \cite{led}, a sign error in the Weyl tensor decomposition
(\ref{a5}) led to spurious consistency conditions arising from
the evolution of (\ref{c1}), (\ref{c2}). The evolution
of the Bianchi constraints (\ref{c3}), (\ref{c4})
was not considered in \cite{led}.
Now suppose that the constraints
are satisfied on an initial spatial surface $\{t=t_0\}$, i.e.
\begin{equation}
{\cal C}^A\Big|_{t_0}=0
\label{i}\end{equation}
where
$t$ is proper time along the dust worldlines. Then by
(\ref{pc1}) -- (\ref{pc4}), it follows that the
constraints are satisfied for all time, since ${\cal C}^A=0$ is
a solution for the given initial data. Since the system is linear,
this solution is unique.
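The logic of this step can be illustrated with a toy two-component analogue (a sketch only; the coefficient matrix below is arbitrary and merely stands in for the linear homogeneous structure of (\ref{pc1})--(\ref{pc4})):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# dC/dt = M(t) C is linear and homogeneous, so C(t0) = 0 forces C(t) = 0.
def rhs(t, C):
    M = np.array([[0.0, np.sin(t)], [-1.0, 0.3 * t]])  # arbitrary
    return M @ C

sol = solve_ivp(rhs, (0.0, 10.0), np.zeros(2), rtol=1e-10, atol=1e-12)
print(np.abs(sol.y).max())   # ~0: zero initial data remains zero
\end{verbatim}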
This establishes that the constraint equations are preserved under
evolution. However, it does not prove existence of solutions to
the constraints in the generic case
--- only that if solutions exist, then they evolve
consistently. The question of existence is currently under
investigation. One would like to show explicitly how a metric
is constructed from given initial data in the covariant formalism.
This involves in particular considering whether the
constraints generate new constraints, i.e. whether they are
integrable as they stand, or whether there are implicit
integrability conditions. The relation (\ref{i1}) is part of the
answer to this question, in that it shows how, within any
$\{t=\mbox{ const}\}$ surface, the constraint ${\cal C}^{\bf 4}$
is satisfied if ${\cal C}^{\bf 1}$ and ${\cal C}^{\bf 2}$ are
satisfied. Specifically, (\ref{i1}) shows that
\begin{equation}
{\cal C}^{\bf 4}{}_a={\textstyle{1\over2}}\mbox{curl}\,{\cal C}^{\bf 1}
{}_a-\mbox{D}^b{\cal C}^{\bf 2}{}_{ab}
\label{i4}\end{equation}
Hence, if one takes ${\cal C}^{\bf 1}$ as determining
$\mbox{grad}\,\Theta$,
${\cal C}^{\bf 2}$ as defining $H$ and ${\cal C}^{\bf 3}$ as
determining $\mbox{grad}\,\rho$, the constraint equations are
consistent with each other because ${\cal C}^{\bf 4}$ then follows.
Thus if there exists a solution to the constraints on
$\{t=t_0\}$, then it is consistent and it evolves consistently.
In the next section, Bianchi type V spacetimes are shown to provide
a concrete example of existence and consistency in the case
$$
\mbox{div}\,E\neq 0\neq\mbox{curl}\, E\,,\quad\mbox{div}\,H=0\neq\mbox{curl}\, H\,,\quad
\mbox{grad}\,\rho=0=\mbox{grad}\,\Theta
$$
\section{Spacetimes with $\mbox{div}\,H=0\neq H$}
Suppose now that the magnetic Weyl tensor is divergence--free, a
necessary condition for gravity waves:
\begin{equation}
\mbox{div}\,H=0\quad\Leftrightarrow\quad [\sigma,E]=0
\label{dh}\end{equation}
where $[S,V]$ is the index--free notation for the covariant commutator
of tensors [see (\ref{a2})], and the equivalence follows from
the constraint (\ref{c4}).
Using the covariant
formalism of Section III, it can be shown \cite{mel} that (\ref{dh})
is preserved under evolution without generating further conditions.
In particular, (\ref{dh}) does not force $H_{ab}=0$ -- as shown by
the following explicit example.
First note that by (\ref{r2}) and (\ref{dh}):
$$
R^*_{ab}={\textstyle{1\over3}}R^*h_{ab}\quad\Rightarrow\quad
[\sigma,R^*]=0\quad\Rightarrow\quad\mbox{div}\,H=0
$$
i.e., {\em irrotational dust spacetimes
have $\mbox{div}\,H=0$ if $R^*_{ab}$ is isotropic.}
Now the example arises from the class of irrotational spatially
homogeneous spacetimes,
comprehensively analysed and classified by Ellis and MacCallum
\cite{em}.
According to Theorem 7.1 of \cite{em}, the only non--FRW
spatially homogeneous spacetimes
with $R^*_{ab}$ isotropic are Bianchi type I and
(non--axisymmetric) Bianchi type V. The former have $H_{ab}=0$.
For the latter, using
the shear eigenframe $\{{\bf e}_a\}$ of \cite{em}
\begin{equation}
\sigma_{ab} = \sigma_{22}\,\mbox{diag}(0,0,1,-1) \label{b0}
\end{equation}
Using (\ref{r1}) and (\ref{r2}) with (\ref{b0}), one
obtains
\begin{eqnarray}
E_{ab} &=& {\textstyle{1\over3}}\Theta\sigma_{ab}-\sigma_{c<a}
\sigma_{b>}{}^c \nonumber\\
&=&{\textstyle{1\over3}}
\sigma_{22}\,\mbox{diag}\left(0,2\sigma_{22},\Theta-\sigma_{22},
-\Theta-\sigma_{22}\right) \label{b0'}
\end{eqnarray}
in agreement with \cite{em}.\footnote{Note that
$E_{ab}$ in \cite{em} is the negative of $E_{ab}$ defined
in (\ref{eh}).}
The tetrad forms of div and curl
for type V are (compare \cite{vu}):
\begin{eqnarray}
\mbox{D}^b S_{ab}&=&\partial_b S_a{}^b-
3a^b S_{ab} \label{b2}\\
\mbox{curl}\, S_{ab} &=& \varepsilon_{cd(a}\partial^c
S_{b)}{}^d+\varepsilon_{cd(a}S_{b)}{}^c a^d \label{b3}
\end{eqnarray}
where $S_{ab}=S_{<ab>}$, $a_b=a\delta_b{}^1$
($a$ is the type V Lie algebra parameter) and
$\partial_a f$ is the directional derivative of $f$
along ${\bf e}_a$. Using (\ref{b3}) and (\ref{c2}):
\begin{eqnarray}
H_{ab} &=& \mbox{curl}\,\sigma_{ab}\nonumber\\
&=&-2a\sigma_{22}\delta_{(a}{}^2\delta_{b)}{}^3
\label{b1}\end{eqnarray}
Hence:\\ {\em Irrotational Bianchi V dust spacetimes in general
satisfy} $\mbox{div}\,H=0\neq H$.
Using (\ref{b0})---(\ref{b1}), one obtains
\begin{eqnarray}
\mbox{D}^bH_{ab}&=&0 \label{v1}\\
\mbox{curl}\, H_{ab}&=& -a^2\sigma_{ab} \label{v2}\\
\mbox{curl}\,\mbox{curl}\, H_{ab}&=& -a^2H_{ab} \label{v3}\\
\mbox{D}^bE_{ab} &=& -\sigma_{bc}\sigma^{bc}a_a \label{v4}\\
\mbox{curl}\, E_{ab} &=&{\textstyle{1\over3}}\Theta H_{ab} \label{v5}
\end{eqnarray}
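These relations are straightforward to verify numerically in the homogeneous frame, where the directional derivatives in (\ref{b2}) and (\ref{b3}) drop out. The following sketch (with arbitrary illustrative values for $a$ and $\sigma_{22}$) checks (\ref{v1})--(\ref{v3}):
\begin{verbatim}
import numpy as np

a, s22 = 0.7, 1.3                  # arbitrary illustrative values
eps = np.zeros((3, 3, 3))          # spatial Levi-Civita symbol
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0
avec = np.array([a, 0.0, 0.0])     # a_b = a delta_b^1
sig = np.diag([0.0, s22, -s22])    # shear eigenframe, eq. (b0)

def curl(S):                       # eq. (b3), homogeneous case
    T = np.einsum('cda,bc,d->ab', eps, S, avec)
    return 0.5 * (T + T.T)

def div(S):                        # eq. (b2), homogeneous case
    return -3.0 * np.einsum('b,ab->a', avec, S)

H = curl(sig)                      # eq. (b1): H_23 = -a*s22
print(np.allclose(div(H), 0.0))              # div H = 0
print(np.allclose(curl(H), -a**2 * sig))     # curl H = -a^2 sigma
print(np.allclose(curl(curl(H)), -a**2 * H)) # curl curl H = -a^2 H
\end{verbatim}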
Although (\ref{v1}) is a necessary condition for gravity waves,
it is not sufficient, and (\ref{b0'}) and (\ref{b1}) show that
$E_{ab}$ and $H_{ab}$ decay with the shear, so that
the type V solutions cannot be interpreted as gravity waves.
Nevertheless, these solutions do establish the existence of
spacetimes with $\mbox{div}\,H=0\neq H$.
This supplements the known result that the only spatially homogeneous
irrotational dust spacetimes with $H_{ab}=0$ are FRW, Bianchi types
I and VI$_{-1}$ $(n^a{}_a=0)$, and Kantowski--Sachs \cite{bmp}.
When $H_{ab}=0$, (\ref{b0}) and (\ref{b1}) show that $\sigma_{ab}=0$,
in which case the type V solution reduces to FRW.\\
A final remark concerns the special case $H_{ab}=0$, i.e. the
silent universes. The considerations of this paper show that the
consistency analysis of silent universes undertaken in \cite{lde}
needs to be re--examined. This is a further topic currently under
investigation. It seems likely that the silent universes, in the
full nonlinear theory, are {\em not} in general consistent.
\acknowledgements
Thanks to the referee for very helpful comments, and
to George Ellis, William Lesame and Henk van Elst
for very useful discussions.
This research was supported by grants from Portsmouth, Natal and
Cape Town Universities. Natal University, and especially Sunil
Maharaj, provided warm hospitality while part of this research
was done.
\section{Introduction}
The Feynman diagrammatic technique has proven quite useful for
performing and organizing the perturbative solution of quantum many-body
theories. The main idea is the computation of the Green's or
correlation functions by splitting the action $S$ into a quadratic or
free part $S_Q$ plus a remainder or interacting part $S_I$ which is
then treated as a perturbation. From the beginning this technique has
been extended to derive exact relations, such as the
Schwinger-Dyson~\cite{Dy49,Sc51,It80} equations, or to make
resummation of diagrams as that implied in the effective action
approach~\cite{Il75,Ne87} and its generalizations~\cite{Co74}.
Consider now a generalization of the above problem, namely, to solve
(i.e., to find the Green's functions of) a theory with action given by
$S+\delta S$ perturbatively in $\delta S$ but where the
``unperturbed'' action $S$ (assumed to be solved) is not necessarily
quadratic in the fields. The usual answer to this problem is to write
the action as a quadratic part $S_Q$ plus a perturbation $S_I+\delta
S$ and then to apply the standard Feynman diagrammatic technique. This
approach is, of course, correct but it does not exploit the fact that
the unperturbed theory $S$ is solved, i.e., its Green's functions are
known. For instance, the computation of each given order in $\delta S$
requires an infinite number of diagrams to all orders in $S_I$. We
will refer to this as the {\em standard expansion}. In this paper it
is shown how to systematically obtain the Green's functions of the
full theory, $S+\delta S$, in terms of those of the unperturbed one,
$S$, plus the vertices provided by the perturbation, $\delta
S$. Unlike the standard expansion, in powers of $S_I+\delta S$, the
expansion considered here is a strict perturbation in $\delta S$ and
constitutes the natural extension of the Feynman diagrammatic
technique to unperturbed actions which are not necessarily
quadratic. We shall comment below on the applications of such an
approach.
\section{Many-body theory background}
\subsection{Feynman diagrams and standard Feynman rules}
In order to state our general result let us recall some well known
ingredients of quantum many-body theory (see e.g.~\cite{Ne87}), and in
passing, introduce some notation and give some needed definitions.
Consider an arbitrary quantum many-body system described by variables
or {\em fields} $\phi^i$, that for simplicity in the presentation will
be taken as bosonic. As will be clear below, everything can be
generalized to include fermions. Without loss of generality we can use
a single discrete index $i$ to represent all the needed labels (DeWitt
notation). For example, for a relativistic quantum field theory, $i$
would contain space-time, Lorentz and Dirac indices, flavor, kind of
particle and so on. Within a functional integral formulation of the
many-body problem, the expectation values of observables, such as
$A[\phi]$, take the following form:
\begin{equation}
\langle A[\phi] \rangle = \frac{\int\exp\left(S[\phi]\right)A[\phi]\,d\phi}
{\int\exp\left(S[\phi]\right)\,d\phi}\,.
\label{eq:1}
\end{equation}
Here the function $S[\phi]$ will be called the {\em action} of the
system and is a functional in general. Note that in some cases
$\langle A[\phi]\rangle$ represents the time ordered vacuum
expectation values, in others the canonical ensemble averages, etc., and
likewise the quantity $S[\phi]$ may correspond to different objects in
each particular application. In any case, all (bosonic) quantum
many-body systems can be brought to this form and only
eq.~(\ref{eq:1}) is needed to apply the Feynman diagrammatic
technique. As already noted, this technique corresponds to write the
action in the form $S[\phi]=S_Q[\phi]+S_I[\phi]$:
\begin{equation}
S_Q[\phi]=\frac{1}{2}m_{ij}\phi^i\phi^j\,,\qquad
S_I[\phi]=\sum_{n\geq 0}\frac{1}{n!}g_{i_1\dots i_n}
\phi^{i_1}\cdots\phi^{i_n} \,,
\end{equation}
where we have assumed that the action is an analytical function of the
fields at $\phi^i=0$. Also, a repeated indices convention will be used
throughout. The quantities $g_{i_1\dots i_n}$ are the {\em coupling
constants}. The matrix $m_{ij}$ is non singular and otherwise
arbitrary, whereas the combination $m_{ij}+g_{ij}$ is completely
determined by the action. The {\em free propagator}, $s^{ij}$, is
defined as the inverse matrix of $-m_{ij}$. The signs in the
definitions of $S[\phi]$ and $s^{ij}$ have been chosen so that there
are no minus signs in the Feynman rules below. The $n$-point {\em
Green's function} is defined as
\begin{equation}
G^{i_1\cdots i_n}= \langle\phi^{i_1}\cdots\phi^{i_n}\rangle\,, \quad n\geq 0\,.
\end{equation}
Let us note that under a non singular linear transformation of the
fields, and choosing the action to be a scalar, the coupling constants
transform as completely symmetric covariant tensors and the propagator
and the Green's functions transform as completely symmetric
contravariant tensors. The tensorial transformation of the Green's
functions follows from eq.~(\ref{eq:1}), since the constant Jacobian
of the transformation cancels among numerator and denominator.
Perturbation theory consists of computing the Green's functions as a
Taylor expansion in the coupling constants. We remark that the
corresponding series is often asymptotic; however, the perturbative
expansion is always well defined. By inspection, and recalling the
tensorial transformation properties noted above, it follows that the
result of the perturbative calculation of $G^{i_1\cdots i_n}$ is a sum
of monomials, each of which is a contravariant symmetric tensor
constructed with a number of coupling constants and propagators, with
all indices contracted except $(i_1\cdots i_n)$ times a purely
constant factor. For instance,
\begin{equation}
G^{ab}= \cdots + \frac{1}{3!}s^{ai}g_{ijk\ell}s^{jm}s^{kn}s^{\ell
p}g_{mnpq}s^{qb} +\cdots \,.
\label{eq:example}
\end{equation}
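Before turning to diagrams, note that such a monomial is nothing but an explicit tensor contraction. As an illustration (random symmetric tensors standing in for the actual $s^{ij}$ and $g_{ijk\ell}$; this is a demonstration, not a physical model), it can be evaluated directly:
\begin{verbatim}
import math
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)
d = 4
def sym(T):   # symmetrize a tensor in all of its indices
    return sum(np.transpose(T, p)
               for p in permutations(range(T.ndim))) / math.factorial(T.ndim)

s  = (lambda A: A @ A.T + d * np.eye(d))(rng.normal(size=(d, d)))
g4 = sym(rng.normal(size=(d,) * 4))          # coupling g_{ijkl}

term = np.einsum('ai,ijkl,jm,kn,lp,mnpq,qb->ab',
                 s, g4, s, s, s, g4, s) / math.factorial(3)
print(np.allclose(term, term.T))             # symmetric in (a, b)
\end{verbatim}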
Each monomial can be represented by a {\em Feynman diagram} or graph:
each $k$-point coupling constant is represented by a vertex with $k$
prongs, each propagator is represented by an unoriented line with two
ends. The dummy indices correspond to ends attached to vertices and
are called {\em internal}, the free indices correspond to unattached
or external ends and are the {\em legs} of the diagram. The lines
connecting two vertices are called {\em internal}, the others are {\em
external}. By construction, all prongs of every vertex must be
saturated with lines. The diagram corresponding to the monomial in
eq.~(\ref{eq:example}) is shown in figure~\ref{f:1}.
\begin{figure}
\begin{center}
\vspace{-0.cm}
\leavevmode
\epsfysize = 2.0cm
\makebox[0cm]{\epsfbox{f.1.EPS}}
\end{center}
\caption{Feynman graph corresponding to the monomial in
eq.~(\ref{eq:example}).}
\label{f:1}
\end{figure}
A graph is {\em connected} if it is connected in the topological
sense. A graph is {\em linked} if every part of it is connected to at
least one of the legs (i.e., there are no disconnected $0$-leg
subgraphs). All connected graphs are linked. For instance, the graph
in figure~\ref{f:1} is connected, that in figure~\ref{f:2}$a$ is
disconnected but linked and that in figure~\ref{f:2}$b$ is
unlinked. To determine completely the value of the graph, it only
remains to know the weighting factor in front of the monomial. As
shown in many a textbook~\cite{Ne87}, the factor is zero if the
diagram is not linked. That is, unlinked graphs are not to be included
since they cancel due to the denominator in eq.~(\ref{eq:1}); a result
known as Goldstone theorem. For linked graphs, the factor is given by
the inverse of the {\em symmetry factor} of the diagram which is
defined as the order of the symmetry group of the graph. More
explicitly, it is the number of topologically equivalent ways of
labeling the graph. For this counting all legs are distinguishable
(due to their external labels) and recall that the lines are
unoriented. Dividing by the symmetry factor ensures that each distinct
contributions is counted once and only once. For instance, in
figure~\ref{f:1} there are three equivalent lines, hence the factor
$1/3!$ in the monomial of eq.~(\ref{eq:example}).
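For diagrams whose only symmetries are permutations of bundles of parallel internal lines, as in figure~\ref{f:1}, the counting reduces to a product of factorials. A minimal sketch (not a general graph-automorphism counter) is:
\begin{verbatim}
from math import factorial
from collections import Counter

def parallel_line_symmetry(internal_edges):
    bundles = Counter(frozenset(e) for e in internal_edges)
    out = 1
    for n in bundles.values():   # n! for each bundle of n parallel lines
        out *= factorial(n)
    return out

# figure 1: two vertices joined by three parallel internal lines
print(parallel_line_symmetry([(1, 2), (1, 2), (1, 2)]))   # 6 = 3!
\end{verbatim}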
\begin{figure}[htb]
\centerline{\mbox{\epsfysize=4.0cm\epsffile{f.2.EPS}}}
\vspace{6pt}
\caption{$(a)$ A linked disconnected graph. $(b)$ A unlinked
graph. The cross represents a 1-point vertex.}
\label{f:2}
\end{figure}
Thus, we arrive to the following {\em Feynman rules} to compute
$G^{i_1\cdots i_n}$ in perturbation theory:
\begin{enumerate}
\item Consider each $n$-point linked graph. Label the legs with
$(i_1,\dots,i_n)$, and label all internal ends as well.
\item Put a factor $g_{j_1\dots j_k}$ for each $k$-point vertex, and a
factor $s^{ij}$ for each line. Sum over all internal indices and
divide the result by the symmetry factor of the graph.
\item Add up the value of all topologically distinct such graphs.
\end{enumerate}
We shall refer to the above as the Feynman rules of the theory
``$S_Q+S_I$''. There are several relevant remarks to be made: If
$S[\phi]$ is a polynomial of degree $N$, only diagrams with at most
$N$-point vertices have to be retained. The choice $g_{ij}=0$ reduces
the number of diagrams. The 0-point vertex does not appear in any
linked graph. Such term corresponds to an additive constant in the
action and cancels in all expectation values. On the other hand, the
only linked graph contributing to the 0-point Green's function is a
diagram with no elements, which naturally takes the value 1.
Let us define the {\em connected Green's functions}, $G_c^{i_1\cdots
i_n}$, as those associated to connected graphs (although they can be
given a non perturbative definition as well). From the Feynman rules
above, it follows that linked disconnected diagrams factorize into its
connected components, thus the Green's functions can be expressed in
terms of the connected ones. For instance
\begin{eqnarray}
G^i &=& G_c^i \,, \nonumber \\
G^{ij} &=& G_c^{ij} + G_c^iG_c^j \,, \\
G^{ijk} &=& G_c^{ijk} +
G_c^iG_c^{jk} + G_c^jG_c^{ik} + G_c^kG_c^{ij} + G_c^iG_c^jG_c^k \,.
\nonumber
\end{eqnarray}
It will also be convenient to introduce the {\em generating function}
of the Green's functions, namely,
\begin{equation}
Z[J] = \int\exp\left(S[\phi]+J\phi\right)\,d\phi \,,
\label{eq:Z}
\end{equation}
where $J\phi$ stands for $J_i\phi^i$ and $J_i$ is called the {\em
external current}. By construction,
\begin{equation}
\frac{Z[J]}{Z[0]} = \langle\exp\left(J\phi\right)\rangle
=\sum_{n\geq 0}\frac{1}{n!}G^{i_1\cdots i_n}J_{i_1}\cdots J_{i_n}\,,
\end{equation}
hence the name generating function. The quantity $Z[0]$ is known as
{\em partition function}. Using the replica method~\cite{Ne87}, it can
be shown that $W[J]=\log\left(Z[J]\right)$ is the generator of the
connected Green's functions. It is also shown that $W[0]$ can be
computed, within perturbation theory, by applying essentially the same
Feynman rules given above, as the sum of connected diagrams without
legs, with the proviso of assigning the value $-\frac{1}{2}{\rm
tr}\,\log(-m/2\pi)$ to the diagram consisting of a single closed
line. The partition function is obtained if non-connected diagrams are
included as well; in this case, it should be noted that the
factorization property holds only up to possible symmetry factors.
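These definitions are conveniently illustrated in a zero-dimensional toy model with a single integration variable (an illustrative choice only), where $Z[J]$ and the Green's functions reduce to ordinary integrals:
\begin{verbatim}
import numpy as np

# S(phi) = -phi^2/2 - g*phi^4/24, so s = 1 and the 4-point coupling is -g.
g = 0.1
phi = np.linspace(-10.0, 10.0, 20001)
w = np.exp(-phi**2 / 2 - g * phi**4 / 24)

G2_exact = np.trapz(phi**2 * w, phi) / np.trapz(w, phi)
G2_pert  = 1.0 - g / 2.0   # free line plus the first-order tadpole graph
print(G2_exact, G2_pert)   # agree up to O(g^2)
\end{verbatim}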
\subsection{The effective action}
To proceed, let us introduce the {\em effective action}, which will be
denoted $\Gamma[\phi]$. It can be defined as the Legendre transform of
the connected generating function. For definiteness we put this in the
form
\begin{equation}
\Gamma[\phi] = \min_J\left(W[J]-J\phi\right)\,,
\end{equation}
although in general $S[\phi]$, $W[J]$, as well as the fields, etc, may
be complex and only the extremal (rather than minimum) property is
relevant. For perturbation theory, the key feature of the effective
action is as follows. Recall that a connected graph has $n$ {\em
loops} if it is possible to remove at most $n$ internal lines so that
it remains connected. For an arbitrary graph, the number of loops is
defined as the sum over its connected components. {\em Tree} graphs
are those with no loops. For instance the diagram in
figure~\ref{f:1} has two loops whereas that in figure~\ref{f:3} is
a tree graph. Then, the effective action coincides with the equivalent
action that at tree level would reproduce the Green's functions of
$S[\phi]$. To be more explicit, let us make an arbitrary splitting of
$\Gamma[\phi]$ into a (non singular) quadratic part $\Gamma_Q[\phi]$
plus a remainder, $\Gamma_I[\phi]$,
\begin{equation}
\Gamma_Q[\phi]=\frac{1}{2}\overline{m}_{ij}\phi^i\phi^j\,, \qquad
\Gamma_I[\phi]=\sum_{n\ge 0}\frac{1}{n!}
\overline{g}_{i_1\dots i_n}\phi^{i_1}\cdots\phi^{i_n}\,,
\end{equation}
then the Green's functions of $S[\phi]$ are recovered by using the
Feynman rules associated to the theory ``$\Gamma_Q+\Gamma_I$'' but
adding the further prescription of including only tree level
graphs. The building blocks of these tree graphs are the {\em
effective line}, $\overline{s}^{ij}$, defined as the inverse matrix of
$-\overline{m}_{ij}$, and the {\em effective (or proper) vertices},
$\overline{g}_{i_1\dots i_n}$. This property of the effective action
will be proven below. Let us note that $\Gamma[\phi]$ is completely
determined by $S[\phi]$, and is independent of how $m_{ij}$ and
$\overline{m}_{ij}$ are chosen. In particular, the combination
$\overline{m}_{ij}+\overline{g}_{ij}$ is free of any choice. Of
course, the connected Green's functions are likewise obtained at tree level from
the theory ``$\Gamma_Q+\Gamma_I$'', but including only connected
graphs.
\begin{figure}[htb]
\centerline{\mbox{\epsfysize=3.5cm\epsffile{f.3.EPS}}}
\caption{A tree graph.}
\label{f:3}
\end{figure}
For ulterior reference, let us define the {\em effective current} as
$\overline{g}_i$ and the {\em self-energy} as
\begin{equation}
\Sigma_{ij}= \overline{m}_{ij}+\overline{g}_{ij}-m_{ij}\,.
\end{equation}
Note that $\Sigma_{ij}$ depends not only on $S[\phi]$ but also on the
choice of $S_Q[\phi]$.
A connected graph is {\em 1-particle irreducible} if it remains
connected after removing any internal line, and otherwise it is called
{\em 1-particle reducible}. In particular, all connected tree graphs
with more than one vertex are reducible. For instance the graph in
figure~\ref{f:1} is 1-particle irreducible whereas those in
figures~\ref{f:3} and ~\ref{f:4} are reducible. To {\em amputate} a
diagram (of the theory ``$S_Q+S_I$'') is to contract each leg with a
factor $-m_{ij}$. In the Feynman rules, this corresponds to not to
include the propagators of the external legs. Thus the amputated
diagrams are covariant tensors instead of contravariant. Then, it is
shown that the $n$-point effective vertices are given by the connected
1-particle irreducible amputated $n$-point diagrams of the theory
``$S_Q+S_I$''. (Unless $n=2$. In this case the sum of all such
diagrams with at least one vertex gives the self-energy.)
\begin{figure}[htb]
\centerline{\mbox{\epsfysize=3.5cm\epsffile{f.4.EPS}}}
\caption{$(a)$ A 1-particle reducible graph. $(b)$ A graph with a
tadpole subgraph.}
\label{f:4}
\end{figure}
A graph has {\em tadpoles} if it contains a subgraph from which stems
a single line. It follows that all graphs with 1-point vertices have
tadpoles. Obviously, when the single line of the tadpole is internal,
the graph is 1-particle reducible (cf. figure~\ref{f:4}$b$). An
important particular case is that of actions for which
$\langle\phi^i\rangle$ vanishes. This ensures that the effective
current vanishes, i.e. $\overline{g}_i=0$ and thus all tree graphs of
the theory ``$\Gamma_Q+\Gamma_I$'' are free of tadpoles (since tadpole
subgraphs without 1-point vertices require at least one loop). Given
any action, $\langle\phi^i\rangle=0$ can be achieved by a redefinition
of the field $\phi^i$ by a constant shift, or else by a readjustment
of the original current $g_i$, so this is usually a convenient
choice. A further simplification can be achieved if $\Gamma_Q[\phi]$
is chosen as the full quadratic part of the effective action, so that
$\overline{g}_{ij}$ vanishes. Under these two choices, each Green's
function requires only a finite number of tree graphs of the theory
``$\Gamma_Q+\Gamma_I$''. Also, $\overline{s}^{ij}$ coincides with the
full connected propagator, $G_c^{ij}$, since a single effective line
is the only possible diagram for it. Up to 4-point functions, it is
found
\begin{eqnarray}
G_c^i &=& 0 \,, \nonumber \\
G_c^{ij} &=& \overline{s}^{ij} \,, \label{eq:connected}
\\
G_c^{ijk} &=&
\overline{s}^{ia}\overline{s}^{jb}\overline{s}^{kc}\overline{g}_{abc}
\,,\nonumber \\
G_c^{ijk\ell} &=&
\overline{s}^{ia}\overline{s}^{jb}\overline{s}^{kc}\overline{s}^{\ell
d}\overline{g}_{abcd} \nonumber \\
& & +\overline{s}^{ia}\overline{s}^{jb}\overline{g}_{abc}
\overline{s}^{cd}\overline{g}_{def}\overline{s}^{ek}\overline{s}^{f\ell}
+\overline{s}^{ia}\overline{s}^{kb}\overline{g}_{abc}
\overline{s}^{cd}\overline{g}_{def}\overline{s}^{ej}\overline{s}^{f\ell}
+\overline{s}^{ia}\overline{s}^{\ell b}\overline{g}_{abc}
\overline{s}^{cd}\overline{g}_{def}\overline{s}^{ek}\overline{s}^{fj}
\,.\nonumber
\end{eqnarray}
The corresponding diagrams are depicted in figure~\ref{f:5}. Previous
considerations imply that in the absence of tadpoles, $G_c^{ij}=
-((m+\Sigma)^{-1})^{ij}$.
\begin{figure}[htb]
\centerline{\mbox{\epsfysize=4.5cm\epsffile{f.5.EPS}}}
\vspace{6pt}
\caption{Feynman diagrams for the 3- and 4-point connected Green's
functions in terms of the proper functions
(cf. eq.~(\ref{eq:connected})). The lighter blobs represent the
connected functions, the darker blobs represent the irreducible
functions.}
\label{f:5}
\end{figure}
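As a cross-check of eqs.~(\ref{eq:connected}), the tree-level assembly can be transcribed literally into tensor contractions (random symmetric tensors serve as illustrative stand-ins for the effective elements):
\begin{verbatim}
import math
import numpy as np
from itertools import permutations

rng = np.random.default_rng(1)
d = 3
def sym(T):
    return sum(np.transpose(T, p)
               for p in permutations(range(T.ndim))) / math.factorial(T.ndim)

sbar = (lambda A: A @ A.T + d * np.eye(d))(rng.normal(size=(d, d)))
g3 = sym(rng.normal(size=(d,) * 3))
g4 = sym(rng.normal(size=(d,) * 4))

G3 = np.einsum('ia,jb,kc,abc->ijk', sbar, sbar, sbar, g3)
tree = np.einsum('ia,jb,abc,cd,def,ek,fl->ijkl',
                 sbar, sbar, g3, sbar, g3, sbar, sbar)
G4 = (np.einsum('ia,jb,kc,ld,abcd->ijkl', sbar, sbar, sbar, sbar, g4)
      + tree + tree.transpose(0, 2, 1, 3) + tree.transpose(0, 3, 2, 1))
print(np.allclose(G4, sym(G4)))   # totally symmetric, as required
\end{verbatim}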
\section{Perturbation theory on non quadratic actions}
\subsection{Statement of the problem and main result}
All the previous statements are well known in the literature. Consider
now the action $S[\phi]+\delta S[\phi]$, where
\begin{equation}
\delta S[\phi]=\sum_{n\ge 0}\frac{1}{n!} \delta g_{i_1\dots i_n}
\phi^{i_1}\cdots\phi^{i_n}\,,
\end{equation}
defines the {\em perturbative vertices}, $\delta g_{i_1\dots i_n}$. The
above defined standard expansion to compute the full Green's
functions corresponds to the Feynman rules associated to
the theory ``$S_Q+(S_I+\delta S)$'', i.e., with $g_{i_1\cdots i_n}+\delta
g_{i_1\cdots i_n}$ as new vertices. Equivalently, one can use an
obvious generalization of the Feynman rules, using one kind of
line, $s^{ij}$, and two kinds of vertices, $g_{i_1\dots i_n}$ and
$\delta g_{i_1\dots i_n}$, which should be considered as
distinguishable. As an alternative, we seek instead a diagrammatic
calculation in terms of $\Gamma[\phi]$ and $\delta S[\phi]$, that is,
using $\overline{s}^{ij}$ as line and $\overline{g}_{i_1\dots i_n}$
and $\delta g_{i_1\dots i_n}$ as vertices. The question of which new
Feynman rules are to be used with these building blocks is answered by
the following
{\bf Theorem.} {\em The Green's functions associated to
$S[\phi]+\delta S[\phi]$ follow from applying the Feynman rules of the
theory ``$\Gamma_Q+(\Gamma_I+\delta S)$'' plus the further
prescription of removing the graphs that contain ``unperturbed
loops'', i.e., loops constructed entirely from effective elements
without any perturbative vertex $\delta g_{i_1\dots i_n}$.}
This constitutes the basic result of this paper. The same statement
holds in the presence of fermions. The proof is given below. We remark
that the previous result does not depend on particular choices, such
as $\overline{g}_i=\overline{g}_{ij}=0$. As a consistency check of the
rules, we note that when $\delta S$ vanishes only tree level graphs of
the theory ``$\Gamma_Q+\Gamma_I$'' remain, which is indeed the correct
result. On the other hand, when $S[\phi]$ is quadratic, it coincides
with its effective action (up to an irrelevant constant) and therefore
there are no unperturbed loops to begin with. Thus, in this case our
rules reduce to the ordinary ones. In this sense, the new rules given
here are the general ones whereas the usual rules correspond only to
the particular case of perturbing an action that is quadratic.
\subsection{Illustration of the new Feynman rules}
To illustrate our rules, let us compute the corrections to the
effective current and the self-energy, $\delta\overline{g}_i$ and
$\delta\Sigma_{ij}$, induced by a perturbation at most quadratic in
the fields, that is,
\begin{equation}
\delta S[\phi]= \delta g_i\phi^i+\frac{1}{2}\delta g_{ij}\phi^i\phi^j \,,
\end{equation}
and at first order in the perturbation. To simplify the result, we
will choose a vanishing $\overline{g}_{ij}$. On the other hand,
$S_Q[\phi]$ will be kept fixed and $\delta S[\phi]$ will be included
in the interacting part of the action, so $\delta\Sigma_{ij}=
\delta\overline{m}_{ij}$.
Applying our rules, it follows that $\delta\overline{g}_i$ is given by
the sum of 1-point diagrams of the theory ``$\Gamma_Q+(\Gamma_I+\delta
S)$'' with either one $\delta g_i$ or one $\delta g_{ij}$ vertex and
which are connected, amputated, 1-particle irreducible and contain no
unperturbed loops. Likewise, $\delta\Sigma_{ij}$ is given by 2-point
such diagrams. It is immediate that $\delta g_i$ can only appear in
0-loop graphs and $\delta g_{ij}$ can only appear in 0- or 1-loop
graphs, since further loops would necessarily be unperturbed. The
following result is thus found
\begin{eqnarray}
\delta\overline{g}_i &=& \delta g_i + \frac{1}{2}\delta g_{ab}
\overline{s}^{aj}\overline{s}^{bk}\overline{g}_{jki}\,, \nonumber \\
\delta\Sigma_{ij} &=& \delta g_{ij} + \delta g_{ab}\overline{s}^{ak}
\overline{s}^{b\ell}\overline{g}_{kni} \overline{g}_{\ell rj}
\overline{s}^{nr} +\frac{1}{2}\delta g_{ab}
\overline{s}^{ak}\overline{s}^{b\ell}\overline{g}_{k\ell ij}\,.
\label{eq:2}
\end{eqnarray}
The graphs corresponding to the r.h.s. are shown in
figure~\ref{f:6}. There, the small full dots represent the
perturbative vertices, the lines with lighter blobs represent the
effective line and the vertices with darker blobs are the effective
vertices. The meaning of this equation is, as usual, that upon
expansion of the skeleton graphs in the r.h.s., every ordinary Feynman
graph (i.e. those of the theory ``$S_Q+(S_I+\delta S)$'') appears once
and only once, and with the correct weight. In other words, the new
graphs are a resummation of the old ones.
\begin{figure}[htb]
\centerline{\mbox{\epsfysize=4.6cm\epsffile{f.6.EPS}}}
\vspace{6pt}
\caption{Diagrammatic representation of eqs.~(\ref{eq:2}). The small
full dot represents perturbation vertices. All graphs are amputated.}
\label{f:6}
\end{figure}
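Equations (\ref{eq:2}) can likewise be transcribed literally into contractions. The following sketch (random illustrative tensors again) verifies, at least, that the resulting self-energy correction comes out symmetric:
\begin{verbatim}
import math
import numpy as np
from itertools import permutations

rng = np.random.default_rng(2)
d = 3
def sym(T):
    return sum(np.transpose(T, p)
               for p in permutations(range(T.ndim))) / math.factorial(T.ndim)

sbar = (lambda A: A @ A.T + d * np.eye(d))(rng.normal(size=(d, d)))
g3, g4 = sym(rng.normal(size=(d,) * 3)), sym(rng.normal(size=(d,) * 4))
dg1, dg2 = rng.normal(size=d), sym(rng.normal(size=(d, d)))

# first-order shift of the effective current
dgbar = dg1 + 0.5 * np.einsum('ab,aj,bk,jki->i', dg2, sbar, sbar, g3)
dSigma = (dg2
          + np.einsum('ab,ak,bl,kni,lrj,nr->ij',
                      dg2, sbar, sbar, g3, g3, sbar)
          + 0.5 * np.einsum('ab,ak,bl,klij->ij', dg2, sbar, sbar, g4))
print(np.allclose(dSigma, dSigma.T))   # True
\end{verbatim}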
Let us take advantage of the above example to make several
remarks. First, in order to use our rules, all $n$-point effective
vertices have to be considered, in principle. In the example of
figure~\ref{f:6}, only the 3-point proper vertex is needed for the
first order perturbation of the effective current and only the 3- and
4-point proper vertices are needed for the self-energy. Second, after
the choice $\overline{g}_{ij}=0$, the corrections to any proper vertex
requires only a finite number of diagrams, for any given order in each
of the perturbation vertices $\delta g_{i_1\dots i_n}$. Finally,
skeleton graphs with unperturbed loops should not be
included. Consider, e.g. the graph in figure~\ref{f:7}$a$. This graph
contains an unperturbed loop. If its unperturbed loop is contracted to
a single blob, this graph becomes the third 2-point graph in
figure~\ref{f:6}, therefore it is intuitively clear that it is
redundant. In fact, the ordinary graphs obtained by expanding the
blobs in figure~\ref{f:7}$a$ in terms of ``$S_Q+S_I$'' are already
accounted for by the expansion of the third 2-point graph in
figure~\ref{f:6}.
\begin{figure}[htb]
\centerline{\mbox{\epsfysize=2.5cm\epsffile{f.7.EPS}}}
\vspace{6pt}
\caption{$(a)$ A redundant graph. Meaning of lines and vertices as
in figures~\ref{f:5} and ~\ref{f:6}. $(b)$ The associated
unperturbed graph to $(a)$.}
\label{f:7}
\end{figure}
For a complicated diagram of the theory ``$\Gamma_Q+(\Gamma_I+\delta
S)$'', the cleanest way to check for unperturbed loops is to construct
its {\em associated unperturbed graph}. This is the graph of the theory
``$\Gamma_Q+\Gamma_I$'' which is obtained after deleting all perturbation
vertices, so that the ends previously attached to such vertices become
external legs in the new graph. Algebraically this means to remove the
$\delta g_{i_1\dots i_n}$ factors so that the involved indices become
external (uncontracted) indices. The number of unperturbed loops of
the old (perturbed) graph coincides with the number of loops of the
associated unperturbed graph. The associated graph to that in
figure~\ref{f:7}$a$ is depicted in figure~\ref{f:7}$b$.
\section{Some applications}
Of course, the success of the standard Feynman diagrammatic technique
is based on the fact that quadratic actions, unlike non quadratic
ones, can be easily and fully solved. Nevertheless, even when the
theory $S[\phi]$ is not fully solved, our expansion can be
useful. First, it helps in organizing the calculation. Indeed, in the
standard expansion the same 1-, 2-,..., $n$-point unperturbed Green's
functions are computed over and over, as subgraphs, instead of only
once. Second, and related, because the perturbative expansion in
$S_I[\phi]$ must be truncated, in the standard expansion one is in
general using different approximations for the same Green's functions
of $S[\phi]$ in different subgraphs. As a consequence, some known
exact properties (such as symmetries, experimental values of masses or
coupling constants, etc) of the Green's functions of $S[\phi]$ can be
violated by the standard calculation. On the contrary, in the
expansion proposed here, the Green's functions of $S[\phi]$ are taken
as an input and hence one can make approximations to them (not
necessarily perturbative) to enforce their known exact properties. As
an example consider the Casimir effect. The physical effect of the
conductors is to change the photon boundary conditions. This in turn
corresponds to modify the free photon propagator~\cite{Bo85}, i.e., to
add a quadratic perturbation to the Lagrangian of quantum
electrodynamics (QED). Therefore our expansion applies. The advantage
of using it is that one can first write down rigorous relations
(perturbative in $\delta S$ but non perturbative from the point of
view of QED) and, in a second step, the required QED propagators and
vertex functions can be approximated (either perturbatively or by some
other approach) in a way that is consistent with the experimentally
known mass, charge and magnetic moment of the electron, for instance.
Another example would be chiral perturbation theory: given some
approximation to massless Quantum Chromodynamics (QCD), the
corrections induced by the finite current quark masses can be
incorporated using our scheme as a quadratic perturbation. Other
examples would be the corrections induced by a non vanishing
temperature or density, both modifying the propagator.
\subsection{Derivation of diagrammatic identities}
Another type of applications comes in the derivation of diagrammatic
identities. We can illustrate this point with some Schwinger-Dyson
equations~\cite{Dy49,Sc51,It80}. Let $\epsilon^i$ be field
independent. Then, noting that the action $S[\phi+\epsilon]$ has
$\Gamma[\phi+\epsilon]$ as its effective action, and for infinitesimal
$\epsilon^i$, it follows that the perturbation $\delta
S[\phi]=\epsilon^i\partial_i S[\phi]$ yields a corresponding
correction $\delta\Gamma[\phi]=\epsilon^i\partial_i \Gamma[\phi]$ in
the effective action. Therefore for this variation we can write:
\begin{eqnarray}
\delta\overline{g}_i &=& \delta\partial_i\Gamma[0]=
\epsilon^j\partial_i\partial_j\Gamma[0]=
\epsilon^j(m+\Sigma)_{ij}\,, \nonumber \\
\delta\Sigma_{ij} &=& \delta\partial_i\partial_j\Gamma[0]=
\epsilon^k\partial_i\partial_j\partial_k\Gamma[0]=
\epsilon^k\overline{g}_{ijk}\,.
\end{eqnarray}
Let us particularize to a theory with a 3-point bare vertex; then
$\delta S[\phi]$ is at most a quadratic perturbation with vertices
$\delta g_j =\epsilon^i(m_{ij}+g_{ij})$ and $\delta g_{jk}=\epsilon^i
g_{ijk}$. Now we can immediately apply eqs.~(\ref{eq:2}) to obtain
the well known Schwinger-Dyson equations
\begin{eqnarray}
\Sigma_{ij} &=& g_{ij}+
\frac{1}{2}g_{iab}\bar{s}^{a\ell}\bar{s}^{br}\bar{g}_{\ell rj} \,, \\
\overline{g}_{cij} &=& g_{cij}+
g_{cab}\overline{s}^{ak}\overline{s}^{b\ell}\overline{g}_{kni}
\overline{g}_{\ell rj}\overline{s}^{nr}
+\frac{1}{2}g_{cab}\overline{s}^{ak}\overline{s}^{b\ell}
\overline{g}_{k\ell ij}\,. \nonumber
\end{eqnarray}
The corresponding diagrams are depicted in figure~\ref{f:8}.
\begin{figure}[htb]
\centerline{\mbox{\epsfysize=4.3cm\epsffile{f.8.EPS}}}
\vspace{6pt}
\caption{Two Schwinger-Dyson equations for a cubic action.}
\label{f:8}
\end{figure}
\subsection{Effective Lagrangians and the double-counting problem}
There are instances in which we do not have (or it is not practical to
use) the underlying unperturbed action and we are provided directly,
through the experiment, with the Green's functions. In these cases it
is necessary to know which Feynman rules to use with the exact Green's
functions of $S$. Consider for instance the propagation of particles
in nuclear matter. This is usually described by means of so called
effective Lagrangians involving the nucleon field and other relevant
degrees of freedom (mesons, resonances, photons, etc). These
Lagrangians are adjusted to reproduce at tree level the experimental
masses and coupling constants. (Of course, they have to be
supplemented with form factors for the vertices, widths for the
resonances, etc, to give a realistic description, see
e.g.~\cite{Er88}.) Thus they are a phenomenological approximation to
the effective action rather than to the underlying bare action $S$. So
to say, Nature has solved the unperturbed theory (in this case the
vacuum theory) for us and one can make experimental statements on the
exact (non perturbative) Green's functions. The effect of the nuclear
medium is accounted for by means of a Pauli blocking correction to the
nucleon propagator in the vacuum, namely,
\begin{eqnarray}
G(p)&=&(p^0-\epsilon(\vec{p})+i\eta)^{-1}+
2i\pi n(\vec{p})\delta(p^0-\epsilon(\vec{p}))
= G_0(p) +\delta G(p)\,,
\end{eqnarray}
where $G_0(p)$ and $G(p)$ stand for the nucleon propagator at vacuum
and at finite density, respectively, $n(\vec{p})$ is the Fermi sea
occupation number and $\epsilon(\vec{p})$ is the nucleon kinetic
energy. In the present case, the vacuum theory is the unperturbed one
whereas the Pauli blocking correction is a 2-point perturbation to the
action and our expansion takes the form of a density expansion.
The use of an effective Lagrangian, instead of a more fundamental one,
allows one to perform calculations in terms of physical quantities, and
this makes the phenomenological interpretation more direct. However,
the use of the standard Feynman rules is not really justified since
they apply to the action and not to the effective action, to which the
effective Lagrangian is an approximation. A manifestation of this
problem comes in the form of double-counting of vacuum contributions,
which has to be carefully avoided. This is obvious already in the
simplest cases. Consider, for instance, the nucleon self-energy coming
from exchange of virtual pions, with corresponding Feynman graph
depicted in figure~\ref{f:9}$a$. This graph gives a non vanishing
contribution even at zero density. Such a vacuum contribution is
spurious since it is already accounted for in the physical mass of the
nucleon. The standard procedure in this simple case is to subtract the
same graph at zero density in order to keep the true self-energy. This
is equivalent to dropping $G_0(p)$ in the internal nucleon propagator and
keeping only the Pauli blocking correction $\delta G(p)$. In more
complicated cases a simple overall subtraction does not suffice, as is
well known from renormalization theory; there can be similar
spurious contributions in subgraphs even if the graph vanishes at zero
density. An example is shown in the photon self-energy graph of
figure~\ref{f:9}$b$. The vertex correction subgraphs contain a purely
vacuum contribution that is already accounted for in the effective
$\gamma NN$ vertex. Although such contributions vanish if the
exchanged pion is static, they do not in general. As is clear from our
theorem, the spurious contributions are avoided by not allowing vacuum
loops in the graphs. That is, for each standard graph consider all the
graphs obtained by substituting each $G(p)$ by either $G_0(p)$ or
$\delta G(p)$ and drop all graphs with any purely vacuum loop. We
emphasize that strictly speaking the full propagator and the full
proper vertices of the vacuum theory have to be used to construct the
diagrams. In each particular application it is to be decided whether a
certain effective Lagrangian (plus form factors, widths, etc) is a
sufficiently good approximation to the effective action.
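To make the prescription concrete, consider the following illustration
(ours, not part of the original argument): for a graph containing two
internal nucleon lines, the product of propagators is split as
\begin{displaymath}
G(p_1)G(p_2)=G_0(p_1)G_0(p_2)+G_0(p_1)\,\delta G(p_2)
+\delta G(p_1)\,G_0(p_2)+\delta G(p_1)\,\delta G(p_2)\,,
\end{displaymath}
and the first term is to be dropped whenever the two nucleon lines close
a purely vacuum loop, since that piece is already contained in the
effective (vacuum) propagators and vertices.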
\begin{figure}[htb]
\centerline{\mbox{\epsfysize=3.0cm\epsffile{f.9.EPS}}}
\vspace{6pt}
\caption{Nucleon (a) and photon (b) self-energy diagrams.}
\label{f:9}
\end{figure}
\subsection{Derivation of low density theorems}
A related application of our rules comes from deriving low density
theorems. For instance, consider the propagation of pions in nuclear
matter and in particular the pionic self-energy at lowest order in an
expansion in the nuclear density. To this end one can use the first
order correction to the self-energy as given in eq.~(\ref{eq:2}), when
the labels $i,j$ refer to pions and the 2-point perturbation is the
Pauli blocking correction for the nucleons. Thus, the labels
$a,b,k,\ell$ (cf. second line of figure~\ref{f:6}) necessarily refer
to nucleons whereas $n,r$ can be arbitrary baryons ($B$). In this
case, the first 2-point diagram in figure~\ref{f:6} vanishes since
$i,j$ are pionic labels which do not have Pauli blocking. On the other
hand, as the nuclear density goes to zero, higher order diagrams
(i.e. with more than one full dot, not present in figure~\ref{f:6})
are suppressed and the second and third 2-point diagrams are the
leading contributions to the pion self energy. The $\pi NB$ and
$\pi\pi NN$ proper vertices in these two graphs combine to yield the
$\pi N$ $T$-matrix, as is clear by cutting the corresponding graphs by
the full dots. (Note that the Dirac delta in the Pauli blocking term
places the nucleons on mass shell.) We thus arrive at the following
low density theorem~\cite{Hu75}: at lowest order in a density
expansion in nuclear matter, the pion optical potential is given by
the nuclear density times the $\pi N$ forward scattering
amplitude. This result holds independently of the detailed
pion-nucleon interaction and regardless of the existence of other kinds
of particles, since these are accounted for by the $T$-matrix.
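In formula form (our schematic notation; normalization conventions for
the $T$-matrix vary in the literature), the low density theorem reads
\begin{displaymath}
\Pi_\pi(\omega,\vec{q}\,)=\rho\,T_{\pi N}(\omega,\vec{q}\,)+{\cal O}(\rho^2)\,,
\end{displaymath}
with $\rho$ the nuclear density and $T_{\pi N}$ the forward
pion-nucleon scattering amplitude.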
\subsection{Applications to non perturbative renormalization in
Quantum Field Theory}
Let us consider a further application, this time
to the problem of renormalization in Quantum Field Theory (QFT). To be
specific we consider the problem of ultraviolet divergences. To first
order in $\delta S$, our rules can be written as
\begin{equation}
\delta\Gamma[\phi] =\langle\delta S\rangle^\phi\,,
\label{eq:Lie}
\end{equation}
where $\langle A\rangle^\phi$ means the expectation value of $A[\phi]$
in the presence of an external current $J$ tuned to yield $\phi$ as
the expectation value of the field. This formula is most simply
derived directly from the definitions given above. (In passing, let us
note that this formula defines a group of transformations in the space
of actions, i.e., unlike standard perturbation theory, it preserves
its form at any point in that space.) We can consider a family of
actions, taking the generalized coupling constants as parameters, and
integrate the above first order evolution equation taking e.g. a
quadratic action as starting point. Perturbation theory corresponds to
a Taylor expansion solution of this equation.
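As an elementary illustration (ours): for a single scalar field with a
quartic perturbation $\delta S=\frac{g}{4!}\int d^4x\,\phi^4$,
eq.~(\ref{eq:Lie}) evaluated at a free (quadratic) starting point gives,
by Wick's theorem,
\begin{displaymath}
\delta\Gamma[\phi]=\frac{g}{4!}\int d^4x\left(\phi^4(x)
+6\,\phi^2(x)\,\Delta(x,x)+3\,\Delta(x,x)^2\right)\,,
\end{displaymath}
with $\Delta$ the free propagator (the connected two-point function of
the quadratic theory); this is the familiar first order effective
action, i.e. the first term of the Taylor expansion just mentioned.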
To use this idea in QFT, note that our rules directly apply to any
pair of regularized bare actions $S$ and $S+\delta S$. Bare means that
$S$ and $S+\delta S$ are the true actions that yield the expectation
values in the most naive sense and regularized means that the cut off
is in place so that everything is finite and well defined. As is
well known, a parametric family of actions is said to be
renormalizable if the parameters can be given a suitable dependence on
the cut off so that all expectation values remain finite in the limit
of large cut off (and the final action is non trivial, i.e., non
quadratic). In this case the effective action has also a finite
limit. Since there is no reason to use the same cut off for $S$ and
$\delta S$, we can effectively take the infinite cut off limit in
$\Gamma$ while keeping that of $\delta S$ finite. (For instance, we can
regularize the actions by introducing some non locality in the
vertices and taking the local limit at different rates for both
actions.) So when using eq.~(\ref{eq:Lie}), we will find diagrams with
renormalized effective lines and vertices from $\Gamma$ and bare
regularized vertices from $\delta S$. Because $\delta\Gamma$ is also
finite as the cut off is removed, it follows that the divergences
introduced by $\delta S$ should cancel with those introduced by the
loops. This allows one to restate the renormalizability of a family of
actions as the problem of showing that 1) assuming a given asymptotic
behaviour for $\Gamma$ at large momenta, the parameters in $\delta S$
can be given a suitable dependence on the cut off so that
$\delta\Gamma$ remains finite, 2) the assumed asymptotic behaviour is
consistent with the initial condition (e.g. a free theory) and 3) this
asymptotic behaviour is preserved by the evolution equation. This
would be an alternative to the usual forest formula analysis which
would not depend on perturbation theory. If the above program were
successfully carried out (the guessing of the correct asymptotic
behaviour being the most difficult part) it would allow one to write a
renormalized version of the evolution equation~(\ref{eq:Lie}) and no
further renormalizations would be needed. (Related ideas regarding
evolution equations exist in the context of low momenta expansion, see
e.g.~\cite{Morris} or to study finite temperature
QFT~\cite{Pietroni}.)
To give an (extremely simplified) illustration of these ideas, let us
consider the family of theories with Euclidean action
\begin{equation}
S[\phi,\psi]=\int
d^4x\left(\frac{1}{2}(\partial\phi)^2+\frac{1}{2}m^2\phi^2
+\frac{1}{2}(\partial\psi)^2+\frac{1}{2}M^2\psi^2
+\frac{1}{2}g\phi\psi^2 + h\phi + c\right).
\end{equation}
Here $\phi(x)$ and $\psi(x)$ are bosonic fields in four dimensions.
Further, we will consider only the approximation of no
$\phi$-propagators inside of loops. This approximation, which treats
the field $\phi$ at a quasi-classical level, is often made in the
literature. As it turns out, the corresponding evolution equation
is consistent, that is, the right-hand side of eq.~(\ref{eq:Lie}) is
still an exact differential after truncation. In order to evolve the
theory we will consider variations in $g$, and also in $c$, $h$ and
$m^2$, since these latter parameters require a ($g$-dependent)
renormalization. (There are no field, $\psi$-mass or coupling constant
renormalizations in this approximation.) That is
\begin{equation}
\delta S[\phi,\psi]= \int
d^4x\left(\frac{1}{2}\delta m^2\phi^2
+\frac{1}{2}\delta g\phi\psi^2 + \delta h\phi + \delta c\right)\,.
\end{equation}
The graphs with zero and one $\phi$-leg are divergent and clearly they
are renormalized by $\delta c$ and $\delta h$, so we concentrate on the
remaining divergent graph, namely, that with two $\phi$-legs. Noting
that in this quasi-classical approximation $g$ coincides with the full
effective coupling constant and $S_\psi(q)=(q^2+M^2)^{-1}$ coincides
with the full propagator of $\psi$, an application of the rules
gives (cf. figure~\ref{f:10})
\begin{equation}
\delta\Sigma_\phi(k)= \delta m^2 - \delta g
g\int\frac{d^4q}{(2\pi)^4}\theta(\Lambda^2-q^2) S_\psi(q)S_\psi(k-q)\,,
\label{eq:21}
\end{equation}
where $\Lambda$ is a sharp ultraviolet cut off.
\begin{figure}[htb]
\centerline{\mbox{\epsfxsize=12cm\epsffile{f.10.EPS}}}
\vspace{6pt}
\caption{Diagrammatic representation of eq.~(\ref{eq:21}).}
\label{f:10}
\end{figure}
Let us denote the cut off integral by $I(k^2,\Lambda^2)$. This
integral diverges as $\frac{1}{(4\pi)^2}\log(\Lambda^2)$ for large
$\Lambda$ and fixed $k$. Hence $\delta\Sigma_\phi$ is guaranteed to
remain finite if, for large $\Lambda$, $\delta m^2$ is taken in the
form
\begin{equation}
\delta m^2 = \delta m_R^2 + \delta g g\frac{1}{(4\pi)^2}\log(\Lambda^2/\mu^2)
\end{equation}
where $\mu$ is an arbitrary scale (cut off independent), and $\delta
m_R^2$ is an arbitrary variation. Thus, the evolution equation for
large cut off can be written in finite form, that is, as a
renormalized evolution equation, as follows
\begin{equation}
\delta\Sigma_\phi(k)= \delta m_R^2 - \delta g gI_R(k^2,\mu^2)\,,
\end{equation}
where
\begin{equation}
I_R(k^2,\mu^2)=\lim_{\Lambda\to\infty}\left(I(k^2,\Lambda^2)
-\frac{1}{(4\pi)^2}\log(\Lambda^2/\mu^2)\right)\,.
\end{equation}
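For reference (a standard evaluation, included here as an illustration),
combining the denominators with a Feynman parameter and removing the cut
off yields
\begin{displaymath}
I_R(k^2,\mu^2)=-\frac{1}{(4\pi)^2}\left[\,1+\int_0^1 dx\,
\log\Bigl(\frac{M^2+x(1-x)k^2}{\mu^2}\Bigr)\right]\,,
\end{displaymath}
which is manifestly finite, reproduces the
$\frac{1}{(4\pi)^2}\log(\Lambda^2)$ behaviour quoted above, and makes
explicit that the $\mu$ dependence is $k^2$ independent, so that it can
indeed be compensated by $\delta m_R^2$.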
Here $\delta g$ and $\delta m_R^2$ are independent and arbitrary
ultraviolet finite variations. The physics remains constant if a
different choice of $\mu$ is compensated by a corresponding change in
$\delta m_R^2$ so that $\delta m^2$, and hence the bare regularized action,
is unchanged. The essential point has been that $\delta m^2$ could be
chosen $\Lambda$ dependent but $k^2$ independent. As mentioned, this
example is too simple since it hardly differs from standard
perturbation theory. The study of the general case (beyond
quasi-classical approximations) with this or other actions seems very
interesting from the point of view of renormalization theory.
\section{Proof of the theorem}
In order to prove the theorem it will be convenient to change the
notation: we will denote the unperturbed action by $S_0[\phi]$ and its
effective action by $\Gamma_0[\phi]$. The generating function of the
full perturbed system is
\begin{equation}
Z[J]= \int\exp\left(S_0[\phi]+\delta S[\phi] +J\phi\right)\,d\phi \,.
\label{eq:8}
\end{equation}
By definition of the effective
action, the connected generating function of the unperturbed theory is
\begin{equation}
W_0[J]=\max_\phi\left(\Gamma_0[\phi]+J\phi\right)\,,
\end{equation}
thus, up to a constant ($J$-independent) factor, we can write
\begin{eqnarray}
\exp\left(W_0[J]\right) &=& \lim_{\hbar\to 0}\left[\int
\exp\left(\hbar^{-1}\left(\Gamma_0[\phi]+J\phi\right)\right)
\,d\phi\right]^{\textstyle\hbar}\,.
\end{eqnarray}
$\hbar$ is merely a bookkeeping parameter here which is often used to
organize the loop expansion~\cite{Co73,Ne87}. The $\hbar$-th power above can be
produced by means of the replica method~\cite{Ne87}. To this end we
introduce a number $\hbar$ of replicas of the original field, which
will be distinguished by a new label $k$. Thus, the previous equation
can be rewritten as
\begin{equation}
\exp\left(W_0[J]\right)= \lim_{\hbar\to 0}\int
\exp\left(\hbar^{-1}\sum_k\left(\Gamma_0[\phi_k]+J\phi_k\right)\right)
\prod_kd\phi_k \,.
\label{eq:10}
\end{equation}
On the other hand, the identity (up to a constant)
$\int\exp\left(J\phi\right)\,d\phi = \delta[J]$, where $\delta[J]$
stands for a Dirac delta, allows one to write the reciprocal relation of
eq.~(\ref{eq:Z}), namely
\begin{equation}
\exp\left(S_0[\phi]\right)= \int\exp\left(W_0[J_0]-J_0\phi\right)\,dJ_0 \,.
\label{eq:11}
\end{equation}
If we now use eq.~(\ref{eq:10}) for $\exp W_0$ in eq.~(\ref{eq:11})
and the result is substituted in eq.~(\ref{eq:8}), we obtain
\begin{equation}
Z[J]= \lim_{\hbar\to 0}\int\exp\left(
\hbar^{-1}\sum_k\left(\Gamma_0[\phi_k]+J_0\phi_k\right) +\delta
S[\phi] +\left(J-J_0\right)\phi \right) \,dJ_0\,d\phi\prod_kd\phi_k
\,.
\end{equation}
The integration over $J_0$ is immediate and yields a Dirac delta for
the variable $\phi$, which allows one to carry out this
integration as well. Finally the following formula is obtained:
\begin{equation}
Z[J]= \lim_{\hbar\to 0}\int
\exp\left(\hbar^{-1}\sum_k\left(\Gamma_0[\phi_k]+J\phi_k\right)
+\delta S\big[\hbar^{-1}\sum_k\phi_k\big]\right)\prod_kd\phi_k \,.
\label{eq:13}
\end{equation}
This formula expresses $Z[J]$ in terms of $\Gamma_0$ and $\delta S$. Except
for the presence of replicas and explicit $\hbar$ factors, this
formula has the same form as that in eq.~(\ref{eq:8}) and hence it
yields the same standard Feynman rules but with effective lines and
vertices.
Consider any diagram of the theory ``$\Gamma_Q+(\Gamma_I+\delta S)$'',
as described by eq.~(\ref{eq:13}) before taking the limit $\hbar\to
0$. Let us now show that such a diagram carries precisely a factor
$\hbar^{L_0}$, where $L_0$ is the number of unperturbed loops in the
graph. Let $P$ be the total number of lines (both internal and
external), $E$ the number of legs, $L$ the number of loops and $C$ the
number of connected components of the graph. Furthermore, let $V^0_n$
and $\delta V_n$ denote the number of $n$-point vertices of the types
$\Gamma_0$ and $\delta S$ respectively. After these definitions, let
us first count the number of $\hbar$ factors coming from the explicit
$\hbar^{-1}$ in eq.~(\ref{eq:13}). The arguments are
standard~\cite{Co73,It80,Ne87}: from the Feynman rules it is clear
that each $\Gamma_0$ vertex carries a factor $\hbar^{-1}$, each
effective propagator carries a factor $\hbar$ (since it is the inverse
of the quadratic part of the action), each $n$-point $\delta S$ vertex
carries a factor $\hbar^{-n}$ and each leg an $\hbar^{-1}$ factor
(since legs are associated with the external current $J$). That is, this
number is
\begin{equation}
N_0 = P - \sum_{n\geq 0} V^0_n-E-\sum_{n\ge 0}n\delta V_n \,.
\end{equation}
Recall now the definition given above of the associated unperturbed
diagram, obtained after deleting all perturbation vertices, and let
$P_0$, $E_0$, $L_0$ and $C_0$ denote the corresponding quantities for
such unperturbed graph. Note that the two definitions given for the
quantity $L_0$ coincide. Due to its definition, $P_0=P$ and also
$E_0=E+\sum_{n\geq 0}n\delta V_n$, which allows us to rewrite $N_0$ as
\begin{equation}
N_0= P_0-\sum_{n\geq 0} V^0_n-E_0\,.
\end{equation}
Since all quantities now refer to the unperturbed graph, use can be
made of the well known diagrammatic identity $N_0=L_0-C_0$ (Euler's
relation $L=I-V+C$, applied to the unperturbed graph with $I_0=P_0-E_0$
internal lines and $V_0=\sum_{n\geq 0}V^0_n$ vertices). Thus from
the explicit $\hbar$, the graph picks up a factor
$\hbar^{L_0-C_0}$. Let us now turn to the implicit $\hbar$ dependence
coming from the number of replicas. The replica method idea applies
here directly: because all the replicas are identical, summation over
each different free replica label in the diagram yields precisely one
$\hbar$ factor. From the Feynman rules corresponding to the theory of
eq.~(\ref{eq:13}) it is clear that all lines connected through
$\Gamma_0$ vertices are constrained to have the same replica label,
whereas the coupling through $\delta S$ vertices does not impose any
conservation law of the replica label. Thus, the number of different
replica labels in the graph coincides with $C_0$. In this argument it
is essential to note that the external current $J_i$ has not been
replicated; it couples equally to all the replicas. Combining this
result with that previously obtained, we find that the total $\hbar$
dependence of a graph goes as $\hbar^{L_0}$. As a consequence, all
graphs with unperturbed loops are removed after taking the limit
$\hbar\to 0$. This establishes the theorem.
Some remarks can be made at this point. First, it may be noted that
some of the manipulations carried out in the derivation of
eq.~(\ref{eq:13}) were merely formal (beginning with the very definition
of the effective action, since there could be more than one extremum
in the Legendre transformation); however, they are completely
sufficient at the perturbative level. Indeed, order by order in
perturbation theory, the unperturbed action $S_0[\phi]$ can be
expressed in terms of its effective action $\Gamma_0[\phi]$, hence the
Green's functions of the full theory can be expressed perturbatively
within the diagrams of the theory ``$\Gamma_Q+(\Gamma_I+\delta
S)$''. It only remains to determine the weighting factor of each graph
which by construction (i.e. the order by order inversion) will be just
a rational number. Second, it is clear that the manipulations that
lead to eq.~(\ref{eq:13}) can be carried out in the presence of
fermions as well, and the same conclusion applies. Third, note that in
passing it has also been proven that the effective
action yields at tree level the same Green's functions as the bare
action at all orders in the loop expansion, since this merely
corresponds to setting $\delta S[\phi]$ to zero. Finally,
eq.~(\ref{eq:13}) does not depend on any particular choice, such as
fixing $\langle\phi^i\rangle=0$ to remove tadpole subgraphs.
\section*{Acknowledgments}
L.L. S. would like to thank C. Garc\'{\i}a-Recio and J.W. Negele for
discussions on the subject of this paper. This work is supported in
part by funds provided by the U.S. Department of Energy (D.O.E.)
under cooperative research agreement \#DF-FC02-94ER40818, Spanish
DGICYT grant no. PB95-1204 and Junta de Andaluc\'{\i}a grant no.
FQM0225.
|
\section{Introduction}
A quantum group, or q-deformed Lie algebra, is a specific deformation of
a classical Lie algebra.
From a mathematical point of view, it is a non-commutative associative
Hopf algebra.
The structure and representation theory of quantum groups have been
developed extensively by Jimbo [1] and Drinfeld [2].
The q-deformation of the Heisenberg algebra was constructed by Arik and
Coon [3], Macfarlane [4] and Biedenharn [5].
Recently there has been some interest in more general deformations
involving arbitrary real functions of the weight generators and
including the q-deformed algebras as a special case [6-10].
Recently Greenberg [11] has studied the following q-deformation of the
multimode boson algebra:
\begin{displaymath}
a_i a^{\dagger}_j -q a^{\dagger}_j a_i=\delta_{ij},
\end{displaymath}
where the deformation parameter $q$ has to be real.
The main problem with Greenberg's approach is that one cannot derive any
relation among the $a_i$ operators.
In order to resolve this problem, Mishra and Rajasekaran [12] generalized
the algebra to a complex parameter $q$ with $|q|=1$ and another real
deformation parameter $p$.
In this paper we use the result of ref [12] to construct two types of
coherent states as well as q-symmetric states.
\section{Two Parameter Deformed Multimode Oscillators}
\subsection{Representation and Coherent States}
In this subsection we discuss the algebra given in ref [12] and develop its
representation.
Mishra and Rajasekaran's algebra for multimode oscillators is given by
\begin{displaymath}
a_i a^{\dagger}_j =q a^{\dagger}_j a_i~~~(i<j)
\end{displaymath}
\begin{displaymath}
a_ia^{\dagger}_i -pa^{\dagger}_i a_i=1
\end{displaymath}
\begin{equation}
a_ia_j=q^{-1} a_j a_i ~~~~(i<j),
\end{equation}
where $i,j=1,2,\cdots,n$.
In this case we can say that $a^{\dagger}_i$ is the hermitian adjoint of $a_i$.
The Fock space representation of the algebra (1) can be easily constructed
by introducing
the hermitian number operators $\{ N_1, N_2,\cdots, N_n \}$ obeying
\begin{equation}
[N_i,a_j]=-\delta_{ij}a_j,~~~[N_i,a^{\dagger}_j]=\delta_{ij}a^{\dagger}_j,~~~(i,j=1,2,\cdots,n).
\end{equation}
From the second relation of eq.(1) and eq.(2), the relation between the
number operator
and creation and annihilation operator is given by
\begin{equation}
a^{\dagger}_ia_i =[N_i]=\frac{p^{N_i}-1}{p-1}
\end{equation}
or
\begin{equation}
N_i=\sum_{k=1}^{\infty}\frac{(1-p)^k}{1-p^k}(a^{\dagger}_i)^ka_i^k.
\end{equation}
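As a cross-check we include a small numerical sketch (ours, not part of
the original derivation; it assumes the numpy library). For a single
mode the series above terminates, since $(a^{\dagger})^ka^k|n>=0$ for $k>n$:
\begin{verbatim}
# Numerical check of the inversion formula on a truncated Fock space.
import numpy as np

p, dim = 0.7, 10
n = np.arange(dim)
br = (p**n - 1.0) / (p - 1.0)          # [n] = (p^n - 1)/(p - 1)
a = np.diag(np.sqrt(br[1:]), k=1)      # a|n> = sqrt([n]) |n-1>
ad = a.T                               # creation operator

N = np.zeros((dim, dim))
for k in range(1, dim):
    N += (1.0 - p)**k / (1.0 - p**k) * \
         np.linalg.matrix_power(ad, k) @ np.linalg.matrix_power(a, k)

print(np.allclose(np.diag(N), n))      # True: N|n> = n|n>
\end{verbatim}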
Let $|0,0,\cdots,0>$ be the unique ground state of this system satisfying
\begin{equation}
N_i|0,0,\cdots,0>=0,~~~a_i|0,0,\cdots,0>=0,~~~(i=1,2,\cdots,n)
\end{equation}
and $\{|n_1,n_2,\cdots,n_n>| n_i=0,1,2,\cdots \}$ be the complete set of the orthonormal
number eigenstates obeying
\begin{equation}
N_i|n_1,n_2,\cdots,n_n>=n_i|n_1,n_2,\cdots,n_n>
\end{equation}
and
\begin{equation}
<n_1,\cdots, n_n|n^{\prime}_1,\cdots,n^{\prime}_n>=\delta_{n_1
n_1^{\prime}}\cdots\delta_{n_n
n_n^{\prime}}.
\end{equation}
If we set
\begin{equation}
a_i|n_1,n_2,\cdots,n_n>=f_i(n_1,\cdots,n_n) |n_1,\cdots,n_i-1.\cdots ,n_n>,
\end{equation}
we have, from the fact that $a^{\dagger}_i$ is a hermitian adjoint of $a_i$,
\begin{equation}
a^{\dagger}_i|n_1,n_2,\cdots,n_n>=f^*(n_1,\cdots, n_i+1, \cdots, n_n) |n_1,\cdots,n_i+1.\cdots ,n_n>.
\end{equation}
Making use of the relation $ a_i a_{i+1} = q^{-1} a_{i+1} a_i $ we find the
following relations
for the $f_i$'s:
\begin{displaymath}
q\frac{f_{i+1}(n_1,\cdots, n_n)}{f_{i+1}(n_1, \cdots, n_i-1, \cdots, n_n)}
=\frac{f_i(n_1,\cdots, n_n)}{f_i(n_1,\cdots,n_{i+1}-1, \cdots, n_n)}
\end{displaymath}
\begin{equation}
|f_i( n_1, \cdots, n_i+1, \cdots, n_n)|^2 -p |f_i(n_1, \cdots, n_n)|^2=1.
\end{equation}
Solving the above equations we find
\begin{equation}
f_i(n_1,\cdots, n_n)=q^{\Sigma_{k=i+1}^n n_k}\sqrt{[n_i]},
\end{equation}
where $[x]$ is defined as
\begin{displaymath}
[x]=\frac{p^x-1}{p-1}.
\end{displaymath}
Thus the representation of this algebra becomes
\begin{displaymath}
a_i|n_1,\cdots, n_n>=q^{\Sigma_{k=i+1}^n n_k}\sqrt{[n_i]}|n_1,\cdots,
n_i-1,\cdots, n_n>~~~
\end{displaymath}
\begin{equation}
a^{\dagger}_i|n_1,\cdots, n_n>=q^{-\Sigma_{k=i+1}^n n_k}\sqrt{[n_i+1]}|n_1,\cdots,
n_i+1,\cdots,
n_n>.~~~
\end{equation}
The general eigenstates $|n_1,n_2,\cdots,n_n>$ is obtained by applying
$a^{\dagger}_i$'s operators to the ground state $|z_1,\cdots,z_n>_+$:
\begin{equation}
|n_1,n_2,\cdots,n_n> =\frac{(a^{\dagger}_n)^{n_n}\cdots (a^{\dagger}_1)^{n_1} }{\sqrt{[n_n]!\cdots[n_1]!}}|0,0,\cdots,0>,
\end{equation}
where
\begin{displaymath}
[n]!=[n][n-1]\cdots[2][1],~~~[0]!=1.
\end{displaymath}
The coherent states for the $gl_q(n)$ algebra
are usually defined as
\begin{equation}
a_i|z_1,\cdots,z_i,\cdots,z_n>_-=z_i|z_1,\cdots,z_{i},\cdots,z_n>_-.
\end{equation}
From the $gl_q(n)$-covariant oscillator algebra we obtain the following
commutation
relations between the $z_i$'s and $z^*_i$'s, where $z^*_i$ is the complex
conjugate of $z_i$:
\begin{displaymath}
z_iz_j=q z_j z_i,~~~~(i<j),
\end{displaymath}
\begin{displaymath}
z^*_iz^*_j=\frac{1}{q}z^*_jz^*_i,~~~~(i<j),
\end{displaymath}
\begin{displaymath}
z^*_iz_j=q z_j z^*_i,~~~~(i \neq j)
\end{displaymath}
\begin{equation}
z^*_iz_i=z_iz^*_i.
\end{equation}
Using these relations the coherent state becomes
\begin{equation}
|z_1,\cdots,z_n>_-=c(z_1,\cdots,z_n)\Sigma_{n_1,\cdots,n_n=0}^{\infty}
\frac{z_n^{n_n}\cdots z_1^{n_1}}{\sqrt{[n_1]!\cdots[n_n]!}}|n_1,n_2,\cdots,n_n>.
\end{equation}
Using eq.(13) we can rewrite eq.(16) as
\begin{equation}
|z_1,\cdots,z_n>_-=c(z_1,\cdots,z_n)e_p(z_na^{\dagger}_n)\cdots e_p(z_1
a^{\dagger}_1)|0,0,\cdots,0>, \end{equation}
where
\begin{displaymath}
e_p(x)=\Sigma_{n=0}^{\infty}\frac{x^n}{[n]!}
\end{displaymath}
is a deformed exponential function.
In order to obtain the normalized coherent states, we should impose the
condition
${}_-\!<z_1,\cdots,z_n|z_1,\cdots,z_n>_-=1$. Then the normalized coherent
states are given
by
\begin{equation}
|z_1,\cdots,z_n>_-=\frac{1}{\sqrt{e_p(|z_1|^2)\cdots e_p(|z_n|^2)}}
e_p(z_na^{\dagger}_n)\cdots e_p(z_1 a^{\dagger}_1)|0,0,\cdots,0>,
\end{equation}
where $|z_i|^2=z_iz^*_i=z^*_iz_i$.
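The defining eigenvalue property $a_i|z_1,\cdots,z_n>_-=z_i|z_1,\cdots,z_n>_-$
can likewise be checked numerically; the following single-mode sketch
(ours; numpy assumed) builds the unnormalized state $e_p(za^{\dagger})|0>$
and verifies the eigenvalue equation up to the truncation of the Fock
space:
\begin{verbatim}
# Single-mode check that e_p(z a^+)|0> is an eigenstate of a.
import numpy as np

p, dim, z = 0.7, 30, 0.4
n = np.arange(dim)
br = (p**n - 1.0) / (p - 1.0)                       # [n]
a = np.diag(np.sqrt(br[1:]), k=1)

fact = np.cumprod(np.concatenate(([1.0], br[1:])))  # [n]!, with [0]! = 1
ket = z**n / np.sqrt(fact)                          # coefficients z^n/sqrt([n]!)
print(np.allclose((a @ ket)[:-1], (z * ket)[:-1]))  # True up to truncation
\end{verbatim}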
\subsection{Positive Energy Coherent States}
The purpose of this subsection is to obtain another type of
coherent state for the algebra (1).
In order to do so, it is convenient to introduce
$n$ subhamiltonians as follows:
\begin{displaymath}
H_i=a^{\dagger}_ia_i-\nu,
\end{displaymath}
where
\begin{displaymath}
\nu=\frac{1}{1-p}.
\end{displaymath}
Then the commutation relations between the subhamiltonians and the mode operators
are given by
\begin{equation}
H_ia^{\dagger}_j=(\delta_{ij}(p-1)+1)a^{\dagger}_jH_i,~~~~[H_i,H_j]=0.
\end{equation}
Acting with the subhamiltonians on the number eigenstates gives
\begin{equation}
H_i|n_1,n_2,\cdots,n_n> =-\frac{p^{n_i}}{1-p}|n_1,n_2,\cdots,n_n>
\end{equation}
Thus the energy becomes negative when $0<p<1$.
As was noticed in ref [13], for the positive energy states it is not $a_i$
but $a^{\dagger}_i$ that
plays the role of the lowering operator:
\begin{displaymath}
H_i|\lambda_1p^{n_1},\cdots,\lambda_n p^{n_n}>
=\lambda_i p^{n_i}
|\lambda_1p^{n_1},\cdots,\lambda_n p^{n_n}>
\end{displaymath}
\begin{displaymath}
a^{\dagger}_i|\lambda_1p^{n_1},\cdots,\lambda_n p^{n_n}>
=q^{-\Sigma_{k=i+1}^n n_k}\sqrt{\lambda_i p^{n_i+1}+\nu}|\lambda_1p^{n_1},\cdots,\lambda_ip^{n_i+1},\cdots,\lambda_n p^{n_n}>
\end{displaymath}
\begin{equation}
a_i|\lambda_1p^{n_1},\cdots,\lambda_n p^{n_n}>
=q^{\Sigma_{k=i+1}^n n_k}\sqrt{\lambda_i p^{n_i}+\nu}|\lambda_1p^{n_1},\cdots,\lambda_ip^{n_i-1},\cdots,\lambda_n p^{n_n}>,
\end{equation}
where $ \lambda_1, \cdots,\lambda_n >0$.
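A minimal numerical sketch (ours; numpy assumed) of the single-mode
version of eq.(21): on a truncated lattice of states $|\lambda p^n>$,
$n=-N,\cdots,N$, the relation $a_ia^{\dagger}_i-pa^{\dagger}_ia_i=1$ holds away
from the lattice edges and the eigenvalues of $H=a^{\dagger}a-\nu$ are the
positive numbers $\lambda p^n$:
\begin{verbatim}
# Single-mode check of the positive-energy representation.
import numpy as np

p, lam, Nmax = 0.6, 2.0, 6
nu = 1.0 / (1.0 - p)
n = np.arange(-Nmax, Nmax + 1)        # truncated integer lattice
c = np.sqrt(lam * p**n + nu)          # a|n> = c_n |n-1>
a = np.diag(c[1:], k=1)
ad = a.T

D = a @ ad - p * ad @ a
print(np.allclose(np.diag(D)[1:-1], 1.0))   # algebra holds in the interior
H = ad @ a - nu * np.eye(len(n))
print(np.all(np.diag(H)[1:] > 0))           # eigenvalues lam * p^n are positive
\end{verbatim}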
Due to this fact, it is natural to define coherent states
corresponding to the representation (21)
as the eigenstates of the $a^{\dagger}_i$'s:
\begin{equation}
a^{\dagger}_i|z_1,\cdots,z_n>_+ =z_i |z_1,\cdots,z_n>_+
\end{equation}
Because the representation (21) depends on $n$ free parameters
$\lambda_i$, the coherent
states
$|z_1,\cdots,z_n>_+$ can take different forms.
If we assume that the positive energy states are normalizable,
i.e. $<\lambda_1 p^{n_1},\cdots ,\lambda_n p^{n_n}|\lambda_1
p^{n_1^{\prime}},\cdots,\lambda_n
p^{n_n^{\prime}}>=\delta_{n_1 n_1^{\prime}}\cdots\delta_{n_nn_n^{\prime}}$, and
form exactly one
series
for some
fixed $\lambda_i$'s, then we can obtain
\begin{displaymath}
|z_1,\cdots,z_n>_+
\end{displaymath}
\begin{equation}
=C \Sigma_{n_1,\cdots,n_n=-\infty}^{\infty}
\left[\Pi_{k=1}^n
\frac{p^{\frac{n_k(n_k-1)}{4}}}
{\sqrt{(-\frac{\nu}{\lambda_k};p)_{n_k}}}
\left( \frac{1}{\sqrt{\lambda_k}}\right)^{n_k}\right]z_n^{n_n}\cdots
z_1^{n_1}|\lambda_1p^{-n_1},\cdots,\lambda_n p^{-n_n}>. \end{equation}
If we demand that ${}_+<z_1,\cdots,z_n|z_1,\cdots,z_n>_+=1$, we have
\begin{equation}
C^{-2} =
\Pi_{k=1}^n{}_0\psi_1(-\frac{\nu}{\lambda_k};p,-\frac{|z_k|^2}{\lambda_k})
\end{equation}
where the bilateral $p$-hypergeometric series ${}_0\psi_1(a;p,x)$ is defined by [14]
\begin{equation}
{}_0 \psi_1(a
;p ,x)
=\Sigma_{n=-\infty}^{\infty}
\frac{(-)^n p^{n(n-1)/2}}{(a;p)_{n}}x^n.
\end{equation}
\subsection{Two Parameter Deformed $gl(n)$ Algebra}
The purpose of this subsection is to derive the deformed $gl(n)$ algebra
from the deformed multimode oscillator algebra.
The multimode oscillators given in eq.(1) can be arrayed in bilinears to
construct the generators
\begin{equation}
E_{ij}=a^{\dagger}_i a_j.
\end{equation}
From the fact that $a^{\dagger}_i$ is the hermitian adjoint of $a_i$, we know that
\begin{equation}
E^{\dagger}_{ij}=E_{ji}.
\end{equation}
Then the deformed $gl(n)$ algebra is obtained from the algebra (1):
\begin{displaymath}
[E_{ii},E_{jj}]=0,
\end{displaymath}
\begin{displaymath}
[E_{ii},E_{jk}]=0,~~~(i\neq j \neq k )
\end{displaymath}
\begin{displaymath}
[E_{ij},E_{ji}]=E_{ii}-E_{jj},~~~(i \neq j )
\end{displaymath}
\begin{displaymath}
E_{ii}E_{ij}-p E_{ij} E_{ii}=E_{ij},~~~(i \neq j)
\end{displaymath}
\begin{displaymath}
E_{ij}E_{ik}=
\cases{
q^{-1}E_{ik}E_{ij} & if $ j<k$ \cr
qE_{ik}E_{ij} & if $ j>k$ \cr}
\end{displaymath}
\begin{equation}
E_{ij}E_{kl}=q^{2(R(i,k)+R(j,l)-R(j,k)-R(i,l))}E_{kl}E_{ij},~~~(i \neq j
\neq k \neq l),
\end{equation}
where the symbol $ R(i,j)$ is defined by
\begin{displaymath}
R(i,j)=\cases{
1& if $i>j$\cr
0& if $i \leq j $ \cr }
\end{displaymath}
This algebra goes to the ordinary $gl(n)$ algebra when the deformation
parameters $q$ and $p$ go to 1.
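As a consistency check (ours; numpy assumed) one can realize two modes
via the representation (12), $a_1=\tilde{a}\otimes q^{N}$,
$a_2=1\otimes\tilde{a}$, and verify both the defining relations (1) and
the commutator $[E_{12},E_{21}]=E_{11}-E_{22}$ on a truncated Fock space:
\begin{verbatim}
# Two-mode check of the representation and of [E_12,E_21] = E_11 - E_22.
import numpy as np

p, q, dim = 0.6, np.exp(0.3j), 8             # p real, |q| = 1
n = np.arange(dim)
br = (p**n - 1.0) / (p - 1.0)                # [n]
atil = np.diag(np.sqrt(br[1:]), k=1).astype(complex)
qN = np.diag(q**n)                           # q^{N} on a single mode
I = np.eye(dim, dtype=complex)

a1 = np.kron(atil, qN)   # a_1|n1,n2> = q^{n2} sqrt([n1]) |n1-1,n2>
a2 = np.kron(I, atil)    # a_2|n1,n2> = sqrt([n2]) |n1,n2-1>
ad1, ad2 = a1.conj().T, a2.conj().T

print(np.allclose(a1 @ a2, (1.0/q) * a2 @ a1))  # a_1 a_2 = q^{-1} a_2 a_1
print(np.allclose(a1 @ ad2, q * ad2 @ a1))      # a_1 a+_2 = q a+_2 a_1

E11, E22 = ad1 @ a1, ad2 @ a2
E12, E21 = ad1 @ a2, ad2 @ a1
# restrict to states with n1, n2 at most dim-2 to avoid truncation effects
cols = [i*dim + j for i in range(dim-1) for j in range(dim-1)]
lhs = (E12 @ E21 - E21 @ E12)[:, cols]
rhs = (E11 - E22)[:, cols]
print(np.allclose(lhs, rhs))                    # True
\end{verbatim}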
\section{q-symmetric states}
In this section we study the statistics of many-particle states.
Let $N$ be the number of particles. Then an $N$-particle state can be obtained
from
the tensor product of single-particle states:
\begin{equation}
|i_1,\cdots,i_N>=|i_1>\otimes |i_2>\otimes \cdots \otimes |i_N>,
\end{equation}
where each $i_1,\cdots, i_N$ takes a value in $\{ 1,2,\cdots,n \}$ and the single
particle state is defined by $|i_k>=a^{\dagger}_{i_k}|0>$.
Consider the case in which $k$ appears $n_k$ times in the set $\{ i_1,\cdots,i_N\}$.
Then we have
\begin{equation}
n_1 + n_2 +\cdots + n_n =\sum_{k=1}^n n_k =N.
\end{equation}
Using these facts we can define the q-symmetric states as follows:
\begin{equation}
|i_1,\cdots, i_N>_q
=\sqrt{\frac{[n_1]_{p^2}!\cdots [n_n]_{p^2}!}{[N]_{p^2}!}}
\sum_{\sigma \in Perm}
\mbox{sgn}_q(\sigma)|i_{\sigma(1)}\cdots i_{\sigma(N)}>,
\end{equation}
where
\begin{displaymath}
\mbox{sgn}_q(\sigma)=
q^{R(i_1\cdots i_N)}p^{R(\sigma(1)\cdots \sigma(N))},
\end{displaymath}
\begin{equation}
R(i_1,\cdots,i_N)=\sum_{k=1}^N\sum_{l=k+1}^N R(i_k,i_l)
\end{equation}
and $[x]_{p^2}=\frac{p^{2x}-1}{p^2-1}$.
Then the q-symmetric states obey
\begin{equation}
|\cdots, i_k,i_{k+1},\cdots>_q=
\cases{
q^{-1} |\cdots,i_{k+1},i_k,\cdots>_q & if $i_k<i_{k+1}$\cr
|\cdots,i_{k+1},i_k,\cdots>_q & if $i_k=i_{k+1}$\cr
q |\cdots,i_{k+1},i_k,\cdots>_q & if $i_k>i_{k+1}$\cr
}
\end{equation}
The above property can be rewritten by introducing the deformed transition
operator
$P_{k,k+1}$ obeying
\begin{equation}
P_{k,k+1}
|\cdots, i_k , i_{k+1},\cdots>_q =|\cdots, i_{k+1},i_k,\cdots>_q
\end{equation}
This operator satisfies
\begin{equation}
P_{k+1,k}P_{k,k+1}=Id,~~~\mbox{so}~~P_{k+1,k}=P^{-1}_{k,k+1}
\end{equation}
Then the equation (33) can be written as
\begin{equation}
P_{k,k+1}
|\cdots, i_k , i_{k+1},\cdots>_q
=q^{-\epsilon(i_k,i_{k+1})}
|\cdots, i_k,i_{k+1},\cdots>_q
\end{equation}
where $\epsilon(i,j)$ is defined as
\begin{displaymath}
\epsilon(i,j)=
\cases{
1 & if $ i>j$\cr
0 & if $ i=j$ \cr
-1 & if $ i<j$ \cr }
\end{displaymath}
It is worth noting that the relation (36) does not contain the deformation
parameter
$p$, and that it reduces to the symmetric relation for ordinary
bosons
when the deformation parameter $q$ goes to $1$.
If we define the fundamental q-symmetric state $|q>$ as
\begin{displaymath}
|q>=|i_1,i_2,\cdots,i_N>_q
\end{displaymath}
with $i_1 \leq i_2 \leq \cdots \leq i_N$, we have for any $k$
\begin{displaymath}
|P_{k,k+1}|q>|^2 =||q>|^2 =1.
\end{displaymath}
In deriving the above relation we used the following identity
\begin{displaymath}
\sum_{\sigma \in Perm } p^{R(\sigma(1),\cdots, \sigma(N))}=
\frac{[N]_{p^2}!}{[n_1]_{p^2}!\cdots [n_n]_{p^2}!}.
\end{displaymath}
\section{Concluding Remark}
To conclude, I used the two parameter deformed multimode oscillator system
given in ref [12] to construct its representation, coherent states and the
deformed $gl_q(n)$ algebra.
Multimode oscillators are important when we investigate many-body quantum
mechanics
and statistical mechanics.
In order to study the new statistical behavior of deformed particles
obeying the algebra (1), I investigated the deformed symmetric properties of
two parameter deformed multimode states.
\section*{Acknowledgement}
This paper was
supported by
the KOSEF (961-0201-004-2)
and the present studies were supported by Basic
Science
Research Program, Ministry of Education, 1995 (BSRI-95-2413).
\vfill\eject
|
\section{Introduction}
\vspace*{-0.5pt}
\noindent
The production of heavy quarkonium states in high-energy collisions
provides an important tool to study the interplay between perturbative
and non-perturbative QCD dynamics. While the creation of heavy quarks
in a hard scattering process can be calculated in perturbative
QCD\cite{CSS86}, the subsequent transition to a physical bound state
introduces non-perturbative aspects. A rigorous framework for treating
quarkonium production and decays has recently been
developed.\cite{BBL95} The factorization approach is based on the use
of non-relativistic QCD\cite{CL86} (NRQCD) to separate the
short-distance parts from the long-distance matrix elements and
explicitly takes into account the complete structure of the quarkonium
Fock space. This formalism implies that so-called color-octet
processes, in which the heavy-quark antiquark pair is produced at
short distances in a color-octet state and subsequently evolves
non-perturbatively into a physical quarkonium, should contribute to
the cross section. It has recently been argued\cite{TEV1,CL96} that
quarkonium production in hadronic collisions at the Tevatron can be
accounted for by including color-octet processes and by adjusting the
unknown long-distance color-octet matrix elements to fit the data.
In order to establish the phenomenological significance of the
color-octet mechanism it is necessary to identify color-octet
contributions in different production processes. Color-octet
production of $J/\psi$ particles has also been studied in the context
of $e^+e^-$ annihilation\cite{BC95}, $Z$ decays\cite{CKY95}, hadronic
collisions at fixed-target experiments\cite{fthad,BR} and $B$
decays\cite{KLS2}. Here, I review the impact of color-octet
contributions and higher-order QCD corrections on the cross section
for $J/\psi$ photoproduction. The production of $J/\psi$ particles in
photon-proton collisions proceeds predominantly through photon-gluon
fusion. Elastic/diffractive mechanisms\cite{ELASTIC} can be eliminated
by measuring the $J/\psi$ energy spectrum, described by the scaling
variable $z = {p\cdot k_\psi}\, / \, {p\cdot k_\gamma}$, with $p,
k_{\psi,\gamma}$ being the momenta of the proton and $J/\psi$,
$\gamma$ particles, respectively. In the proton rest frame, $z$ is the
ratio of the $J/\psi$ to $\gamma$ energy, $z=E_{\psi}/E_\gamma$. For
elastic/diffractive events $z$ is close to one; a clean sample of
inelastic events can be obtained in the range $z\;\rlap{\lower 3.5 pt \hbox{$\mathchar \sim$}} \raise 1pt \hbox {$<$}\;0.9$.
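As a trivial numerical illustration (ours; the four-momenta below are
invented for the example and are not taken from data), the definition of
$z$ can be evaluated directly from four-vectors:
\begin{verbatim}
# Inelasticity z = (p.k_psi)/(p.k_gamma), metric (+,-,-,-), units GeV.
def mdot(a, b):
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

def inelasticity(p, k_psi, k_gamma):
    return mdot(p, k_psi) / mdot(p, k_gamma)

p       = (0.938, 0.0, 0.0, 0.0)    # proton at rest
k_gamma = (50.0,  0.0, 0.0, 50.0)   # photon
k_psi   = (45.0,  0.0, 1.2, 44.8)   # J/psi (illustrative)
print(inelasticity(p, k_psi, k_gamma))   # -> 0.9 = E_psi/E_gamma
\end{verbatim}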
According to the NRQCD factorization formalism, the inclusive cross
section for $J/\psi$ photoproduction can be expressed as a sum of
terms, each of which factors into a short-distance coefficient and a
long-distance matrix element:
\begin{equation}\label{eq_fac}
\mbox{d}\sigma(\gamma+g \to J/\psi +X) =
\sum_n \mbox{d}\hat{\sigma}(\gamma+g \to c\bar{c}\, [n] + X)\,
\langle {\cal{O}}^{J/\psi}\,[n] \rangle
\end{equation}
Here, $\mbox{d}\hat{\sigma}$ denotes the short-distance cross section
for producing an on-shell $c\bar{c}$-pair in a color, spin and
angular-momentum state labelled by $n$. The NRQCD matrix elements
$\langle {\cal{O}}^{J/\psi} \, [n] \rangle \equiv \langle 0 |
{\cal{O}}^{J/\psi} \, [n] | 0 \rangle$ give the probability for a
$c\bar{c}$-pair in the state $n$ to form the $J/\psi$ particle. The
relative importance of the various terms in (\ref{eq_fac}) can be
estimated by using NRQCD velocity scaling rules.\cite{LMNMH92} For
$v\to 0$ ($v$ being the average velocity of the charm quark in the
$J/\psi$ rest frame) each of the NRQCD matrix elements scales with a
definite power of $v$ and the general expression (\ref{eq_fac}) can be
organized into an expansion in powers of $v^2$.
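To make the counting explicit (our illustration, using the commonly
quoted charmonium value $v^2\simeq 0.3$): the leading color-singlet
matrix element scales as $\langle {\cal{O}}^{J/\psi}\,
[\mbox{$\underline{1}$},{}^3S_{1}] \rangle\sim m_c^3v^3$, while the
$S$-wave color-octet matrix elements relevant below scale as
$m_c^3v^7$, so that
\begin{displaymath}
\frac{\langle {\cal{O}}^{J/\psi}\,[\mbox{$\underline{8}$},{}^3S_{1}]\rangle}
{\langle {\cal{O}}^{J/\psi}\,[\mbox{$\underline{1}$},{}^3S_{1}]\rangle}
\sim v^4\simeq 0.1\,.
\end{displaymath}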
\vspace*{1pt}\baselineskip=13pt
\section{Color-singlet contribution}
\vspace*{-0.5pt}
\noindent
At leading order in $v^2$, eq.(\ref{eq_fac}) reduces to the standard
factorization formula of the color-singlet model\cite{CS}. The
short-distance cross section is given by the subprocess
\begin{equation}\label{eq_cs}
\gamma + g \to c\bar{c}\, [\mbox{$\underline{1}$},{}^3S_{1}] + g
\end{equation}
shown in Fig.\ref{fig_1}a, with $c\bar{c}$ in a color-singlet state
(denoted by \mbox{$\underline{1}$}), zero relative velocity, and
spin/angular-momentum quantum numbers $^{2S+1}L_J = {}^3S_1$. Up to
corrections of ${\cal{O}}(v^4)$, the color-singlet NRQCD matrix
element is related to the $J/\psi$ wave function at the origin through
$\langle {\cal{O}}^{J/\psi}\,[\mbox{$\underline{1}$},{}^3S_{1}] \rangle \approx
(9/2\pi)|\varphi(0)|^2$ and can be extracted from the measurement of
the $J/\psi$ leptonic decay width or calculated within potential
models. Relativistic corrections due to the motion of the charm quarks
in the $J/\psi$ bound state enhance the large-$z$ region, but can be
neglected in the inelastic domain.\cite{REL} The calculation of the
higher-order perturbative QCD corrections to the short-distance cross
section (\ref{eq_cs}) has been performed recently.\cite{KZSZ94,MK95}
Generic diagrams which build up the cross section in next-to-leading
order (NLO) are depicted in Fig.\ref{fig_1}. Besides the usual
self-energy diagrams and vertex corrections for photons and gluons
(b), one encounters box diagrams (c), the splitting of the final-state
gluon into gluon and light quark-antiquark pairs, as well as diagrams
renormalizing the initial state parton densities (e).
Inclusion of the NLO corrections reduces the scale dependence of the
theoretical prediction and increases the cross section significantly,
depending in detail on the $\gamma p$ energy and the choice of
parameters.\cite{MK95} Details of the calculation and a comprehensive
analysis of total cross sections and differential distributions for
the energy range of the fixed-target experiments and for $J/\psi$
photoproduction at \mbox{HERA} can be found elsewhere.\cite{MK95}
\begin{figure}[t]
\vspace*{2.75cm}
\begin{picture}(7,7)
\special{psfile=fig1a.ps voffset=100 hoffset=30 hscale=35
vscale=37 angle=-90 }
\end{picture}
\vspace*{5.25cm}
\begin{picture}(7,7)
\special{psfile=fig1b.ps voffset=100 hoffset=30 hscale=35
vscale=37 angle=-90 }
\end{picture}
\vspace*{0.45cm}
\fcaption{\label{fig_1} Generic diagrams for $J/\psi$
photoproduction via the color-singlet channel: (a) leading order
contribution; (b) vertex corrections; (c) box diagrams; (d)
splitting of the final state gluon into gluon or light
quark-antiquark pairs; (e) diagrams renormalizing the initial-state
parton densities.}
\vspace*{-5mm}
\end{figure}
\newpage
\vspace*{1pt}\baselineskip=13pt
\section{Color-octet contributions}
\vspace*{-0.5pt}
\noindent
Color-octet configurations are produced at leading order in
$\mbox{$\alpha_{\mbox{\scriptsize s}}$}$ through the $2\to 1$ parton
processes\cite{CK96,AFM,KLS}
\begin{eqnarray}\label{eq_oc0}
\gamma + g &\! \to \!& c\bar{c}\, [\mbox{$\underline{8}$},{}^1S_{0}]
\nonumber \\
\gamma + g &\! \to \!& c\bar{c}\, [\mbox{$\underline{8}$},{}^3P_{0,2}]
\end{eqnarray}
shown in Fig.\ref{fig_2}a. Due to kinematical constraints, the leading
color-octet terms will only contribute to the upper endpoint of the
$J/\psi$ energy spectrum, $z\approx 1$ and $p_\perp\approx 0$,
$p_\perp$ being the $J/\psi$ transverse momentum. Color-octet
configurations which contribute to inelastic $J/\psi$ photoproduction
$z \le 0.9$ and $p_\perp \ge 1$~GeV are produced through the
subprocesses\cite{CK96,KLS}
\begin{eqnarray}\label{eq_oc2}
\gamma + g &\! \to \!& c\bar{c}\, [\mbox{$\underline{8}$},{}^1S_{0}]
+ g \nonumber \\
\gamma + g &\! \to \!& c\bar{c}\, [\mbox{$\underline{8}$},{}^3S_{1}]
+ g \nonumber \\
\gamma + g &\! \to \!& c\bar{c}\, [\mbox{$\underline{8}$},{}^3P_{0,1,2}] + g
\end{eqnarray}
as shown in Fig.\ref{fig_2}b. Light-quark initiated contributions are
strongly suppressed at \mbox{HERA} energies and can safely be
neglected.
\begin{figure}[t]
\vspace*{2.5cm}
\begin{picture}(7,7)
\special{psfile=fig2.ps voffset=100 hoffset=30 hscale=32
vscale=35 angle=-90 }
\end{picture}
\vspace*{3cm}
\fcaption{\label{fig_2}
Generic diagrams for $J/\psi$ photoproduction via color-octet
channels: (a) leading color-octet contributions; (b) color-octet
contributions to inelastic $J\!/\!\psi$ production.}
\end{figure}
The transition of the color-octet $c\bar{c} \,
[\mbox{$\underline{8}$},{}^{2S+1}L_{J}]$ pair into a physical $J/\psi$
state through the emission of non-perturbative gluons is described by
the long-distance matrix elements $\langle {\cal{O}}^{J/\psi} \,
[\mbox{$\underline{8}$},{}^{2S+1}L_{J}] \rangle$. They have to be
obtained from lattice si\-mu\-la\-ti\-ons\cite{BSK96} or measured
directly in some production process. According to the velocity
scaling rules of NRQCD, the color-octet matrix elements associated
with $S$-wave quarkonia should be suppressed by a factor of $v^4$
compared to the leading color-singlet matrix element.\footnote{In the
case of $P$-wave quarkonia, color-singlet and color-octet matrix
elements contribute at the same order in $v$.\cite{BBL92}
Photoproduction of $P$-wave states is, however, suppressed compared
with $J/\psi$ states, by two orders of magnitude at
\mbox{HERA}.\cite{MA,CKHERA}} Color-octet contributions to $J/\psi$
photoproduction can thus become important only if the corresponding
short-distance cross sections are enhanced as compared to the
color-singlet process. Color-octet matrix elements have been fitted to
prompt $J/\psi$ data from \mbox{CDF}\cite{CDF} and found to be
\mbox{${\cal O}(10^{-2}$~GeV$^3)$}, consistent with the NRQCD velocity
scaling rules.\cite{TEV1,CL96} Meanwhile, fit values for color-octet
matrix elements have also been obtained from analyses of quarkonium
production in hadronic collisions at fixed-target
experiments\cite{BR}, $J/\psi$ photoproduction at the elastic
peak\cite{AFM} and $J/\psi$ production in $B$ decays\cite{KLS2}.
The results seem to indicate that the values for the color-octet
matrix elements extracted from the Tevatron data at moderate $p_\perp$
are too large; they should however be considered with some caution
since significant higher-twist corrections are expected to contribute
in the small-$p_\perp$ region probed at fixed target experiments and
in elastic $J/\psi$ photoproduction. Moreover, the comparison between
the different analyses is rendered difficult by the fact that the
color-octet matrix elements can in general only be extracted in
certain linear combinations which depend on the reaction under
consideration, see Sec.4.
\vspace*{1pt}\baselineskip=13pt
\section{$J/\psi$ photoproduction at HERA}
\vspace*{-0.5pt}
\noindent
The production of $J/\psi$ particles in high energy $ep$ collisions at
\mbox{HERA} is dominated by photoproduction events where the electron
is scattered by a small angle producing photons of almost zero
virtuality. The measurements at \mbox{HERA} provide information on the
dynamics of $J/\psi$ photoproduction in a wide kinematical region,
$30~\mbox{GeV} \; \rlap{\lower 3.5 pt \hbox{$\mathchar \sim$}} \raise 1pt \hbox {$<$} \; \sqrt{s\hphantom{tk}}\!\!\!\!\! _{\gamma
p}\;\rlap{\lower 3.5 pt \hbox{$\mathchar \sim$}} \raise 1pt \hbox {$<$}\; 200~\mbox{GeV}$, corresponding to initial photon
energies in a fixed-target experiment of $450~\mbox{GeV} \; \rlap{\lower 3.5 pt \hbox{$\mathchar \sim$}} \raise 1pt \hbox {$<$} \;
E_\gamma \; \rlap{\lower 3.5 pt \hbox{$\mathchar \sim$}} \raise 1pt \hbox {$<$} \; 20,000~\mbox{GeV}$. Due to kinematical
constraints, the leading color-octet processes (\ref{eq_oc0})
contribute only to the upper endpoint of the $J/\psi$ energy spectrum,
\mbox{$z\approx 1$} and $p_\perp\approx 0$. The color-singlet and
color-octet predictions (\ref{eq_oc0}) have been compared to
experimental data\cite{H1} obtained in the region $z\ge 0.95$ and
$p_\perp \le 1$~GeV.\cite{CK96} Since the fac\-to\-ri\-za\-tion
approach cannot be used to describe the exclusive elastic channel
$\gamma + p \to J/\psi + p$, elastic contributions had been subtracted
from the data sample. It was shown that the large cross section
predicted by using color-octet matrix elements as extracted from the
Tevatron fits appears to be in conflict with the experimental data.
It is, however, difficult to put strong upper limits for the octet
terms from a measurement of the total cross section in the region
$z\approx 1$ and $p_\perp\approx 0$ since the overall normalization of
the theoretical prediction depends strongly on the choice for the
charm quark mass and the QCD coupling. Moreover, diffractive
production mechanisms which cannot be calculated within perturbative
QCD might contaminate the region $z\approx 1$ and make it difficult to
extract precise information on the color-octet contributions. Finally,
it has been argued that sizable higher-twist effects are expected to
contribute in the region $p_\perp\; \rlap{\lower 3.5 pt \hbox{$\mathchar \sim$}} \raise 1pt \hbox {$<$}\; 1$~GeV, which cause the
breakdown of the factorization formula (\ref{eq_fac}).\cite{BFY}
It is therefore more appropriate to study $J/\psi$ photoproduction in
the inelastic region $z \le 0.9$ and $p_\perp \ge 1$~GeV where no
diffractive channels contribute and where the general factorization
formula (\ref{eq_fac}) and perturbative QCD calculations should be
applicable. Adopting the NRQCD matrix elements as extracted from the
fits to prompt $J/\psi$ data at the Tevatron one finds that
color-octet and color-singlet contributions to the inelastic cross
section are predicted to be of comparable size.\cite{CK96,KLS} The
short-distance factors of the $[\mbox{$\underline{8}$},{}^{1}S_{0}]$
and $[\mbox{$\underline{8}$},{}^{3}P_{0,2}]$ channels are strongly
enhanced as compared to the color-singlet term and partly compensate
the ${\cal{O}}(10^{-2})$ suppression of the corresponding
non-perturbative matrix elements. In contrast, the contributions from
the $[\mbox{$\underline{8}$},{}^{3}S_{1}]$ and
$[\mbox{$\underline{8}$},{}^{3}P_{1}]$ states are suppressed by more
than one order of magnitude. Since color-octet and color-singlet
processes contribute at the same order in
$\mbox{$\alpha_{\mbox{\scriptsize s}}$}$, the large size of the
$[\mbox{$\underline{8}$},{}^{1}S_{0}]$ and
$[\mbox{$\underline{8}$},{}^{3}P_{0,2}]$ cross sections could not have
been anticipated from naive power counting and demonstrates the
crucial dynamical role played by the bound state quantum
numbers.\cite{BR83} As for the total inelastic cross section, the
linear combination of the color-octet matrix elements $\langle
{\cal{O}}^{J/\psi} \, [\mbox{$\underline{8}$},{}^{1}S_{0}] \rangle$
and $\langle {\cal{O}}^{J/\psi} \,
[\mbox{$\underline{8}$},{}^{3}P_{0}] \rangle$ that is probed at
\mbox{HERA} is almost identical to that extracted from the Tevatron
fits at moderate $p_\perp$, independent of
$\sqrt{s\hphantom{tk}}\!\!\!\!\! _{\gamma p}$.\footnote{At leading
order in $v^2$, the $P$-wave matrix elements are related by
heavy-quark spin symmetry, $\langle {\cal{O}}^{J/\psi}
\,[\mbox{$\underline{8}$},{}^{3}P_{J}] \rangle \approx \mbox{$(2J+1)$} \, \langle
{\cal{O}}^{J/\psi} \, [\mbox{$\underline{8}$},{}^{3}P_{0}] \rangle$.} The Tevatron
results can thus be used to make predictions for color-octet
contributions to the total inelastic $J/\psi$ photoproduction cross
section without further ambiguities. However, taking into account the
uncertainty due to the value of the charm quark mass and the strong
coupling, the significance of color-octet contributions cannot be
deduced from the analysis of the absolute $J/\psi$ production rates.
In fact, the experimental data can be accounted for by the
color-singlet channel alone, once higher-order QCD corrections are
included and the theoretical uncertainties due to variation of the
charm quark mass and the strong coupling are taken into account, as
demonstrated at the end of this section. The same statement holds true
for the transverse momentum spectrum, since, at small and moderate
$p_\perp$, both color-singlet and color-octet contributions are almost
identical in shape. At large transverse momenta, $p_\perp \;
\rlap{\lower 3.5 pt \hbox{$\mathchar \sim$}} \raise 1pt \hbox {$>$} \;
10$~GeV, charm quark fragmentation dominates over the photon-gluon
fusion process.\cite{SA94,GRS95} In contrast to what was found at the
Tevatron\cite{PT_TEV}, gluon fragmentation into color-octet states is
suppressed over the whole range of $p_\perp$ in the inelastic region
$z\;\rlap{\lower 3.5 pt \hbox{$\mathchar \sim$}} \raise 1pt \hbox {$<$}\; 0.9$.\cite{GRS95}
A distinctive signal for color-octet processes should, however, be
visible in the $J/\psi$ energy distribution
$\mbox{d}\sigma/\mbox{d}{}z$.\cite{CK96} The linear combination of
color-octet matrix elements that is probed by the $J/\psi$ energy
distribution does, however, depend on the value of $z$. Therefore, one
cannot directly use the Tevatron fits but has to allow the individual
color-octet matrix elements to vary in certain ranges, constrained by
the value extracted for the linear combination. It has in fact been
argued that the color-octet matrix element $\langle {\cal{O}}^{J/\psi}
\,[\mbox{$\underline{8}$},{}^{3}P_{0}] \rangle$ could be negative due to the
subtraction of power ultraviolet divergences.\cite{EBpriv} In
contrast, the matrix element $\langle {\cal{O}}^{J/\psi}
\,[\mbox{$\underline{8}$},{}^{1}S_{0}] \rangle$ is free of power divergences and its
value is thus always positive. Accordingly, I have allowed $\langle
{\cal{O}}^{J/\psi} \,[\mbox{$\underline{8}$},{}^{3}P_{0}] \rangle / m_c^2$ to vary in
the range $[-0.01,0.01]$~GeV$^3$ and determined the value of the
matrix element $\langle {\cal{O}}^{J/\psi} \,[\mbox{$\underline{8}$},{}^{1}S_{0}]
\rangle$ from the linear combination extracted at the
Tevatron.\footnote{Note that, given $\langle {\cal{O}}^{J/\psi}
\,[\mbox{$\underline{8}$},{}^{1}S_{0}] \rangle \;\rlap{\lower 3.5 pt \hbox{$\mathchar \sim$}} \raise 1pt \hbox {$<$}\; 0.1$~GeV$^3$ as required
by the velocity scaling rules, a value $\langle {\cal{O}}^{J/\psi}
\,[\mbox{$\underline{8}$},{}^{3}P_{0}] \rangle / m_c^2 \;\rlap{\lower 3.5 pt \hbox{$\mathchar \sim$}} \raise 1pt \hbox {$<$}\; -0.01$~GeV$^3$
would be in contradiction with the Tevatron
fits.} The result is shown in Fig.\ref{fig_3}(a) where I have plotted
\begin{figure}[ht]
\vspace*{3cm}
\begin{picture}(7,7)
\special{psfile=fig3a.ps voffset=100 hoffset=35 hscale=35
vscale=35 angle=-90 }
\end{picture}
\vspace*{5.75cm}
\begin{picture}(7,7)
\special{psfile=fig3b.ps voffset=100 hoffset=35 hscale=35
vscale=35 angle=-90 }
\end{picture}
\vspace*{3.25cm}
\fcaption{\label{fig_3} Color-singlet and color-octet contributions to
the $J\!/\!\psi$ energy distribution $\mbox{d}\sigma/\mbox{d}{}z$ at the
photon-proton centre of mass energy $\sqrt{s\hphantom{tk}}\!\!\!\!\!
_{\gamma p}\,\, = 100$~GeV integrated in the range (a) $p_\perp \ge
1$~GeV and (b) $p_\perp \ge 5$~GeV compared to experimental
data\cite{H1,ZEUS}.}
\vspace*{-5mm}
\end{figure}
(leading-order) color-singlet and color-octet contributions at a
typical \mbox{HERA} energy of $\sqrt{s\hphantom{tk}} \!\!\!\!\!
_{\gamma p}\,\, = 100$~GeV in the restricted range $p_\perp \ge
1$~GeV, compared to recent experimental data from \mbox{H1}\cite{H1}
and preliminary data from ZEUS\cite{ZEUS}. The hatched error band
indicates how much the color-octet cross section is altered if
$\langle {\cal{O}}^{J/\psi} \,[\mbox{$\underline{8}$},{}^{3}P_{0}] \rangle / m_c^2$
varies in the range $[-0.01,0.01]$~GeV$^3$, where the lower bound
corresponds to $\langle {\cal{O}}^{J/\psi} \,[\mbox{$\underline{8}$},{}^{3}P_{0}]
\rangle / m_c^2 = -0.01$~GeV$^3$. Since the shape of the distribution
is almost insensitive to higher-order QCD corrections or to the
uncertainty induced by the choice for $m_c$ and
$\mbox{$\alpha_{\mbox{\scriptsize s}}$}$, the analysis of the $J/\psi$
energy spectrum $\mbox{d}\sigma/\mbox{d}{}z$ should provide a clean
test for the underlying production mechanism. From Fig.\ref{fig_3}
one can conclude that the shape predicted by the color-octet
contributions is not supported by the experimental data. The
discrepancy with the data can only be removed when reducing the
relative weight of the color-octet contributions by at least a factor
of five.\cite{CK96} Let me emphasize that the rise of the cross
section towards large $z$ predicted by the color-octet mechanism is
not sensitive to the small-$p_\perp$ region and thus not affected by
the collinear divergences which show up at the endpoint $z=1$ and
$p_\perp=0$. This is demonstrated in Fig.\ref{fig_3}(b) where I show
color-singlet and color-octet contributions to the $J/\psi$ energy
distribution for $p_\perp > 5$~GeV. It will be very interesting to
compare these predictions with data to be expected in the future at
\mbox{HERA}. Let me finally mention that the shape of the $J/\psi$
energy distribution could be influenced by the emission of soft gluons
from the intermediate color-octet state.\cite{BR} While this effect,
which cannot be predicted within the NRQCD factorization approach,
might be significant at the elastic peak, it is by no means clear if
and in which way it could affect the inelastic region $z \;\rlap{\lower 3.5 pt \hbox{$\mathchar \sim$}} \raise 1pt \hbox {$<$}\;
0.9$ and $p_\perp\;\rlap{\lower 3.5 pt \hbox{$\mathchar \sim$}} \raise 1pt \hbox {$>$}\;1$~GeV. In fact, if soft gluon emission
were important, it should also result in a feed-down of the leading
color-octet contributions (\ref{eq_oc0}) into the inelastic domain,
thereby increasing the discrepancy between the color-octet cross
section and the data in the large-$z$ region.
For the remainder of this section, I will demonstrate that the
experimental results on differential distributions and total cross
sections are well accounted for by the color-singlet channel alone
including higher-order QCD corrections. This can e.g.\ be inferred
from Fig.\ref{fig_4} where I compare the NLO color-singlet prediction
for the $J/\psi$ transverse momentum distribution\cite{MK95} with
recent results from \mbox{H1}\cite{H1}.
\begin{figure}[htbp]
\vspace*{3cm}
\begin{picture}(7,7)
\special{psfile=fig4.ps voffset=100 hoffset=35 hscale=35
vscale=35 angle=-90 }
\end{picture}
\vspace*{3.25cm}
\fcaption{\label{fig_4} LO and NLO color-singlet prediction for the
$J\!/\!\psi$ transverse momentum spectrum $\mbox{d}\sigma/\mbox{d}{}p_\perp^2$
at the photon-proton centre of mass energy
$\sqrt{s\hphantom{tk}}\!\!\!\!\! _{\gamma p}\,\, = 100$~GeV
integrated in the range $z \le 0.9$ compared to experimental
data\cite{H1}.}
\end{figure}
Note that the inclusion of higher-order QCD corrections is crucial to
describe the shape of the $p_\perp$ distribution. However, a detailed
analysis of the transverse momentum spectrum reveals that the
fixed-order perturbative QCD calculation is not under proper control
in the limit $p_\perp \to 0$, Fig.\ref{fig_4}. No reliable prediction
can be made in the small-$p_\perp$ domain without resummation of large
logarithmic corrections caused by multiple gluon emission. If the
region $p_\perp \le 1$~GeV is excluded from the analysis, the
next-to-leading order color-singlet prediction accounts for the energy
dependence of the cross section and for the overall normalization,
Fig.~\ref{fig_5}. The sensitivity of the prediction to the
small-$x$ behaviour of the gluon distribution is however not very
distinctive, since the average momentum fraction of the partons
$<\!x\!>$ is shifted to larger values when excluding the
small-$p_\perp$ region.
\begin{figure}[htbp]
\vspace*{3cm}
\begin{picture}(7,7)
\special{psfile=fig5.ps voffset=100 hoffset=35 hscale=35
vscale=35 angle=-90 }
\end{picture}
\vspace*{3.5cm}
\fcaption{\label{fig_5} NLO color-singlet prediction for the total
inelastic $J\!/\!\psi$ photoproduction cross section as a function
of the photon-proton energy for different
parametrizations\cite{pdfs} of the parton distribution in the proton
compared to experimental data\cite{H1,ZEUS}.}
\end{figure}
\vspace*{1pt}\baselineskip=13pt
\section{Conclusion}
\vspace*{-0.5pt}
\noindent
I have discussed color-singlet and color-octet contributions to the
production of $J/\psi$ particles in photon-proton collisions,
including higher-order QCD corrections to the color-singlet channel.
A comparison with photoproduction data obtained at fixed-target
experiments\cite{MK95} and the $ep$ collider \mbox{HERA} reveals that
the $J/\psi$ energy spectrum and the slope of the transverse momentum
distribution are adequately accounted for by the next-to-leading order
color-singlet prediction in the inelastic region $p_\perp
\;\rlap{\lower 3.5 pt \hbox{$\mathchar \sim$}} \raise 1pt \hbox {$>$}\;1$~GeV and $z\;\rlap{\lower 3.5 pt \hbox{$\mathchar \sim$}} \raise 1pt \hbox {$<$}\;0.9$. Taking into account the
uncertainty due to variation of the charm quark mass and the strong
coupling, one can conclude that the normalization, too, appears to be
under semi-quantitative control. Higher-twist effects\cite{HT} must be
included to improve the quality of the theoretical analysis further.
Distinctive signatures for color-octet processes should be visible in
the shape of the $J/\psi$ energy distribution. However, these
predictions appear to be at variance with recent experimental data obtained
at \mbox{HERA} indicating that the values of the color-octet matrix
elements $\langle {\cal{O}}^{J\!/\!\psi} \, [\mbox{$\underline{8}$},{}^{1}S_{0}]
\rangle$ and $ \langle {\cal{O}}^{J\!/\!\psi} \, [\mbox{$\underline{8}$},{}^{3}P_{0}]
\rangle$ are considerably smaller than suggested by the fits to
Tevatron data at moderate $p_\perp$. This result is further supported
by recent analyses of $J/\psi$ production in hadronic collisions at
fixed-target energies\cite{BR} and $B$ decays\cite{KLS2}. Clearly,
much more effort, both theoretical and experimental, is needed to
establish the phenomenological significance of color-octet
contributions to $J/\psi$ production and to prove the applicability of
the NRQCD factorization approach to charmonium production in hadronic
collisions at moderate transverse momentum.
\nonumsection{Acknowledgements}
\noindent
I wish to thank Martin Beneke, Eric Braaten, Matteo Cacciari, Sean Fleming
and Arthur Hebecker for useful discussions.
\nonumsection{References}
\noindent
\section*{\normalsize\bf I. Introduction}
Physics in the charm energy region lies in the boundary domain between
perturbative and nonperturbative QCD. The study of charmonium physics
has recently received renewed interest. The observed hadronic decays
of charmonium may give new challenges to the present theoretical
understanding of the decay mechanisms.
More glueball candidates are observed in charmonium radiative decays,
and are stimulating new studies of glueball physics. The observed prompt
production of charmonium at the Tevatron and the serious disagreement
between expected and measured production cross sections
have led to new theoretical speculations about the charmonium spectrum and novel
production mechanisms. There are also many new results in
open charm physics, including new measurements of charmed meson and baryon
decays. In this report some of the new results and the status of
theoretical studies of physics
in the charm energy region will be reviewed.
\section*{\normalsize\bf II. Problems in Charmonium Hadronic Decays}
Charmonium hadronic decays may provide useful information on understanding
the nature of quark-gluon interactions and decay mechanisms.
They are essentially
related to both perturbative and nonperturbative QCD. The mechanism for
exclusive hadronic decays is still poorly understood. One of the striking
observations is the so-called ``$\rho\pi$'' puzzle, i.e., in the decays
of $J/\psi$ and $\psi'$ into $\rho\pi$ and $K^* \bar{K}$ the branching
ratios of $\psi'$ are greatly suppressed relative to that of
$J/\psi$ \cite{1fran}. New data from BES not only confirmed this
observation but also found some new suppressions in the $\omega f_2$
and $\rho a_2$ channels \cite{li}\cite{1bes}. This gives a new challenge
to the theory of charmonium hadronic decays.
Because for any exclusive hadronic channel {\it h} the decay proceeds via the
wave function at the origin of the $c\bar{c}$ bound state, one may expect
\begin{eqnarray}
Q_h\equiv\frac{B(\psi'\rightarrow h)}{B(J/\psi\rightarrow h)}
\approx
\frac{B(\psi'\rightarrow 3g)}{B(J/\psi\rightarrow 3g)}\approx
\frac{B(\psi'\rightarrow e^+e^-)}{B(J/\psi\rightarrow e^+e^-)}\approx 0.14.
\end{eqnarray}
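The value 0.14 in this relation is just the ratio of leptonic branching
fractions; as a trivial numerical illustration (the branching fractions below
are representative present-day values, quoted only for orientation and not
inputs of the BES analysis):
\begin{verbatim}
# "0.14 rule": ratio of psi' to J/psi leptonic branching fractions
B_psi2S_ee = 0.0079    # B(psi' -> e+ e-), illustrative value
B_Jpsi_ee  = 0.0594    # B(J/psi -> e+ e-), illustrative value
print(B_psi2S_ee / B_Jpsi_ee)    # ~0.13, close to 0.14
\end{verbatim}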
Most channels like $3(\pi^+\pi^-)\pi^0$, $2(\pi^+\pi^-)\pi^0$,
$(\pi^+\pi^-) p\bar{p}$, $\pi^0 p\bar{p}$, $2(\pi^+\pi^-)$, and the newly
measured $p\bar{p}$, $\Lambda\bar{\Lambda}$, $\Sigma^0\bar{\Sigma}^0$,
$\Xi^-\bar{\Xi}^-$ by BES seem to approximately respect this relation. But
$Q_h$ for $\rho\pi$ and $K^*\bar{K}$ were found (and confirmed by BES)
to be smaller by more than an order
of magnitude than the normal value 0.14. The new BES data give
\cite{li}\cite{1bes}
\begin{eqnarray}
Q_{\rho\pi}<0.0028,~~ Q_{K^{*\pm}K^\mp}<0.0048.
\end{eqnarray}
This puzzle has led to some theoretical speculations.
The Hadron Helicity Conservation theorem in QCD \cite{1bl}, suggested by
Brodsky and Lepage, indicates that bacause vector-gluon
coupling conserves quark helicity for massless quarks, and each hadron's
helicity is the sum of the helicity of its valence quarks, in hard process the
total hadronic helicity is conserved (up to corrections of order $m/Q$
or higher)
\begin{eqnarray}
\sum\limits_{initial}\lambda_H = \sum\limits_{final}\lambda_H.
\end{eqnarray}
According to this theorem the decays of $J/\psi$ and $\psi'\rightarrow VP$
(vector and pseudoscalar such as $\rho\pi$ and $K^*\bar{K}$) are forbidden.
This seems to be true for $\psi'$ but not for $J/\psi$. The anomalous
abundance of VP states in $J/\psi$ decay then needs an explanation.
In this connection one possible solution to the $\rho\pi$ puzzle is the
$J/\psi-{\cal O}$ mixing
models \cite{1nambu}\cite{1hou}\cite{1blt}. These models,
though slightly different from each other, have the same essence that the
enhancement of $J/\psi\rightarrow \rho\pi,~K^*\bar{K}$ is due to ${\cal O}
\rightarrow \rho\pi, ~K^*\bar{K}$, where ${\cal O}$ could be a Pomeron daughter
\cite{1nambu} or a vector glueball \cite{1hou}\cite{1blt}, which could lie in
the region close to the $J/\psi$ mass and thus mix with the $J/\psi$ but not the $\psi'$.
It has been suggested to search for this vector glueball in processes
$J/\psi,~\psi'\rightarrow (\eta,~\eta',~\pi\pi)+{\cal O}$,
followed by ${\cal O}\rightarrow \rho\pi,~K^*K$.
Obviously, the $J/\psi-{\cal O}$ mixing model depends heavily on the
existence of a vector glueball near the $J/\psi$. It is therefore crucial to
search for it in the vicinity of $J/\psi$. But so far there seem to be no
signs of it.
Another proposed solution to this puzzle is the so-called generalized hindered
M1 transition model \cite{1pinsky}. It is argued that because
$J/\psi\rightarrow \eta_c\gamma$ is an allowed M1 transition while
$\psi'\rightarrow \eta_c\gamma$ is hindered (in the nonrelativistic limit),
using the vector-dominance
model to relate $\psi'\rightarrow \gamma\eta_c$
to $\psi'\rightarrow \psi\eta_c$ one could find the coupling
$G_{\psi'\psi\eta_c}$ is much smaller than
$G_{\psi\psi\eta_c}$, and then by
analogy, the coupling $G_{\omega'\rho\pi}$ would be much smaller
than $G_{\omega\rho\pi}$.
Assuming $\psi'\rightarrow \rho\pi$ to proceed via
$\psi'-\omega'$ mixing, while $\psi\rightarrow \rho\pi$ via $\psi-\omega$
mixing, one would find that $\psi'\rightarrow \rho\pi$ is much more severely
suppressed than $\psi\rightarrow \rho\pi$.
There is another model \cite{1ct} in which a hadronic form factor is introduced
to exponentially decrease the two meson decays of $\psi'$ relative to $J/\psi$.
But this model predicts a large suppression for many two meson modes, which may
not be compatible with the present data. There is also a proposal to explain
this puzzle based on the mechanism of sequential quark pair creation
\cite{1karl}.
Now the new BES data give a new challenge to these speculations. It is found
that in addition to $\rho\pi$ and $K^* K$ the suppression also exists in the
VT (vector-tensor) channels of $\psi'$ decays such as $\psi'\rightarrow
\omega f_2(1270)$ \cite{li}\cite{1bes}
\begin{eqnarray}
Q_{\omega f_2} <0.022,
\end{eqnarray}
and the preliminary data on $\rho a_2, K^*K^*_2, \phi f'_2$ channels seem to
also show suppressions for $\psi'$, whereas in the $b_1^\pm \pi^\mp$ channel
no suppression is observed for
$\psi'$.
The VT decays do not violate helicity conservation, therefore the suppression
is hard to understand. Moreover, in the $J/\psi-{\cal O}$ mixing model
the ${\cal O}\rightarrow VT$ decay is not expected to be a dominant mode, and
therefore $J/\psi\rightarrow VT$ may not be enhanced. Furthermore, using the
vector dominance model one might relate $\psi'\rightarrow \omega f_2$ to
$\psi' \rightarrow \gamma f_2$, but the observed $\psi'\rightarrow \gamma f_2$
is not suppressed\cite{1lee}, and this is also confirmed by BES.
In the generalized hindered M1 transition model, the coupling
$G_{\omega'\omega f_2}$ for $ \omega'\rightarrow
\omega f_2$ should not be suppressed because by analogy the coupling
$G_{\psi' \psi\chi_{c2}}$ is not small due to the fact that
the E1 transition $\psi'\rightarrow \gamma\chi_{c2}$ is not hindered.
Therefore via $\psi'-\omega'$ mixing
the $\psi'\rightarrow\omega'\rightarrow \omega f_2$ decay is expected to be
not suppressed. It seems that within the scope of proposed models and
speculations the puzzles related to the VP and VT suppressions have not
been satisfactorily solved yet.
In order to understand the nature of these puzzles, systematic studies of
$J/\psi$ and $\psi'$ exclusive hadronic decays are needed. Many different
decay channels such as VP ($\rho\pi$, $K^* K$, $\omega\eta$, $\omega\eta'$,
$\phi\eta$,
$\phi\eta'$, and isospin violated $\omega\pi^0,~\rho\eta,~\rho\eta',~
\phi\pi^0,\cdots$), VT ($\omega f_2$, $\rho a_2$, $\phi f_2$,
$\phi f_2'$, $\cdots$)
AP($b_1\pi,\cdots$), TP ($a_2\pi, \cdots$), VS($\omega f_0,~\phi f_0,\cdots$),
VA ($\phi f_1,$ $\omega f_1$, $\cdots$) and three-body or many-body mesonic
decays $(\omega\pi^+\pi^-,~\phi K\bar{K},\cdots)$ and baryonic decays $
(p\bar{p},~n\bar{n},~\Lambda\bar{\Lambda},~
\Sigma\bar{\Sigma},~\Xi\bar{\Xi},\cdots)$ are worth studying and
may be helpful to reveal the essence of the puzzle and the nature of decay
mechanisms. In addition, to test the hadron helicity conservation theorem,
measurements of the decay angular momentum distribution are also
important. E.g., it predicts a $\sin^2\theta$ distribution for
$J/\psi,~\psi'\rightarrow \omega f_2$ \cite{1gt}.
Since the $\eta_c,~\eta'_c$ systems are the counterparts of $J/\psi,~\psi'$,
it has been suggested to study exclusive hadronic decays of $\eta_c$ and
$\eta'_c$ \cite{1anse}\cite{1cgt}. It is argued that for any normal hadronic
channel {\it h}, based on the same argument as for $J/\psi$ and $\psi'$,
the following
relation should hold \cite{1cgt}
\begin{eqnarray}
P_h\equiv\frac{B(\eta'_c\rightarrow h)}
{B(\eta_c\rightarrow h)}\approx\frac{B(\eta'_c\rightarrow 2g)}
{B(\eta_c\rightarrow 2g)}\approx 1.
\end{eqnarray}
This relation differs from the ``0.14'' rule for $J/\psi,~\psi'$, because $
\eta'_c\rightarrow 2g$ is the overwhelmingly dominant decay mode, whereas for
$\psi'$ the $\psi'\rightarrow J/\psi\pi\pi$ and $\psi'\rightarrow
\gamma\chi_{cJ}~(J=0,1,2)$ transitions are dominant. Like the ``0.14'' rule for
$J/\psi$ and $\psi'$, this relation for $\eta_c$ and $\eta'_c$ may serve
as a criterion to determine whether there exist anomalous suppressions in the
$\eta_c,~\eta'_c$ systems. As pointed out in \cite{1anse},
since the observed
$\eta_c\rightarrow VV(\rho\rho,~K^*\bar{K^*},~\phi\phi)$ and $p\bar{p}$ decays,
which are forbidden by helicity conservation, seem not to be suppressed, there
might be a $0^{-+}$ trigluonium component mixed in the $\eta_c$. It then
predicts a severe suppression for these decays of $\eta'_c$,
which is not close to and therefore not mixed
with the $0^{-+}$ trigluonium. The $\eta_c$ and $\eta'_c$ hadronic decays
are being searched for at BES/BEPC, and will be studied at the $\tau$-charm
factory in the future. In this connection, it might be interesting
to see whether
the E760-E835 experiment can find $\eta'_c$ in $p\bar{p}\rightarrow\eta'_c
\rightarrow 2\gamma$. If $\eta'_c\rightarrow p\bar{p}$ is severely suppressed
by helicity conservation, as the counterpart of $\psi'\rightarrow\rho\pi$,
then it would be hopeless to see $\eta'_c$ in $p\bar{p}$ annihilation.
Therefore the E760-E835 experiment will further test helicity conservation
and shed light on the extended ``$\rho\pi$'' puzzle.
On the other hand, the theoretical understanding of these puzzles and,
in general,
for the nature of exclusive hadronic decay mechanisms is still very limited.
It concerns how the $c\bar{c}$ pair converts into gluons and light quarks and,
more importantly, how the gluons and quarks hadronize into light hadrons.
The hadronization
must involve long distance effects and is governed by
nonperturbative dynamics. These problems certainly deserve a thorough
investigation in terms of both perturbative and nonperturbative QCD.
\section*{\normalsize\bf III. Search for Glueballs in Charmonium Decays}
Existence of the non-Abelian gluon field is the key hypothesis of QCD, and
observation of glueballs will be the most direct confirmation of the
existence of gluon field. Charmonium radiative decays into light hadrons
proceed via $c\bar{c}\rightarrow\gamma + g + g $ and are then the gluon-rich
channels. Therefore, charmonium especially $J/\psi$ radiative decays are
very important processes in the search for glueballs. Recent experimental
studies indicate that there are at least three possible candidates of
glueballs which are related to $J/\psi$ radiative decays.
$\bullet$~$\xi(2230)$~~~$J^{PC}=(?)^{++}$.
The new data from BES \cite{li}
\cite{jin} confirmed the Mark III result \cite{mark} and found four decay
modes of $\xi\rightarrow\pi^+\pi^-$, $K^+K^-$,
$K_SK_S$, $p\bar{p}$ in $J/\psi
\rightarrow \gamma\xi$ with a narrow width of $\Gamma_\xi\approx20$ MeV.
The branching ratios are found to be
$B(J/\psi\rightarrow\gamma\xi)\times B(\xi\rightarrow X)\approx
(5.6,3.3,2.7,1.5)\times 10^{-5}$ respectively for $X=\pi^+\pi^-, K^+K^-,
K_SK_S, p\bar{p}$.
Combining these data with the PS $185$ experiment on $p\bar{p}\rightarrow
\xi(2230)\rightarrow K\bar{K}$ \cite{ps185}:
$B(\xi\rightarrow p\bar{p})\times B(\xi\rightarrow K\bar{K})
<1.5\times 10^{-4}$ (for J=2)
reveals some distinct features of the
$\xi(2230)$: the very narrow partial decay widths to $\pi\pi$ and $K\bar{K}$
(less than $1$ MeV with branching ratios less than $5$\%); the large production
rate in $J/\psi$ radiative decays ($B(J/\psi\rightarrow\gamma\xi)>2\times
10^{-3}$); the flavor-symmetric couplings to $\pi\pi$ and $K\bar{K}$. These
features make $\xi(2230)$ unlikely to be a $q\bar{q}$ meson but likely to be
a $J^{PC}=(even)^{++}$ glueball \cite{chao}\cite{hjzc}.
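The quoted lower bound arises from simple arithmetic (a sketch, using the
$K^+K^-$ mode as a proxy for $K\bar{K}$): multiplying the two BES products and
dividing by the PS~185 limit gives
\begin{equation}
B(J/\psi\rightarrow\gamma\xi)^2 >
\frac{(1.5\times 10^{-5})(3.3\times 10^{-5})}{1.5\times 10^{-4}}
\approx 3\times 10^{-6},
\end{equation}
i.e., $B(J/\psi\rightarrow\gamma\xi)>1.8\times 10^{-3}$, essentially the bound
quoted above.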
The $\xi(2230)$ was once interpreted as an $s\bar{s}$ meson\cite{gold}. But a
recent quark model calculation\cite{harry} for decays of $1^{3}F_2$ and
$1^{3}F_4$ $s\bar{s}$ mesons shows that the widths of $1^{3}F_2$ and
$1^{3}F_4$ $s\bar{s}$ mesons are larger than $400$ MeV
and $130$ MeV respectively.
The partial width of $1^{3}F_4$ to $K\bar{K}$ is predicted to be $(14-118)$MeV,
also much larger than that of $\xi(2230)$. Moreover, the lattice
study of $SU(3)$ glueballs by the UKQCD group suggests that the mass of the
$2^{++}$ glueball is $2270\pm100$~MeV\cite{ukqcd}, consistent
with the mass of $\xi(2230)$.
But the spin of $\xi(2230)$ has not been
determined yet. ($J^{PC}=4^{++}$ would
not favor a glueball because it would require a non-S wave orbital angular
momentum between the constituent gluons, and then lead to higher mass and
lower production rate in $J/\psi$ radiative decays than $\xi(2230)$).
Moreover, in order to see through
the nature of $\xi(2230)$ (e.g., by further examining
the flavor-symmetric decays and the difference between
glueball and $q\bar{q}g$ hybrid), more data are needed for other decay modes,
such as $\eta\eta$, $\eta\eta'$, $\eta'\eta'$ and $\pi\pi\pi\pi$,
$\pi\pi K\bar{K}$, $\rho\rho$, $K^*\bar{K^*}$, $\omega\phi$, $\phi\phi$, etc.
$\bullet$~$\eta(1440)$~~~$J^{PC}=0^{-+}$.
For years this state has been regarded as a good candidate for the $0^{-+}$
glueball.
However, since both Mark III \cite{bai}
and DM2\cite{dm2} find three structures (two $0^{-+}$ and one $1^{++}$ ) in
the energy region $1400-1500$MeV in $J/\psi\rightarrow \gamma K\bar{K}\pi$,
the status of $\iota/\eta(1440)$ as a $0^{-+}$ glueball is somewhat shaky.
But the new (preliminary) generalized moment analysis of BES \cite{ma},
which avoids the complicated coupling effects from different intermediate
states ($K^*\bar{K}$ and $a_0\pi$),
indicates
that the $\eta(1440)$, being one of the three structures, may have a larger
production rate in $J/\psi$ radiative decays with
$B(J/\psi\rightarrow\gamma\eta(1440))
\cdot B(\eta(1440)\rightarrow K\bar{K}\pi)\approx2\times10^{-3}$. This may
reinforce the $\eta(1440)$ being a $0^{-+}$ glueball candidate.
While more data
and analyses are needed to clarify the discrepancies between Mark III,
DM2, and BES, some theoretical arguments support $\iota/\eta(1440)$ being a
$0^{-+}$ glueball.
The helicity conservation argument favors $0^{-+}$ glueball decaying
predominantly to $K\bar{K}\pi$ \cite{chano}. Working to lowest order in
$1/N_c$ and using chiral lagrangians also get the same
conclusion \cite{goun}. However, the lattice QCD calculation by UKQCD predicts
the mass of $0^{-+}$ glueball to be $\sim 2300$MeV \cite{ukqcd}, much higher
than 1440 MeV.
$\bullet$~$f_0(1500)$~~~$J^{PC}=0^{++}$.
The Crystal Barrel (CBAR) Collaboration
\cite{cbar} at LEAR has found $f_0(1500)$ in $p\bar{p}\rightarrow
\pi^0 f_0(1500)$ followed by $f_0(1500)\rightarrow\pi^0\pi^0,~\eta\eta,~\eta
\eta'$. This state might be the same particle as that found by WA91
Collaboration
in central production $pp\rightarrow p_f(2\pi^+ 2\pi^-)p_s$ \cite{wa91},
and that found by GAMS in $\pi^- p\rightarrow \eta\eta{'} n,~\eta
\eta n,~4\pi^0 n$,
namely, the $G(1590)$ \cite{gams}. So far no signals have been seen in
$J/\psi$ radiative decays in channels like $\pi\pi,~\eta\eta,~\eta\eta'$ for
$f_0(1500)$. However, it is reported recently that re-analysis of Mark III
data on $ J/\psi\rightarrow\gamma(4\pi)$ reveals a resonance with
$J^{PC}=0^{++}$ at $1505$ MeV, which has a strong $\sigma\sigma$ decay mode
\cite{bugg}. If this result is confirmed, $f_0(1500)$ may have been seen in
three gluon-rich processes, i.e., the $p\bar{p}$ annihilation, the central
production with double Pomeron exchange, and the $J/\psi$ radiative decays,
and is therefore a good candidate for $0^{++}$ glueball. It will be
interesting to see whether $f_0(1500)\rightarrow\sigma\sigma\rightarrow 4\pi$
is the main decay mode in the CBAR experiment. The mass of $f_0(1500)$ is
consistent with the UKQCD lattice calculation \cite{ukqcd}.
The theoretical understanding of glueballs is still rather limited. There are
different arguments regarding whether glueball decays are flavor-symmetric.
$\bullet$~Helicity Conservation.
It was argued by Chanowitz \cite{chano} that although
glueballs are $SU(3)$ flavor singlets it is inadequate to use this as a
criterion for identifying them because large $SU(3)$ breaking may affect their
decays. In lowest order perturbation theory the decay amplitude
is expected to be proportional to the quark mass
\begin{eqnarray}
M(gg\rightarrow q\bar{q})_{J=0}\propto m_q,
\end{eqnarray}
so that decays to $s\bar{s}$ are much stronger than $u\bar{u}+d\bar{d}$ for
$0^{++}$ and $0^{-+}$ glueballs. This is a consequence of ``helicity
conservation''- the same reason that $\Gamma(\pi\rightarrow \mu\nu)
\gg\Gamma(\pi
\rightarrow e\nu)$, and this might explain why
$\iota/\eta(1440)\rightarrow K\bar{K}\pi$
is dominant.
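For orientation, the same helicity argument yields the textbook suppression
(standard masses inserted; this number is not taken from Ref.~\cite{chano}):
\begin{equation}
\frac{\Gamma(\pi\rightarrow e\nu)}{\Gamma(\pi\rightarrow \mu\nu)}
=\frac{m_e^2}{m_\mu^2}
\left(\frac{m_\pi^2-m_e^2}{m_\pi^2-m_\mu^2}\right)^2\approx 1.3\times 10^{-4}.
\end{equation}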
$\bullet$~Discoloring of gluons by gluons.
It was argued by Gershtein
{\it et al.} \cite{gers} that due to the QCD axial anomaly the matrix element
$\alpha_s\!<0|G\tilde{G}|\eta'>$ acquires a large value. Therefore, if the glueball
decay proceeds via production of a pair of gluons from the vacuum and
recombination of the gluons in the initial state with the produced gluons,
then decays into $\eta'$ will be favored. This may explain why the
$0^{++}~ G(1590)$ has a larger decay rate into $\eta\eta'$ than $\eta\eta,~
\pi\pi,~KK$.
$\bullet$~Glueball-$q\bar{q}$ mixing.
It was argued by Amsler and
Close \cite{ac} that for a pure glueball $G_0$ flavor democracy (equal
gluon couplings to $u\bar{u},~d\bar{d}$ and $s\bar{s}$) will lead to the
relative decay branching ratios $\pi\pi:K\bar{K}:\eta\eta:\eta\eta'=3:4:1:0$.
Then by mixing with nearby $q\bar{q}$ isoscalars the mixed glueball state
becomes
\begin{eqnarray}
|G>=|G_0> + \xi(|u\bar{u}> + |d\bar{d}> + \omega|s\bar{s}>),
\end{eqnarray}
and the observed decay branching ratios $\pi\pi:K\bar{K}:\eta\eta:\eta\eta'
=1:<0.1:0.27:0.19$ for $f_0(1500)$ may be explained in a color flux-tube
model with certain values for mixing angles $\xi$ and $\omega$ with the nearby
$f_0(1370)$ (an $u\bar{u}+d\bar{d}$ state) and an $s\bar{s}$ state in the 1600
MeV region. The problem for $f_0(1500)$ and $f_0(1710)$ has
also been discussed in Ref.\cite{torn}.
$\bullet$~Resemblance to charmonium decays.
It was argued\cite{chao} that
pure glueball decays may bear resemblance to charmonium decays, e.g., to the
$\chi_{c0}(0^{++})$ and $\chi_{c2}(2^{++})$ decays. Both $\chi_{c0}$ and
$\chi_{c2}$ decays may proceed via two steps: first the $c\bar{c}$ pair
annihilate into two gluons, and then the two gluons hadronize into light mesons
and baryons. The gluon hadronization appears to be flavor-symmetric. This is
supported by the $\chi_{c0}$ and $\chi_{c2}$ decays, e.g., $\chi_{c0}$ is
found to have the same decay rate to $\pi^+\pi^-$ as to $K^+K^-$, and the
same decay rate to $\pi^+\pi^- \pi^+ \pi^- $ as to $\pi^+ \pi^-K^+K^-$,
and this is also true for $\chi_{c2}$ decays.
For a glueball, say, a $2^{++}$ glueball, its
decay proceeds via two gluon hadronization, which is similar to the
second step of
the $\chi_{c2}$ decay. Therefore, a pure $2^{++}$ glueball may have
flavor-symmetric decays. Furthermore, the $2^{++}$ glueball, if lying in the
2230 MeV region, can only have little mixing with nearby L=3~ $2^{++}$
quarkonium
states, because these $q\bar{q}$ states have vanishing wave functions at the
origin due to the high angular-momentum barrier, which prevents the $q\bar{q}$
pair from being annihilated
into gluons and then mixed with the $2^{++}$ glueball.
This might explain why
$\xi(2230)$ has flavor-symmetric couplings to $\pi^+\pi^-$ and $K^+K^-$, if
it is nearly a pure $2^{++}$ glueball.
In addition, the gluon hadronization leads to many
decay modes for $\chi_{c0}$ and $\chi_{c2}$, therefore the $2^{++}$ glueball
may also have many decay modes. In comparison, the observed branching ratios
for $\xi(2230)\rightarrow\pi^+\pi^-,~K^+ K^-,~K_S K_S,~p\bar{p}$ may not
exceed 6 percent. This might be very different from the conventional
$q\bar{q}$ mesons, which usually have some dominant two-body decay modes.
The above discussions indicate that the decay pattern of glueballs could be
rather complicated, and a deeper theoretical understanding is needed to reduce
the uncertainties. As for the glueball mass spectrum, despite the remarkable
progress made in lattice QCD calculations \cite{teper}\cite{ukqcd}\cite{ibm},
uncertainties in estimating glueball masses are still not small. For
instance, for $0^{++}$ glueball UKQCD group gives $M=1550\pm50$MeV
\cite{ukqcd}, while IBM group gets $M=1740\pm71$MeV and $\Gamma=108\pm29$MeV
\cite{ibm}.
Another advance in lattice calculations concerns the glueball matrix
elements. For example, a calculation for $<0|Tr(g^2 G_{\mu\nu}G_{\mu\nu})|G>$
predicts a branching ratio of $5\times 10^{-3}$ in $J/\psi$ radiative decays
for $0^{++}$ glueball \cite{liang}. This may provide useful information on
distinguishing between $f_0(1500)$ and $\theta(1720)$, or other possible
candidates for $0^{++}$ glueball. If $f_0(1500)$ is the $ 0^{++}$ glueball,
it should have some important decay modes (e.g., $4\pi$) to
show up in $J/\psi$
radiative decays.
In summary, while the situation in searching for glueballs via charmonium
decays is very encouraging, especially with the $\tau$-charm factory in the
future, more theoretical work should be done to make more certain predictions
on the glueball mass spectrum, the widths, the transition matrix elements, and
in particular the decay patterns.
\section*{\normalsize\bf IV. Prompt Charmonium Production at Tevatron and
Fragmentation of Quarks and Gluons}
The study of charmonium in high energy hadron
collisions may provide an important testing ground for both perturbative
QCD and nonperturbative QCD.
In earlier calculations \cite{ruc},
in hadronic collisions the leading order processes
\begin{equation}
gg\rightarrow g\psi,~~gg,q\bar{q}\rightarrow g\chi_c (\chi_c\rightarrow\gamma\psi),~~
qg\rightarrow q\chi_c (\chi_c\rightarrow\gamma\psi),
\end{equation}
were assumed to give dominant contributions to the cross section. But they
could not reproduce the observed data for charmonium with large transverse
momentum. This implies that some new production mechanisms should be important.
These are the quark fragmentation and gluon fragmentation.
\subsection*{\normalsize\bf 1. Quark Fragmentation}
In essence the quark fragmentation was first numerically evaluated in a
calculation
for the $Z^0$ decay $Z^0\rightarrow\psi c\bar{c}$ by Barger, Cheung, and
Keung\cite{barg}(for other earlier discussions on fragmentation mechanisms
see ref.\cite{hagi}). This decay proceeds via $Z^0\rightarrow c\bar{c}$,
followed
by the splitting $c\rightarrow\psi c$ or $\bar{c}\rightarrow\psi\bar{c}$ (see
Fig.1), of which the rate is two orders of magnitude larger than that for
$Z^0\rightarrow \psi gg$\cite{gub}, because the fragmentation contribution is
enhanced by a factor of $(M_Z/{m_c})^2$ due to the fact that in fragmentation
the charmonium ($c\bar{c}$ bound state) is produced with a seperation of order
$1/{m_c}$ rather than $1/{m_Z}$ as in the previous short-distance processes,
e.g., $Z^0\rightarrow \psi gg$.
\vskip 4cm
\begin{center}
\begin{minipage}{120mm}
{\footnotesize Fig.1 The quark
fragmentation mechanism. $\psi$ is produced by
the charm quark splitting $c \rightarrow\!\psi c$.}
\end{minipage}
\end{center}
These numerical calculations, which are based on the fragmentation mechanisms,
can be approximately (in the limit $m_c/{m_Z}\rightarrow 0$) re-expressed in
a clearer and more concise manner in terms of the quark fragmentation
functions, which were
studied analytically by Chang and Chen\cite{chang}\cite{chen},
and by Braaten, Cheung, and Yuan\cite{bcy1}. The quark fragmentation
functions can be calculated in QCD using the Feynman diagram shown in Fig. 1.
For instance, the fragmentation function $D_{c\rightarrow \psi}(z,\mu)$, which
describes the probability
of a charm quark to split into the $J/\psi$ with longitudinal momentum fraction
z and at scale $\mu$, is given by\cite{bcy1}
\begin{eqnarray}
D_{c\rightarrow \psi}(z,3m_c)=\frac{8}{27\pi}\alpha_s(2m_c)^2\frac{|R(0)|^2}
{m_c^3}\frac{z(1-z)^2(16-32z+72z^2-32z^3+5z^4)}{(2-z)^6},
\end{eqnarray}
where $\mu=3m_c$ and R(0) is the radial wave function at the origin of
$J/\psi$. Large logarithms of $\mu/{m_c}$ for $\mu=O(m_Z)$ appearing in
$D_{i\rightarrow\psi}(z,\mu)$ can be summed up by solving
the evolution equation
\begin{eqnarray}
\mu\frac{\partial}{\partial \mu}D_{i\rightarrow\psi}(z,\mu)=\sum\limits_{j}
\int^{1}_{z}\frac{dy}{y}P_{i\rightarrow j}(z/y,\mu)D_{j\rightarrow\psi}(y,\mu),
\end{eqnarray}
where $P_{i\rightarrow j}(x,\mu)$ is the
Altarelli-Parisi function for the splitting
of the parton of type $i$ into a parton of type $j$ with longitudinal momentum
fraction $x$. The total rate for inclusive $\psi$ production is approximately
\begin{eqnarray}
\Gamma(Z^0\rightarrow\psi+X)=2\widehat{\Gamma}(Z^0\rightarrow c\bar{c})
\int_{0}^{1}dz D_{c\rightarrow\psi}(z, 3m_c).
\end{eqnarray}
Then the branching ratio for the decay of $Z^0$ into $\psi$ relative to decay
into $c\bar{c}$ is
\begin{eqnarray}
\frac{\Gamma(Z^0\rightarrow\psi c\bar{c})}{\Gamma(Z^0\rightarrow c\bar{c})}
=0.0234 \alpha_s(2m_c)^2\frac{|R(0)|^2}{m_c^3}\approx 2\times 10^{-4},
\end{eqnarray}
which agrees with the complete leading order calculation of $Z^0
\rightarrow\psi c\bar{c}$ in Ref.\cite{barg}.
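The numerical coefficient in this relation can be reproduced by integrating
the fragmentation function directly; the following sketch uses purely
illustrative values of $\alpha_s(2m_c)$, $|R(0)|^2$ and $m_c$, which are not
the inputs of the original calculation:
\begin{verbatim}
# Integrate the z-dependence of D_{c->psi}(z, 3 m_c) quoted above;
# the prefactor 8 alpha_s^2 |R(0)|^2 / (27 pi m_c^3) is factored out.
from math import pi
from scipy.integrate import quad

def P(z):
    poly = 16 - 32*z + 72*z**2 - 32*z**3 + 5*z**4
    return z*(1 - z)**2*poly/(2 - z)**6

I, _ = quad(P, 0.0, 1.0)
coeff = 2*(8/(27*pi))*I       # factor 2: c and cbar both fragment
print(coeff)                  # ~0.0234, as in the relation above

alpha_s, R0sq, mc = 0.19, 0.8, 1.5    # illustrative values only
print(coeff*alpha_s**2*R0sq/mc**3)    # ~2e-4
\end{verbatim}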
Using the fragmentation functions, the production rates of the $B_c$ meson are
predicted in Ref.\cite{chang}. E.g., the branching ratio of $B_c$ in $Z^0$
decay is about $R\approx 7.2\times 10^{-5}$ (see also Ref.\cite{chang1}).
The quark fragmentation functions to P-wave mesons have also been calculated
\cite{chen}\cite{yuan}.
\subsection*{\normalsize\bf 2. Gluon Fragmentation}
Like quark fragmentation, gluon fragmentation may also be the dominant
production mechanism for heavy quark-antiquark bound states (e.g., charmonium)
with large transverse momentum.
In previous calculations, e.g. in the gluon fusion process, charmonium states
with large $P_T$ were assumed to be produced by short distance mechanisms,
i.e., the $c$ and $\bar{c}$ are created with transverse seperations of order
$1/{P_T}$, as shown in Fig.2(a) for $gg\rightarrow\eta_c g$. However, in the
gluon fragmentation mechanism the $\eta_c$ is produced by the gluon splitting
$g\rightarrow\eta_c g$ (while $J/\psi$ is produced by $g\rightarrow\psi gg$),
as shown in Fig.2(b).
For gluon fragmentation,
in the kinematic region where the virtual gluon and $\eta_c$ are collinear, the
propagator of this gluon is off shell only by an amount of order $m_c$, and
enhances the cross section by a factor of $P_T^2/{m_c^2}$. If $P_T$ is
large enough, this will overcome the extra power of the coupling constant
$\alpha_s$, as compared with the short distance leading order process
$gg\rightarrow\eta_c g$. The gluon fragmentation functions
were
calculated by Braaten and Yuan\cite{by}
\begin{eqnarray}
\int^{1}_{0} dz D_{g\rightarrow\eta_c}
(z,2m_c)=\frac{1}{72\pi}\alpha_s(2m_c)^2
\frac{|R(0)|^2}{m_c^3},
\end{eqnarray}
\begin{eqnarray}
\int^{1}_{0}dz D_{g\rightarrow\psi}(z,2m_c)=(1.2\times 10^{-3})
\alpha_s(2m_c)^3\frac{|R(0)|^2}{m_c^3}.
\end{eqnarray}
where the latter is estimated to be smaller
than the former by almost an order of magnitude.
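Indeed, taking the ratio of the two integrated probabilities above gives the
one-line check
\begin{equation}
\frac{\int_0^1 dz\, D_{g\rightarrow\psi}(z,2m_c)}
{\int_0^1 dz\, D_{g\rightarrow\eta_c}(z,2m_c)}
= 1.2\times 10^{-3}\cdot 72\pi\,\alpha_s(2m_c)\simeq 0.27\,\alpha_s(2m_c),
\end{equation}
which is about $1/14$ for an illustrative $\alpha_s(2m_c)\approx 0.26$.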
\vskip 5cm
\begin{center}
\begin{minipage}{120mm}
{\footnotesize
Fig.2(a) A Feynman diagram for $gg\rightarrow c\bar{c}g$ that contributes to $\eta_c$
production at order $\alpha_s^3$;~~
Fig.2(b) A Feynman diagram for
$gg\rightarrow c\bar{c}gg$ that contributes to $\eta_c$ production at order
$\alpha_s^4$. For the virtual gluon at large $P_T$, with~$q_0=O(P_T),~
q^2=O(m_c^2)$, the contribution is dominant.}
\end{minipage}
\end{center}
The gluon fragmentation into P-wave heavy quarkonium was also
studied\cite{by1}. The P-wave state (e.g., $\chi_{cJ}$) can arise from two
sources, i.e., the production of a color-singlet
P-wave state, and the production
of a $c\bar{c}$ pair in a color-octet S-wave state, which is then projected
onto the $\chi_{cJ}$ wave functions. With two parameters which characterize
the long-distance effects, i.e., the derivative of P-wave wavefunction at the
origin and the probability for an S-wave color-octet $c\bar{c}$ pair in
the color-singlet $\chi_{cJ}$ bound state, the fragmentation probabilities
for a high transverse momentum gluon to split into $\chi_{c0},~\chi_{c1},~
\chi_{c2}$ are estimated to be $0.4\times10^{-4},~1.8\times 10^{-4},~
2.4\times10^{-4},$ respectively\cite{by1}. They could be the main source
of $\chi_{cJ}$ production at large $P_T$ in $p\bar{p}$ colliders.
Since fragmenting gluons are approximately transverse, their products are
significantly polarized. Cho, Wise, and Trivedi\cite{cwt} find that in gluon
fragmentation to $\chi_{cJ}(1P)$ followed by $\chi_{cJ}\rightarrow
\gamma J/\psi$ the helicity levels of $\chi_{c1},~\chi_{c2}$, and $J/\psi $
are populated according to certain ratios, e.g., $D_{\chi_{c1}}^{h=0}:
D_{\chi_{c1}}^{|h|=1}\approx 1:1$, $D_{J/\psi}^{h=0}:D_{J/\psi}^{|h|=1}\approx
1:3.4$.
The gluon fragmentation to $J^{PC}=2^{-+}~~^1D_2$
quarkonia was also studied,
and the polarized fragmentation functions of these D-wave states
were computed \cite{wise}.
\subsection*{\normalsize\bf 3. The $\psi'$ surplus problem at the Tevatron}
In 1994, theoretical calculations were compared with data on inclusive
$J/\psi$ and $\psi'$ production at large transverse momentum at the Tevatron
\cite{cdf}, where large production cross sections were observed. The
calculations include both the conventional leading order mechanisms and the
charm and gluon fragmentation contributions \cite{cg}\cite{bdfm}.
For $\psi$ production both fragmentation
directly into $\psi$ and fragmentation
into $\chi_c$ followed by the radiative decay $\chi_c \rightarrow \psi + \gamma$ are
considered. Fragmentation functions for
$g \rightarrow \psi, ~c \rightarrow \psi,~g \rightarrow \chi_c,~c \rightarrow \chi_c,$ and $\gamma\rightarrow\psi$
are used.
These calculations indicate that
(1) Fragmentation dominates over the leading-order mechanisms for $P_T >5$ GeV.
(2) The dominant production mechanism by an order of magnitude is gluon
fragmentation into $\chi_c$ followed by $\chi_c\rightarrow\gamma\psi$.
For $\psi'$ production the fragmentations $g \rightarrow \psi',~c \rightarrow \psi',
~\gamma \rightarrow\psi'$ and the leading order mechanisms are included but no
contribution from any higher charmonium states is taken
into consideration. The
dominant production mechanisms are gluon-gluon fusion for $P_T<5$GeV, and
charm quark fragmentation into $\psi'$ for large $P_T$. However, the calculated
production cross section of $\psi'$ is too small by more than an order of
magnitude (roughly a factor of 30) \cite{bdfm}\cite{rs1}.
This serious disagreement, the so-called $\psi'$ surplus problem, has caused
many theoretical speculations.
The radially excited $2^3P_{1,2}~(\chi_{c1}(2P)$ and $\chi_{c2}(2P))$ states
have been suggested to explain the $\psi'$ surplus problem \cite{cwt}
\cite{rs2}\cite{close}. These states can be produced via gluon and charm
fragmentation as well as the conventional gluon fusion mechanism, and then
decay into $\gamma\psi'$ through E1 transitions. Large branching ratios of
$B(\chi_{cJ}(2P)\rightarrow\psi'(2S)+\gamma)=(5\sim10)\%$ (J=1, 2) are required to
explain the $\psi'$ production enhancement. Within the potential model with
linear confinement, the masses of these 2P states are predicted to be, e.g.,
$ M(\chi_{c0}(2P))=3920 MeV,~M(\chi_{c1}(2P))=3950 MeV,$ and
$M(\chi_{c2}(2P))=3980 MeV$ \cite{isgur}, therefore OZI-allowed hadronic
decays like $\chi_{c0}(2P)\rightarrow D\bar{D}$,~$ \chi_{c1}(2P)\rightarrow D^*\bar{D}+c.c.,$
and $\chi_{c2}(2P)\rightarrow D\bar{D},~D^*\bar{D}+c.c.$, can occur. It is not clear
whether these hadronic widths are narrow, making the branching ratios
$B(\chi_{cJ}(2P)\rightarrow \psi'(2S)+\gamma)$ large enough to explain the $\psi'$
production data.
One possibility is that since decays
$\chi_{c2}(2P)\rightarrow D\bar{D},~D^*\bar{D}+c.c.$
proceed via $L=2$ partial waves, they could be suppressed \cite{cwt}.
These OZI-allowed hadronic decays are estimated in a flux-tube model and they
could be further suppressed (aside from the D-wave phase space for
$\chi_{c2}(2P)$) due to the node structure in the radial wave functions of
excited states \cite{page}. With suitable parameters, the widths of
$\chi_{c2}(2P)$ and $\chi_{c1}(2P)$ could be as narrow as
$\Gamma\approx(1\sim 10)$MeV.
There is another possibility that the $\chi_{c1}(2P)$ could lie below the
$D^*\bar D$ threshold, and then with roughly estimated $\Gamma(\chi_
{c1}(2P)\rightarrow $ light hadrons)$\approx 640$ KeV, $\Gamma(\chi_{c1}(2P)\rightarrow
\gamma \psi') \approx 85$ KeV, one could get $B(\chi_{c1}(2P)\rightarrow
\gamma \psi')\approx 12\%$ \cite{cq}. This
possibility relies on the expectation
that the color screening effect of light quark pairs on the heavy $Q$-$\bar{Q}$
potential, observed in lattice QCD calculations, would lead to a screened
confinement potential which makes the level spacings of excited $c\bar{c}$
states lower than those obtained using the linear potential (e.g.,
$\psi(4160)$ and $\psi(4415)$ could be 4S and 5S rather than 2D and 4S
states, respectively).
Moreover, it is suggested that the $c\bar{c}g$ hybrid states could make a
significant contribution to $J/\psi$ and $\psi'$ signals at the
Tevatron\cite{close}, since the color octet production mechanism is
expected to be important, and hybrid states contain gluonic excitations in
which the $c\bar{c}$ are in the S-wave color octet configuration. In
particular, the negative parity hybrid states, including $(0, 1, 2)^{-+},~
1^{-+}$, lying in the range
$4.2\pm 0.2$GeV, could be a copious source of $J/\psi$ and $\psi'$,
through radiative and hadronic transitions.
\subsection*{\normalsize\bf 4. Color Octet Fragmentation Mechanism}
Based on a general factorization analysis of the annihilation and
production of heavy quarkonium \cite{bbl}\cite{lepage},
Braaten and Fleming proposed a
new mechanism, i.e. the color-octet fragmentation
for the $J/\psi$ and $\psi'$ production at large $P_T$
\cite{bf}.
In the framework of NRQCD theory \cite{lepage}\cite{bbl},
which is based on a double power series expansion
in the strong coupling constant $\alpha_s$ and the small velocity parameter
$v$ of the heavy quark, the fragmentation functions can be factored into
short-distance coefficients and long-distance factors that contain
all the nonperturbative dynamics of the formation of a bound state containing
the $c\bar{c}$ pair. E.g., for $g\!\rightarrow\!\psi$ fragmentation
\begin{equation}
D_{g\rightarrow\psi}(z,\mu)=\sum\limits_{n}d_n(z,\mu)<0|{\cal O}_n^{\psi}|0>,
\end{equation}
where ${\cal O}_n^{\psi}$ are local four fermion (quark) operators.
For the physical $\psi$ state, the wavefunction can be expressed as a Fock
state decomposition which includes dynamical gluons and color-octet
$(Q\bar Q)_8$ components
\begin{eqnarray}
|\psi> &=& O(1)|(Q\bar{Q})_1(^{3}S_1)>+O(v)|(Q\bar{Q})_8(^{3}P_J)g>
\nonumber \\
&+& O(v^2)|(Q\bar{Q})_8(^{3}S_1)gg> + \cdots .
\end{eqnarray}
Therefore there are two mechanisms for gluon fragmentation into $\psi$:
(1) Color-singlet fragmentation $g^*\rightarrow c\bar{c}gg$.
Here $c\bar{c}$ is produced in a color-singlet $^3S_1$ state.
The matrix element $<{\cal O}^{\psi}_1(^3S_1)>$ is of order $m_c^3v^3$, which
is related to the Fock state $|(c\bar{c})_1(^3S_1)>$ in $\psi$, so the
contribution to fragmentation function is of order $\alpha_s^3v^3$.
(2) Color-octet fragmentation $g^*\rightarrow c\bar{c}$.
Here the $c\bar{c}$ is produced in a color-octet $^3S_1$ state.
The matrix element $<{\cal O}^{\psi}_8(^3S_1)>$ is of order $m_c^3v^7$,
which is related to the Fock state
$|(c\bar{c})_8(^3S_1)gg>$ in $\psi$, so the contribution
to fragmentation function is of order $\alpha_s v^7$.
It is clear that the color-octet fragmentation $g^*\rightarrow c\bar{c}$ is enhanced
by a factor of $\sim\alpha_s^{-2}$ from the short-distance coefficients, and
suppressed by a factor of $\sim v^4$ from the long-distance matrix elements,
as compared with the color-singlet fragmentation. Since for charmonium
$v^2\approx 0.25\sim 0.30$ is not very small, the color-octet fragmentation
could be dominant in some cases, e.g., in the $\psi'$ production at
large transverse momentum.
In the case of $\psi'$, if the observed large cross section is really
due to color-octet
fragmentation, the matrix element $<{\cal O}^{\psi'}_8(^3S_1)>$ can be
determined by fitting the CDF data on the $\psi'$ production rate at large
$P_T$
\begin{eqnarray}
<{\cal O}^{\psi'}_8(^3S_1)> =0.0042 GeV^3,
\end{eqnarray}
while the color-singlet matrix element $<{\cal O}^{\psi'}_1(^3S_1)>$
is determined by the $\psi'$ leptonic decay width which is related to
the wave function at the origin
\begin{eqnarray}
<{\cal O}^{\psi'}_1(^3S_1)>\approx \frac{3}{2\pi}|R_{\psi'}|^2=0.11 GeV^3.
\end{eqnarray}
The color-octet matrix element is smaller by a factor of 25 than the
color-singlet matrix element, consistent with
suppression by $v^4$. Therefore the color-octet fragmentation could be a
possible solution to the $\psi'$ surplus problem.
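Explicitly, with the two numbers just quoted,
\begin{equation}
\frac{<{\cal O}^{\psi'}_8(^3S_1)>}{<{\cal O}^{\psi'}_1(^3S_1)>}
=\frac{0.0042}{0.11}\approx 0.038,
\end{equation}
to be compared with $v^4\approx 0.06$--$0.09$ for $v^2\approx 0.25$--$0.30$,
i.e., consistency at the order-of-magnitude level expected from the NRQCD
velocity counting.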
The color-octet fragmentation will also make a substantial contribution to
the $J/\psi$ production at large $P_T$, and may compete with gluon
fragmentation
into $\chi_{cJ}$ followed by $\chi_{cJ}\rightarrow\gamma J/\psi$.
The color-octet fragmentation mechanism might be supported by
the new data from CDF\cite{cdf} and D0\cite{d0} at the Tevatron.
New results for the fraction of $J/\psi$ which come from
the radiative decay of
$\chi_c$ are (see Fig.3 for the CDF result)
\begin{eqnarray}
CDF:~~f_{\chi}^{J/\psi}=
(32.3\pm 2.0\pm 8.5)\%~~(P_T^{J/\psi}>4GeV,~|\eta^{J/\psi}|<0.6),
\end{eqnarray}
\begin{eqnarray}
D0:~~~f_{\chi}^{J/\psi}=
(30\pm 7\pm 10)\%~~(P_T^{J/\psi}>8GeV,~|\eta^{J/\psi}|<0.6).
\end{eqnarray}
\vskip 5.5cm
\begin{center}
\begin{minipage}{120mm}
{\footnotesize Fig.3 The fraction of $J/\psi$ from $\chi_c$ as a function
of $P_T^{J/\psi}$ with the contribution from $b$ quarks removed,
measured by CDF \cite{cdfc}.}
\end{minipage}
\end{center}
This implies that the majority of prompt $J/\psi$ at large $P_T$ do not
come from $\chi_c$, and the gluon fragmentation into $\chi_c$ is not the
dominant mechanism for $J/\psi$ production at large $P_T$.
The observed production cross section of $J/\psi$ from $\chi_c$ is in
reasonable agreement with the theoretical calculations while the direct
$J/\psi$ production cross section is a large factor above the prediction
(see Fig.4 for the CDF result).
\vskip 11cm
\begin{center}
\begin{minipage}{120mm}
{\footnotesize Fig.4 Differential cross sections of prompt $J/\psi$ as
a function
of $P_T^{J/\psi}$ with the contribution from $b$ quarks removed,
measured by CDF \cite{cdfc}.
The dotted curve represents the total fragmentation contribution (but
without the color-octet fragmentation), and the dashed curve represents
the leading-order contribution \cite{bdfm}.}
\end{minipage}
\end{center}
Although
this result might favor the color-octet fragmentation mechanism for the
direct production of $J/\psi$,
it is still premature to
claim that it is the real source of $J/\psi$ production.
In order to further test the color-octet fragmentation mechanism for the
production of $J/\psi$ and, in particular, $\psi'$ at the Tevatron, some
studies are required. First, the produced $\psi'$ should be transversely
polarized \cite{wise}, and the experimental observation of a large
transverse $\psi'$ spin alignment would provide strong support for the
color-octet production mechanism of $\psi'$. Another important test is
to apply the same mechanism to the $b\bar {b}$ systems.
The integrated and differential production cross sections for the
$\Upsilon(1S), \Upsilon(2S), \Upsilon(3S)$ have been measured by both
CDF \cite{cdfb} and D0 \cite{d0}. The production rates are generally found
to be higher than those expected from color-singlet fragmentation alone. The color-octet
production mechanism does help to explain some of the discrepancies
\cite{cl}.
In this connection it is worthwhile to note that the problem of $J/\psi$
and especially $\psi'$ surplus production has also been observed by the
fixed-target experiments (e.g., in 800~GeV proton-gold collisions) \cite{fix}.
In collisions at lower energies, fragmentation is not expected to be
dominant. It is not clear whether the $\psi'$ surplus observed both at
the Tevatron and fixed-target (with the same enhancement factor of
about 25 relative to the expected production rates) has the same origin
or not. Further experimental and theoretical investigations are needed.
\section*{\normalsize\bf V. Some Results in Charmonium and Open Charm Physics}
Here some theoretical results on charmonium and open-charm hadrons
are reported.
{\bf $\bullet$}~~The $Q\bar{Q}$ spin dependent potential.
There have been many discussions about the $Q\bar{Q}$ spin dependent
potential (see e.g. \cite{ef}\cite{g}\cite{bnt}).
A new formula for the heavy-quark-antiquark spin-dependent potential is
given using the techniques developed in heavy-quark effective theory
\cite{cko}. The leading logarithmic quark mass terms
emerging from the loop contributions are explicitly extracted and summed up.
There is no renormalization scale ambiguity in this new formula. The
spin-dependent potential in the new formula is expressed in terms of three
independent color-electric and color-magnetic field correlation functions,
and it includes both the Eichten-Feinberg formula \cite{ef}\cite{g}
and one-loop QCD result \cite{bnt} as special cases.
For hyperfine splittings with $\Lambda_{\overline{MS}}=200-500 MeV$,
the new formula gives \cite{co}
$M(J/\psi)-M(\eta_c)\approx 110-120 MeV $,
$M(\Upsilon)-M(\eta_b)\approx 45-50 MeV $, and
\begin{equation}
M(^1P_1)-M(^3P_J)\approx 2-4 MeV
\end{equation}
for $c\bar{c}$, which is larger
than the present E760 result $(\sim 0.9 MeV)$ \cite{e760}, and other
theoretical
predictions (e.g. \cite{halzen}). But this tiny mass difference may be
sensitive to other effects, e.g., the coupled-channel mass shifts.
A set of general relations between the spin-independent and spin-dependent
potentials of heavy quark and antiquark interactions are derived from
reparameterization invariance in the Heavy Quark Effective Theory \cite{ck}.
They are useful in understanding the spin-independent and
spin-dependent relativistic corrections to the leading order
nonrelativistic potential.
{\bf $\bullet$}~~Relativistic corrections to $Q\bar{Q}$ decay widths and
the determination of $\alpha_s(m_Q)$.
Charmonium mass spectrum and decay rates can be very useful in determining
the QCD coupling constant $\alpha_s$. In recent years remarkable progress
has been made in lattice calculations \cite{latticec}\cite{shig}.
On the other hand,
many decay processes may be subject to substantial relativistic corrections,
making the determination of $\alpha_s$ quite uncertain
\cite{ml}\cite{kwong}.
The decay rates of $V\rightarrow 3g$ and $V\rightarrow e^+ e^-$
for $V=J/\psi$ and $\Upsilon$ may be expressed in terms of the Bethe-Salpeter
amplitudes, and to the first order relativistic correction and QCD radiative
correction it is found that \cite{chl}
\begin{eqnarray}
\Gamma(V\rightarrow e^+e^-)=\frac{4\pi {\alpha}^2 e_Q^2}{m_Q^2}
|\int d^3 q (1-\frac{2{\vec q}^2}{3m_Q^2})\psi_{Sch}(\vec q)|^2%
(1-\frac{16\alpha_s}{3\pi}), \nonumber \\
\Gamma(V\rightarrow 3g)=\frac{40({\pi}^2-9)\alpha_s^3(m_Q)}{81m_Q^2}
|\int d^3 q[1-2.95{{\vec q}^2\over{m_Q^2}}]
\psi_{Sch}(\vec q)|^2 (1-\frac{S_Q\alpha_s}{\pi}),
\end{eqnarray}
where $S_c=3.7,~ S_b=4.9$ (defined in the $\overline{MS}$ scheme at the
heavy quark mass scale) \cite{ml}\cite{kwong}.
This result shows explicitly that the relativistic correction
suppresses the gluonic
decay much more severely than the leptonic decay.
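For the actual extraction of $\alpha_s$ it is convenient to take the ratio of
the two expressions (a simple rearrangement of the formulas above, with the
corrected wave-function integrals abbreviated as $I_{3g}$ and $I_{ee}$):
\begin{equation}
\frac{\Gamma(V\rightarrow 3g)}{\Gamma(V\rightarrow e^+e^-)}
=\frac{10(\pi^2-9)\,\alpha_s^3(m_Q)}{81\pi\,\alpha^2 e_Q^2}\,
\frac{|I_{3g}|^2}{|I_{ee}|^2}\,
\frac{1-S_Q\alpha_s/\pi}{1-16\alpha_s/3\pi},
\end{equation}
in which the quark mass and the bulk of the wave-function dependence cancel.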
Using the meson wavefunctions obtained by solving the BS equation
with a QCD-inspired interquark potential, and the experimental values
of decay rates \cite{pdg}, it is found that\cite{chl}
\begin{equation}
\alpha_{s}(m_c)=0.26-0.29,~~~ \alpha_s(m_b)=0.19-0.21,
\end{equation}
at $m_c=1.5~GeV$ and $m_b=4.9~GeV$.
These values for the QCD coupling
constant are substantially enhanced, as compared with the ones obtained
without relativistic corrections. However, it should be emphasized that
these numerical results can only serve
as an improved estimate rather than a precise determination, due to large
theoretical uncertainties related to the scheme dependence of QCD radiative
corrections \cite{blm} and higher order relativistic corrections.
This result is consistent with that obtained using finite size
vertex corrections
\cite{chiang}.
{\bf $\bullet$}~~Heavy meson decay constants.
Discussions on the heavy meson decay constants are very extensive.
In the framework of heavy quark effective theory (HQET), QCD sum rules
are used to estimate the nonperturbative effects
\cite{neu}\cite{ele}\cite{ball}\cite{hl}.
The first systematic investigation was given in \cite{neu}, and a further
improvement was obtained by separating the subleading order from the
leading one \cite{ball}.
In a recent work \cite{hl1}
the SU(3) breaking effects in the leading and subleading parameters
appearing in the heavy quark expansion of the decay constants of the heavy-light
mesons are systematically analyzed to two loops accuracy using QCD sum rules.
It is found that the SU(3) breaking effects in the decay constants
of the pseudoscalar mesons are, respectively,
\begin{equation}
f_{B_s}/f_B=1.17\pm 0.03,~~~~~f_{D_s}/f_D=1.13\pm 0.03.
\end{equation}
These results are in agreement with recent lattice QCD calculations
\cite{shig}. In addition, the
ratios of vector to pseudoscalar meson decay constants are found to be
\begin{equation}
f_{B_s^*}/f_{B_s}=f_{B^*}/f_{B}=1.05\pm 0.02,
\end{equation}
and the SU(3) breaking effect
in the mass is about $82\pm 8$MeV.
Another approach to estimating nonperturbative effects on heavy mesons
is to combine HQET with chiral perturbation theory \cite{yan}.
In the framework of the heavy-light chiral perturbation theory
(HLCPT) the heavy meson decay constants are discussed \cite{gjms}
and the effects of excited states on the chiral loop corrections
are further considered \cite{falk}.
In a recent work the vector meson contributions are introduced in
HLCPT and the Lagrangian and current to the order
$1/{m_Q}$ are constructed \cite{dghj}.
With this, to the order $1/{\Lambda_{csb}^2}$ ($\Lambda_{csb}$ is
the chiral symmetry breaking scale), corrections to $f_D$ and
$f_B$ arising from coupled-channel effects to order $1/{m_c} $ and $1/{m_b}$
are calculated.
At the tree level in HLCPT, using the relativistic B-S equation with a kernel
containing a confinement term and a gluon exchange term in a covariant
generalization of the Coulomb gauge \cite{dai}, the decay constants
$f_D^{(0)}$ and $f_B^{(0)}$
when $m_Q\rightarrow \infty $ as well as the $1/{m_Q}$ corrections are calculated.
HLCPT and the heavy quark effective theory (HQET) are matched at the
scale $\Lambda_{csb}$. Adding the perturbative and
nonperturbative contributions
the values for $f_{D}$ and $f_{B}$ are found to be
\begin{equation}
f_D\approx f_B\approx 200~MeV,
\end{equation}
which is in agreement with lattice calculations
\cite{shig}.
We now turn to some new experimental results in open charm physics.
The CLEO Collaboration has reported the following results.
{\bf $\bullet$}~~More accurate or the first measurements of $D^0$ decays
\cite{c1}.
\begin{center}
\begin{tabular}{|c|c|c|}\hline
Channel & B($\%$) & PDG($\%$)\\\hline
$K^+K^-$ & 0.455$\pm$0.029$\pm$0.032 & 0.454$\pm$0.029\\\hline
$K^0\bar{K}^0$ & 0.048$\pm$0.012$\pm$0.013 & 0.11$\pm$0.04\\\hline
$K^0_SK^0_SK^0_S$ & 0.074$\pm$0.010$\pm$0.018 & 0.089$\pm$0.025\\\hline
$K^0_SK^0_S\pi^0$ & $<0.063$ at 90\% C.L. &\\\hline
$K^+K^-\pi^0$ & 0.107$\pm$0.030 &\\\hline
\end{tabular}
\end{center}
The theoretical prediction for $B(K^+K^-)$ is in the range 0.14-0.6, and for
$B(K^0\bar{K}^0)$ is 0-0.3.
{\bf $\bullet$}~~Observation of the Cabibbo-suppressed charmed baryon decays
$\Lambda^+_c\rightarrow p\phi$ and $pK^+K^-$,
compared with $\Lambda^+_c\rightarrow pK^-\pi^+$ \cite{c2}.
\begin{equation}
B(p\phi)/B(pK\pi)=0.024\pm 0.006\pm 0.003,
\end{equation}
\begin{equation}
B(pKK)/B(pK\pi)=0.039\pm 0.009\pm 0.007,
\end{equation}
\begin{equation}
B(p\phi)/B(pKK)=0.62\pm 0.20\pm 0.12.
\end{equation}
The theoretical predictions range from 0.01 to 0.05 for $B(p\phi)/B(pK\pi)$.
{\bf $\bullet$}~~Measurement of the isospin-violating decay
$D^{*+}_s\rightarrow D^+_s\pi^0$ \cite{c3}.
\begin{equation}
\frac{\Gamma(D^{*+}_s\rightarrow D^+_s\pi^0)}{\Gamma(D^{*+}_s\rightarrow
D^+_s\gamma)}=0.062^{+0.020}_{-0.018}\pm 0.022.
\end{equation}
This isospin-violating decay is expected to proceed through OZI-allowed
decay $D^{*+}_s\rightarrow D^+_s\eta$ (via the $s\bar{s}$ component in $\eta$)
and the $\eta-\pi^0$ mixing \cite{chod}. This decay also implies that
$D^{*+}_s$ has natural spin-parity (most likely $1^-$).
{\bf $\bullet$}~~Measurement of the relative branching ratios of $D^+_s$
to $\eta e^+\nu$ and $\eta^{\prime}e^+\nu$, compared to $\phi e^+\nu$
\cite{c4}.
\begin{equation}
\frac{B(D^+_s\rightarrow\eta e^+\nu)}{B(D^+_s\rightarrow\phi e^+\nu)}
=1.24\pm 0.12\pm 0.15.
\end{equation}
\begin{equation}
\frac{B(D^+_s\rightarrow\eta' e^+\nu)}{B(D^+_s\rightarrow\phi e^+\nu)}
=0.43\pm 0.11\pm 0.07.
\end{equation}
These results favor the prediction of the ISGW2 model \cite{isgw2}.
{\bf $\bullet$}~~Measurement of $\Xi^+_c$ decay branching ratios relative to
$\Xi^+_c\rightarrow\Xi^-\pi^+\pi^+$ \cite{c5}.
\begin{center}
\begin{tabular}{|c|c|c|}\hline
Decay Mode & events & $B/B(\Xi^+_c\rightarrow\Xi^-\pi^+\pi^+)$\\\hline
$\Sigma^+K^-\pi^+$ & 119$\pm$23 & 1.18$\pm$0.26$\pm$0.17\\\hline
$\Sigma^+K^{*0}$ & 61$\pm$17 & 0.92$\pm$0.27$\pm$0.14\\\hline
$\Lambda K^-\pi^+\pi^+$ & 61$\pm$15 & 0.58$\pm$0.16$\pm$0.07\\\hline
$\Xi^-\pi^+\pi^+$ & 131$\pm$14 & 1.0\\\hline
\end{tabular}
\end{center}
There are also some experimental results from the ARGUS Collaboration.
{\bf $\bullet$}~~Leptonic branching ratios of $D^0$ \cite{a1}.
\begin{equation}
B(D^0\rightarrow e^+\nu_eX)=6.9\pm 0.3\pm 0.5 \%,
\end{equation}
\begin{equation}
B(D^0\rightarrow \mu^+\nu_{\mu}X)=6.0\pm 0.7\pm 1.2 \%.
\end{equation}
These values are smaller than the world average values \cite{pdg}.
{\bf $\bullet$}~~Measurement of the decay $D_{s2}^+(2573)\rightarrow D^0K^+$
\cite{a2}. The observed mass and width $\Gamma=(10.4\pm 8.3\pm 3.0)~
MeV$ of this resonance are consistent with those obtained by CLEO.
{\bf $\bullet$}~~Evidence for the $\Lambda_c^{*+}(2593)$ production \cite{a3}.
Finally, the BES Collaboration has reported the leptonic branching ratio of
$D_s$ using $(148\pm 18\pm 13)~D_s$ events \cite{bs}.
\begin{equation}
B(D^+_s\rightarrow e^+\nu_eX)=(10.0^{+6.5+1.3}_{-4.6-1.2}) \%.
\end{equation}
\section*{\normalsize\bf VI. Conclusions}
While impressive progress in experiment
has been made in physics in the charm energy region, some
theoretical issues need to be clarified.
The new data and puzzles in exclusive hadronic decays of $J/\psi$
and $\psi'$ give new challenges to the theory of hadronic decays.
With the new observation for $\xi(2230)$ and $f_0(1500)$, the situation
in searching for glueballs is encouraging, but theoretical
uncertainties related to the properties of glueballs still remain and need
to be further reduced.
For the prompt production of charmonium at large transverse
momentum, gluon and quark fragmentations dominate over leading-order
parton fusions. Color-singlet fragmentation is not the dominant mechanism
for $J/\psi$ and $\psi'$ production. Color-octet fragmentation seems to
be important to explain the $J/\psi$ and, in particular, the $\psi'$
excess, but further tests are required. The mechanism of charmonium
production at fixed-target energies also needs further study.
The study of open charm physics is in continuous progress.
This is important for testing the Standard Model and understanding
both perturbative and nonperturbative QCD.
In the future, with new experiments at $e^+e^-$ colliders, hadronic
colliders, fixed
target, and, in particular, at the proposed $\tau$-charm factory, and
with the theoretical progress in lattice QCD and other nonperturbative
methods, a deeper understanding of physics in the charm energy region
will be achieved.
\section*{\normalsize\bf Acknowledgements}
I would like to thank my colleagues, in particular, Y.B. Dai,
Z.X. He, T. Huang,
Y.P. Kuang, J.M. Wu, and H. Yu for many very helpful discussions
and suggestions. I would also like to thank Y.F. Gu, S. Jin, J. Li, W.G. Li,
C.C. Zhang, Z.P. Zheng, and Y.C. Zhu for useful discussions
on experimental results. Thanks are also due to H.W. Huang, J.F. Liu, Y. Luo,
and especially C.F. Qiao for their help in preparing this report.
I also wish to thank V. Papadimitriou for providing me with the new
data from CDF.
\section{Introduction}
\par Billiards are an interesting and well studied class of
Hamiltonian systems that display a wide variety of dynamical
behaviour depending on the shape of the boundary.
The ray equations commonly considered follow from a short wavelength
expansion of the Schr\"{o}dinger equation with Dirichlet (or Neumann)
boundary conditions. A particle thus moves freely between collisions
at the boundary where it suffers specular reflection.
As an example, a square
billiard generates regular dynamics that is restricted to a
torus in phase space due to the existence of two well behaved
constants of motion. In contrast, generic trajectories in the
enclosure formed by three intersecting discs
explore the entire constant energy
surface and hence the system is ergodic. Moreover, orbits
that are nearby initially move exponentially apart with time
and hence the system is said to be metrically chaotic.
\par Polygonal billiards are a class of systems
whose properties are not as well known. The ones accessible
to computations are rational angled polygons and they generically
belong to
the class of systems referred to as pseudo-integrable \cite{PJ}.
They possess two constants of motion like their integrable
counterparts \cite{rem0} but
the invariant surface $\Gamma$ is topologically equivalent to a sphere
with multiple holes \cite{PJ} and not a torus. Such systems
are characterised by the genus $g$ where $2g$ is the
numbers of cuts required in $\Gamma$ to produce a singly
connected region (alternately, $g$ is the number of holes in
$\Gamma$). As an example, consider the 1-step billiard
in Fig.~1. For any trajectory,
$p_x^2$ and $p_y^2$ are conserved. The
invariant surface consists of four sheets (copies) corresponding to the
four possible momenta, ($\pm p_x, \pm p_y$) that it can have
and the edges of these
sheets can be identified such that the resulting surface has the
topology of a double torus.
\par The classical dynamics no longer has the simplicity of an
integrable system where a transformation to {\it action} and {\it angle}
co-ordinates enables one to solve the global evolution equations
on a torus. On the other hand, the
dynamics is non-chaotic with the only interesting feature occurring
at the singular vertex with internal angle $3\pi/2$. Here,
families of parallel rays split and
traverse different paths, a fact that limits the extent
of periodic orbit families. This is in contrast to integrable
billiards (and to the $\pi/2$ internal angles in Fig. 1)
where families of rays do not see the vertex and continue
smoothly.
\par We shall focus here on the periodic orbits of
such systems for they form the skeleton on which
generic classical motion is built. They are also the
central objects of modern
semiclassical theories \cite{MC} which provide a
duality between the quantum spectrum and the
classical length and stabilities of periodic
orbits. In polygonal billiards, primitive periodic orbits
can typically be classified under two categories
depending on whether they suffer an even or odd
number of bounces at the boundary. In both cases
however, the linearised flow is marginally unstable
since the Jacobian matrix, $J_p$, connecting the transverse
components ($u_{\perp}(t + T_p) = J_p\;u_{\perp}(t)$ where $
u_\perp = (q_\perp,p_\perp)^T$ and $T_p$ is the time period
of the orbit)
has unit eigenvalues. However, when the number of
reflections, $n_p$, at the boundary is even, the orbit
occurs in a 1-parameter family while it is isolated
when $n_p$ is odd. This follows from the fact
that the left (right) neighbourhood of an
orbit becomes the right (left) neighbourhood
on reflection so that an initial neighbourhood
can never overlap with itself after an odd number
of reflections \cite{PJ}.
\par While isolated orbits are important and need
to be incorporated in any complete {\it periodic
orbit theory}, a family of identical orbits has
greater weight and such families proliferate faster than
isolated orbits \cite{gutkin}. Besides, in a number of cases
including the L-shaped billiard of Fig.~1, isolated
periodic orbits are altogether absent since the
initial and final momenta coincide only if the
orbit undergoes an even number of reflections.
For this reason, we shall consider families of
periodic orbits in this article (see Fig.~(2) for
an example).
\par Not much is however known
about the manner in which they are organised and the
few mathematical results that exist \cite{gutkin}
concern the asymptotic properties of their proliferation
rate. For a sub-class of rational polygons where the
vertices and edges lie on an integrable lattice (the
so called almost-integrable systems \cite{gutkin}), these
asymptotic results are exact. It is known for example
that the number of periodic orbit families, $N(l)$,
increases quadratically with length, $l$, as $l \rightarrow
\infty$. For general rational polygons, rigorous results
show that $c_1 l^2 \leq N(l) \leq c_2 l^2$ for sufficiently
large values of $l$ \cite{gutkin,masur} while in case of
a regular polygon $P_n$ with $n$ sides, it is known that
$N(l) \simeq c_n l^2/A$ \cite{veech} where $c_n$ is
a number theoretic constant and $A$ denotes the area
of $P_n$.
Very little is known however about other aspects such as the
sum rules obeyed by periodic orbit families in
contrast to the limiting cases of integrable and
chaotic behaviour where these have been well studied.
Besides, it is desirable to learn about the variation
of the proliferation rate as a function of the genus
for this should tell us about the transition to
chaos in polygonal approximations of chaotic
billiards \cite{vega,cc}.
\par We shall concern ourselves primarily with
a basic sum rule obeyed by periodic orbits arising
from the conservation of probability. This leads
to the proliferation law, $N(l) = \pi b_0 l^2/ \langle a(l)
\rangle$ where $b_0$ is a constant. The quantity
$\langle a(l) \rangle$ is the average area occupied
by all families of periodic orbits with length less than
$l$ and is not a constant unlike integrable billiards.
We provide here some numerical results on how
$\langle a(l) \rangle$ changes with the length of the
orbits and the genus of the invariant surface. While
this does not allow us to make quantitative predictions,
the qualitative behaviour sheds new light on the
proliferation law for short orbits and its variation
with the genus, $g$ of the invariant surface.
\par Finally, we shall also study correlations
in the length spectrum of periodic orbits. The numerical
results provided here are the first of their kind
for generic pseudo-integrable billiards and
corroborate theoretical predictions provided
earlier \cite{PLA}.
\par The organisation of this paper is as follows.
In section \ref{sec:sum-rule}, we provide the
basic sum rule and the proliferation law in
pseudo-integrable systems. In section \ref{sec:num-po},
we discuss algorithms to determine periodic orbit
families in generic situations. This is
followed by a numerical demonstration of the
results obtained in section \ref{sec:sum-rule}
as well as an exploration of the {\it area law}.
Correlations are discussed in sections \ref{sec:corrl}
together with numerical results.
Our main conclusions are summarised in section \ref{sec:concl}.
\section{The Basic Sum Rule}
\label{sec:sum-rule}
\par The manner in which periodic orbits organise themselves
in closed systems is strongly linked to the existence of
sum rules arising from conservation laws. For example, the
fact that a particle never escapes implies that for
chaotic systems \cite{PCBE,HO}
\begin{equation} \langle \sum_p \sum_{r=1}^{\infty} {T_p \delta(t-rT_p)\over
\left |{ \det({\bf 1} - {{\bf J}_p}^r)} \right |} \rangle = 1\label{eq:hbolic}
\end{equation}
\noindent
where the summation over $p$ refers to all primitive periodic
orbits, $T_p$ is time period, ${\bf J}_p$ is the stability
matrix evaluated on the orbit and the symbol $\langle . \rangle$
denotes the average value of the expression on the
left. Since the periodic orbits are
unstable and isolated, $\left | \det({\bf 1} - {{\bf J}_p}^r) \right |
\simeq e^{\lambda_p rT_p}$, where $\lambda_p$ is the Lyapunov
exponent of the orbit. The exponential proliferation of orbits
is thus implicit in eq.~(\ref{eq:hbolic}).
\par A transparent derivation of Eq.~(\ref{eq:hbolic}) follows
from the classical evolution operator
\begin{equation}
L^t{\circ}\phi({\bf x}) = \int {\bf dy}\; \delta({\bf x} -
{\bf f}^t({\bf y}))\;\phi({\bf y}) \label{eq:def1}
\end{equation}
\noindent
where $L^t$ governs the evolution of densities $\phi({\bf x})$,
{\bf x} = ({\bf q,p}) and ${\bf f}^t$ refers to the flow in
the full phase space.
We denote by $\Lambda_n(t)$ the eigenvalue
corresponding to an eigenfunction $\phi_n({\bf x})$ such that
$L^t{\circ}\phi_n({\bf x}) = \Lambda_n(t) \phi_n({\bf x})$.
The semi-group property, $L^{t_1}\circ L^{t_2}
= L^{t_1 + t_2}$, for continuous time implies that
the eigenvalues $\{\Lambda_n(t)\}$ are of the form $\{e^{\lambda_n t}\}$.
Further, for hamiltonian flows, Eq.~(\ref{eq:def1}) implies that
there exists a unit eigenvalue corresponding
to a uniform density so that $\lambda_0 = 0$. For
strongly hyperbolic systems, $\lambda_n = -\alpha_n + i\beta_n$, $n \geq 1$,
with a negative real part implying that
\begin{equation}
{\rm Tr}\; L^t = 1 + \sum_n \exp\{-\alpha_nt + i\beta_nt\} \label{eq:trace0}
\end{equation}
\noindent
Eq.~(\ref{eq:hbolic}) is thus a restatement of Eq.~(\ref{eq:trace0})
with the trace expressed in terms of periodic orbit stabilities
and time periods.
\par For polygonal billiards, appropriate modifications
are necessary to take account of the fact that the flow
is restricted to an invariant surface that is two
dimensional. Further, since classical considerations do
not always yield the spectrum $\{\lambda_n(t)\}$, we shall
take resort to the semiclassical trace formula which
involves periodic orbit sums similar to the kind that
we shall encounter in the classical case.
\par Before considering the more general case of pseudo-integrable
billiards, we first introduce the appropriate {\it classical}
evolution operator for integrable systems. This is easily
defined as
\begin{equation}
L^t{\circ}\phi(\theta_1,\theta_2) =
\int d\theta_{1}'d\theta_{2}'\;
\delta (\theta_1 - \theta_{1}'^t)\delta (\theta_2 - \theta_{2}'^t)
\;\phi(\theta_1',\theta_2') \label{eq:prop}
\end{equation}
\noindent
where $\theta_1$ and $\theta_2$ are the angular coordinates on the
torus and evolve in time as $\theta_i^t = \omega_i (I_1,I_2)t +
\theta_i$ with $\omega_i = \partial H(I_1,I_2)/\partial I_i $ and
$I_i = {1\over 2\pi}\oint_ {\Gamma_i} {\bf p.dq} $.
Here $\Gamma _i, i = 1,2$ refer to the
two irreducible circuits on the torus and ${\bf p}$ is the momentum
conjugate to the coordinate ${\bf q}$.
\par It is easy to see that the eigenfunctions
$\{ \phi_n(\theta_1,\theta_2)\}$ are such that
$\phi_n(\theta_1^t,\theta_2^t) = \Lambda_n(t) \phi_n(\theta_1,\theta_2)$
where $\Lambda_n(t) = e^{i\alpha_n t}$. On demanding that
$\phi_n(\theta_1,\theta_2)$
be a single valued function of $(\theta_1,\theta_2)$, it follows
that $\phi_{\bf n}(\theta_1,\theta_2) = e^{i(n_1\theta_1 + n_2\theta_2)}$
where ${\bf n} = (n_1,n_2)$ is a point on the integer lattice.
Thus the eigenvalue,
$\Lambda_{\bf n}(t) = {\rm exp}\{it(n_1\omega_1 + n_2\omega_2)\}$.
\par To illustrate this, we consider a rectangular billiard
where the
hamiltonian expressed in terms of the actions,
${I_1,I_2}$ is $H(I_1,I_2) = \pi^2(I_1^2/L_1^2 + I_2^2/L_2^2)$
where $L_1,L_2$ are the lengths of the two sides.
With $I_1 = \sqrt{E}L_1\cos(\varphi )/\pi$ and
$I_2 = \sqrt{E} L_2\sin(\varphi )/\pi$, it is easy to
see that at a given energy, $E$, each torus is parametrised by a
particular value of $\varphi$. Thus
\begin{equation}
\Lambda_{\bf n}(t) =
e^{i2\pi t\sqrt{E}(n_1\cos(\varphi)/L_1 + n_2\sin(\varphi)/L_2)}
\label{eq:rect}
\end{equation}
\noindent
and the spectrum is continuous. The trace thus involves an
integration over $\varphi$ as well as a sum
over all $n_1,n_2$ :
\begin{equation}
{\rm Tr}~L^t = \sum_{\bf n} \int_{-\pi - \mu_{\bf n}}^{\pi - \mu_{\bf n}} d\varphi\;
e^{il\sqrt{E_{\bf n}}\sin(\varphi + \mu_{\bf n})}
= 2\pi \sum_{\bf n} J_0(\sqrt{E_{\bf n}}l) \label{eq:bessel}
\end{equation}
\noindent
where $J_0$ is a Bessel function,
$l = 2t\sqrt{E}$, $\tan(\mu_{\bf n}) = n_1L_2/(n_2L_1)$
and $E_{\bf n} = \pi^2(n_1^2/L_1^2 + n_2^2/L_2^2)$.
On separating out ${\bf n} = (0,0)$ from the rest and restricting
the summation to the first quadrant of
the integer lattice, it follows that
\begin{equation}
{\rm Tr}~L^t = 2\pi + 2\pi N \sum_n J_0(\sqrt{E_n}l)
\label{eq:rect3}.
\end{equation}
\noindent
where $N=4$. Note that the first term on the right merely
states the fact that there exists a unit eigenvalue
on every torus labelled by $\varphi$. Also, though we
have not invoked semiclassics at any stage, the spectrum
$\{E_n\}$ corresponds to the Neumann spectrum of the
billiard considered. We shall subsequently show that
this is true in general and for now it remains to
express the trace of the evolution operator in
terms of periodic orbits.
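\par Equation~(\ref{eq:bessel}) is easily verified numerically. The
following Python fragment (not part of the derivation; it assumes only
{\tt numpy} and {\tt scipy}) checks that the average over the torus
angle indeed reduces to a Bessel function:
\begin{verbatim}
# Check of Eq. (eq:bessel): since the integrand is 2*pi periodic,
# int_{-pi}^{pi} dphi exp(i x sin(phi)) = 2 pi J_0(x).
import numpy as np
from scipy.special import j0
from scipy.integrate import quad

for x in (0.5, 3.7, 12.0):               # x stands for sqrt(E_n) * l
    re, _ = quad(lambda p: np.cos(x * np.sin(p)), -np.pi, np.pi)
    im, _ = quad(lambda p: np.sin(x * np.sin(p)), -np.pi, np.pi)
    print(x, re, 2 * np.pi * j0(x), im)  # re matches, im vanishes
\end{verbatim}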
\par The trace of $L^t$ in the integrable case can
be expressed as
\begin{equation}
{\rm Tr}\; L^t = \int d\varphi \int d\theta_1 d\theta_2 \;
\delta (\theta_1 - \theta_{1}^t)\delta (\theta_2 - \theta_{2}^t)
\end{equation}
\noindent
It follows immediately that the only orbits that contribute
are the ones that are periodic. For a rectangular billiard,
the integrals can be evaluated quite easily and yields
\begin{equation}
{\rm Tr}\; L^t = 4 \sum_{N_1} \sum_{N_2} {4L_1L_2\over l_{N_1,N_2}}
\delta(l- l_{N_1,N_2}) \label{eq:rect4}
\end{equation}
\noindent
where $\{N_1,N_2\}$ are the winding numbers on the torus and
label periodic orbits of length $l_{N_1,N_2}$. Using
Eqns.~(\ref{eq:rect3}) and (\ref{eq:rect4}), it follows that
\begin{equation}
4 \sum_{N_1} \sum_{N_2} {4L_1L_2\over l_{N_1,N_2}}
\delta(l- l_{N_1,N_2}) = 2\pi + 2\pi N \sum_n J_0(\sqrt{E_n}l)
\label{eq:rect5}
\end{equation}
\noindent
Thus, the dominant non-oscillatory contribution to the
trace that survives
averaging is $2\pi$ and this gives rise to the analogue
of Eq.~(\ref{eq:hbolic})
\begin{equation}
4 \langle \sum_{N_1} \sum_{N_2} {4L_1L_2\over l_{N_1,N_2}}
\delta(l- l_{N_1,N_2}) \rangle = 2\pi
\end{equation}
\noindent
The proliferation rate for the rectangular billiard thus
follows from these considerations.
\par These ideas can be generalised for polygonal billiards
that are pseudo-integrable even though the structure of
the invariant surface no longer allows one to use
{\it action} and {\it angle} variables.
\par For both integrable and pseudo-integrable polygonal
billiards,
the dynamics in phase space can be viewed in a singly connected region by
executing $2g$ cuts in the invariant surface and identifying
edges appropriately. At a given energy, the motion is
parametrised by the angle, $\varphi$, that a trajectory
makes with respect to one of the edges. As a trivial example,
consider the rectangular billiard.
The singly connected region is a larger rectangle consisting
of four copies corresponding
to the four directions that a trajectory can have and these
can be glued appropriately to form a torus \cite{keller}.
As a non-trivial
example, consider the L-shaped billiard of Fig.~(1) which is
pseudo-integrable with its invariant surface having genus $g=2$.
Alternately, the surface can be represented by a singly connected
region in the plane (see Fig.~(3)) and consists of four copies
corresponding to the four possible directions an orbit can
have and these are glued appropriately.
A trajectory in phase space thus consists of parallel segments at an
angle $\varphi$ measured for example with respect to one of the sides.
It will be useful to note at this point that the same trajectory
can also be represented by parallel segments at angles
$\pi - \varphi$, $\pi + \varphi$ and $2\pi - \varphi$.
In general, the number of
directions for representing a trajectory equals the number of copies, $N$,
that constitute the invariant surface.
\par The classical propagator on an invariant surface parametrised
by $\varphi$ is thus
\begin{equation}
L^t (\varphi) {\circ}\phi({\bf q}) = \int d{\bf q'}\;
\delta({\bf q} - {\bf q}'^t(\varphi))
\;\phi({\bf q}') \label{eq:prop2}
\end{equation}
\noindent
where ${\bf q}$ refers to the position in the singly connected region
and ${\bf q}'^t(\varphi)$ is the
time evolution parametrised by $\varphi$ as described above.
\par The trace of $L^t (\varphi)$ takes into account all possible
invariant surfaces that exist and hence involves an additional integration
over $\varphi$. Thus
\begin{equation}
{\rm Tr}~L^t = \int d\varphi \int d{\bf q} \;\delta({\bf q} -
{\bf q}^t(\varphi)) \label{eq:trace3}
\end{equation}
\noindent
Clearly the only orbits that contribute are the ones that are
periodic. Further, the {\bf q} integrations are simpler if we
transform to a local coordinate system with one component
parallel to the trajectory and the other perpendicular.
Thus $\delta_{\|}(q_\| - q_\| ^t) = {1\over v}\delta (t-rT_p) $
where $v$ is the velocity, $T_p$ is the period of the
orbit and $r$ is the repetition number. Similarly, for
an orbit of period $T_p$ parametrised by the angle
$\varphi_p$, $\delta_{\bot} (q_\bot - q_\bot ^t) =
\delta (\varphi - \varphi_p)/{\left |\partial q_\bot/\partial
\varphi \right |_{\varphi = \varphi_p} }$ where
$\left |\partial q_\bot/\partial \varphi
\right |_{\varphi = \varphi_p} = rl_p$ for marginally
unstable billiards. Putting these results together and
noting that each periodic orbit occurs in general at
$N$ different values of $\varphi$, we finally have
\begin{equation}
{\rm Tr}~L^t = \sum_n \Lambda_n(t) = N~\sum_p \sum_{r=1}^{\infty}
{a_p\over rl_p}\delta(l-rl_p) \label{eq:trace4}
\end{equation}
\noindent
where $l = tv$ and the summation over $p$ refers to all primitive
periodic orbit families with length $l_p$ and occupying an area $a_p$.
Note that Eq.~(\ref{eq:rect4}) is a special case of
Eq.~(\ref{eq:trace4}) which holds for both integrable
and non-integrable polygonal billiards.
\par It is possible to re-express the periodic
orbit sum in Eq.~(\ref{eq:trace4}) starting with the appropriate
quantum trace formula \cite{neglect-isolated1}
\begin{equation}
\sum_n \delta (E - E_n) = d_{av}(E) + {1\over \sqrt{8\pi^3}}
\sum_p \sum_{r=1}^{\infty} {a_p\over \sqrt{krl_p}}\cos(krl_p -
{\pi\over 4} - \pi rn_p) \label{eq:richens}
\end{equation}
\noindent
Here $d_{av}(E)$ refers to the average
density of quantal eigenstates, $k = \sqrt{E}$,
$l_p$ is the length of a primitive
periodic orbit family. The phase $\pi r n_p$ is set to zero
while considering the Neumann spectrum while in the
Dirichlet case, $n_p$ equals the number of bounces
that the primitive orbit suffers at the boundary.
For convenience,
we have chosen $\hbar = 1$ and the mass $m=1/2$.
Starting with the function
\begin{equation}
g(l) = \sum_n f(\sqrt{E_n}l) e^{-\beta E_n} =
\int_\epsilon^\infty dE\; f(\sqrt{E}l) e^{-\beta E} \sum_n \delta(E-E_n)
\end{equation}
\noindent
where $f(x) = \sqrt{{2\over \pi x}} \cos(x - \pi/4)$
and $ 0 < \epsilon < E_0 $,
it is possible to show using Eq.~(\ref{eq:richens}) that
for polygonal billiards \cite{prl1}
\begin{equation}
\sum_p \sum_{r=1}^{\infty} {a_p \over rl_p} \delta (l-rl_p)
= 2\pi b_0 + 2\pi \sum_n f(\sqrt{E_n}l) \label{eq:myown1}
\end{equation}
\noindent
for $\beta \rightarrow 0^+$. In the above,
\begin{equation}
b_0 = \sum_p \sum_r {a_p (-1)^{rn_p} \over 4\pi}\int_0^{\epsilon}\; dE
f(\sqrt{E}l)f(\sqrt{E}rl_p)
\end{equation}
\noindent
and is a
constant \cite{prl1,rapid?}.
It follows from Eqns.~(\ref{eq:myown1}) and~(\ref{eq:trace4}) that
\begin{equation}
{\rm Tr}~L^t = \sum_p \sum_{r=1}^{\infty} { N a_p \over rl_p} \delta (l-rl_p)
= 2\pi Nb_0 + 2\pi N \sum_n f(\sqrt{E_n}l)
\label{eq:trace5}.
\end{equation}
\noindent
where $\{E_n\}$ are the Neumann eigenvalues of the system. As in case of
the rectangular billiard, the oscillatory contributions wash
out on averaging so that
\begin{equation}
\langle \sum_p \sum_{r=1}^{\infty} {a_p \over rl_p} \delta (l-rl_p)
\rangle = 2\pi b_0 \label{eq:basic}
\end{equation}
\noindent
This is the central result of this section and forms the
basic sum rule obeyed by periodic orbit families.
\par A few remarks about this derivation and the magnitude
of $b_0$ are however in order. Eq.~(\ref{eq:trace4}) is
exact for the L-shaped billiard and all other boundaries
which preclude the existence of isolated periodic orbits.
However, even for these shapes, the semiclassical
trace formula is only an approximation to the exact
density whenever the billiard in question is pseudo-integrable.
In this sense, the sum rule in Eq.~(\ref{eq:basic}) is
not expected to be exact. However, we believe the existence
of higher order corrections (such as diffraction) affects
the magnitude of the constant $b_0$ while preserving
the constant nature of the periodic orbit sum on the
left.
\par In the integrable case it is easy to show from
other considerations that $b_0 = 1/N$ where $N$ is the
number of sheets that constitute the invariant surface
\cite{prl1}. This also follows from Eqns.~(\ref{eq:rect5})
and (\ref{eq:trace5})
since $b_0N = 1$. For pseudo-integrable billiards,
each invariant surface parametrised by $\varphi$
has an eigenvalue, $\Lambda_0(\varphi) = 1$.
Thus the non-oscillatory
part of the trace should equal $\int \Lambda_0(\varphi)
\; d\varphi$ and to a first approximation
this yields $2\pi$ implying that $b_0 = 1/N$.
However each singular vertex connects
two distinct points at any angle $\varphi$ and hence
the integration over $\varphi$ is non-trivial.
We can therefore state that $b_0$ is approximately
$1/N$ in the pseudo-integrable case while this
is exact in the integrable case. We shall show
numerically that the magnitude of deviations (from $1/N$)
in the
pseudo-integrable case depends on the existence
of periodic orbit pairs at the singular vertex.
First however, we briefly describe the algorithms
used to determine periodic orbit families.
\section{Algorithms for Determining Periodic Orbits}
\label{sec:num-po}
\par Periodic orbits in polygonal billiards are
generally hard to classify. Unlike integrable billiards,
they cannot be described by two integers
which count the number of windings around the
two irreducible circuits on the torus though
in exceptional cases this can indeed be done.
However, since the invariant surface has
a well-defined genus, it is expected that
a set of integers ${\bf N} = \{N_1,N_2,\ldots,N_{2g}\}$
obeying the relationship
\begin{equation}
\omega_i = {2\pi N_i \over T_{\bf N}}
\end{equation}
\noindent
can be used to label periodic orbits. Here $\omega_i$
refers to the frequency corresponding to each
irreducible circuit $\Gamma_i$ and depends
on the energy, E and the angle $\varphi$ that
labels each invariant surface.
Note however
that not all points on this multi-dimensional
integer lattice are allowed since there are
constraints and this method of labelling orbits
becomes cumbersome for surfaces of higher
genus. Nevertheless, we illustrate the idea
here for the L-shaped billiard of Fig.~(1).
\par Let the length of the two bouncing
ball orbits in the X-direction be $L_1$ and
$L_2$ respectively and their lengths
in the Y-direction be $L_3$ and
$L_4$. These define the irreducible circuits
for the L-shaped billiard. Thus :
\begin{eqnarray}
{v\cos(\varphi) \over 2L_1}& = & {2\pi N_1 \over T_{\bf N}} \nonumber \\
{v\cos(\varphi) \over 2L_2}& = & {2\pi N_2\over T_{\bf N}} \nonumber \\
{v\sin(\varphi) \over 2L_3}& = & {2\pi N_3\over T_{\bf N}} \nonumber \\
{v\sin(\varphi) \over 2L_4}& = & {2\pi N_4\over T_{\bf N}} \label{eq:irc}
\end{eqnarray}
\noindent
This implies that the angle $\varphi$ at which a periodic
orbit can exist is such that
\begin{equation}
\tan(\varphi) = {N_3L_3 + N_4L_4 \over N_1L_1 + N_2L_2}
\label{eq:po-angle}
\end{equation}
\noindent
Eqns.~(\ref{eq:po-angle}) and (\ref{eq:irc}) merely express the fact that
any periodic orbit should have an integer number of windings
around the irreducible circuits. Thus, the total displacement along the
X-direction should be $2(N_1L_1 + N_2L_2)$ while
the total displacement in the Y-direction should
be $2(N_3L_3 + N_4L_4)$ where $N_i$ are integers.
As mentioned before however, not all realizations
of $\{N_i\}$ correspond to real periodic
orbits and the final step consists in checking numerically
whether a periodic orbit at the angle $\varphi$ (given
by Eq.~(\ref{eq:po-angle})) exists. Note that one
member of each family necessarily resides at one of the
singular vertices and hence it is sufficient to
verify the existence of this orbit.
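\par A minimal sketch of this integer-lattice search is given below in
Python; the side lengths and the cutoff {\tt Nmax} are illustrative
choices, not values taken from the figures, and each candidate still
has to be verified by the shooting procedure described next.
\begin{verbatim}
# Enumerate candidate periodic-orbit angles of the L-shaped
# billiard from Eq. (po-angle).
import numpy as np
from itertools import product

L1, L2, L3, L4 = 1.0, 0.7, 1.0, 0.6   # illustrative side lengths
Nmax = 4                               # cutoff on winding numbers

candidates = {}
for N1, N2, N3, N4 in product(range(Nmax + 1), repeat=4):
    X = N1 * L1 + N2 * L2              # half the x-displacement
    Y = N3 * L3 + N4 * L4              # half the y-displacement
    if X == 0 and Y == 0:
        continue
    phi = np.arctan2(Y, X)             # angle from Eq. (po-angle)
    length = 2.0 * np.hypot(X, Y)      # unfolded candidate length
    candidates.setdefault(round(phi, 12), []).append(length)

print(len(candidates), "candidate angles up to Nmax =", Nmax)
\end{verbatim}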
\par This method works equally well for other
billiards with steps (see Fig.~(1)) and the
number of integers necessary to describe
orbits increases with the number of steps.
An alternate method which exploits the
fact that periodic orbits occur in families
is often useful when the irreducible
circuits are not obvious and this is described
below.
\par Note that a non-periodic orbit originating
from the same point ${\bf q}$ (e.g. the singular vertex)
as a periodic orbit but with a
momentum slightly different
(from the periodic orbit) suffers a net transverse deviation
$q_\perp = (-1)^{n_\varphi}\sin(\varphi - \varphi _p)l_{\varphi}$. Here
$l_{\varphi}$ is the distance traversed by a non-periodic
orbit at an angle $\varphi$ after $n_\varphi$ reflections from the
boundary and $\varphi_p$ is the angle at
which a periodic orbit exists. This provides a correction
to the initial angle and a few iterations are normally
sufficient to converge on a periodic orbit with
good accuracy. In order to obtain all periodic orbits,
it is necessary to shoot trajectories from every
singular vertex since one member of each family
resides at one of these vertices.
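\par The sketch below implements this idea for an axis-aligned L-shaped
billiard: a minimal specular-reflection tracer plus the small-angle
correction step, here used to recover the horizontal bouncing-ball
family from a deliberately wrong initial angle. The geometry and
starting point are illustrative assumptions, and corner hits as well as
the $(-1)^{n_\varphi}$ parity bookkeeping are glossed over, so this is
a sketch rather than production code.
\begin{verbatim}
import numpy as np

A, B, C, D = 1.0, 0.5, 0.7, 1.2  # region [0,A]x[0,C] union [0,B]x[C,D]
WALLS = [                        # (axis, coord, lo, hi, flip index)
    ('x', 0.0, 0.0, D, 0), ('x', A, 0.0, C, 0), ('x', B, C, D, 0),
    ('y', 0.0, 0.0, A, 1), ('y', C, B, A, 1), ('y', D, 0.0, B, 1),
]

def shoot(p, v, ltot, eps=1e-12):
    """Propagate (p, v) for a total path length ltot."""
    p, v = np.array(p, float), np.array(v, float)
    while ltot > eps:
        hits = []
        for axis, c, lo, hi, k in WALLS:
            i = 0 if axis == 'x' else 1
            if abs(v[i]) < eps:
                continue
            t = (c - p[i]) / v[i]
            q = p[1 - i] + t * v[1 - i]
            if t > eps and lo - eps <= q <= hi + eps:
                hits.append((t, k))
        t, k = min(hits)             # nearest wall
        if t >= ltot:                # stop mid-flight
            return p + ltot * v
        p, ltot = p + t * v, ltot - t
        v[k] = -v[k]                 # specular reflection
    return p

p0, ltot, phi = np.array([1e-6, 0.35]), 2 * A, 0.05
for _ in range(6):
    u = np.array([np.cos(phi), np.sin(phi)])
    d = shoot(p0, u, ltot) - p0
    q_perp = u[0] * d[1] - u[1] * d[0]   # transverse miss distance
    phi -= q_perp / ltot                 # small-angle correction
print("converged angle:", phi)           # -> essentially zero
\end{verbatim}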
\par Apart from the length of a periodic orbit,
it is also important to compute the area occupied by the
family. This can be achieved by shooting a single
periodic trajectory which resides at a singular vertex
and by noting that this orbit lies on the edge
of a family. Thus if the rest of the family
lies on the left neighbourhood initially, it
is necessary to determine the perpendicular distance
from a singular vertex to this trajectory every time
the initial neighbourhood lies towards this singular
vertex. The shortest of these perpendicular distances
gives the transverse extent of the family and the
area can thus be computed.
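\par A sketch of this area estimate, given the polyline of the edge
member of a family and the list of singular vertices, might look as
follows; the left/right ``sidedness'' bookkeeping described above is
omitted, so in general this only bounds the transverse extent from
below.
\begin{verbatim}
import numpy as np

def point_segment_distance(q, a, b):
    """Perpendicular distance from point q to the segment a-b."""
    q, a, b = map(np.asarray, (q, a, b))
    u = b - a
    s = np.clip(np.dot(q - a, u) / np.dot(u, u), 0.0, 1.0)
    return np.linalg.norm(q - (a + s * u))

def family_area(polyline, singular_vertices, length):
    d = min(point_segment_distance(v, polyline[i], polyline[i + 1])
            for v in singular_vertices
            for i in range(len(polyline) - 1))
    return length * d   # transverse extent times orbit length
\end{verbatim}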
\par These algorithms have been used to generate
the lengths $\{l_p\}$ and areas $\{a_p\}$ of
primitive orbits in the L-shaped billiard as
well as the two and three step billiards. We
present our numerical results in the following
sections.
\section{Numerical Results: The Sum Rule and Area Law}
\label{sec:num1}
\par We present here our numerical results on the 1-step
(L-shaped), 2-step and 3-step billiards. Their
invariant surfaces have genus 2, 3 and 4 respectively
and the quantities we shall study are the sum rule
derived in section \ref{sec:sum-rule} and the variation
of $\langle a(l) \rangle$ with $l$ and the genus, $g$
of the invariant surface.
\par The sum rule we wish to study can be re-expressed
as
\begin{equation}
S(l) = {1\over 2\pi} {\sum_p \sum_r}_{rl_p \leq l} {a_p\over rl_p}
\; \simeq b_0 l \label{eq:sum0}
\end{equation}
\noindent
and this is plotted in Fig.~(4) for four different
1-step (L-shaped) billiards. Notice first that in
each case the behaviour is linear as predicted
by Eq.~(\ref{eq:sum0}). Besides, in
three of the four cases, the slopes are quite
close ($b_0 \simeq 0.27$) and these correspond to non-degenerate
L-shaped billiards with sides that are unequal
and irrationally related. The one with a substantially
larger slope ($b_0 \simeq 0.32$) is a degenerate
case of a square with a quarter removed, and for this
system there exists a
substantial number of periodic orbit pairs
at the same angle on the two adjacent edges at the
singular vertex. The differences between the degenerate
and non-degenerate cases also persist in other quantities,
as we shall shortly see.
\par We next plot $S(l)$ for non-degenerate examples
of the 1-, 2- and 3-step billiards in Fig.~(5). The number of orbits
considered is far smaller due to the increased computational
effort, though the linear behaviour is obvious in
all three cases. The slopes are again
close to 0.25 and vary from $b_0 \simeq 0.24$
to $b_0 \simeq 0.28$.
\par Thus, periodic orbits obey a basic sum rule
given by Eq.~(\ref{eq:sum0}) where $b_0$ is a
constant. Further, the magnitude of $b_0$ is close
to $0.25$ in all cases and the deviations from this
value are larger in the degenerate case where
periodic orbit pairs exist at the singular vertex.
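\par The sum rule is easily checked numerically in the integrable test
case of a rectangle, where every family has area $a_p = 4L_1L_2$ and
length $2\sqrt{(N_1L_1)^2 + (N_2L_2)^2}$, so the predicted slope is
$b_0 = 1/N = 1/4$; for the step billiards one would instead feed in the
lengths and areas produced by the algorithms of the previous section.
The following Python fragment performs such a check:
\begin{verbatim}
import numpy as np

L1, L2, Nmax = 1.0, np.sqrt(2.0), 400   # irrationally related sides
N1, N2 = np.meshgrid(np.arange(Nmax + 1), np.arange(Nmax + 1))
mask = (N1 + N2) > 0                     # drop (0,0)
lengths = 2.0 * np.hypot(N1[mask] * L1, N2[mask] * L2)
areas = np.full_like(lengths, 4.0 * L1 * L2)

lcut = 0.5 * lengths.max()               # stay inside the lattice
sel = lengths <= lcut
order = np.argsort(lengths[sel])
S = np.cumsum((areas[sel] / lengths[sel])[order]) / (2 * np.pi)
l_sorted = lengths[sel][order]
print("fitted slope b_0:", np.polyfit(l_sorted, S, 1)[0])  # ~ 0.25
\end{verbatim}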
\par The sum rule yields the proliferation
rate of periodic orbits
\begin{equation}
N(l) = {\pi b_0 l^2 \over \langle a(l) \rangle }
\label{eq:nl}
\end{equation}
\noindent
where $\langle a(l) \rangle$ is the average area
occupied by periodic orbit families with length
less than $l$. In case of integrable polygons
$\langle a(l) \rangle$ is a constant and equals
$AN$ where $A$ is the area of the billiard while
$b_0 = 1/N$. For pseudo-integrable cases,
the areas $a_p$ occupied by individual periodic
orbit families generically occupy a wide
spectrum bounded above by $NA$. As the length
of a family increases, encounters with
singular vertices are more frequent so that
the transverse extent of a family decreases.
Quantitative predictions about the behaviour
of $\langle a(l) \rangle$ are however difficult and
we provide here some numerical results.
\par Fig.~(6) shows a plot of $\langle a(l) \rangle /
NA $ for three different L-shaped billiards one of
which is the degenerate case presented in Fig.~(4).
The average area increases initially before saturating
for large $l$ in all cases. The normalised saturation
value seems to be independent of the ratio of the
sides as long as they are irrationally related
(we have observed this for other cases not presented
here) but is very different for the degenerate
example of a square with a quarter removed.
\par A comparison of non-degenerate 1,2 and 3-step
billiards is shown in Fig.~(7). Recall that
the invariant surfaces for these have genus
respectively equal to $2,3$ and $4$. The
saturation observed for the 1-step (L-shaped)
case seems to be a common feature and interestingly
the normalised average saturation value
decreases. This is however expected since
the number of singular vertices increases
with the genus thereby reducing the average
transverse extent of orbit families at any
given length.
\par These observations allow us to conclude
that for short lengths, the proliferation law
is sub-quadratic. Asymptotically however,
$N(l) \sim l^2$ as in integrable billiards.
Further, the asymptotic proliferation rate
for billiards with the same area $A$
increases with the genus due to a decrease
in the asymptotic normalised saturation
value of $\langle a(l) \rangle$.
\par These numerical results provide a qualitative
picture of the area law and show that periodic
orbits in polygonal billiards are organised
such that they obey a sum rule. Quantitative
predictions would require a more extensive
numerical study such that empirical laws
for the saturation value and its variation
with the genus can be arrived at.
\section{Correlations in the Length Spectrum}
\label{sec:corrl}
\par Our numerical explorations so far have been
focussed on the average properties of the length
spectrum. We shall now attempt to understand the
nature of the fluctuations and characterise
their statistical properties.
\par Fluctuations in the length spectrum are generally
difficult to study from purely classical considerations.
There are notable exceptions however. Fluctuations
in the integrable case can be studied using the
Poisson summation formula since the lengths
of orbits are expressed in terms of integers
$\{N_1,N_2\}$ \cite{pramana}. The other extreme is the motion on surfaces of
constant negative curvature where the Selberg trace
formula provides an {\it exact} dual relationship between the
classical lengths and the eigenvalues of the Laplace-Beltrami
operator \cite{BV}. For other systems, a possible way of studying
fluctuations in the length spectrum lies in inverting the
semiclassical quantum trace formula. For pseudo-integrable billiards,
this has been achieved in section \ref{sec:sum-rule} and
the integrable density of lengths, $N(l)$ can be expressed
as
\begin{equation}
N(l) = {\pi b_0 l^2 \over \langle a(l) \rangle } +
{2\pi \over \langle a(l) \rangle} \sum_n \int_0^l dl' \;l'f(\sqrt{E_n}l')
\label{eq:nl-tot}
\end{equation}
\noindent
Statistical properties of the fluctuations can thus be
studied using techniques introduced for the fluctuations
in the quantum energy spectrum \cite{berry85,PLA}.
\par The correlations commonly studied are the nearest neighbour
spacings distribution, $P(s)$ and a two-point correlation referred to
as the spectral rigidity, $\Delta_3(L)$. The rigidity measures
the average mean square deviation of the staircase function
$N(l)$ from the best fitting straight line over $L$
mean level spacings. For a normalised
(unit mean spacing) Poisson
spectrum, $P(s) = e^{-s}$ while $\Delta_3(L) = L/15$. These
features are commonly observed in the quantum energy spectrum
of integrable systems as well as in their length spectrum.
For chaotic billiards, the quantum energy spectrum generically
produces non-Poisson statistics while the length spectrum
correlations are Poisson for long orbits at least over short
ranges \cite{prl2}.
There are however deviations that can be observed in
$\Delta_3(L)$ for short orbits or over longer ranges in the
spectrum \cite{prl2}. With this background, we now present our results
for pseudo-integrable billiards.
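\par For reference, a minimal computation of these two statistics from
a list of lengths is sketched below; the unfolding constant, window
length and bin count are illustrative choices.
\begin{verbatim}
import numpy as np

def unfold(lengths, c):
    """Map to unit mean spacing using N_av(l) = c*l^2 (c from a fit)."""
    return np.sort(c * np.asarray(lengths) ** 2)

def spacing_distribution(x, bins=30):
    s = np.diff(x)                       # nearest-neighbour spacings
    return np.histogram(s / s.mean(), bins=bins, density=True)

def delta3(x, L, n_windows=200, rng=np.random.default_rng(0)):
    vals = []
    for x0 in rng.uniform(x[0], x[-1] - L, n_windows):
        t = np.linspace(x0, x0 + L, 400)
        N = np.searchsorted(x, t)        # staircase N(t)
        a, b = np.polyfit(t, N, 1)       # best-fitting straight line
        vals.append(np.mean((N - a * t - b) ** 2))
    return np.mean(vals)                 # ~ L/15 for a Poisson spectrum

# Sanity check on a synthetic Poisson spectrum of unit mean spacing:
x = np.sort(np.random.default_rng(1).uniform(0, 3000, 3000))
print(delta3(x, 20.0), "vs Poisson value", 20.0 / 15)
\end{verbatim}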
\par Figs.~(8) and (9) show plots of $P(s)$ and $\Delta_3(L)$
for a non-degenerate L-shaped billiard. A total of 3000
lengths have been considered after excluding the shortest
3000 orbits. The correlations are clearly Poissonian
as in case of integrable or chaotic billiards.
\par For the 3-step billiard where fewer lengths are
available (about 1250), we have carried out a similar study and the
results are shown in Figs.~(10) and (11). Deviations
from the Poisson behaviour can now be seen, especially in the
spectral rigidity. Similar deviations were observed in the
L-shaped billiard when shorter lengths were considered.
\par The statistical properties of fluctuations in
the length spectrum of pseudo-integrable systems
are thus similar to those of
chaotic billiards. For long orbits the correlations
are Poisson while deviations exist for shorter
orbits.
\section{Discussions and Conclusions}
\label{sec:concl}
\par Quantum billiards are experimentally realizable
in the form of microwave cavities since the wave
equations are identical \cite{Stockman,Sridhar}.
They are also interesting in their own right
and have proved to be the testing grounds for
several ideas in the field of Quantum Chaos.
Of all possible shapes, polygonal
billiards have perhaps been the least understood
largely because very little was known about the
organisation of periodic orbits. The results
presented here have however been used recently
to obtain convergent semiclassical eigenvalues
with arbitrary non-periodic trajectories \cite{prl4?}
as well to demonstrate that periodic orbits
provide excellent estimates of two-point correlations
in the quantum energy spectrum \cite{rapid?}.
Though it is beyond the scope of this article to
review these recent developments on quantum polygonal
billiards, we refer the interested
reader to these articles and the references contained
therein.
\par In the previous sections we have explored the organisation
of periodic orbits in rational polygonal billiards that
are pseudo-integrable. Our main conclusions are as follows:
\bigskip
\par $\bullet$ Orbit families obey the sum rule $\langle \sum_p \sum_{r=1}
^{\infty} a_p\delta(l-rl_p)/(rl_p)\rangle = 2\pi b_0$ thereby
giving rise to the proliferation law $N(l) = \pi b_0 l^2/ \langle
a(l) \rangle$ for all rational polygons.
\par $\bullet$ The quantity $b_0$ is approximately $1/N$ in
generic rational billiards, and deviations from
this value are observed to be significant in degenerate
situations.
\par $\bullet$ $\langle a(l) \rangle$ increases initially
before saturating to a value much smaller than the maximum
allowed area $NA$.
The asymptotic proliferation law is thus
quadratic even for systems that are not almost-integrable and
the density of periodic orbits lengths is far greater than
an equivalent integrable systems having the same area.
\par $\bullet$ The normalised average area $\langle a(l) \rangle /NA$
decreases with the genus of the billiard while $b_0$ is
approximately 0.25. Periodic orbits thus proliferate faster
with an increase in genus.
\par $\bullet$ The statistical properties of the fluctuations in the
length spectrum are Poisson when
the orbits considered are long.
\section{Acknowledgements}
\par Part of the work reviewed here was published earlier in
collaboration with Sudeshna Sinha and it is a pleasure to
acknowledge the numerous discussions we had on this subject.
I have also benefited directly or indirectly from
interesting discussions
with Predrag Cvitanovi\'{c}, Gregor Tanner and Bertrand Gerogeot.
\section{Introduction}
During its early history the universe may have undergone a number of
phase transitions. One or more of these may have been first-order
transitions in which the universe was for a time trapped in a
metastable ``false vacuum'' state, with the transition proceeding by
the nucleation and expansion of bubble of the stable ``true vacuum''.
A crucial quantity for the development of such a transition is the
bubble nucleation rate per unit volume $\Gamma$. A semiclassical
procedure, based on a Euclidean ``bounce'' solution, has been
developed for the calculation of $\Gamma$ at zero temperature
\cite{bounce}, and extended to the case of bubble nucleation at
nonzero temperature \cite{highT}.
Recently \cite{ak} it was noted that there can be an enhancement of $\Gamma$ if a
continuous internal symmetry of the false vacuum is completely or partially
broken by the true vacuum. This enhancement can be understood as a consequence
of the fact that, instead
of a single possible final state, there is a continuous family of degenerate
true vacuum states into which the decay can occur. More formally, the effect
arises from the existence of additional zero eigenvalues in the spectrum of
small fluctuations about the bounce solution.
The primary focus of Ref.~\cite{ak} was on the case of a broken $U(1)$ symmetry
(see also \cite{ew1}). While a similar enhancement is expected for larger
symmetry groups, the treatment of the zero modes becomes somewhat more
complicated for the case of a non-Abelian symmetry.
In this note we develop the formalism needed to deal with the case of an
arbitrary symmetry. We also discuss some further implications of these
results, including the extension of these results to bubble nucleation in
curved space-time.
The remainder of this paper is organized as follows. In Sec.~2 we develop the
general formalism and estimate the magnitude of the enhancement that can be
achieved. As a concrete example, we apply this formalism to the case of
$SU(2)$ symmetry in Sec.~3. In Sec.~4 we discuss the extension of this work to
curved space-time, using the formalism of Coleman and De Luccia
\cite{cdl}. We show that although the zero mode contribution to the curved
space-time nucleation rate appears at first sight to be rather different
from its flat space-time counterpart, it does in fact give the expected
result in the limit where gravitational effects are negligible.
Section~5 contains some brief concluding remarks.
\section{General Formalism}
We consider a field theory whose fields we assemble into a column vector
$\phi(x)$ with purely real components. The standard method \cite{bounce} for the
calculation of the quantum mechanical (i.e., zero temperature)
bubble nucleation rate per unit volume, $\Gamma$, is based on the
existence of a ``bounce'' solution $\phi_b(x)$ of the Euclidean field
equations that tends to
the false vacuum value $\phi_f$ at spatial infinity and
approaches (although not necessarily reaches) the true vacuum value
$\phi_t$ near a point that may be taken to be the origin. The result
may be written in the form
\begin{equation}
\Gamma = {1\over \Omega} {I_b \over I_f}
\end{equation}
Here $I_f$ and $I_b$ are the contributions to the Euclidean path
integral of $e^{-S_E}$ (where $S_E$ is the Euclidean action) from the
homogeneous false vacuum and bounce configurations, respectively, while
the division by $\Omega$, the volume of the four-dimensional Euclidean
space, arises in order to obtain a rate per unit volume. The
contribution to the path integral from a stationary point
$\bar\phi(x)$ can be evaluated by expanding the field as
\begin{equation}
\phi(x) = \bar\phi(x) + \sum_j c_j \, \psi_j(x)
\end{equation}
where the $\psi_j(x)$ form a complete set of orthonormal
eigenfunctions of the second variation operator
$S''_E(\bar\phi) \equiv \delta^2 S_E/ \delta \phi(x) \delta \phi(y)$.
For $\bar\phi(x)= \phi_f$, this gives, in leading approximation, a
product of Gaussian integrals over the real variables $c_j$ that results in
\begin{equation}
I_f = e^{-S_E(\phi_f)} \, \left\{ \det [S''(\phi_f)] \right\}^{-1/2}
\equiv e^{-S_E(\phi_f)} \, D_f
\end{equation}
The calculation of the bounce contribution is complicated by the
presence of zero eigenvalues in the spectrum of $S_E''(\phi_b)$. Four of these
correspond to translation of the bounce in the four-dimensional Euclidean
space. We will assume that the remainder are all associated with internal
symmetries of the false vacuum that are not symmetries of the bounce.
These zero modes
are handled by eliminating the corresponding normal mode coefficients
$c_i$ in favor of an equal number of collective coordinates $z_i$.
The zero modes about a bounce configuration $\phi_b(x,z)$ can then be
written in the form
\begin{equation}
\psi_i(x,z) = N_{ij}(z) {\partial \phi_b(x,z) \over \partial z_j}
\end{equation}
where the $N_{ij}$ satisfy the equation
\begin{equation}
[(N^\dagger N)^{-1}]_{kl} = \int d^4x
{\partial \phi_b^\dagger (x;z) \over \partial z_l} \,\,
{\partial \phi_b(x;z) \over \partial z_k} \equiv 2\pi (M_b)_{kl}
\label{Mdef}
\end{equation}
that follows from the orthonormality of the $\psi_i$. (The factor of $2\pi$ is
for later convenience.)
The bounce contribution can then be written as a product of two
factors. The first
\begin{equation}
I_b^{(1)}= e^{-S_E(\phi_b)} {1\over 2} \left| \det{}'[S''(\phi_b)]
\right|^{-1/2} \equiv e^{-S_E(\phi_b)} \, D_b
\end{equation}
arises from the integration over the modes with nonzero eigenvalues.
The prime indicates that the functional determinant is to be taken
in the subspace orthogonal to the zero modes, while the factor of $1/2$
arises from the integration over the single negative eigenvalue mode.
The second factor, from integrating over the remaining $n$ variables,
is
\begin{equation}
I_b^{(2)} = (2\pi)^{-n/2} \int d^n z \det \left[ {\partial c_i \over
\partial z_j} \right]
\end{equation}
where the factors of $2\pi$ compensate for the absence of $n$ Gaussian
integrations. To calculate the Jacobian determinant, we first equate
the change in the field resulting from an infinitesimal
change in the $z_i$ with that corresponding to a shift of the $c_i$,
to obtain
\begin{equation}
\psi_i(x,z) \,\, dc_i = {\partial \phi_b(x;z) \over \partial z_j}\,\, dz_j
\end{equation}
Using the orthonormality of the $\psi_j$, we then find that
\begin{equation}
(2\pi)^{-n/2} \det \left[ {\partial c_i \over \partial z_j} \right] =
(2\pi)^{n/2} \det [M_b(z) N^\dagger(z)] =
\left[ \det M_b(z) \right]^{1/2}
\end{equation}
so that
\begin{equation}
I^{(2)}_b = \int d^nz [\det M_b(z)]^{1/2}
\end{equation}
The fact that the zero modes all arise from symmetries of the theory might
lead one to expect that the integrand in this equation would be independent of
the $z_j$. Actually, this is true only if the measure $d^n z$ is invariant
under the symmetry transformations. If it is not, let $\mu(z)$ be such that
$\mu(z)^{1/2} d^n z$ is an invariant measure and write
\begin{equation}
I^{(2)}_b = \int d^nz \mu(z)^{1/2}[\mu(z)^{-1}\det M_b(z)]^{1/2}
\end{equation}
The quantity in brackets is now $z$-independent and can be taken outside the
integral.
One can always choose coordinates so that this expression can be written as
a product of a contribution from the translational zero modes and a contribution
from the internal symmetry zero modes. For the former, the natural choice of
collective coordinates are the spatial coordinates $z^\mu$ of the center of the
bounce. The derivative of the field with respect to these is, up to a sign, the
same as the spatial derivative of the field. Furthermore, with these coordinates
$\mu(z)=1$, so the integration over the $z^\mu$ simply gives a factor of
$\Omega$. Hence, the contribution of these modes to $I_b^{(2)}$ is $\Omega
J_b^{\rm trans}$, where\footnote{For the case of a spherically symmetric bounce
in a scalar field theory with a standard kinetic energy term, $J_b^{\rm trans}$
can be expressed in terms of the bounce action, with $J_b^{\rm trans} =
[S_E(\phi_b) - S_E(\phi_f) ]^2/4\pi^2 $.}
\begin{equation}
J_b^{\rm trans} = (2\pi)^{-2} \left[\prod_{\mu=1}^4 \int d^4 x
(\partial_\mu \phi )^2 \right]^{1/2}
\end{equation}
The internal symmetry zero modes arise from the action of the
gauge group $G$ on the bounce solution. Because the bounce tends
asymptotically toward the false vacuum $\phi_f$, normalizable modes
are obtained only from the unbroken symmetry group $H_f \subset G$ of
the false vacuum. Furthermore, there are no such modes from the
subgroup $K_b \subset H_f$ that leaves the bounce solution invariant.
Hence, the corresponding collective coordinates span the coset space $
H_f/K_b$. The contribution from these to $I_b^{(2)}$ is then
\begin{equation}
J_b^{H_f/K_b}\,\, {\cal V}(H_f/K_b) =
[\mu(g_0)^{-1}\det M_b^{H_f/K_b} (g_0)]^{1/2} \,\, {\cal V}(H_f/K_b)
\label{Jdef}
\end{equation}
where ${\cal V}(H_f/K_b) $ is the volume of the coset space, $M_b^{H_f/K_b}
$ is the submatrix corresponding to the internal symmetry zero modes, and
$g_0$ is an arbitrary point of $H_f/K_b$. To evaluate $J_b^{H_f/K_b}$ it
is convenient to take $g_0$ to correspond to the identity element of
$H_f$. Writing the group elements near the identity in the form
$e^{i\alpha_jT_j}$, we may take the collective coordinates to be the
parameters that multiply the $T_j$ that span the coset $H_f/K_b$.
Evaluated at the identity element, the function $\mu(g)$ is then equal to
unity, while the derivatives with respect to the collective coordinates
are given simply by the action of the generators on the bounce solution.
Hence \begin{equation}
J_b^{H_f/K_b} = \left\{\det \left[(2\pi)^{-1} \int d^4x \phi_b^\dagger(x)
T_j^\dagger T_i \phi_b(x) \right]\right\}^{1/2}
\label{Jresult}
\end{equation}
with the $T_i$ being the generators of $H_f/K_b$.
Gathering our results together, we obtain
\begin{equation}
\Gamma = e^{-B}\,
{D_b J_b^{\rm trans} J_b^{H_f/K_b} \over D_f}
\, {\cal V}(H_f/K_b)
\end{equation}
where $B\equiv S_E(\phi_b) - S_E(\phi_f) $.
It is important to note that $K_b$ is determined
by the symmetry of the bounce, and not by that of the true vacuum; it is
conceivable (although we believe it unlikely) that the latter is invariant
under a larger subgroup $K_t \subset H_f$ than the former. Even
if $K_t$ is identical to $K_b$, it is not in general the same as the
unbroken symmetry group $H_t \subset G$ of the true vacuum. For
example, if $G$ is unbroken in the true vacuum and
completely broken in the false vacuum, $H_f$, and hence $K_t$, are
trivial even though $H_t=G$. In addition, the subgroup $K_b$ depends not
only on the symmetries of the true and false vacua, but also on their
relative orientation.
This last point can be illustrated using a theory with global
$SU(5)$ symmetry. Let us assume that there is a single scalar field
$\phi$, in the adjoint representation, with the potential such that
the false vacuum has unbroken $SU(4)\times U(1)$ symmetry and the
unbroken symmetry of the true vacuum is $SU(3)\times SU(2)\times
U(1)$. Without loss of generality we may choose the false vacuum
configuration to be of the form
\begin{equation}
\phi_f = {\rm diag}\, (a,a,a,a,-4a)
\end{equation}
The $SU(5)$ orientation of this configuration influences
that of the true vacuum bubbles that nucleate within it. Thus, decays
to the true vacua with
\begin{equation}
\phi_t^{(1)} = {\rm diag}\, (b,b,b, -{3\over 2} b, -{3\over 2} b)
\end{equation}
and
\begin{equation}
\phi_t^{(2)} = {\rm diag}\, (-{3\over 2} b,-{3\over 2} b, b,b,b )
\end{equation}
are governed by inequivalent bounce solutions and proceed at different
rates \cite{guthew}. If the bounces have the maximum possible symmetry, then in
the former case $K_b = SU(3) \times U(1) \times U(1)$ and there are six
internal symmetry zero modes, while in the latter $K_b = SU(2)\times
SU(2)\times U(1)\times U(1)$ and there are eight such modes. Of
course, bubble nucleation could with equal probability lead to any
configuration obtained by applying an $SU(4)\times U(1)$
transformation to $\phi_t^{(1)}$ or $\phi_t^{(2)}$; this is taken into
account by the integration over the coset spaces $(SU(4)\times
U(1))/ (SU(3) \times U(1) \times U(1))$ and $(SU(4)\times
U(1))/( SU(2)\times SU(2)\times U(1)\times U(1))$. However, there are true
vacuum configurations that cannot be obtained by such transformations. In
general, there are no bounce solutions corresponding to these, reflecting the
fact that if a bubble of such a vacuum were to form, the external false vacuum
would exert forces that would realign the field in the bubble interior.
Finally, let us estimate the magnitude of the zero mode corrections that we
have found. For definiteness, we will consider the case of a scalar field
theory whose potential can be written as $V(\phi) = \lambda F(\phi)$ with
$F(\phi)$ containing no small dimensionless parameters. Standard scaling
arguments using the fact that the bounce is a stationary point of the action
show that the bounce has a radius $\sim 1/m$ (where $m$ is a characteristic
scalar mass) and an action (relative to that of the false vacuum) of order
$1/\lambda$. The typical magnitude of the bounce field is $\phi_b(x) \sim
m/ \sqrt{\lambda} $, while the $T_j$ are all of order unity, so $J_b^{H_f/K_b}
\sim (\lambda m^2)^{-N/2}$, where $N$ is the number of internal symmetry zero
modes. The ratio $D_b/D_f$ is of order unity in $\lambda$, but is proportional
to a dimensionful parameter $\sim m^{N+4}$ arising from the fact that the
contribution of the zero eigenvalue modes has been deleted from $D_f$.
Finally, the coset volume is of order unity. Overall, then, we have
\begin{equation}
\Gamma = c_1\, \lambda^{-(N+4)/2} m^4 e^{-c_2/\lambda}
\end{equation}
where $c_1$ and $c_2$ are of order unity; the effect of the internal symmetry
zero modes has been to enhance the nucleation rate by a factor of order
$\lambda^{-N/2}$. Phrased somewhat differently, the enhancement is
roughly by a factor of $B^{N/2}$.
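\par As a crude numerical illustration (with purely illustrative
numbers), the following fragment evaluates this enhancement for the
first $SU(5)$ decay discussed above, which has $N=6$ internal zero
modes:
\begin{verbatim}
# Rough size of the zero-mode enhancement, ~ B^(N/2).
B, N = 100.0, 6          # illustrative bounce action and mode count
print("enhancement ~", B ** (N / 2))   # ~ 1e6
\end{verbatim}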
\section{$SU(2)$ Symmetry}
As a concrete example, let us consider the case where the symmetry
group of the false vacuum is $H_f=SU(2)$ but the bounce solutions break this
symmetry. A natural set of collective coordinates is given by the Euler
angles. Thus, given one bounce solution $\phi_b^0(x)$, we can define a
three-parameter family of solutions by
\begin{equation}
\phi_b(x;\varphi,\theta,\psi) = e^{i\varphi T_3}\, e^{i\theta T_2}
\, e^{i\psi T_3} \, \phi_b^0(x) \equiv U(\varphi,\theta,\psi) \phi_b^0(x)
\end{equation}
where the $T_j$ are the appropriate (possibly reducible) representation of the
generators of $SU(2)$. Differentiation of this expression gives
\begin{eqnarray}
\partial_\varphi \phi_b(x;\varphi,\theta,\psi) &=& iU(\varphi,\theta,\psi)
\tilde T_3 \phi_b^0(x) \nonumber \\
\partial_\theta \phi_b(x;\varphi,\theta,\psi) &=&i U(\varphi,\theta,\psi)
\tilde T_2 \phi_b^0(x) \nonumber \\
\partial_\psi \phi_b(x;\varphi,\theta,\psi) &=& iU(\varphi,\theta,\psi)
T_3 \phi_b^0(x)
\end{eqnarray}
where
\begin{eqnarray}
\tilde T_3 &=& \, e^{-i\psi T_3} \,e^{-i\theta T_2}\, T_3\,
e^{i\theta T_2}\, e^{i\psi T_3} \nonumber \\
&=& \cos\psi \sin\theta \,T_1 + \sin\psi \sin\theta \,T_2
+ \cos\theta\, T_3 \nonumber \\
\tilde T_2 &=& \, e^{-i\psi T_3} \, T_2\, e^{i\psi T_3} \nonumber \\
&=& -\sin\psi\, T_1 + \cos\psi\, T_2
\end{eqnarray}
Thus, if $z_j=(\varphi,\theta,\psi)$,
\begin{equation}
\partial_j \phi_b(x;\varphi,\theta,\psi)
= i K_{jk} \,U(\varphi,\theta,\psi) T_k \phi_b^0(x)
\end{equation}
where
\begin{equation}
K = \left( \matrix{\cos\psi \sin\theta & \sin\psi \sin\theta &
\cos\psi\cos\theta \cr
-\sin\psi & \cos\psi & 0 \cr 0 & 0 & 1 } \right)
\end{equation}
Substitution of this into Eq.~(\ref{Mdef}) yields
\begin{equation}
(M_b^{SU(2)/K_b})_{il} =(2\pi)^{-1} K_{ij} K_{kl} \int d^4x
\phi_b^{0\dagger}(x)T_j^\dagger T_k \phi_b^0(x)
\end{equation}
Now recall that an invariant measure on $SU(2)$ is given by $\sin\theta
d\varphi \,d\theta \, d\psi$, so we may take $\mu(\varphi,\theta,\psi)
=\sin^2\theta = (\det K)^2$. Hence,
\begin{equation}
\mu^{-1} \det M_b^{SU(2)/K_b} = \det \left[ (2\pi)^{-1} \int d^4x
\phi_b^{0\dagger}(x)T_j^\dagger T_k \phi_b^0(x) \right]
\label{Mresult}
\end{equation}
This is independent of the collective coordinates, as promised, and is in
agreement with Eq.~(\ref{Jresult}).
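\par This angle independence can also be confirmed numerically. In the
sketch below (illustrative throughout), the spatial integral is
replaced by a few weighted sample vectors in the spin-1 representation,
$M_b$ is obtained by differentiating $U(\varphi,\theta,\psi)\phi_b^0$
with respect to the Euler angles, and $\mu^{-1}\det M_b$ is seen to be
the same at different points of the group:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

eps = np.zeros((3, 3, 3))                 # Levi-Civita symbol
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1
T = [-1j * eps[k] for k in range(3)]      # spin-1 generators

def U(ph, th, ps):                        # Euler-angle rotation
    return (expm(1j*ph*T[2]) @ expm(1j*th*T[1]) @ expm(1j*ps*T[2])).real

rng = np.random.default_rng(1)
vecs = rng.normal(size=(5, 3))            # stand-in "bounce" samples
w = rng.uniform(0.1, 1.0, size=5)         # stand-in quadrature weights

def detM(z, h=1e-6):
    grads = []
    for j in range(3):                    # d/dz_j by central difference
        zp, zm = list(z), list(z)
        zp[j] += h; zm[j] -= h
        grads.append((U(*zp) - U(*zm)) @ vecs.T / (2 * h))
    M = np.array([[np.sum(w * np.sum(gi * gk, axis=0)) for gi in grads]
                  for gk in grads]) / (2 * np.pi)
    return np.linalg.det(M)

for z in [(0.3, 0.7, 1.1), (2.0, 1.4, 0.2)]:
    print(detM(z) / np.sin(z[1]) ** 2)    # the same for both choices
\end{verbatim}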
Three specific cases may serve to illustrate some of the possible behaviors:
a) One $SU(2)$ doublet: If the bounce involves an $SU(2)$ doublet, then
the bounce completely breaks the $SU(2)$ symmetry. The coset volume factor is
\begin{equation}
{\cal V}(H_f/K_b) = {\cal V}(SU(2)) = 16\pi^2
\end{equation}
while $M_b^{SU(2)/K_b} = M_b^{SU(2)}$ is proportional to the unit matrix, with
\begin{equation}
\mu^{-1} \left[M_b^{SU(2)}\right]_{ij} = \delta_{ij}
(2\pi)^{-1} \int d^4x \phi_b^\dagger(x) \phi_b(x)
\end{equation}
(Because our formulas have been derived using real fields, one must use a
four-dimensional real representation, rather than a two-dimensional complex
representation, for the $T_j$ when obtaining this result from
Eq.~(\ref{Mresult}).)
b) One $SU(2)$ triplet: If the bounce is constructed from a single real
triplet whose direction is independent of $x$ [i.e., such that
$\phi_b(x)$ can be written as $(0,0,f(x))$], then $K_b=U(1)$. There are only
two zero modes and $H_f/K_b$ is the two-sphere spanned by $\theta$ and
$\varphi$ with
\begin{equation}
{\cal V}(H_f/K_b) = {\cal V}(SU(2)/U(1)) = 4\pi
\end{equation}
$M_b^{SU(2)/K_b}$ is now a $2\times 2$ matrix, with
\begin{equation}
\mu^{-1} \left[M_b^{SU(2)/U(1)}\right]_{ij} = \delta_{ij}
(2\pi)^{-1} \int d^4x {\mbox{\boldmath $\phi$}}^2_b(x)
\end{equation}
c) Two non-parallel $SU(2)$ triplets: If the bounce solution contains two
triplet fields that are not parallel, then the bounce has no continuous
symmetry. Because only integer spin fields are involved, \begin{equation}
{\cal V}(H_f/K_b) = {\cal V}(SO(3)) = 8\pi^2
\end{equation}
The matrix $\mu^{-1} M_b^{SO(3)}$ has three unequal eigenvalues.
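\par For a concrete (and purely illustrative) evaluation of the
Jacobian factor in case (b) above, one may insert a thin-wall profile
of radius $R$ and width $w$ for the triplet; the profile and parameters
below are assumptions, not the solution of any particular potential:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

v, R, w = 1.0, 5.0, 0.5                  # illustrative parameters
f = lambda r: 0.5 * v * (1.0 - np.tanh((r - R) / w))
# int d^4x phi^2 = 2 pi^2 int dr r^3 f(r)^2 over three-sphere shells
I, _ = quad(lambda r: 2 * np.pi**2 * r**3 * f(r)**2, 0.0, R + 10 * w)
Mdiag = I / (2 * np.pi)                  # each diagonal entry of M
print("J_b^{SU(2)/U(1)} =", Mdiag)       # sqrt(det) of the 2x2 matrix
\end{verbatim}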
\section{Bubble Nucleation in Curved Space-Time}
Coleman and De Luccia \cite{cdl} showed that the bounce formalism could
be extended to include the effects of gravity by requiring that both the
bounce and the homogeneous false vacuum configurations be solutions of the
coupled Euclidean matter and Einstein equations. For a scalar field theory
with $V(\phi) \ge 0$, as we henceforth assume, the false vacuum solution
consists of a uniform scalar field $\phi_f$ on a four-sphere of
radius
\begin{equation}
\tilde H_f^{-1} = \sqrt{ 3 M_{\rm Pl}^2 \over 8\pi V(\phi_f)}
\end{equation}
with total Euclidean action (including gravitational contributions)
\begin{equation}
S_E(\phi_f) = -{3 M_{\rm Pl}^4 \over 8 V(\phi_f)}
\end{equation}
The bounce solution has the same topology, with regions of
approximate true vacuum and false vacuum separated by a wall region. If
the matter mass scale $\cal M$ is much less than the Planck mass $M_{\rm Pl}$,
then both the radius $R_b$ of the true vacuum region and the difference between
the bounce action and the false vacuum action differ from the corresponding
flat space quantities by terms of order $({\cal M}/M_{\rm Pl})^2$.
The spectra of the small fluctuations about these solutions again
contain one zero mode for each symmetry of the Lagrangian that is broken by
the solution. However, because the Euclidean
solutions are on closed manifolds with finite volumes, the modes due
to symmetries broken by the false vacuum are normalizable, in contrast with the
flat space case. Hence, we would expect the
flat space factors given in Eq.~(\ref{Jdef}) to be replaced by
\begin{equation}
{ J_b^{G/K_b} \over J_f^{G/H_f} } \,
{{\cal V}(G/K_b) \over {\cal V}(G/H_f) }
= { J_b^{G/K_b} \over J_f^{G/H_f} }
\, {\cal V}(H_f/K_b)
\label{curvedJac}
\end{equation}
Although the volume factors give the same result as in flat space, the Jacobian
factors appear quite different. Yet, for ${\cal M} \ll M_{\rm Pl}$, where
gravitational corrections should be small, this should approach the flat space
result.
To see how this comes about, let us denote by $t_j$ the generators of
$H_f/K_b$ and by $s_j$ those of $G/H_f$. The Jacobian determinant in the numerator
of Eq.~(\ref{curvedJac}) has contributions from matrix elements containing both
types of generators, whereas the determinant in the denominator only involves
$s_i s_j$ matrix elements. Because the $t_j$ annihilate the false vacuum, the
matrix elements involving these have nonzero contributions only from the
region, of volume $\sim R_b^4$, where the bounce solution differs from the
false vacuum and hence are suppressed by a factor of order $(\tilde
H_fR_b)^4 \sim
({\cal M}/M_{\rm Pl})^4$ relative to the $s_i s_j$ matrix elements.
(We are assuming that $R_b\sim {\cal M}^{-1}$; this will be the case for
generic values of the parameters.) This implies
that, up to corrections of order $({\cal M}/M_{\rm Pl})^8$, the determinant can
be written as a product of a determinant involving only the $t_i$ and one
involving only the $s_i$; i.e.,
\begin{equation}
{ J_b^{G/K_b} \over J_f^{G/H_f}} = { J_b^{G/H_f} \over J_f^{G/H_f} }\,
J_b^{H_f/K_b} \, [ 1+ O(({\cal M}/M_{\rm Pl})^8) ]
\end{equation}
The first factor on the right hand side differs from unity only by an amount
proportional to the fraction $\sim ({\cal M}/M_{\rm Pl})^4$ of the Euclidean
space where the bounce differs from the false vacuum. The second factor differs
from the corresponding flat-space term only by the replacement of the
matter fields of the flat space bounce by those of the curved space bounce, and
so clearly reduces to the flat space result as ${\cal M}/M_{\rm Pl}\rightarrow
0$.
The fact that the bounce solution is a closed manifold, with the
true and false vacuum regions both finite, suggests that it
can contribute not only to the nucleation of a true vacuum bubble within
a false vacuum background, but also to the nucleation of a false vacuum
bubble within a true vacuum background, with the rate for the latter
process obtained from that of the former by making the substitution
$\phi_f \rightarrow \phi_t$ \cite{truedecay}. To leading order, the ratio of
these two rates is \begin{equation} {\Gamma_{t\rightarrow f} \over
\Gamma_{f\rightarrow t}}
= e^{S_E(\phi_t) -S_E(\phi_f)}
= \exp\left[ -{3M_{\rm Pl}^4 \over 8}\left({1\over V(\phi_t)} -
{1\over V(\phi_f)}\right) \right]
\end{equation}
The continued nucleation and expansion of bubbles of one vacuum within
the other will result in a spacetime that is a rather inhomogeneous
mixture of the two vacua. There is an intriguing thermal
interpretation of this mixture if $V(\phi_f) - V(\phi_t)
\ll (V(\phi_f) + V(\phi_t))/2 \equiv \bar V$, so that the geometry of space is
approximately the same in the regions of either vacuum, with a Hubble parameter
\begin{equation}
\bar H \approx \sqrt{8\pi \bar V \over 3 M_{\rm Pl}^2}
\end{equation}
It seems plausible that the fraction of space contained in each of the
vacua might tend to a constant, with the nucleation of
true vacuum bubbles in false vacuum regions being just balanced by the
nucleation of false vacuum bubbles in true vacuum regions. For such an
equilibrium to hold, the volumes $\Omega_f$ and $\Omega_t$ of false
and true vacuum must satisfy
\begin{equation} { \Omega_f \over \Omega_t} =
{\Gamma_{t\rightarrow f} \over \Gamma_{f\rightarrow t}} \approx
e^{-\Omega_{\rm hor} [V(\phi_f) - V(\phi_t)]/T_H }
\label{rateratio}
\end{equation}
where the horizon volume $\Omega_{\rm hor}= (4\pi/3) \bar H^{-3}$ and the
Hawking temperature $T_H = \bar H/2\pi$. If we view the de Sitter space as
being somewhat analogous to an ensemble of quasi-independent horizon volumes in
a thermal bath, then this leading contribution to the volume ratio is
essentially a Boltzmann factor.
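As a consistency check on this interpretation, the exponent in
Eq.~(\ref{rateratio}) follows directly from the actions given above: to first
order in $V(\phi_f)-V(\phi_t)$,
\begin{equation}
S_E(\phi_t)-S_E(\phi_f) = -{3M_{\rm Pl}^4 \over 8}\,
{V(\phi_f)-V(\phi_t)\over V(\phi_f)V(\phi_t)}
\approx -{\Omega_{\rm hor}\over T_H}\,\left[ V(\phi_f)-V(\phi_t)\right]
\end{equation}
since $\Omega_{\rm hor}/T_H=(8\pi^2/3)\bar H^{-4}=3M_{\rm Pl}^4/8\bar V^2$.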
The zero mode corrections to the nucleation rate are consistent with this
thermodynamic picture. Their effect is to multiply the ratio in
Eq.~(\ref{rateratio}) by
\begin{equation}
{{\cal V}(G/H_f) \over {\cal V}(G/H_t)} \, {J_f^{G/H_f} \over
J_t^{G/H_t} }
= \left ( {\bar H\over \sqrt{3}\,\pi} \right)^{N_t-N_f} \,
{{\cal V}(G/H_f) \over {\cal V}(G/H_t)}\,
\left[{ \det\left[ (\Omega_{\rm hor}T_H /2\pi) \left(\phi_f^\dagger T_i T_j
\phi_f\right) \right]
\over \det\left[ (\Omega_{\rm hor}T_H/2\pi )\left(\phi_t^\dagger T_i
T_j \phi_t \right)\right] }\right]^{1/2}
\label{ratioeq}
\end{equation}
where $N_f$ and $N_t$ are the number of internal symmetry zero modes in the false
and true vacua, respectively. We recognize the dimensionless ratio on the
right hand side as the ratio of two classical partition functions of the
form
\begin{equation}
\int {d^Nz d^Np\over (2\pi)^N} e^{-{\cal H}_z/T_H} =
\int d^N z \left[ (\Omega_{\rm hor}T_H /2\pi)
\left(\phi^\dagger T_i T_j \phi \right)\right]^{1/2}
\end{equation}
that follow from the effective Lagrangian
\begin{equation}
L_z = {1\over 2} \Omega_{\rm hor} \left(\phi^\dagger T_i T_j \phi
\right) \, \dot z_i \dot z_j
\end{equation}
that describes the collective coordinates dynamics for a horizon volume
with spatially uniform scalar field $\phi$.
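The equality of the two sides in the partition function above can be verified
directly: with the Hamiltonian ${\cal H}_z={1\over 2}\left(M^{-1}\right)_{ij}p_ip_j$
obtained from $L_z$ by the usual Legendre transform, where
$M_{ij}=\Omega_{\rm hor}\left(\phi^\dagger T_iT_j\phi\right)$, the Gaussian
momentum integration gives
\begin{equation}
\int {d^Np\over (2\pi)^N}\, e^{-{1\over 2T_H}(M^{-1})_{ij}p_ip_j}
=\left({T_H\over 2\pi}\right)^{N/2}\left(\det M\right)^{1/2}
=\det\left[ {\Omega_{\rm hor}T_H\over 2\pi}\left(\phi^\dagger
T_iT_j\phi\right)\right]^{1/2}
\end{equation}
which is precisely the integrand appearing on the right-hand side.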
The presence of a dimensionful prefactor in Eq.~(\ref{ratioeq}) is required
by the differing numbers of zero modes about the true and false vacua, which
implies a dimensional mismatch between the functional determinants over the
nonzero eigenvalue modes. This suggests that the factor of $(\bar
H/\sqrt{3}\, \pi)^{N_t-N_f}$ should, like the functional determinants
themselves, be
understood as related to the first quantum corrections to the vacuum
energies.
\section{Concluding Remarks}
We have seen that when a metastable false vacuum decays to a true
vacuum that breaks some of the internal symmetries of the false
vacuum, the presence of $N>0$ zero modes about the bounce
solution can lead to an enhancement of the bubble nucleation rate.
In a theory characterized by an overall scalar coupling $\lambda$,
the zero-temperature, quantum mechanical, tunneling
rate is increased by a factor of order $\lambda^{-N/2} \sim B^{N/2}$.
A straightforward extension of our methods to finite temperature
thermal tunneling shows that, although the $\lambda$-dependence is
changed, the enhancement is still of order $B^{N/2}$. Since the
nucleation rate falls exponentially with $B$, we therefore have
the curious situation that the enhancement is greatest when the
overall rate is smallest.
These results may be of particular interest for the symmetry-breaking
phase transitions that arise in the context of a grand unified
theory. For any given nucleation rate, the numerical effect of the
zero mode corrections will, of course, almost always be negligible
compared to the uncertainties due to the undetermined couplings in the
scalar field potential. The zero mode effects could, however, be
significant when there are competing decays to vacua with different
degrees of symmetry breaking, such as are encountered in many
supersymmetric models.
\vskip 1cm
\centerline{\large\bf Acknowledgments}
\vskip 5mm
A.K. and E.W. would like to thank the Aspen Center for Physics, where
part of this work was done. This work was supported in part by the
U.S. Department of Energy. K.L. is supported in part by the NSF
Presidential Young Investigator program.
|
\section{ Introduction}
Gamma ray bursts pose two sets of problems. The first is to account for the
required large and sudden energy release. The prime candidate here is the
formation of a compact object or the merger of a compact binary: this can
trigger the requisite energy release (few $10^{51}$ erg s$^{-1}$ for an
isotropic burst at cosmological distances), with characteristic dynamical
timescales as short as milliseconds. The second problem is how this energy is
transformed into a relativistically-outflowing plasma able to emit intense
gamma rays with a nonthermal spectrum.
The literature on the second of these problems is already extensive. There
have, in particular, been detailed calculations on the behavior of
relativistic winds and fireballs. We have ourselves, in earlier papers (e.g.
Rees \& M\'esz\'aros~, 1992, 1994, M\'esz\'aros~, Laguna \& Rees, 1993, M\'esz\'aros~, Rees \&
Papathanassiou, 1994 [MRP94], Papathanassiou \& M\'esz\'aros~, 1996; also
Paczy\'nski, 1990, Katz, 1994,
Sari, Narayan \& Piran, 1996) addressed the physical processes in relativistic
winds and fireballs. Motivating such work is the belief that compact objects
can indeed generate such outflows. There have, however, been relatively few
attempts to relate the physics of the outflow to a realistic model of
what might energize it. (Among these are early suggestions by Paczy\'nski, 1991,
M\'esz\'aros~ \& Rees, 1992, Narayan, Paczy\'nski \& Piran, 1992, Thompson, 1994, and
Usov, 1994). These have involved either the reconversion of a burst of
neutrino energy into pairs and gamma rays, or else strong magnetic fields.
Although the former model cannot be ruled out, the beamed neutrino annihilation
luminosity being marginal in black holes with a disrupted neutron star torus
(Jaroszy\'nski, 1996), initial calculations for neutron star mergers are
discouraging (e.g. Ruffert et.al. 1997); we here focus attention on magnetic
mechanisms. In the present paper we try to incorporate our earlier idealized
models into a more realistic context, and consider some of the distinctive
consequences of an outflow which is directional rather than isotropic.
\section{ Magnetically Driven Outflows}
Magnetic fields must be exceedingly high in order to transform rotational
energy quickly enough into Poynting flux. At the 'base' of the flow, at a
radius $r_l \sim 10^6-10^7$ cm, where the rotation speeds may be of order $c$,
the field strength must be at least $\sim 10^{14}$ G to transmit $10^{51}$ ergs
in a few seconds. These fields are of course higher than those in typical
pulsars. However, as several authors have noted, the field could be amplified
by differential rotation, or even by dynamo action (e.g. Thompson \& Duncan,
1993). If, for instance, a single fast-rotating star collapses or two neutron
stars spiral together producing (even very transiently) a differentially
rotating disc-like structure, it need only take a few dynamical timescales for
the field to amplify to the requisite level (which is, incidentally, at least
two orders of magnitude below the limit set by the virial theorem).
The most severe constraint on any acceptable model for gamma-ray bursts is that
the baryon density in the outflow must be low enough to
permit attainment of the requisite high Lorentz factors: the absolute minimum,
for wind-type models invoking internal dissipation, is $\Gamma \sim 10^2$;
impulsive models depending on interaction with an external medium require
$\Gamma \sim 10^3$. Since the overall efficiency is unlikely to exceed
$10^{-1}$, this requires any entrained baryons to acquire at least $10^3$
times their 'pro rata share' of the released energy. There are of course other
astrophysical situations where an almost baryon-free outflow occurs -- for
instance the wind from the Crab pulsar,
which may contain essentially no baryons. However, this is not too
surprising because the internal dissipation in pulsars is far too low to
generate the Eddington luminosity. On the other hand, in GRBs the overall
luminosities are $\mathrel{\mathpalette\simov >} 10^{13}L_{Ed}$. It is hardly conceivable that
the fraction channeled into thermal radiation is so low that radiation pressure
doesn't drive a baryon outflow at some level. The issue is whether this level
can be low enough to avoid excess 'baryon poisoning'.
When two neutron stars coalesce, some radiation-driven outflow will be induced
by tidal dissipation before coalescence (M\'esz\'aros~ \& Rees 1992). When the neutron
stars have been disrupted, bulk differential rotation is likely to lead to
more violent internal dissipation and a stronger radiation-driven outflow.
Almost certainly, therefore, some parts of the outflow must, for some reason,
be less accessible to the baryons.
\section{ Axisymmetric Debris and Jets Around Black Holes}
One such reason might be that the bursts come from a black hole orbited by
a disk annulus or torus (e.g. Paczy\'nski 1991, Levinson \& Eichler, 1993).
This is of course what happens when the
central part of the rotating gaseous configuration has collapsed within
its gravitational horizon; otherwise, there is no reason why material should
avoid the centre -- indeed, there is more likely to be a central density peak
in any non-collapsed configuration supported largely by rotation (either a
single star or a compact binary after disruption).
Such a configuration could come about in two ways:
\break
(i) The spinning disk that forms when two neutron stars merge (e.g. Davies
et.al. 1994), probably exceeds the maximum permitted mass for a single neutron
star; after viscosity had redistributed its angular momentum, it would evolve
into a black hole (of 2-3 $M_\odot$) surrounded by a torus of mass
about 0.1 $M_\odot$ (Ruffert et.al. 1996).
\break
(ii) The system may result from coalescence of neutron star and black hole
binary. If the hole mass is $\mathrel{\mathpalette\simov <} 5 M_\odot$, the neutron star would be
tidally disrupted before being swallowed, leading to a system resembling (i),
but with a characteristic radius larger by a factor two and a torus mass of
$\sim 1$ instead of $0.1 M_\odot$.
\break
Numerical simulations yield important insights into the formation of such
configurations (and the relative masses of the hole and the torus);
but the Lorentz factors of the outflow are sensitive to much smaller
mass fractions than they can yet resolve.
It is, however, a general feature of axisymmetric flows around black holes that
the region near the axis tends to be empty. This is because the hole can swallow
any material with angular momentum below some specific value: within a roughly
paraboloidal 'vortex' region around the symmetry axis (Fishbone \&
Moncrief, 1976), infall or outflow are the only options. Loops of magnetic
field anchored in the torus can enter this region, owing to 'buoyancy' effects
operating against the effective gravity due to centrifugal effects at the
vortex walls, just as they can rise from a flat gravitating disk. These
would flow out along the axis. There can, in addition, be an ordered poloidal
field threading the hole, associated with a current ring in the torus. This
ordered field (which would need to be the outcome of dynamo amplification rather
than just differential shearing) can extract energy via the Blandford-Znajek
(1977) effect. In the latter the role of the torus is mainly to anchor the
field: the power comes from the hole itself, whose spin energy can amount to
$\sim 10^{53}$ erg. Irrespective of the detailed field structure, there is good
reason to expect any magnetically-driven outflow to be less loaded with baryons
along the rotation axis than in other directions. Field lines that thread
the hole may be completely free of baryons.
\section{ Baryon-Free Outflows}
As a preliminary, we consider a magnetically-driven outflow in which baryonic
contamination can be neglected. In the context of many models this may seem an
unphysical limiting case: the difficulty is generally to understand how the
baryonic contamination stays below the requisite threshold. But the comments
in the previous section suggest that such a baryon-free limit is not
impossible for the part of the flow that emanates from near a black hole and
is channelled along directions aligned with the rotation axis.
There have been earlier discussions (dating back to Phinney, 1982) of
relativistic MHD flows from black holes in AGN.
In our present context, the values of L/M are larger by at
least ten orders of magnitude. This means that the effects of radiation pressure
and drag are potentially much stronger relative to gravity; also, pair
production due to photon-photon encounters is vastly more important.
For an outflow of magnetic luminosity $L$ and $e^\pm\gamma$ luminosity
$L_w \mathrel{\mathpalette\simov <} L$ channeled into jets of opening angle $\theta$ at a lower radius
$r_l=10^6 r_6$ cm the initial bulk Lorentz factor is $\Gamma_l \sim L/L_w$,
and the comoving magnetic field, temperature and pair density are
$B'_l \sim 2.5\times 10^{14} L_{51}^{1/2} \Gamma_l^{-1} r_6^{-1}\theta^{-1}$ G,
$T'_l \sim 2.5\times 10^{10} L_{51}^{1/4} \Gamma_l^{-3/4} r_6^{-1/2}
\theta^{-1/2}$ K and $n'_l \sim 4\times 10^{32} L_{51}^{3/4} \Gamma_l^{-9/4}
r_6^{-3/2} \theta^{-3/2}$ cm$^{-3}$ (primed quantities are comoving).
Unless $\Gamma_l >>1$ the jet will be loaded with pairs, very
opaque and in local thermal equilibrium. It behaves as a relativistic gas
which is ``frozen in" to the magnetic field, and
expands with $T' \propto r^{-1}$. The lab-frame transverse field
$B \propto r^{-1}$, and the comoving $B'\sim B/\Gamma$. The comoving
energy density (predominantly magnetic) is $\epsilon'\propto r^{-2}\Gamma^{-2}$,
and the pair density is $n' \propto T'^3 \propto r^{-3}$ so $\Gamma \propto
n'/\epsilon' \propto r$, or $\Gamma \sim \Gamma_l (r/r_l)$.
When the comoving temperature approaches $m_e c^2/k$ the pairs start to
annihilate and their density drops exponentially, but as long as the scattering
depth $\tau'_T > 1$ the annihilation photons remain trapped and continue to
provide inertia, so $T' \propto r^{-1}$, $\Gamma \propto r$ persists until
$T'_a \sim 0.04 m_ec^2 \simeq$ 17 keV at a radius $r_a$, where
$\tau'_T (r_a) \sim 1$. Between $r_l$ and $r_a$ this leads to $(r_a/r_l) \sim
(T'_l/T'_a) \simeq 10^2 L_{51}^{1/4} \Gamma_l^{-3/4} r_6^{-1/2} \theta^{-1/2}$,
with $\Gamma_a \sim \Gamma_l (r_a/r_l) \simeq 10^2 L_{51}^{1/4} \Gamma_l^{1/4}
r_6^{-1/2} \theta^{-1/2}$. At $r_a$ the adiabatic density $n'_{a,ad} \sim n'_l
(r_a/r_l)^3 \simeq 4\times 10^{26}$ cm$^{-3}$ is mostly photons, while the
pair density from Saha's law is $n'_a \sim 1.5 \times 10^{18} \Gamma_l
r_6^{-1}$ cm$^{-3}$, the photon-to-pair ratio being $\sim 10^8$. The annihilation photons
streaming out from $r_a$ appear, in the source frame, as photons of energy
around $\Gamma_a 3 k T'_a \sim 5 L_{51}^{1/4} \Gamma_l^{1/4} r_6^{-1/2}
\theta^{-1/2}$ MeV.
Beyond $r_a$ the lab-frame pressure $B^2 \propto r^{-2}$ but
the inertia is drastically reduced, being provided only by the surviving
pairs, which are much fewer than the free-streaming photons.
In the absence of any restraining force, $\Gamma \propto n'/\epsilon'
\propto n'/B'^2 \propto B^2/n'$, and the gas would accelerate much faster than
the previous $\Gamma \propto r$. (The pair density is still above that needed
to carry the currents associated with the field, analogous to the
Goldreich-Julian density, so MHD remains valid). However, the Compton drag
time remains very short, since even after $\tau'_T < 1$ when most photons
are free-streaming, the pairs experience multiple scatterings with a small
fraction of the (much more numerous) photons for some distance beyond $r_a$.
One can define an ``isotropic" frame moving at $\Gamma_i \propto r$, in which
the annihilation photons are isotropic. In the absence of the magnetic pressure,
the drag would cause the electrons to continue to move with the radiation at
$\Gamma_i \propto r$. The magnetic pressure, however, acting against a much
reduced inertia, will tend to accelerate the electrons faster than this, and
as soon as $\Gamma \mathrel{\mathpalette\simov >} \Gamma_i$, aberration causes the photons to be
blueshifted and incident on the jet from the forward direction, so the
drag acts now as a brake. In the isotropic frame the jet electron energy is
$\gamma=\Gamma/\Gamma_i$ and its drag timescale is that needed for it to
encounter a number of photons whose cumulative energy after scattering equals
the energy per electron, $t_{dr,i} \sim m_e c^2/(u_{ph,i}\sigma c \gamma) =
(m_e c^2 4\pi r^2 \Gamma_i^3 / L \sigma_T \Gamma)$. In the lab frame this is
$\Gamma_i$ times longer, and the ratio of the drag time to the expansion time
$r/c$ must equal the ratio of the kinetic flux to the Poynting flux, $n'_j
m_ec^2 \Gamma^2 / [(B_l^2/4\pi)(r_l/r)^2]$, where $\sigma$ is the scattering
cross section and $n'_j$ is the comoving pair density in the jet. This
is satisfied for $\Gamma \sim \Gamma_a$ at $r_a$, and since the drag time is
much shorter than the annihilation time, the pair number is approximately
frozen, and $\Gamma \propto r^{5/2}$ for $r > r_a$. The upscattered photons
will, in the observer frame, appear as a power law extension of the
annihilation spectrum, with photon number index -2, extending from
$\sim 0.12 \Gamma_a m_ec^2$ to $\mathrel{\mathpalette\simov <} \Gamma_j m_ec^2$.
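The $r^{5/2}$ law can be checked as follows (a sketch, dropping factors of order
unity): for frozen pair number the comoving density falls as $n'_j\propto
r^{-2}\Gamma^{-1}$, and with $\Gamma_i\propto r$ the lab-frame drag time scales as
$\Gamma_i t_{dr,i}\propto r^2\Gamma_i^4/\Gamma\propto r^6/\Gamma$. The condition
stated above then reads
$$
\frac{c\,t_{dr}}{r}\propto\frac{r^5}{\Gamma}~\propto~
\frac{n'_jm_ec^2\Gamma^2}{(B_l^2/4\pi)(r_l/r)^2}\propto\Gamma
~~\Longrightarrow~~\Gamma\propto r^{5/2}
$$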
The acceleration $\Gamma\propto r^{5/2}$ abates when the annihilation
photons, of isotropic frame energy $0.12 m_ec^2(r_a/r)$, are
blueshifted in the jet frame to energies $\mathrel{\mathpalette\simov >} m_ec^2$. Their directions are
randomized by scattering, and collisions at large angles above threshold lead
to runaway $\gamma\gamma \to e^\pm$ (the compactness parameter still being
large). This occurs when $\Gamma_p \sim 10^7 L_{51}^{1/4} \Gamma_l^{1/4}
r_6^{-1/2} \theta^{-1/2}$, at $r_p \sim 10^{10} L_{51}^{1/4} \Gamma_l^{-3/4}
r_6^{1/2} \theta^{-1/2}$ cm. Thereafter the threshold condition implies
$\Gamma \propto r^2$, until the inertial limit $\Gamma_{in}\sim 10^9
L_{51}^{1/4} \Gamma_l^{1/4} r_6^{1/2} \theta^{-1/2}$ is reached at $r_{in}\sim
10^{11} L_{51}^{1/4} \Gamma_l^{-3/4} r_6 \theta^{-1/2}$ cm. Besides going into
pairs, a reduced fraction of the Poynting energy may continue going into a
scattered spectrum of number slope -2 up to $\Gamma_{in} m_ec^2$.
However, in an outflow of $L\sim 10^{51} L_{51}$ erg s$^{-1}$ maintained for
a time $t_w\sim$ few seconds, the relativistic jet would have to
push its way out through an external medium, with consequent dissipation at
the front end, as in double radio sources. Except for a brief initial transient
($\ll 1$ s in observer frame) the shock will be far outside the characteristic
radii discussed so far. The external shock will be ultrarelativistic, and
slows down as $r^{-1/2}$. The jet material, moving with $\Gamma \gg 1$,
therefore passes through a (reverse) shock, inside the contact
discontinuity, which strengthens as $r^{1/2}$ as the external shock slows down.
Since it is highly relativistic and the medium highly magnetized, this reverse
shock can emit much of the overall burst energy on a time $\sim t_w$.
When $t\sim t_w$ the external shock has reached $r_d \sim 10^{16}
L_{51}^{1/4} n_o^{-1/4} t_w^{1/2} \theta^{-1/2}$ cm, where $n_o$ is the
external density, and the Lorentz factor has dropped to $\sim 10^3
L_{51}^{1/8}n_o^{-1/8} t_w^{-1/4} \theta^{-1/4}$. This, as well as the
expected radiation of the shocks at (and after) this stage is rather
insensitive to whether the initial Lorentz factor is indeed $\sim 10^6-10^8$,
or whether baryon loading has reduced it to $\sim 10^3$. After this the
flow is in the impulsive regime and produces an ``external" shock GRB on an
observer timescale $r_d/c\Gamma^2 \simeq t_w$, as in, e.g. M\'esz\'aros~ \& Rees,
1993, MRP 1994.
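These deceleration scalings follow from a simple energy-conservation sketch
(order-unity factors dropped): the jet of solid angle $\sim\pi\theta^2$ carries
energy $Lt_w$, deceleration sets in when the swept-up external rest energy boosted
by $\Gamma^2$ matches this, and the observer-time condition $r_d\sim ct_w\Gamma^2$
closes the system:
$$
\frac{\pi\theta^2}{3}\,r_d^3\,n_om_pc^2\,\Gamma^2\sim Lt_w~,~~~r_d\sim ct_w\Gamma^2
~~\Longrightarrow~~
r_d\propto\left(\frac{Lt_w^2}{n_o\theta^2}\right)^{1/4}~,~~
\Gamma\propto\left(\frac{L}{n_o\theta^2 t_w^2}\right)^{1/8}
$$
in agreement with the exponents quoted above.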
\section{Radiation from High $\Gamma$ Magnetically Driven Jets}
If the jet is indeed baryon-free, and therefore has a Lorentz factor $\sim
10^7-10^9$, an extra mechanism can tap the Poynting energy along its length
(before it runs into external matter) -- namely, interaction of pairs in the
jet and annihilation photons along the jet with an ambient radiation field.
In our context, this would be radiation emitted by the torus or
baryon-loaded wind that is expected to flow outward, mainly in directions away
from the rotation axis, forming a funnel that surrounds the jet.
The ambient radiation causes an additional drag, which limits the
terminal Lorentz factor of the jet below the values calculated in \S 4 in
those regions where this is important. As a corollary, the Poynting
flux is converted into directed high energy radiation, after pair creation in
the jet by interaction with the annihilation radiation.
We cannot generally assume that the ambient radiation field is uniform across
the whole jet, because it may not be able to penetrate to the axis, but the
boost in photon energy can in principle be $\mathrel{\mathpalette\simov <} \Gamma^2$. This is similar
to what is discussed in AGN but with more extreme Lorentz factors and radiation
densities. Since the ambient radiation has a luminosity $\mathrel{\mathpalette\simov >} L_{Ed}$, and the
burst luminosity is larger by $10^{12}-10^{13}$, this mechanism is significant
only when the jet Lorentz factor has the very high values $\mathrel{\mathpalette\simov >} 10^6$
characteristic of these baryon-free outflows.
The photons from the sides of the funnel emitted into the jet have
energies $x_f=E_\gamma /m_ec^2\sim 1/20$.
When they pass transversely into the jet, and are scattered by
``cold" pairs, the scattering will be in the K-N regime for any $\Gamma >20$.
The electron will therefore recoil relativistically, acquiring a Lorentz factor
$\sim \Gamma/20$ (in the comoving frame), and will then cool by synchrotron
emission, isotropic in the jet frame. This radiation will, in the observer
frame, be beamed along the jet and blueshifted by $\Gamma$. One readily sees
that the net amplification (energy/incident energy) is $\sim \Gamma^2$.
If the external photons instead interact with one of the beamed gamma-rays of
energy $E$ in the source frame (typically near threshold for pair production)
the resultant pair will have a Lorentz factor of order $\Gamma / (E/m_ec^2)$
in the comoving frame, and will again emit synchrotron, yielding the same
amplification factor as before.
The interaction of the ambient photons with the beamed annihilation
photons in the jet leads to a complicated cascade, where the jet Lorentz
factor must be calculated self-consistently with the drag exerted by the
ambient photons. A schematic cascade at $r\sim r_p$ where $\Gamma_p \sim
10^7$, would be as follows. \break
A) Interactions near $r_p$ between funnel photons of lab frame energy $x_f
\sim 10^{-1}$ and beamed annihilation jet photons of lab frame energy $x_1
\sim x_a \sim 10$ (produced by pair recombination in the outflow) lead to
$e^\pm$ of energy $\gamma_2 \sim x_1 \sim 10$ (lab), or $\gamma'_2 \sim
\Gamma_p/ \gamma_2 \sim 10^6$ (comoving jet frame). In the comoving magnetic
field $B' \sim 6\times 10^3$ G these pairs produce synchrotron photons of
energy $x'_2 \sim 10^1$ (comoving), or $x_2 \sim x'_2 \Gamma_p 2\times 10^8$
(lab). \break
B) Photons $x_2$ interact with ambient photons $x_f$ to produce pairs of energy
$\gamma_3 \sim x_2 \sim 2\times 10^8$ (lab), or $\gamma'_3 \sim \gamma_3 /
\Gamma_p \sim 2\times 10^1$ (comoving). In the same comoving field
these produce synchrotron photons $x'_3 \sim 10^{-8}$ (comoving) or $x_3\sim
10^{-1}$ (lab). \break
C) Photons $x_3 \sim 10^{-1}$ are below threshold for pair production with
funnel photons $x_f\sim 10^{-1}$, ending the cascade. The resulting photons
have lab energies $E_3 \sim x_3 m_ec^2 \sim 50 L_{51}^{1/4} \Gamma_l^{1/4}
r_6^{-1/2}\theta^{-1/2}$ keV.
Of course, in a more realistic calculation the self-consistent jet Lorentz
factor may vary across the jet, and one needs to integrate over height.
However, this simple example illustrates the types of processes involved.
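For reference, the synchrotron energies used in steps A and B follow (up to
factors of order unity) from the standard estimate
$$
x'_{\rm syn}\sim \frac{B'}{B_{cr}}\,\gamma'^2~,~~~~~
B_{cr}=\frac{m_e^2c^3}{e\hbar}\simeq 4.4\times 10^{13}~{\rm G}~;
$$
e.g. in step B, $\gamma'_3\sim 2\times 10^1$ in $B'\sim 6\times 10^3$ G gives
$x'_3\sim 10^{-8}$, as quoted above.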
\section{ Discussion}
From the scenario above it follows that Poynting-dominated (or magnetically
driven) outflows from collapsed stellar objects would lead, if viewed along the
baryon-free jet core, to a $\gamma$-ray component resembling the GRB from the
external shocks in 'impulsive fireball' models. However the reverse shock is
here relativistic, and may play a larger role than in impulsive models.
Viewed at larger angles, the GRB component would resemble that from internal
shocks (see below). The characteristic duration $t_w$ and the (possibly
complicated) light curve are controlled by the details of the Poynting
luminosity production as a function of time, e.g. the cooling or accretion time
of the debris around a centrally formed black hole.
Besides the 'standard' GRB emission, Poynting dominated outflows viewed along
the jet core would be characterized by additional radiation components from the
jet itself (\S\S 4, 5). The annihilation component peaks at $E_1 \sim 5
L_{51}^{1/4} \Gamma_l^{1/2} r_6^{-1/2} \theta^{-1/2}$ MeV (or 50 MeV if
$\theta \sim 0.1$, $\Gamma_l\sim 10$), with a power law extension of photon
number slope -2 going out to $\mathrel{\mathpalette\simov <} \Gamma_p m_ec^2 \sim 5 L_{51}^{1/4}
\Gamma_l^{1/2} r_6^{-1/2} \theta^{-1/2}$ TeV, if ambient photon drag limits
$\Gamma$ to $\mathrel{\mathpalette\simov <} \Gamma_p$ (otherwise, it could extend to $\Gamma_{in} m_ec^2
\sim 500$ TeV). If, as argued by Illarionov \& Krolik (1996) for AGN, an outer
skin of optical depth unity protects the core of the jet from ambient photons,
this annihilation component could have a luminosity not far below the GRB MeV
emission, and the $\Gamma$ of the jet core would not be limited by ambient
drag, only that of the skin. However, the skin depth is hard to estimate
without taking the drag self-consistently into account. The cascade from the
interaction of ambient (funnel) photons with jet photons leads to another jet
radiation component. The simplified estimate of \S 5 gives $E_3 \sim 50
L_{51}^{1/4} \Gamma_l^{1/4} r_6^{-1/2}\theta^{-1/2}$ keV as a characteristic
energy, with a power law extension. The amplification factor of the cascade is
$A_c \mathrel{\mathpalette\simov <} \Gamma^2 \mathrel{\mathpalette\simov <} 10^{14} L_{51}^{1/2} \Gamma_l^{1/2} r_6^{-1}
\theta^{-1}$, and since the funnel emits $\sim L_{Ed}$, the luminosity could
be a (possibly small) fraction of $L$. The duration of these components
is $\sim t_w$, preceding the normal MeV burst by $\sim t_w$, and it may be
more narrowly beamed, arising from regions with $\Gamma_p \sim 10^7$, as
opposed to $\Gamma \mathrel{\mathpalette\simov <} 10^3$ for the MeV components, and $\Gamma_a \sim 10^2$
for the peak annihilation component.
Gamma-rays will be detected for a time $t_w$ (longer if external shocks are
expected). There may also be prolonged after effects, e.g. M\'esz\'aros~ \& Rees, 1997,
with X-ray and optical fluxes decreasing as a power law in time. In a rotating
black hole - torus system with a super-Eddington outflow the baryon
contamination will be minimal along the rotation axis and will increase at
larger angles to it. A narrow, largely baryon-free jet core such as discussed
in \S\S 4 and 5, would be surrounded by a debris torus producing a
baryon-loaded, slower outflow acting as a funnel, which injects X-ray seed
photons into the jet. This slower outflow would carry a fraction of the GRB
luminosity $L$ in kinetic energy form. For a slow outflow $\Gamma
\sim 10$, this kinetic energy can be reconverted into nonthermal X-rays after
a time $t_x \sim$ day if it is shock-heated at a radius $r\sim 10^{15}$ cm,
either by Alfv\'en wave heating, or by interaction with a pre-ejected
subrelativistic shell of matter of $\sim 10^{-3} M_\odot$ as has been detected
in SN 1987A. This could lead to the substantial X-ray afterglow detected $\sim$
days later in GRB 970228 (Costa, et al., 1997).
The observed time structure and luminosity of the gamma-ray burst would depend
on the angle between the rotation axis and the line of sight. Viewed obliquely,
the outflow has $\Gamma \mathrel{\mathpalette\simov <} 10$ and only an X-ray transient with some
accompanying optical emission would be seen. At smaller angles to the jet axis,
outflows with $\Gamma \mathrel{\mathpalette\simov <} 10^2$ would also be seen (which might originate at
$r > 10^{10}$ cm by entrainment of slower baryonic matter by the faster jet,
or they might already originate closer in), and the predominant radiation
would arise from internal shocks (Rees \& M\'esz\'aros~, 1994, Papathanassiou \& M\'esz\'aros~,
1996), which can have complex, multiple-peaked light curves. Closer to
the rotation axis, outflows with $\Gamma\sim 10^3$ may dominate, either at large
radii or already lower down, and radiation from external (deceleration) shocks
would be prominent (e.g. MRP94). Nearest to the rotation axis, if there is
indeed an almost baryon-free core to the jet, the annihilation radiation
power law component, the relativistic reverse shock radiation and the cascade
process (\S\S 4,5) yield significant extra contributions, which arrive ahead
of the external shock burst. The luminosity function for the burst population
may in part (or even entirely) be determined by the beam angle distribution,
jet edge effects and the observer angle relative to the jet axis. A time
variability would be expected from intermittency in the Poynting flux
extraction, or wobbling of the inner torus or black hole rotation axis, on
timescales down to $t_v \sim r_l/c \sim 10^{-3}- 10^{-4}$ s.
If the Poynting flux is derived from the accretion energy of a NS-NS remnant
torus of 0.1 $M_\odot$, and the gamma-rays are concentrated within an
angle $\theta \sim 10^o$, the efficiency of conversion of rest mass
into magnetic energy need only be $10^{-4}$ to generate a burst which (if
isotropic) would be inferred to have an energy of $10^{51}$ ergs, and only
$10^{-2}$ to simulate a $10^{53}$ erg isotropic burst. For a BH-NS merger the
torus may be $\sim 1 M_\odot$, and the corresponding magnetic efficiencies
required are $10^{-5}$ and $10^{-3}$ respectively. Thus, even bursts whose
inferred isotropic luminosities are $10^{52}-10^{53}$ erg, either due to high
redshifts or exceptional observed fluxes, require only very modest efficiencies.
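For illustration, the first of these figures follows from simple bookkeeping: a
$0.1 M_\odot$ torus has rest energy $\sim 2\times 10^{53}$ erg, and a burst beamed
into two cones of half-angle $\theta\sim 10^o\simeq 0.17$ rad appears brighter,
when assumed isotropic, by the factor $4\pi/2\pi\theta^2\simeq 70$, so the required
efficiency is
$$
\epsilon\sim\frac{10^{51}~{\rm erg}}{70\times 2\times 10^{53}~{\rm erg}}\sim 10^{-4}
$$
with the other quoted values following in the same way.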
Further reductions in the required efficiencies are possible if the Poynting
flux is due to the Blandford-Znajek mechanism, which can extract $\mathrel{\mathpalette\simov <}
10^{-1}$ of a near-maximally rotating Kerr black hole rest mass (as would be
expected from a NS-NS merger), leading to equivalent isotropic energies
$\sim 10^{53}(4\pi/2\pi\theta^2) \gg 10^{53}$ erg. Even without the latter,
somewhat speculative, possibility, it is clear that jet-like Poynting flows
from tori around black holes can produce even the most
intense bursts with comfortably low efficiencies. The detailed burst properties
-- particularly the rapid variability and bursts seen along the jet core, the
blueshifted annihilation and high energy cascade photons-- can help to pin
down the parameters of the model, while the afterglow can provide information
on the dynamics and mass flux in directions away from the jet axis.
\acknowledgements
We thank NASA NAG5-2857, NATO CRG-931446 and the Royal Society for support,
the Institute for Advanced Studies, Princeton, for its hospitality, and
Ira Wasserman for useful comments.
|
\section{Introduction}
The statistical mechanics of random surfaces has been of much interest over the
years in the random geometry approach to two-dimensional quantum gravity and
lower dimensional string theory. These models can be solved non-perturbatively
by viewing discretized Riemann surfaces as Feynman graphs of $N\times N$ matrix
models (see \cite{fgz} for a review). The large-$N$ limit of the matrix model
exhibits phase transitions which correspond to the continuum limit of the
dynamically triangulated random surface model and whose large-$N$ expansion
coincides with the genus expansion of the string theory. In this paper we will
study a class of matrix models originally proposed by Das et al. \cite{das} in
which the weighting of the coordination numbers of vertices (describing
intrinsic curvature) of the dynamical triangulation of the surface can be
varied. These ``curvature" matrix models have been solved exactly in the
large-$N$ limit for a Penner interaction potential by Chekhov and Makeenko
\cite{chekmak}, and more recently for an arbitrary potential by Kazakov,
Staudacher and Wynter \cite{kaz1,kaz1a} using group character expansion
methods.
There are many problems of interest which could be solved from the large-$N$
limit of such matrix models. Foremost among these are questions related to the
quantum geometry and fractal structure of the surfaces which contribute to the
string partition function. For instance, the string theory ceases to make sense
for string embedding dimensions $D>1$. However, there is no obvious
pathological reason for the existence of a barrier in the statistical
mechanical model at $D=1$, and it has been suggested \cite{polymer} that the
statistical mechanical model evolves into a different geometrical phase for
$D>1$, for instance a branched polymer (tree-like) phase, rather than the
stringy (intrinsically two-dimensional) phase. It was argued in \cite{das}
that curvature matrix models can represent such a transition by variation of
the weighting of the vertex coordination numbers. The trees are to be thought
of as connecting two-dimensional baby universes together in each of which the
usual $D<1$ behaviour is exhibited (see \cite{mak} for a review). Another
problem is the exact solution of two-dimensional quantum gravity with higher
curvature counterterms added to the action and the associated problem of the
existence of a phase transition from the gravitational phase to a flat phase of
the same string theory. By varying the vertex coordination numbers of a random
lattice to a flat one, the genus zero problem can be obtained as the
non-perturbative large-$N$ solution of an appropriate curvature matrix model
and it was demonstrated in \cite{kaz2} that there is no such phase transition.
Besides these problems, there are many other interesting physical situations
which require control over the types of surfaces that contribute to the random
sum.
The main subject of this paper is the curvature matrix model defined by the
partition function
$$ \refstepcounter{equnum}
Z_H[\lambda,t_q^*]=(2\pi\lambda)^{-N^2/2}\int
dX~~{\rm e}^{-\frac{N}{2\lambda}~{\rm tr}~X^2+N~{\rm tr}~V_3(XA)}
\label{partfn}\eqno (\thesection.\arabic{equnum}) $$
where
$$ \refstepcounter{equnum}
V_3(XA)=\frac{1}{3}(XA)^3
\label{V3XA}\eqno (\thesection.\arabic{equnum}) $$
and the integration is over the space of $N\times N$ Hermitian matrices $X$
with $A$ an invertible $N\times N$ external matrix. The Feynman diagram
expansion of (\ref{partfn}) can be written symbolically as
$$ \refstepcounter{equnum}
Z_H[\lambda,t_q^*]=\sum_{G_3}\prod_{v_q^*\in
G_3}\left(\lambda^{q/2}t_q^*\right)^{{\cal N}(v_q^*)}
\label{partgraphs}\eqno (\thesection.\arabic{equnum}) $$
where the sum is over all fat-graphs $G_3$ made up of 3-point vertices, the
weight associated to a vertex $v_q^*$ of the lattice dual to $G_3$ of
coordination number $q$ is
$$ \refstepcounter{equnum}
t_q^*=\frac{1}{q}\frac{{\rm tr}}{N}A^q~~~~~,
\label{dualweights}\eqno (\thesection.\arabic{equnum}) $$
and ${\cal N}(v_q^*)$ is the number of such vertices in $G_3$. This matrix
model therefore assigns a weight $t_q^*$ whenever $q$ 3-point vertices bound a
face of the associated surface discretization, and it thus allows one to
control the local intrinsic curvature $R_q=\pi(6-q)/q$ of the dynamical
triangulation. We will examine the structure of the solution at large-$N$ when
these dual weights are arranged so that the only triangulations which
contribute to the sum in (\ref{partgraphs}) are those whose vertices have
coordination numbers which are multiples of three.
There are several technical and physical reasons for studying such a model. The
method of exact solution developed in \cite{kaz1,kaz1a} is based on a treatment
of the large-$N$ limit of the Itzykson-Di Francesco character expansion formula
\cite{di}. This formula is most naturally suited to matrix models of random
surfaces whose vertices have even coordination numbers, and therefore the
analysis in \cite{kaz1,kaz1a,kaz2} was restricted to such situations. In the
following we will show that the large-$N$ limit of the Itzykson-Di Francesco
formula for the curvature matrix model (\ref{partfn}) can be used provided that
one arranges the group character sum carefully. This arrangement reflects the
discrete symmetries of the triangulation model that are not present in the
models with vertices of only even coordination number. We shall see that this
solution of the curvature matrix model actually implies a graph theoretical
equivalence between the dynamically triangulated model and certain even
coordination number models which were studied in \cite{kaz1,kaz1a}. We will
also show how to map the Hermitian matrix model (\ref{partfn}) onto a complex
one whose character expansion is especially suited to deal with coordination
numbers which are multiples of three and whose observables are in a one-to-one
correspondence with those of the even coordination number matrix models. As a
specific example, we solve the model explicitly for a simple power law
variation of the vertex weights (\ref{dualweights}) which agrees with expected
results from the random surface sums and which provides explicit insights into
the non-perturbative behaviour of observables of the curvature matrix model
(\ref{partfn}), and also into the phase structure of the Itzykson-Zuber model
\cite{iz} which is related to higher-dimensional Hermitian matrix models
\cite{fgz,mak,semsz}. There are several physical situations which are best
described using such dynamical triangulations \cite{polymer}. The analysis
which follows also applies to quite general odd coordination number models and
establishes a non-trivial application of the techniques developed in
\cite{kaz1,kaz1a}.
The organization of this paper is as follows. In section 2 we derive the
character expansion of the matrix model (\ref{partfn}) and discuss the
structure of the Itzykson-Di Francesco formula in this case. We also show that
certain symmetry assumptions that are made agree with the mapping of the model
onto a complex curvature matrix model which demonstrates explicitly how the
character sum should be taken. In section 3 we discuss the large-$N$ saddle
point solutions of these matrix models. We show that the Itzykson-Di Francesco
formula implies non-trivial correspondences between the Feynman graphs of the
matrix model (\ref{partfn}) and those of some models of random surfaces with
even coordination numbers. We also establish this correspondence more directly
using graph theory arguments. We analyse the critical behaviour of the matrix
model when all vertices are weighted equally and discuss the various relations
that exist among observables of the matrix models. We also demonstrate
explicitly using the Wick expansion of (\ref{partfn}) that the large-$N$
solution of section 2 is indeed correct. In section 4 we examine a simple
situation where the dual vertices of the dynamical triangulation are not
weighted equally. We show how this affects the large-$N$ saddle-point solution
of the matrix model and thereby establish the validity of our solution for
generic choices of the vertex coupling constants. Our results here may also be
relevant to the study of phase transitions in the Itzykson-Zuber model
\cite{mak,semsz}. In section 5 we briefly discuss the difficult problems which
occur when dealing with matrix models that admit complex-valued saddle-points
of the Itzykson-Di Francesco formula, and section 6 contains some concluding
remarks and potential physical applications of our analysis.
\section{Large-$N$ Character Expansion of the Partition Function}
We begin by describing the character expansion method for solving the matrix
model (\ref{partfn}) in the large-$N$ limit \cite{kaz1,kaz1a}. The external
field $A$ in (\ref{partfn}) explicitly breaks the invariance of the model under
unitary transformations $X\to UXU^\dagger$ which diagonalize the Hermitian
matrix $X$. Thus, unlike the more conventional matrix models with $A={\bf1}$
for which the dual vertices $v_q^*$ are all weighted equally \cite{fgz}, the
partition function (\ref{partfn}) cannot be written as a statistical theory of
the eigenvalues of $X$. The number of degrees of freedom can still, however, be
reduced from $N^2$ to $N$ by expanding the invariant function
$~{\rm e}^{\frac{N}{3}~{\rm tr}(XA)^3}$ in characters of the Lie group $GL(N,{\fam\Bbbfam C})$ as
\cite{kaz1}
$$ \refstepcounter{equnum}
~{\rm e}^{\frac{N}{3}~{\rm tr}(XA)^3}=c_N\sum_{\{h_i\}}{\cal X}_3[h]~\chi_{\{h_i\}}(XA)
\label{charexp}\eqno (\thesection.\arabic{equnum}) $$
where $c_N$ denotes an irrelevant numerical constant and
$$ \refstepcounter{equnum}
{\cal X}_3[h]=\left(\frac{N}{3}\right)^{\frac{1}{3}\sum_{i=1}^Nh_i}
\prod_{\epsilon=0,1,2}\frac{\Delta[h^{(\epsilon)}]}{\prod_{i=1}^{N/3}
\left(\frac{h_i^{(\epsilon)}-\epsilon}{3}\right)!}~{\rm sgn}\left[\prod_{0\leq
\epsilon_1<\epsilon_2\leq2}~\prod_{i,j=1}^{N/3}\left(h_i^{(\epsilon_2)}-h_j
^{(\epsilon_1)}\right)\right]
\label{char3}\eqno (\thesection.\arabic{equnum}) $$
The sum in (\ref{charexp}) is over unitary irreducible representations of
$GL(N,{\fam\Bbbfam C})$. They are characterized by their Young tableau weights
$\{h_i\}_{i=1}^N$ which are increasing non-negative integers, $0\leq
h_i<h_{i+1}$, and are defined by $h_i=i-1+b_i$ where $b_i$ is the number of
boxes in row $i$ of the associated Young tableau. Because of the 3-valence
coupling on the left-hand side of (\ref{charexp}), the sum is supported on
those weights which can be factored into three groups of equal numbers of
integers $h^{(\epsilon)}_i$, $i=1,\dots,\frac{N}{3}$, where $\epsilon=0,1,2$
labels their congruence classes modulo 3. The $GL(N,{\fam\Bbbfam C})$ characters can
be written using the Weyl character formula as
$$ \refstepcounter{equnum}
\chi_{\{h_i\}}(Y)=\frac{\det_{k,\ell}[y_k^{h_\ell}]}{\Delta[y]}
\label{weylchar}\eqno (\thesection.\arabic{equnum}) $$
where $y_i\in{\fam\Bbbfam R}$ are the eigenvalues of the Hermitian matrix $Y$ and
$$ \refstepcounter{equnum}
\Delta[y]=\prod_{i<j}(y_i-y_j)=\det_{k,\ell}[y_k^{\ell-1}]
\eqno (\thesection.\arabic{equnum}) $$
is the Vandermonde determinant.
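As a simple illustration of (\ref{weylchar}) (a check, not needed in what follows),
take $N=2$ and the weights $\{h_1,h_2\}=\{1,2\}$, corresponding to the Young tableau
with a single box in each row ($b_1=b_2=1$). Then
$$
\chi_{\{1,2\}}(Y)=\frac{\det_{k,\ell}[y_k^{h_\ell}]}{\det_{k,\ell}[y_k^{\ell-1}]}
=\frac{y_1y_2^2-y_1^2y_2}{y_2-y_1}=y_1y_2=\det Y
$$
which is the character of the determinant representation, as expected.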
Substituting (\ref{charexp}) into (\ref{partfn}) and diagonalizing $X$, the
integration over unitary degrees of freedom can be carried out explicitly using
the Schur orthogonality relations for the characters. The resulting expression
is a statistical mechanics model in Young tableau weight space which is a
special case of the Itzykson-Di Francesco formula \cite{kaz1,di}
$$ \refstepcounter{equnum}
Z_H[\lambda,t_q^*]=c_N\sum_{h=\{h^e,h^o\}}\frac{\prod_{i=1}^{N/2}
(h_i^e-1)!!h_i^o!!}{\prod_{i,j=1}^{N/2}(h_i^e-h_j^o)}~{\cal
X}_3[h]~\chi_{\{h\}}
(A)\left(\frac{\lambda}{N}\right)^{-\frac{1}{4}N(N-1)+\frac{1}{2}\sum_{i=1}
^{N/2}(h_i^e+h_i^o)}
\label{diformula}\eqno (\thesection.\arabic{equnum}) $$
The sum in (\ref{diformula}) is restricted to the even representations of
$GL(N,{\fam\Bbbfam C})$, i.e. those with Young tableau weights which split into an
equal number of even and odd integers $h_i^e$ and $h_i^o$,
$i=1,\dots,\frac{N}{2}$. This restriction arises because of the Gaussian
integration over the eigenvalues $x_i\in(-\infty,\infty)$ of the matrix $X$
\cite{kaz1}. Because of (\ref{char3}), these weights must also split equally
into their congruence classes modulo 3. The partition function
(\ref{diformula}) depends on only $N$ degrees of freedom (the Young tableau
weights $h_i$) and thus the curvature matrix model (\ref{partfn}) is formally
solvable in the large-$N$ limit.
For definiteness, we consider the matrix model of dually weighted discretized
surfaces for which the only graphs contributing in (\ref{partgraphs}) are those
triangulations which have $3m$, $m=1,2,\dots$, nearest-neighbour sites to each
vertex. This means that the vertex weights $t_q^*$ in (\ref{dualweights}) are
non-vanishing only when $q$ is a multiple of three. To realize this explicitly
in the matrix model (\ref{partfn}), we take the external matrix $A$ to be of
the block form
$$ \refstepcounter{equnum}
A=A^{(3)}\equiv\pmatrix{\bar A^{1/3}&0&0\cr0&\omega_3\bar
A^{1/3}&0\cr0&0&\omega_3^2\bar A^{1/3}\cr}
\label{A3}\eqno (\thesection.\arabic{equnum}) $$
where $\omega_3\in{\fam\Bbbfam Z}_3$ is a non-trivial cube root of unity and $\bar
A^{1/3}$ is an invertible $\frac{N}{3}\times\frac{N}{3}$ Hermitian matrix. The
character (\ref{weylchar}) can then be evaluated explicitly to get \cite{kaz1}
$$ \refstepcounter{equnum}
\chi_{\{h\}}(A^{(3)})=\chi_{\left\{\frac{h^{(0)}}{3}\right\}}(\bar
A)~\chi_{\left\{\frac{h^{(1)}-1}{3}\right\}}(\bar
A)~\chi_{\left\{\frac{h^{(2)}-2}{3}\right\}}(\bar A)~{\rm sgn}\left[
\prod_{0\leq\epsilon_1<\epsilon_2\leq2}~\prod_{i,j=1}^{N/3}\left(h_i^{
(\epsilon_2)}-h_j^{(\epsilon_1)}\right)\right]
\label{charA3}\eqno (\thesection.\arabic{equnum}) $$
and the statistical sum (\ref{diformula}) becomes
$$ \refstepcounter{equnum}\oldfalse{\begin{array}{c}
Z_H[\lambda,t_q^*]=c_N~\lambda^{-\frac14N(N-1)}\sum_{h=\{h^e,h^o\}}
\frac{\prod_i(h_i^e-1)!!h_i^o!!}
{\prod_{i,j}(h_i^e-h_j^o)}\prod_{\epsilon=0,1,2}\left(\frac{
\Delta[h^{(\epsilon)}]~\chi_{\left\{\frac{h^{(\epsilon)}-\epsilon}{3}
\right\}}(\bar A)}{\prod_i\left(\frac{h_i^{(\epsilon)}-\epsilon}{3}
\right)!}\right)\\~~~~~~~~~~~~~~~~
\times~{\rm e}^{\sum_ih_i[\frac{1}{2}\log(\frac{\lambda}{N})+
\frac{1}{3}\log(\frac{N}{3})]}\end{array}}
\label{di3valence}\eqno (\thesection.\arabic{equnum}) $$
Notice that the sign factors from (\ref{char3}) and (\ref{charA3}) cancel each
other out in (\ref{di3valence}).
To treat the large-$N$ limit, we assume that at $N=\infty$ the sum over
representations in (\ref{di3valence}) becomes dominated by a single,
most-probable Young tableau $\{h_i\}$. The problem that immediately arises is
that the groupings of the Young tableau weights in (\ref{di3valence}) into
equal numbers of even and odd integers and into equal numbers of mod 3
congruence elements need not occur symmetrically. There does not appear to be
any canonical way to split the weights up and distribute them in the
appropriate way. However, it is natural to assume, by the symmetry of the
weight distributions in (\ref{di3valence}), that the saddle-point localizes
around that configuration with equal numbers of even and odd weights, {\it
each} set of which groups with equal numbers into their mod 3 congruence
classes $h_i^{e(\epsilon)}$ and $h_i^{o(\epsilon)}$, $i=1,\dots,\frac{N}{6}$.
It is also natural, by symmetry again, to further assume that although the
different sets of weights $h_i^{e(\epsilon)},h_i^{o(\epsilon)}$ do not
factorize and decouple from each other, the groupings of the weights into even
and odd mod 3 congruence classes are distributed in the same way. We therefore
assume this symmetrical splitting in (\ref{di3valence}) and write the
statistical sum over the mod 3 congruence classes of weights
$h_i^{(\epsilon)}$. We also make the additional assumption that the product of
weights from two different congruence class groupings contributes the same in
the large-$N$ limit as the Vandermonde determinant for a single grouping, i.e.
that
$$ \refstepcounter{equnum}
\prod_{i,j}\left(h_i^{(\epsilon_2)}-h_j^{(\epsilon_1)}\right)=\prod_{i\neq
j}\left(h_i^{(\epsilon_1)}-h_j^{(\epsilon_1)}\right)
\eqno (\thesection.\arabic{equnum}) $$
for $\epsilon_2\neq\epsilon_1$. As we shall see, these symmetrical grouping
assumptions are indeed valid and lead to the appropriate solution of the
curvature matrix model (\ref{partfn}) at large-$N$.
We now rescale $h_i\to N\cdot h_i$ in (\ref{di3valence}), simplify the
factorials for large-$N$ using the Stirling approximations $h!!\sim~{\rm e}^{h(\log
h-1)/2}$ and $h!\sim~{\rm e}^{(h+\frac{1}{2})\log h-h}$, and apply the above symmetry
assumption retaining only the leading planar (order $1/N$) contributions to
(\ref{di3valence}). After some algebra, the partition function
(\ref{di3valence}) for the symmetrically distributed weights $h^{(0)}$ in the
large-$N$ limit can be written as
$$ \refstepcounter{equnum}
Z_H^{(0)}\sim\sum_{h^{(0)}}~{\rm e}^{N^2S_H[h^{(0)}]}
\label{partfnsym}\eqno (\thesection.\arabic{equnum}) $$
where the effective action is
$$ \refstepcounter{equnum}\oldfalse{\begin{array}{c}
S_H[h^{(0)}]=-\frac14\log\lambda+\frac{3}{2N^2}\sum_{i<j}^{N/3}
\log(h_i^{(0)}-h_j^{(0)})+\frac{1}{2N}
\sum_{i=1}^{N/3}h_i^{(0)}\left[2\log\lambda+\log\left(\lambda h_i^{(0)}
\right)-1\right]\\+
\frac{3}{N^2}\log I\left[\frac{Nh^{(0)}}{3},\bar A\right]\end{array}}
\label{effaction}\eqno (\thesection.\arabic{equnum}) $$
and we have introduced the Itzykson-Zuber integral \cite{iz}
$$ \refstepcounter{equnum}
I[h^{(0)},\bar
A]\equiv\int_{U(N/3)}dU~~{\rm e}^{\sum_{i,j}h_i^{(0)}\alpha_j|U_{ij}|^2}=
\chi_{\{h^{(0)}\}}(\bar A)\frac{\Delta[a]}{\Delta[h^{(0)}]\Delta[\alpha]}
\label{izformula}\eqno (\thesection.\arabic{equnum}) $$
where $a_i$ are the eigenvalues of the matrix $\bar A$ and $\alpha_i\equiv\log
a_i$. In arriving at (\ref{partfnsym}) we have used the Vandermonde determinant
decomposition
$$ \refstepcounter{equnum}
\Delta[h^{(\epsilon)}]=\Delta[h^{e(\epsilon)}]\Delta[h^{o(\epsilon)}]
\prod_{i,j=1}^{N/6}\left(h_i^{e(\epsilon)}-h_j^{o(\epsilon)}\right)
\label{vandecompsym}\eqno (\thesection.\arabic{equnum}) $$
and ignored irrelevant terms which are independent of $\lambda$ and the $h$'s.
In the case of the even coordination number models studied in
\cite{kaz1,kaz1a,kaz2} the natural weight distribution to sum over in the
Itzykson-Di Francesco formula (\ref{diformula}) is that of the original even
representation restriction (see subsection 3.1). In our case the natural weight
distribution appears to be that of the mod 3 congruence classes. However, in
contrast to the even coordination number models, there is no way to justify
this from the onset. We shall see later on that the appropriate distribution
for the Itzykson-Di Francesco formula at large-$N$ is essentially determined by
the discrete symmetries of the given curvature matrix model. Some evidence for
the validity of this assumption comes from comparing the Hermitian model here
with the {\it complex} curvature matrix model
$$ \refstepcounter{equnum}
Z_C[\lambda,t_q^*]=(2\pi\lambda)^{-N^2}\int
d\phi~d\phi^\dagger~~{\rm e}^{-\frac{N}{\lambda}~{\rm tr}~\phi^\dagger\phi-\frac{1}{3}~
{\rm tr}(\phi A\phi^\dagger B)^3}
\label{partfncomplex}\eqno (\thesection.\arabic{equnum}) $$
where the integration is now over the space of $N\times N$ complex-valued
matrices $\phi$, and $B$ is another external $N\times N$ invertible matrix.
Again one can expand the invariant, cubic interation term as in (\ref{charexp})
and the character expansion of (\ref{partfncomplex}) is \cite{kazproc}
$$ \refstepcounter{equnum}
Z_C[\lambda,t_q^*]=c_N\sum_{h=\{h^{(\epsilon)}\}}\frac{\prod_{i=1}^Nh_i!}
{\Delta[h]}~\chi_{\{h\}}(A)~\chi_{\{h\}}(B)~{\cal
X}_3[h]\left(\frac{\lambda}{N}
\right)^{-\frac{1}{2}N(N-1)+\sum_ih_i}
\label{complexcharexpB}\eqno (\thesection.\arabic{equnum}) $$
Now the Gaussian integration over the eigenvalues of the positive definite
Hermitian matrix $\phi^\dagger\phi$ goes over $[0,\infty)$ and so there is no
restriction to even and odd weights in (\ref{complexcharexpB}), only the
restriction to mod 3 congruence classes because of the 3-valence coupling. If
we take $B={\bf1}$, so that $\chi_{\{h\}}(B)\propto\Delta[h]$, and use the
decomposition (\ref{A3}), then the character expansion (\ref{complexcharexpB})
becomes
$$ \refstepcounter{equnum}
Z_C[\lambda,t_q^*]=c_N~\lambda^{-\frac12N(N-1)}
\sum_{h=\{h^{(\epsilon)}\}}\prod_{i=1}^Nh_i!
\prod_{\epsilon=0,1,2}\frac{\Delta[h^{(\epsilon)}]}{\prod_i\left(
\frac{h_i^{(\epsilon)}-\epsilon}{3}\right)!}~\chi_{\left\{\frac{h^{(\epsilon)}-
\epsilon}{3}\right\}}(\bar A)~{\rm e}^{\sum_ih_i^{(\epsilon)}[\log(
\frac{\lambda}{N})+\frac{1}{3}\log(\frac{N}{3})]}
\label{ZCcharsplit}\eqno (\thesection.\arabic{equnum}) $$
Thus in this case the congruence classes of weights completely factorize, and
we can now naturally assume that the $N=\infty$ configuration of weights is
distributed equally into these three classes. In this sense, the complex matrix
model (\ref{partfncomplex}) is a better representative of the dually-weighted
triangulated random surface sum, because one need not make any ad-hoc
assumptions about the weight distribution. The character expansion
(\ref{ZCcharsplit}) suggests that the correct statistical distribution for the
dynamical triangulation model at large-$N$ is over $h_i^{(\epsilon)}$, as was
assumed above, and we shall discuss the explicit relationship between these
Hermitian and complex matrix models in the next section. We now rescale $h_i\to
N\cdot h_i$ and simplify (\ref{ZCcharsplit}) for large-$N$ as for the Hermitian
model. After some algebra, the partition function (\ref{ZCcharsplit}) for the
symmetrically distributed weights $h^{(0)}$ in the large-$N$ limit can be
written as
$$ \refstepcounter{equnum}
Z_C^{(0)}\sim\sum_{h^{(0)}}~{\rm e}^{N^2S_C[h^{(0)}]}
\label{partfnsymC}\eqno (\thesection.\arabic{equnum}) $$
where the effective action is
$$ \refstepcounter{equnum}\oldfalse{\begin{array}{c}
S_C[h^{(0)}]=-\frac12\log\lambda+
\frac{6}{N^2}\sum_{i<j}^{N/3}\log(h_i^{(0)}-h_j^{(0)})+\frac{2}{N}
\sum_{i=1}^{N/3}h_i^{(0)}\left[\log\left(\lambda^{3/2}h_i^{(0)}
\right)-1\right]\\
+\frac{3}{N^2}\log I\left[\frac{Nh^{(0)}}{3},\bar A\right]\end{array}}
\label{effactionC}\eqno (\thesection.\arabic{equnum}) $$
\section{Saddle-point Solutions}
If the characteristic $h$'s which contribute to the sum in (\ref{partfnsym})
are of order $1$, then the large-$N$ limit of the Itzykson-Di Francesco formula
can be found using the saddle-point approximation. The attraction or repulsion
of the $\lambda$-dependent terms in (\ref{di3valence}) is compensated by the
Vandermonde determinant factors, so that, in spite of the unsymmetrical
grouping of Young tableau weights that occurs, the partition function is
particularly well-suited to a usual saddle-point analysis in the large-$N$
limit. In this section we shall see that the symmetry assumptions made above
give a well-posed solution of the matrix model at $N=\infty$. The partition
functions (\ref{partfn}) and (\ref{partfncomplex}) are dominated at large-$N$
by their saddle points which are the extrema of the actions in
(\ref{effaction}) and (\ref{effactionC}). In what follows we shall determine
the saddle-point solutions of both the Hermitian and complex matrix models and
use them to examine various properties of the random surface sum.
\subsection{Hermitian Model}
To find the saddle-point solution of the Hermitian matrix model, we minimize
the action (\ref{effaction}) with respect to the Young tableau weights. Then
the stationary condition $\frac{\partial S_H}{\partial h_i^{(0)}}=0$ leads to
the saddle-point equation
$$ \refstepcounter{equnum}
\frac{3}{N}\sum_{\buildrel{j=1}\over{j\neq
i}}^{N/3}\frac{1}{h_i^{(0)}-h_j^{(0)}}=-\log(\lambda^3h_i^{(0)})-2{\cal
F}(h_i^{(0)})
\label{saddleHdiscr}\eqno (\thesection.\arabic{equnum}) $$
where we have introduced the Itzykson-Zuber correlator
$$ \refstepcounter{equnum}
{\cal F}(h_i^{(0)})=3\frac{\partial\log I[h^{(0)}/3,\bar A]}{\partial
h_i^{(0)}}
\label{izcorr}\eqno (\thesection.\arabic{equnum}) $$
To solve (\ref{saddleHdiscr}), we assume that at large-$N$ the Young tableau
weights $h_i^{(0)}$, $i=1,\dots,\frac{N}{3}$, become distributed on a finite
interval $h\in[0,a]$ on the real line and introduce the normalized spectral
density $\rho_H(h)\equiv\frac{dx}{dh(x)}$, where $h(x)\in[0,a]$ is a
non-decreasing differentiable function of $x\in[0,1]$ with $h(3i/N)=N\cdot
h_i^{(0)}$. Discrete sums over $h_j^{(0)}$ are then replaced with integrals
over $h$ by the rule $\frac{3}{N}\sum_{j=1}^{N/3}\to\int_0^a dh$. Notice that
since the $h_i$'s are increasing integers, we have $0\leq\rho_H(h)\leq1$, and
so the spectral distribution is trivial, $\rho_H(h)=1$, on some sub-interval
$[0,b]$ with $0<b\leq1\leq a$ \cite{dougkaz}. The saddle-point equation
(\ref{saddleHdiscr}) then becomes an integral equation for the spectral
density,
$$ \refstepcounter{equnum}
{\int\!\!\!\!\!\!-}_{\!\!b}^a~dh'~\frac{\rho_H(h')}{h-h'}=-\log(\lambda^3h)-
\log\left(\frac{h}{h-b}\right)-2{\cal F}(h)~~~~~,~~~~~h\in[b,a]
\label{saddlepteq}\eqno (\thesection.\arabic{equnum}) $$
where we have saturated the spectral distribution function at its maximum value
$\rho_H(h)=1$ on $[0,b]$. The saddle-point solution of the matrix model can
thus be determined as the solution of the Riemann-Hilbert problem for the usual
resolvent function
$$ \refstepcounter{equnum}
{\cal H}_H(h)=\left\langle\frac{3}{N}\sum_{i=1}^{N/3}\frac{1}{h-h_i^{(0)}}
\right\rangle=\int_0^adh'~\frac{\rho_H(h')}{h-h'}
\label{resolv}\eqno (\thesection.\arabic{equnum}) $$
which is analytic everywhere in the complex $h$-plane away from the support
interval $[0,a]$ of $\rho_H$ where it has a branch cut. The discontinuity of
the resolvent across this cut determines the spectral density by
$$ \refstepcounter{equnum}
{\cal H}_H(h\pm i0)={\int\!\!\!\!\!\!-}_{\!\!0}^a~dh'~\frac{\rho_H(h')}{h-h'}\mp
i\pi\rho_H(h)~~~,~~~h\in[0,a]
\label{disceq}\eqno (\thesection.\arabic{equnum}) $$
In contrast to the more conventional Hermitian one-matrix models \cite{fgz},
the Riemann-Hilbert problem (\ref{saddlepteq}) involves the unknown
Itzykson-Zuber correlator ${\cal F}(h)$ which must be determined separately in
the large-$N$ limit. As shown in \cite{kaz1,kaz1a}, it is determined by the
vertex couplings (\ref{dualweights}) through the contour integral
$$ \refstepcounter{equnum}
q\tilde t_q^*\equiv3qt_{3q}^*=3\frac{{\rm tr}}{N}\bar A^q=\frac{1}{q}\oint_{\cal
C}\frac{dh}{2\pi i}~~{\rm e}^{q({\cal H}_H(h)+{\cal F}(h))}~~~,~~~q\geq1
\label{weightcont}\eqno (\thesection.\arabic{equnum}) $$
where the closed contour $\cal C$ encircles the support of the spectral
function $\rho_H(h)$, i.e. the cut singularity of ${\cal H}_H(h)$, with
counterclockwise orientation in the complex $h$-plane. Note that
(\ref{weightcont}) is the large-$N$ limit of a set of weights of size $N/3$ and
it follows from the identity \cite{kaz1}
$$ \refstepcounter{equnum}
{\rm tr}~\bar A^q=\sum_{k=1}^{N/3}\frac{\chi_{\left\{\tilde
h^{(0)}_k(q)/3\right\}}(\bar A)}{\chi_{\left\{h^{(0)}/3\right\}}(\bar A)}
\label{weightNid}\eqno (\thesection.\arabic{equnum}) $$
where
$$ \refstepcounter{equnum}
(\tilde h^{(0)}_k(q))_i=h^{(0)}_i+3q\delta_{ik}
\label{tildehk}\eqno (\thesection.\arabic{equnum}) $$
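Since the Vandermonde factors cancel in the character ratios, (\ref{weightNid})
reduces to the determinant identity ${\rm tr}~\bar A^q\,\det[a_i^{H_j}]=
\sum_k\det[a_i^{H_j+q\delta_{jk}}]$ with $H=h^{(0)}/3$. As an illustrative
aside (not part of the analysis proper), this is immediate to confirm
symbolically; the following sympy sketch uses an arbitrarily chosen size and
weight vector:
\begin{verbatim}
import sympy as sp

a = sp.symbols('a1:4', positive=True)       # eigenvalues of Abar, N/3 = 3
H = (0, 2, 5)                               # weights h^{(0)}/3 (illustrative)
q = 2

det = lambda w: sp.Matrix(3, 3, lambda i, j: a[i]**w[j]).det()
lhs = sum(ai**q for ai in a)                # tr Abar^q
rhs = sum(det(tuple(H[j] + q*(j == k) for j in range(3)))
          for k in range(3)) / det(H)       # sum of shifted-character ratios
print(sp.cancel(lhs - rhs))                 # -> 0
\end{verbatim}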
In the next section we shall discuss the evaluation of ${\cal F}(h)$ and the
corresponding structure of the curvature matrix model as the vertex weights
$\tilde t_q^*$ are varied. Notice that, strictly speaking, in most cases of
interest the characters corresponding to a specific choice of vertex weightings
cannot be represented via matrix traces as in (\ref{dualweights}) and need to
be defined by an analytical continuation. This can be accomplished by using the
Schur-Weyl duality theorem to represent the $GL(N,{\fam\Bbbfam C})$ characters as the
generalized Schur functions
$$ \refstepcounter{equnum}
\chi_{\{h\}}[t^*]=\det_{k,\ell}\left[P_{h_k+1-\ell}[t^*]\right]
\label{schurchar}\eqno (\thesection.\arabic{equnum}) $$
where $P_n[t^*]$ are the Schur polynomials defined by
$$ \refstepcounter{equnum}
\exp\left(N\sum_{q=1}^\infty z^qt_q^*\right)=\sum_{n=0}^\infty z^nP_n[t^*]
\label{schurpoly}\eqno (\thesection.\arabic{equnum}) $$
When the weights $t_q^*$ are given by (\ref{dualweights}), the Schur functions
(\ref{schurchar}) coincide with the Weyl characters (\ref{weylchar}).
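As a small cross-check of (\ref{schurchar}) and (\ref{schurpoly}) (again purely
illustrative), the Schur-polynomial determinant can be compared directly with
the Weyl character $\det_{i,j}[a_i^{h_j}]/\Delta[a]$ at small $N$. The sympy
sketch below assumes the power-sum couplings $Nt_q^*=\frac1q\sum_ia_i^q$ and an
arbitrarily chosen increasing weight vector:
\begin{verbatim}
import sympy as sp

N = 3
a = sp.symbols('a1:4', positive=True)      # eigenvalues of A (illustrative)
z = sp.Symbol('z')
h = (0, 2, 5)                              # increasing weights h_1 < h_2 < h_3

# Schur polynomials P_n[t*] from exp(N sum_q z^q t_q^*) = sum_n z^n P_n[t*],
# with the power-sum couplings N t_q^* = (1/q) sum_i a_i^q
nmax = max(h)
gen = sp.exp(sum(z**q*sum(ai**q for ai in a)/q for q in range(1, nmax + 1)))
ser = sp.series(gen, z, 0, nmax + 1).removeO()
P = lambda n: sp.expand(ser.coeff(z, n)) if 0 <= n <= nmax else sp.Integer(0)

# generalized Schur function det_{k,l}[P_{h_k+1-l}], k,l = 1,...,N
schur = sp.Matrix(N, N, lambda k, l: P(h[k] - l)).det()

# Weyl character det_{i,j}[a_i^{h_j}] divided by the Vandermonde determinant
weyl = sp.Matrix(N, N, lambda i, j: a[i]**h[j]).det() \
       / sp.Mul(*[a[j] - a[i] for i in range(N) for j in range(i + 1, N)])

print(sp.cancel(schur - weyl))             # -> 0
\end{verbatim}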
Our first observation here is that the saddle-point equation (\ref{saddlepteq})
is identical to that of the even-even coordination number model discussed in
\cite{kaz1,kaz1a} which is defined by replacing the cubic interaction matrix
potential $V_3(XA)$ in (\ref{partfn}) by
$$ \refstepcounter{equnum}
V_{\rm even}(XA)=\sum_{q=1}^\infty t_{2q}(XA^{(2)})^{2q}
\label{VevenXA}\eqno (\thesection.\arabic{equnum}) $$
where
$$ \refstepcounter{equnum}
A^{(2)}=\pmatrix{\tilde A^{1/2}&0\cr0&-\tilde A^{1/2}\cr}
\label{Aeven}\eqno (\thesection.\arabic{equnum}) $$
with $\tilde A^{1/2}$ an invertible $\frac N2\times\frac N2$ Hermitian matrix,
and
$$ \refstepcounter{equnum}
2qt_{2q}=2qt_{2q}^*=2\frac{{\rm tr}}{N}\tilde A^q
\label{t2qdef}\eqno (\thesection.\arabic{equnum}) $$
The curvature matrix model with potential (\ref{VevenXA}) generates a sum over
random surfaces where arbitrary even coordination number vertices of the
primary and dual lattices are permitted and are weighted identically, so that
the model (\ref{VevenXA}) is self-dual. In fact, the same saddle-point equation
arises from the interaction potential
$$ \refstepcounter{equnum}
V_4(XA)=\frac{1}{4}(XA^{(2)})^4
\label{V4}\eqno (\thesection.\arabic{equnum}) $$
The matrix model with potential $V_4$ generates discretizations built up from
even-sided polygons with 4-point vertices.
As shown in \cite{kaz1}, the Itzykson-Di Francesco formula for the curvature
matrix model with potential (\ref{VevenXA}) follows from replacing ${\cal
X}_3[h]$ in (\ref{diformula}) by
$$ \refstepcounter{equnum}
\chi_{\{h\}}(A^{(2)})=\chi_{\left\{\frac{h^e}2\right\}}(\tilde
A)~\chi_{\left\{\frac{h^o-1}{2}\right\}}(\tilde A)~{\rm
sgn}\left[\prod_{i,j=1}^{N/2}\left(h_i^e-h_j^o\right)\right]
\label{chiAeven}\eqno (\thesection.\arabic{equnum}) $$
Now the even and odd weights can be naturally assumed to distribute equally,
and the effective action is
$$ \refstepcounter{equnum}
S_{\rm even}[h^e]=-\frac14\log\lambda
+\frac2{N^2}\sum_{i<j}^{N/2}\log(h_i^e-h_j^e)+\frac1N
\sum_{i=1}^{N/2}h_i^e\left[\log(\lambda h_i^e)-1\right]+4\log
I\left[\frac{Nh^e}2,\tilde A\right]
\label{Seven}\eqno (\thesection.\arabic{equnum}) $$
Defining a distribution function for the $N/2$ weights $h_i^e$ analogously to
that above and varying the action (\ref{Seven}) yields the saddle-point
equation (\ref{saddlepteq}) with $\lambda^3\to\lambda$. In this case the
Itzykson-Zuber correlator ${\cal F}(h)$ is determined as in (\ref{weightcont})
but now with $q\tilde t_q^*$ defined to be equal to (\ref{t2qdef}) (i.e. the
large-$N$ limit of a set of weights of size $N/2$).
Likewise, in the case of the 4-point model (\ref{V4}), we replace ${\cal
X}_3[h]$ by
$$ \refstepcounter{equnum}
{\cal X}_4[h]=\left(\frac{N}{4}\right)^{\frac14\sum_ih_i}\prod_{\mu=0}^3
\frac{\Delta[h^{(\mu)}]}{\prod_{i=1}^{N/4}\left(\frac{h_i^{(\mu)}-
\mu}4\right)!}~{\rm
sgn}\left[\prod_{0\leq\mu_1<\mu_2\leq3}\prod_{i,j=1}^{N/4}\left
(h_i^{(\mu_2)}-h_j^{(\mu_1)}\right)\right]
\label{X4h}\eqno (\thesection.\arabic{equnum}) $$
where now the character sum is supported on weights that split equally into
their congruence classes $h_i^{(\mu)}$, $\mu=0,\dots,3$, $i=1,\dots,\frac N4$,
modulo 4. Because of the original weight constraint of the Itzykson-Di
Francesco formula, this means that the even weights $h_i^e$ distribute equally
into the mod 4 congruence classes $\mu=0,2$ and the odd weights $h_i^o$ into
the classes $\mu=1,3$. Again, assuming these weights all distribute equally
leads to the effective action
$$ \refstepcounter{equnum}
S_4[h^e]=-\frac14\log\lambda+\frac1{N^2}\sum_{i<j}^{N/2}
\log(h_i^e-h_j^e)+\frac1{2N}\sum_{i=1}^{N/2}h_i^e\left[\log(\lambda^2
h_i^e)-1\right]+2\log I\left[\frac{Nh^e}{2},\tilde A\right]
\label{S4}\eqno (\thesection.\arabic{equnum}) $$
Introducing a distribution function for the $N/2$ weights $h_i^e$ again leads
to precisely the same saddle-point equation (\ref{saddlepteq}) with
$\lambda^3\to\lambda^2$. Here and in the even-even model above, unlike the
3-point model of the previous section, the large-$N$ configuration of weights
naturally localizes onto $h_i^e$.
These three matrix models therefore all possess the same solution at
$N=\infty$, i.e. their random surface ensembles of genus zero graphs are
identical. Their $1/N$ corrections will differ somewhat because of the
different ways that the weights split into the respective congruence classes in
the three separate cases. The genus zero free energies are related in a simple
fashion. From (\ref{effaction}) and the definition of the Hermitian
distribution function, the genus zero free energy for the 3-point model is
$$ \refstepcounter{equnum}\oldfalse{\begin{array}{c}
S_H[\lambda,\tilde
t_q^*]=\frac{1}{6}\left(\frac{1}{2}\int_0^a\!\!{\int\!\!\!\!\!\!-}_{\!\!0}^adh~dh'~\rho_H
(h)\rho_H(h')\log|h-h'|+\int_0^adh~\rho_H(h)h\left[\log(\lambda^3h)-1\right]
\right.\\\left.+2\int_0^adh~\rho_H(h)\log I_c[h,\bar
A]\right)-\frac14\log\lambda\end{array}}
\label{SHsaddlept}\eqno (\thesection.\arabic{equnum}) $$
where $\rho_H(h)$ is the solution of the saddle-point equation
(\ref{saddlepteq}) and $\log I_c[h,\bar A]=3^2\log I[\frac{Nh^{(0)}}3,\bar A]$
is the Itzykson-Zuber integral (\ref{izformula}) at $N=\infty$. Similarly, the
genus zero free energies for the even-even and 4-point models can be written
using the same distribution function $\rho_H(h)$. Now, however, the rules for
replacing sums by integrals differ. For the even-even and 4-point models the
normalized sums corresponding to a spectral density normalized to unity are
$\frac2N\sum_{i=1}^{N/2}$ and the $N=\infty$ Itzykson-Zuber integral is $\log
I_c[h,\tilde A]=2^2\log I[\frac{Nh^e}2,\tilde A]$. Taking this into account
along with the change of $\lambda$ in the three cases, we find that the free
energies of the three models discussed above are all related by
$$ \refstepcounter{equnum}
3S_H[\lambda^{2/3},\tilde t_q^*]=2S_4[\lambda,\tilde t_q^*]=S_{\rm
even}[\lambda^2,\tilde t_q^*]
\label{freerels}\eqno (\thesection.\arabic{equnum}) $$
It is quite intriguing that the Itzykson-Di Francesco formula naturally implies
these relations by dictating the manner in which the Young tableau weights
should decompose with respect to the appropriate congruence classes in each
case. This is reflected in both the overall numerical coefficients and the
overall powers of $\lambda$ that appear in (\ref{freerels}). In the next
subsection we shall give purely graph-theoretical proofs of these
relationships. This yields a non-trivial verification of the assumptions made
in section 2 about the splitting of the Young tableau weights for the dynamical
triangulation model.
\subsection{Graph Theoretical Relationships}
Before pursuing further the properties of the curvature matrix model above, we
present a direct, graphical proof of the relationship (\ref{freerels}) between
the generating functions for the 3-point, 4-point, and even-even planar graphs.
We denote the ensembles of spherical graphs in these three cases by ${\cal
M}_3$, ${\cal M}_4$ and ${\cal M}_{\rm even}$, respectively. We first discuss
the mapping ${\cal M}_3\to{\cal M}_4$. Consider a planar graph $G\in{\cal
M}_3$. Using a version of the colouring theorem for a spherical topology, we
can colour the lines of $G$ by three colours, R, B, and G, to give a planar
graph $G^c$ with labelled lines (Fig. 1). Because $G$ consists of polygons
whose numbers of sides are all multiples of three, the colouring can be chosen
so that the three colours always occur in the sequence (R,B,G) as one goes
around any polygon of $G^c$ with clockwise orientation. We can now contract the
two 3-point vertices bounding each R-link of $G^c$ into a 4-point vertex (Fig.
2). A polygon of $3m$ sides with 3-point vertices in $G^c$ then becomes a
polygon of $2m$ sides with 4-point vertices. The resulting 2-coloured graph
$\tilde G^c$ thus belongs to ${\cal M}_4$. Notice that if each link has a
weight $\lambda$ associated to it, then the effect of the R-contractions is to
map $\lambda^3\to\lambda^2$ on ${\cal M}_3\to{\cal M}_4$.
\begin{figure}
\unitlength=0.90mm
\linethickness{0.4pt}
\begin{picture}(150.00,90.00)(0,10)
\small
\thicklines
\put(70.00,35.00){\line(1,0){30}}
\put(70.00,35.00){\line(-1,2){10}}
\put(60.00,55.00){\line(1,2){10}}
\put(100.00,35.00){\line(1,2){10}}
\put(110.00,55.00){\line(-1,2){10}}
\put(70.00,75.00){\line(1,0){30}}
\put(70.00,35.00){\line(-1,-1){10}}
\put(70.00,75.00){\line(-1,1){10}}
\put(100.00,35.00){\line(1,-1){10}}
\put(100.00,75.00){\line(1,1){10}}
\put(60.00,55.00){\line(-1,0){14}}
\put(110.00,55.00){\line(1,0){14}}
\put(110.00,25.00){\line(1,-2){5}}
\put(110.00,25.00){\line(1,0){11}}
\put(60.00,85.00){\line(-1,2){5}}
\put(60.00,85.00){\line(-1,0){11}}
\thinlines
\put(85.00,55.00){\line(0,1){40}}
\put(87.00,77.00){\makebox(0,0){R}}
\put(85.00,55.00){\line(0,-1){40}}
\put(87.00,37.00){\makebox(0,0){R}}
\put(80.00,17.50){\line(3,2){40}}
\put(90.00,17.50){\line(-3,2){40}}
\put(85.00,55.00){\line(2,-1){35}}
\put(85.00,55.00){\line(-2,-1){35}}
\put(85.00,55.00){\line(2,1){35}}
\put(105.00,82.50){\makebox(0,0){G}}
\put(65.00,82.50){\makebox(0,0){B}}
\put(107.00,68.00){\makebox(0,0){B}}
\put(117.00,57.00){\makebox(0,0){R}}
\put(107.00,43.00){\makebox(0,0){G}}
\put(63.00,43.00){\makebox(0,0){B}}
\put(63.00,68.00){\makebox(0,0){G}}
\put(105.00,27.50){\makebox(0,0){B}}
\put(65.00,27.50){\makebox(0,0){G}}
\put(85.00,55.00){\line(-2,1){35}}
\put(80.00,95.00){\line(3,-2){40}}
\put(90.00,95.00){\line(-3,-2){40}}
\put(115.00,17.50){\line(0,1){60}}
\put(117.00,57.00){\makebox(0,0){R}}
\put(55.00,35.00){\line(0,1){57.5}}
\put(53.00,57.00){\makebox(0,0){R}}
\put(80.00,21.00){\line(1,0){40}}
\put(90.00,91.00){\line(-1,0){40}}
\put(53.00,83.00){\makebox(0,0){R}}
\put(60.00,92.00){\makebox(0,0){G}}
\put(117.00,27.00){\makebox(0,0){R}}
\put(109.00,22.00){\makebox(0,0){G}}
\end{picture}
\begin{description}
\small
\baselineskip=12pt
\item[Figure 1:] A 3-colouring of a graph in ${\cal M}_3$ (thick lines) and its
associated dual graph in ${\cal M}_3^*$ (thin lines).
\end{description}
\end{figure}
\begin{figure}
\unitlength=0.90mm
\linethickness{0.4pt}
\begin{picture}(150.00,90.00)(0,10)
\small
\thicklines
\put(85.00,35.00){\line(1,1){20}}
\put(85.00,35.00){\line(-1,1){20}}
\put(85.00,75.00){\line(-1,-1){20}}
\put(85.00,75.00){\line(1,-1){20}}
\put(85.00,35.00){\circle*{2.00}}
\put(85.00,75.00){\circle*{2.00}}
\put(85.00,35.00){\line(-1,-1){10}}
\put(85.00,35.00){\line(1,-1){10}}
\put(65.00,55.00){\circle*{2.00}}
\put(105.00,55.00){\circle*{2.00}}
\put(95.00,25.00){\circle*{2.00}}
\put(95.00,25.00){\line(-1,-1){6}}
\put(95.00,25.00){\line(1,1){6}}
\put(95.00,25.00){\line(1,-1){6}}
\put(65.00,55.00){\line(-1,1){10}}
\put(65.00,55.00){\line(-1,-1){10}}
\put(85.00,75.00){\line(1,1){10}}
\put(85.00,75.00){\line(-1,1){10}}
\put(75.00,85.00){\circle*{2.00}}
\put(75.00,85.00){\line(-1,1){6}}
\put(75.00,85.00){\line(-1,-1){6}}
\put(75.00,85.00){\line(1,1){6}}
\put(105.00,55.00){\line(1,1){10}}
\put(105.00,55.00){\line(1,-1){10}}
\thinlines
\put(85.00,55.00){\line(1,1){20}}
\put(85.00,55.00){\line(-1,-1){20}}
\put(85.00,55.00){\line(-1,1){25}}
\put(85.00,55.00){\line(1,-1){25}}
\put(100.00,70.00){\line(-1,1){25}}
\put(100.00,70.00){\line(1,-1){20}}
\put(70.00,70.00){\line(1,1){20}}
\put(70.00,70.00){\line(-1,-1){20}}
\put(70.00,40.00){\line(-1,1){20}}
\put(70.00,40.00){\line(1,-1){25}}
\put(100.00,40.00){\line(1,1){20}}
\put(100.00,40.00){\line(-1,-1){20}}
\put(80.00,96.00){\line(-1,-1){20}}
\put(98.00,22.00){\line(1,1){15}}
\put(98.00,22.00){\line(-1,-1){10}}
\multiput(85.00,55.00)(0,2){20}{\line(0,1){1}}
\multiput(85.00,55.00)(0,-2){20}{\line(0,-1){1}}
\multiput(85.00,55.00)(2,0){20}{\line(1,0){1}}
\multiput(85.00,55.00)(-2,0){20}{\line(-1,0){1}}
\multiput(85.00,85.00)(2,0){7}{\line(1,0){1}}
\multiput(85.00,85.00)(-2,0){5}{\line(-1,0){1}}
\multiput(75.00,85.00)(-2,-1){10}{\line(3,0){1}}
\multiput(85.00,25.00)(-2,0){7}{\line(-1,0){1}}
\multiput(85.00,25.00)(2,0){5}{\line(1,0){1}}
\multiput(95.00,25.00)(2,1){10}{\line(3,0){1}}
\multiput(115.00,55.00)(0,2){8}{\line(0,1){1}}
\multiput(115.00,55.00)(0,-2){8}{\line(0,-1){1}}
\multiput(55.00,55.00)(0,2){6}{\line(0,1){1}}
\multiput(55.00,55.00)(0,-2){6}{\line(0,-1){1}}
\multiput(62.00,78.00)(0,2){6}{\line(0,1){1}}
\multiput(62.00,78.00)(0,-2){6}{\line(0,-1){1}}
\multiput(108.00,32.00)(0,2){5}{\line(0,1){1}}
\multiput(108.00,32.00)(0,-2){5}{\line(0,-1){1}}
\put(77.00,90.00){\makebox(0,0){G}}
\put(83.00,80.00){\makebox(0,0){B}}
\put(87.00,80.00){\makebox(0,0){G}}
\put(95.00,68.00){\makebox(0,0){B}}
\put(75.00,68.00){\makebox(0,0){G}}
\put(94.00,48.00){\makebox(0,0){G}}
\put(76.00,48.00){\makebox(0,0){B}}
\put(79.00,32.00){\makebox(0,0){G}}
\put(91.00,32.00){\makebox(0,0){B}}
\put(89.00,22.00){\makebox(0,0){G}}
\end{picture}
\begin{description}
\small
\baselineskip=12pt
\item[Figure 2:] A 2-colouring of a graph in ${\cal M}_4$ (thick lines), its
associated dual graph in ${\cal M}_4^*$ (thin lines), and the corresponding
diagram in ${\cal M}_{\rm even}$ (dashed lines). The 4-point vertices denoted
by solid circles are contracted from the R-links of the corresponding graph of
${\cal M}_3$ depicted in Fig. 1.
\end{description}
\end{figure}
Conversely, suppose $\tilde G\in{\cal M}_4$. We can form a 2-coloured graph
$\tilde G^c$, with colours B and G, such that the colours alternate along the
lines of $\tilde G^c$ (Fig. 2). There are two possible ways to orient lines of
$\tilde G^c$, by defining a direction to them from either B to G or from G to B
at each vertex. The 4-point vertices of the resulting oriented graph can then
be split into a line, labelled by R, bounded by two 3-point vertices. There are
two ways of doing this, by splitting the 4-point vertex either vertically or
horizontally with respect to the given orientation (Fig. 3). Thus to each
2-coloured graph $\tilde G^c\in{\cal M}_4$ there correspond two distinct
(topologically inequivalent) 3-coloured graphs $G^c\in{\cal M}_3$. There are
also three distinct 2-coloured graphs $\tilde G^c\in{\cal M}_4$ for each
3-coloured graph $G^c\in{\cal M}_3$ corresponding to the three possible choices
of contraction colour R, B or G. This therefore defines a three-to-two mapping
on ${\cal M}_4\to{\cal M}_3$ and is just the statement of the first equality of
(\ref{freerels}).
\begin{figure}
\unitlength=0.90mm
\linethickness{0.4pt}
\begin{picture}(150.00,50.00)(0,10)
\small
\thicklines
\put(10.00,35.00){\circle*{2.00}}
\put(110.00,35.00){\circle*{2.00}}
\put(0.00,45.00){\line(1,-1){20}}
\put(0.00,25.00){\line(1,1){20}}
\put(0.00,45.00){\makebox(0,0){\Large$\searrow$}}
\put(0.00,25.00){\makebox(0,0){\Large$\nearrow$}}
\put(15.00,40.00){\makebox(0,0){\Large$\nearrow$}}
\put(15.00,30.00){\makebox(0,0){\Large$\searrow$}}
\put(30.00,35.00){\makebox(0,0){$\longrightarrow$}}
\put(50.00,42.00){\line(0,-1){14}}
\put(50.00,42.00){\line(-1,1){10}}
\put(50.00,42.00){\line(1,1){10}}
\put(50.00,28.00){\line(1,-1){10}}
\put(50.00,28.00){\line(-1,-1){10}}
\put(40.00,52.00){\makebox(0,0){\Large$\searrow$}}
\put(55.00,47.00){\makebox(0,0){\Large$\nearrow$}}
\put(40.00,18.00){\makebox(0,0){\Large$\nearrow$}}
\put(55.00,23.00){\makebox(0,0){\Large$\searrow$}}
\put(100.00,45.00){\line(1,-1){20}}
\put(100.00,25.00){\line(1,1){20}}
\put(100.00,45.00){\makebox(0,0){\Large$\searrow$}}
\put(105.00,30.00){\makebox(0,0){\Large$\swarrow$}}
\put(120.00,45.00){\makebox(0,0){\Large$\swarrow$}}
\put(115.00,30.00){\makebox(0,0){\Large$\searrow$}}
\put(130.00,35.00){\makebox(0,0){$\longrightarrow$}}
\put(150.00,35.00){\line(1,0){14}}
\put(150.00,35.00){\line(-1,1){10}}
\put(150.00,35.00){\line(-1,-1){10}}
\put(164.00,35.00){\line(1,1){10}}
\put(164.00,35.00){\line(1,-1){10}}
\put(140.00,45.00){\makebox(0,0){\Large$\searrow$}}
\put(145.00,30.00){\makebox(0,0){\Large$\swarrow$}}
\put(174.00,45.00){\makebox(0,0){\Large$\swarrow$}}
\put(169.00,30.00){\makebox(0,0){\Large$\searrow$}}
\put(0.00,47.00){\makebox(0,0){B}}
\put(20.00,47.00){\makebox(0,0){G}}
\put(0.00,23.00){\makebox(0,0){G}}
\put(20.00,23.00){\makebox(0,0){B}}
\put(60.00,54.00){\makebox(0,0){G}}
\put(40.00,54.00){\makebox(0,0){B}}
\put(60.00,16.00){\makebox(0,0){B}}
\put(40.00,16.00){\makebox(0,0){G}}
\put(52.00,35.00){\makebox(0,0){R}}
\put(157.00,37.00){\makebox(0,0){R}}
\put(120.00,47.00){\makebox(0,0){G}}
\put(100.00,47.00){\makebox(0,0){B}}
\put(100.00,23.00){\makebox(0,0){G}}
\put(120.00,23.00){\makebox(0,0){B}}
\put(140.00,23.00){\makebox(0,0){G}}
\put(174.00,47.00){\makebox(0,0){G}}
\put(140.00,47.00){\makebox(0,0){B}}
\put(174.00,23.00){\makebox(0,0){B}}
\end{picture}
\begin{description}
\small
\baselineskip=12pt
\item[Figure 3:] The two possible 4-point vertex splittings which respect the
given B and G colour orientation.
\end{description}
\end{figure}
Actually, there exists a simpler line mapping between these two ensembles of
graphs in terms of dual lattices. Let ${\cal M}_3^*$ denote the collection of
planar graphs dual to those of ${\cal M}_3$ (i.e. lattices built up from
triangles that form $3m$-valence vertices), and ${\cal M}_4^*$ the dual
ensemble to ${\cal M}_4$ (i.e. the graphs formed of squares that meet to form
vertices of even coordination number). The sets ${\cal M}_3^*$ and ${\cal
M}_4^*$ are generated, respectively, by the curvature matrix models with matrix
potentials
$$ \refstepcounter{equnum}
V_3^{(*)}(XA)=\sum_{q=1}^\infty
t_{3q}^*(XA_3)^{3q}~~~~~,~~~~~V_4^{(*)}(XA)=\sum_{q=1}^\infty
t_{2q}(XA_4)^{2q}
\label{V3*}\eqno (\thesection.\arabic{equnum}) $$
where the $N\times N$ matrix $A_m$ is defined by $\frac{\rm tr} NA_m^k=\delta^k_m$.
The corresponding generating functions $S_H^{(*)}[\lambda,\tilde t_q^*]$ and
$S_4^{(*)}[\lambda,\tilde t_q^*]$ coincide, respectively, with
(\ref{effaction}) and (\ref{S4}) \cite{kaz1}. Again the lines of the graphs of
${\cal M}_3^*$ can be 3-coloured, and the deletion of all R-coloured lines
gives a map ${\cal M}_3^*\to{\cal M}_4^*$ (Figs. 1 and 2). This mapping is
three-to-one because of the three possible choices of deletion colour. The
inverse map ${\cal M}_4^*\to{\cal M}_3^*$, defined by inserting an R-link
across the diagonal of each square of the graphs of ${\cal M}_4^*$, is then
two-to-one because of the two possible choices of diagonal of a square.
The correspondence between the ensembles of graphs ${\cal M}_4$ and ${\cal
M}_{\rm even}$ is similar and has been noted in \cite{kaz1a} (see Fig. 2).
Given a graph $G^*\in{\cal M}_4^*$, we choose a diagonal of each square of
$G^*$. Connecting these diagonals together produces a graph in ${\cal M}_{\rm
even}$. Equivalently, we can place vertices in the face centers of the
corresponding dual graph $G\in{\cal M}_4$ and connect them together by drawing
lines through each of the 4-point vertices of $G$ (so that
$\lambda^4\to\lambda^8$ from the splitting of these 4-point vertices). In this
way we obtain a map ${\cal M}_4,{\cal M}_4^*\to{\cal M}_{\rm even}$ where the
vertices and face centers of graphs of ${\cal M}_{\rm even}$ correspond to the
vertices of graphs of ${\cal M}_4^*$, or equivalently the faces of ${\cal
M}_4$. Because there are two distinct ways of choosing the diagonal of a square
of $G^*\in{\cal M}_4^*$ (equivalently two ways of splitting a 4-point vertex of
$G\in{\cal M}_4$ with respect to a 2-colour orientation analogously to that in
Fig. 3), this mapping is two-to-one and corresponds to the second equality of
(\ref{freerels}). In particular, there is a three-to-one correspondence between
graphs of ${\cal M}_3,{\cal M}_3^*$ and ${\cal M}_{\rm even}$.
We stress that these graphical mappings are only valid on the sphere. In terms
of the Itzykson-Di Francesco formula, this means that the ${\cal O}(1/N)$
corrections to the large-$N$ saddle-point solutions of the corresponding
curvature matrix models will differ. These non-trivial correspondences are
predicted from the matrix model formulations, because in all cases we obtain
the same $N=\infty$ saddle-point equations but different splittings of Young
tableau weights into congruence classes thus leading to different overall
combinatorial factors in front of the graph generating function. Note that in
the case of the 4-point and even-even models the vertex weights $\tilde t_q^*$
are mapped into each other under the above graphical correspondence because of
the simple line map that exists between ${\cal M}_{\rm even}$ and ${\cal
M}_4^*$. In the case of the mappings onto ${\cal M}_3$ the weights $\tilde
t_q^*$ defined in (\ref{weightcont}) are mapped onto those defined by
(\ref{t2qdef}) because of the contraction of $3m$-sided polygons into
$2m$-sided polygons.
\subsection{Complex Model}
As mentioned in section 2, it is useful to compare the Hermitian 3-point
curvature matrix model with the complex one since in the latter case the Young
tableau weights completely factorize and the splitting of them into mod 3
congruence classes appears symmetrically. The saddle-point solution in the case
of the complex curvature matrix model is identical in most respects to that of
the Hermitian models above. Now the spectral density $\rho_C(h)$ for the Young
tableau weights $h_i^{(0)}$, $i=1,\dots,\frac N3$, obeys the saddle-point
equation
$$ \refstepcounter{equnum}
{\int\!\!\!\!\!\!-}_{\!\!b}^a~dh'~\frac{\rho_C(h')}{h-h'}=-\log(\lambda^{3/2}h)-
\log\left(\frac{h}{h-b}\right)-\frac{1}{2}{\cal F}(h)~~~~~,~~~~~h\in[b,a]
\label{saddlepteqC}\eqno (\thesection.\arabic{equnum}) $$
where the logarithmic derivative ${\cal F}(h)$ of the Itzykson-Zuber integral
is defined just as in (\ref{izcorr}). The solution for ${\cal F}(h)=0$ is thus
identical to those above. In particular, working out the free energy as before
we see that
$$ \refstepcounter{equnum}
S_C[\lambda,\tilde t_q^*]=4S_H[\rho_C;\sqrt{\lambda},\tilde
t_q^*]-\int_0^adh~\rho_C(h)\log I_c[h,\bar A]
\label{freeCHrel}\eqno (\thesection.\arabic{equnum}) $$
The combinatorial factors appearing in (\ref{freeCHrel}) can be understood
from the Wick expansions of the Hermitian and complex curvature matrix models.
First consider the case ${\cal F}(h)=0$. One factor of 2 appears in front of
$S_H$ in (\ref{freeCHrel}) because the number of independent degrees of freedom
of the $N\times N$ complex matrix model is twice that of the $N\times N$
Hermitian curvature matrix model. The other factor of 2 arises from the mapping
of Feynman graphs of the complex matrix model onto those of the Hermitian model
(Fig. 4). At each 6-point vertex of a complex graph we can place a
$\phi^\dagger$ line beside a $\phi$ line to give a graph with 3-point vertices
and ``thick" lines (each thick line associated with a $\phi^\dagger\phi$ pair
of lines). This maps the propagator weights as $\lambda^6\to\lambda^3$ and
there are $3!/3=2$ distinct ways of producing a complex graph by thickening and
splitting the lines of a graph in ${\cal M}_3$ in this way. This is the
relation (\ref{freeCHrel}) for $I_c\equiv1$. In the general case, there is a
relative factor of 4 in front of the Itzykson-Zuber correlator in
(\ref{saddlepteqC}) because the Feynman rules for the complex curvature matrix
model associate the weights
$$ \refstepcounter{equnum}
t_q^{C*}=\frac1q\frac{\rm tr} N\left(A^{1/2}\right)^q
\label{complexweights}\eqno (\thesection.\arabic{equnum}) $$
to the dual vertices $v_q^*$ of the Feynman graphs (compare with
(\ref{dualweights})). Thus for $q\tilde t_q^*\neq1$, the free energy of the
complex model differs from (\ref{SHsaddlept}) in a factor of 4 in front of the
integral involving the Itzykson-Zuber integral $I_c[h,\bar A]$.
The complex curvature matrix model with the above rules thus generates
``checkered" Riemann surfaces corresponding to the double-lined triangulations
shown in Fig. 4. A 3-colouring of a graph of ${\cal M}_3$ is now described by a
sort of 6-colouring whereby each colour R, B, and G is assigned an incoming and
outgoing orientation at each vertex. Again these relations are only valid at
$N=\infty$. However, the models lie in the same universality class (as with the
even coordination number models discussed above) and consequently their double
scaling limits will be the same. In particular, the equivalences
(\ref{freerels}) and (\ref{freeCHrel}) will hold to all genera near the
critical point \cite{ackm} (see the next subsection).
\begin{figure}
\unitlength=0.90mm
\linethickness{0.4pt}
\begin{center}
\begin{picture}(100.00,35.00)(0,10)
\small
\thicklines
\put(35.00,25.00){\circle*{2.50}}
\put(80.00,25.00){\circle*{2.50}}
\put(25.00,35.00){\line(1,-1){22}}
\put(25.00,15.00){\line(1,1){22}}
\put(21.00,25.00){\line(1,0){28}}
\put(24.00,24.50){\makebox(0,0){\Large$\to$}}
\put(42.00,24.50){\makebox(0,0){\Large$\to$}}
\put(25.00,35.00){\makebox(0,0){\Large$\searrow$}}
\put(25.00,15.00){\makebox(0,0){\Large$\nearrow$}}
\put(40.00,30.00){\makebox(0,0){\Large$\nearrow$}}
\put(40.00,20.00){\makebox(0,0){\Large$\searrow$}}
\put(60.00,25.00){\makebox(0,0){$\longrightarrow$}}
\put(81.00,25.00){\line(0,1){15}}
\put(79.00,25.00){\line(0,1){14}}
\put(79.00,25.00){\line(-1,-1){10}}
\put(81.00,25.00){\line(-1,-1){12}}
\put(79.00,25.00){\line(1,-1){10}}
\put(81.00,25.00){\line(1,-1){12}}
\put(79.00,37.00){\makebox(0,0){\Large$\downarrow$}}
\put(81.00,32.00){\makebox(0,0){\Large$\uparrow$}}
\put(70.00,16.00){\makebox(0,0){\Large$\nearrow$}}
\put(76.00,20.00){\makebox(0,0){\Large$\swarrow$}}
\put(89.00,15.00){\makebox(0,0){\Large$\nwarrow$}}
\put(86.00,20.00){\makebox(0,0){\Large$\searrow$}}
\end{picture}
\end{center}
\begin{description}
\small
\baselineskip=12pt
\item[Figure 4:] The mapping of a complex model 6-point vertex onto a Hermitian
model 3-point vertex. The incoming lines represent the $\phi^\dagger$ fields,
the outgoing lines the $\phi$ fields, and the solid circle denotes an insertion
of the matrix $A^{1/2}$.
\end{description}
\end{figure}
\subsection{Critical Behaviour and Correlation Functions}
We will now discuss to what extent the above models are universal. First, let
us find the explicit solution of the saddle-point equation (\ref{saddlepteq})
in the case ${\cal F}(h)=0$ \cite{kaz1} (i.e. $\bar A=\tilde A={\bf1}$ and
$q\tilde t_q^*=1$). Notice that in this case the dual potentials in
(\ref{VevenXA}) and (\ref{V3*}) become the Penner-type potentials
\cite{chekmak,di}
$$ \refstepcounter{equnum}\oldfalse{\begin{array}{c}
V_{\rm even}(XA)=-\frac{1}{2}\log\left({\bf1}-(XA^{(2)})^2\right)~~~~~,~~~~~
V_4^{(*)}(XA)=-\frac12\log\left({\bf1}-(XA_4)^2\right)\\V_3^{(*)}(XA)=-\frac13
\log\left({\bf1}-(XA_3)^3\right)\end{array}}
\label{Pennerpots}\eqno (\thesection.\arabic{equnum}) $$
so that the above arguments imply the equivalences between the curvature matrix
model for a dynamical triangulation and those of Penner models. The behaviour
of the saddle-point solution when the dual vertices $v_q^*$ are not weighted
equally will be examined in the next section.
The saddle-point equation (\ref{saddlepteq}) determines the continuous part of
the resolvent function ${\cal H}_H(h)$ across its cut. Using the normalization
$\int_0^adh~\rho_H(h)=1$ of the spectral density in (\ref{resolv}) implies the
asymptotic boundary condition ${\cal H}_H(h)\sim1/h$ as $|h|\to\infty$. It
follows that the solution for the resolvent function is given by \cite{fgz}
$$ \refstepcounter{equnum}\oldfalse{\begin{array}{lll}
{\cal H}_H(h)&=&\log\left(\frac{h}{h-b}\right)-\oint_{\cal C}\frac{ds}{2\pi
i}~\frac{1}{s-h}\sqrt{\frac{(h-a)(h-b)}{(s-a)(s-b)}}~{\int\!\!\!\!\!\!-}_{\!\!b}^adh'~
\frac{\rho_H(h')}{s-h'}\\&=&
\log\left(\frac{h}{h-b}\right)+\oint_{\cal C}\frac{ds}{2\pi
i}~\frac{1}{s-h}\sqrt{\frac{(h-a)(h-b)}{(s-a)(s-b)}}\left\{\log(\lambda^3s)+
\log\left(\frac{s}{s-b}\right)\right\}\end{array}}
\label{ressoln}\eqno (\thesection.\arabic{equnum}) $$
The contour integrations in (\ref{ressoln}) can be evaluated by blowing up the
contour $\cal C$ to infinity and catching the contributions from the cuts of
the two logarithms, on $(-\infty,0]$ and $[0,b]$, respectively, and also from
the simple pole at $s=h$. Keeping careful track of the signs of the square
roots on each cut and taking discontinuities across these cuts, after some
algebra we arrive at
$$ \refstepcounter{equnum}\oldfalse{\begin{array}{lll}
{\cal H}_H(h)&=&-\log(\lambda^3h)+\sqrt{(h-a)(h-b)}\left\{\int_0^b
-\int_{-\infty}^0\right\}\frac{dx}{x-h}~\frac{1}
{\sqrt{(x-a)(x-b)}}\\&=&\log\left\{\frac{a-b}{\lambda^3(\sqrt a+\sqrt
b)^2}\frac{\left(h+\sqrt{ab}+\sqrt{(h-a)(h-b)}\right)^2}{h\left(
h(a+b)-2ab+2\sqrt{ab(h-a)(h-b)}\right)}\right\}\end{array}}
\label{resexactHt=1}\eqno (\thesection.\arabic{equnum}) $$
It remains to solve for the endpoints $a,b$ of the support of the spectral
distribution function in terms of the parameters of the matrix potential. They
are determined by expanding the function (\ref{resexactHt=1}) for large-$h$,
and requiring that the constant term vanish and that the coefficient of the
$1/h$ term be 1. This leads to a pair of equations for $a,b$
$$ \refstepcounter{equnum}
\xi=\frac13(\eta-1)~~~~~,~~~~~3\lambda^6\eta^3=\eta-1
\label{bdryeq1}\eqno (\thesection.\arabic{equnum}) $$
where we have introduced the positive endpoint parameters
$$ \refstepcounter{equnum}
\xi=\frac14\left(\sqrt a-\sqrt b\right)^2~~~~~,~~~~~\eta=\frac14\left(\sqrt
a+\sqrt b\right)^2
\label{xietadef}\eqno (\thesection.\arabic{equnum}) $$
The cubic equation for $\eta$ in (\ref{bdryeq1}) can be solved explicitly. The
original matrix model free energy is analytic around $\lambda=0$. We shall see
below that it is analytic in $\eta$, so that we should choose the branch of the
cubic equation in (\ref{bdryeq1}) which is regular at $\lambda=0$. This
solution is found to be
$$ \refstepcounter{equnum}
\eta=-\frac12\beta^{1/3}-\frac1{18\lambda^6\beta^{1/3}}-\frac{i\sqrt3}2
\left(\beta^{1/3}-\frac1{9\lambda^6\beta^{1/3}}\right)
\label{etaregsol}\eqno (\thesection.\arabic{equnum}) $$
where
$$ \refstepcounter{equnum}
\beta=-\frac1{6\lambda^6}+\frac1{54\lambda^9}\sqrt{81\lambda^6-4}
\label{betadef}\eqno (\thesection.\arabic{equnum}) $$
The solution (\ref{etaregsol}) leads to the appropriate boundary condition
$\eta(\lambda=0)=1,\xi(\lambda=0)=0$, i.e. $a=b=1$, as expected since in this
case the spectral density should be trivial, $\rho_H(h)\equiv1$.
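These boundary conditions are easily probed numerically. The sketch below
(numpy assumed; the value of $\lambda$ is arbitrary) picks the root of
(\ref{bdryeq1}) connected to $\eta=1$, builds the endpoints from
(\ref{xietadef}), and confirms that the resolvent (\ref{resexactHt=1}) falls
off as $1/h$, i.e. that the constant term vanishes and the spectral density is
correctly normalized:
\begin{verbatim}
import numpy as np

lam = 0.5                                   # an arbitrary subcritical coupling
roots = np.roots([3.0*lam**6, 0.0, -1.0, 1.0])   # 3 lam^6 eta^3 = eta - 1
real = roots[np.abs(roots.imag) < 1e-6].real
eta = np.min(real[real > 0.0])              # branch with eta(0) = 1
xi = (eta - 1.0)/3.0
a = (np.sqrt(eta) + np.sqrt(xi))**2         # endpoints from (xietadef)
b = (np.sqrt(eta) - np.sqrt(xi))**2

def H(h):                                   # resolvent (resexactHt=1), h > a
    s = np.sqrt((h - a)*(h - b))
    num = (h + np.sqrt(a*b) + s)**2
    den = h*((a + b)*h - 2.0*a*b + 2.0*np.sqrt(a*b*(h - a)*(h - b)))
    return np.log((a - b)/(lam**3*(np.sqrt(a) + np.sqrt(b))**2)*num/den)

for h in (1e2, 1e4, 1e6):
    print(h, h*H(h))                        # -> 1, so H(h) ~ 1/h at large h
\end{verbatim}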
The solution (\ref{etaregsol}) is real-valued for
$\lambda<\lambda_c\equiv(2/9)^{1/3}$ which is the regime wherein all three
roots of the cubic equation for $\eta$ in (\ref{bdryeq1}) are real. For
$\lambda>\lambda_c$ there is a unique real-valued solution which does not
connect continuously with the real solution that is regular at $\lambda=0$. At
$\lambda=\lambda_c$ the pair of complex-conjugate roots for $\lambda>\lambda_c$
coalesce and become real-valued. Thus for $\lambda>\lambda_c$ the one-cut
solution above for ${\cal H}_H$ is no longer valid and the spectral density
$\rho_H$ becomes supported on more than one interval. The value
$\lambda=\lambda_c$ is therefore identified as the critical point of a phase
transition of the random surface theory which corresponds to a change of
analytic structure of the saddle-point solution of the matrix model. As usual,
at this critical point the continuum limit of the discretized random surface
model, describing the physical string theory, is reached. To determine the
precise nature of the geometry that is obtained in this continuum limit, we
need to study the scaling behaviour of the matrix model free energy near the
critical point. In the following we shall present two calculations illustrating
this critical behaviour.
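As a preliminary numerical illustration (a self-contained numpy sketch), one
can track the regular branch of (\ref{bdryeq1}) as it rises from $\eta=1$ at
$\lambda=0$ to $\eta=3/2$ at $\lambda_c$, where it merges with a second real
root:
\begin{verbatim}
import numpy as np

lam_c = (2.0/9.0)**(1.0/3.0)                # critical coupling, ~ 0.6057

def eta_regular(lam):
    """Real root of 3 lam^6 eta^3 - eta + 1 = 0 connected to eta(0) = 1."""
    roots = np.roots([3.0*lam**6, 0.0, -1.0, 1.0])
    real = roots[np.abs(roots.imag) < 1e-6].real
    return np.min(real[real > 0.0])         # smallest positive real root

for lam in (1e-3, 0.3, 0.5, 0.6, lam_c):
    print(lam, eta_regular(lam))            # rises from ~1 to 3/2 at lam_c
\end{verbatim}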
\subsubsection{Propagator}
As shown in \cite{kaz1,kaz1a}, in the general case the resolvent function
${\cal H}_H(h)$ and the Itzykson-Zuber correlator ${\cal F}(h)$ completely
determine the solution of the random matrix model in the large-$N$ limit. For
instance, expectation values of the operators $\frac{\rm tr} NX^{2q}$ can be
obtained from the contour integral
$$ \refstepcounter{equnum}
\left\langle\frac{\rm tr} N X^{2q}\right\rangle_{{\cal
M}_3}=\frac{\lambda^q}q\oint_{\cal C}\frac{dh}{2\pi i}~h^q~{\rm e}^{q{\cal H}_H(h)}
\label{trNX2qH}\eqno (\thesection.\arabic{equnum}) $$
where the normalized average is with respect to the partition function
(\ref{partfn}) and the decomposition (\ref{A3}). The expression (\ref{trNX2qH})
holds only for the even moments of the matrix $X$ because it is only for such
moments that the character expansion of the right-hand side of (\ref{trNX2qH})
admits a symmetrical even-odd decomposition as in the Itzykson-Di Francesco
formula \cite{kaz1}. Although a character expansion for the odd moments can be
similarly derived, the weights will not split symmetrically and there does not
appear to be a general expression for them in terms of the weight distribution
function at large-$N$. For the even coordination number models above, the
reflection symmetry $X\to-X$ of the matrix model forces the vanishing of all
odd moments automatically and (\ref{trNX2qH}) represents the general expression
for the complete set of observables which do not involve the external field $A$
at large-$N$. In terms of the character expansion formula, the character sum
for $\langle\frac{\rm tr} NX^{2q+1}\rangle$ vanishes in the case of an even
potential because of the localization onto the even representations of
$GL(N,{\fam\Bbbfam C})$, whereas in the 3-point model $\langle\frac{\rm tr}
NX^{2q+1}\rangle_{{\cal M}_3}\neq0$ because its character sum is supported on
mod 3 congruence class representations.
The contour integral representation (\ref{trNX2qH}), with ${\cal H}_H$ replaced
by ${\cal H}_C$, also holds for the correlators $\langle\frac{\rm tr}
N(\phi^\dagger\phi)^q\rangle_C$ of the complex curvature matrix model. These
averages form the complete set of $A$-independent observables at large-$N$ in
this case because of the charge conjugation symmetry $\phi\to\phi^\dagger$ of
the complex model (which is the analogue here of the reflection symmetry of the
even coordination number models). Again this indicates that the complex
curvature matrix model is better suited to represent the triangulated random
surface sum, because it is more amenable to an explicit solution in the
large-$N$ limit. Furthermore, its complete set of correlation functions
coincides at $N=\infty$ with those of the equivalent matrix models defined above
with even potentials, after redefining the vertex weights according to
(\ref{dualweights}) and (\ref{complexweights}).
The contour integral on the right-hand side of (\ref{trNX2qH}) can be evaluated
by blowing up the contour $\cal C$, expanding the resolvent function
(\ref{resexactHt=1}) for large-$h$, and computing the residue at $h=\infty$.
For example, for $q=1$ we have
$$ \refstepcounter{equnum}
\left\langle\frac{\rm tr} NX^2\right\rangle_{{\cal
M}_3}=\lambda\left(\frac12+\langle h\rangle\right)
\label{trNX2H}\eqno (\thesection.\arabic{equnum}) $$
where the weight average $\langle
h\rangle=\langle\frac3N\sum_{i=1}^{N/3}h_i^{(0)}\rangle=\int_0^adh~\rho_H(h)h$
corresponds to the coefficient of the $1/h^2$ term in the asymptotic expansion
of (\ref{resexactHt=1}). This result also follows directly from differentiating
the free energies
$$ \refstepcounter{equnum}
\left\langle\frac{\rm tr}
NX^2\right\rangle=2\lambda^2\frac{\partial}{\partial\lambda}S[\lambda,t_q^*]
{}~~~~~,~~~~~\left\langle\frac{\rm tr}
N\phi^\dagger\phi\right\rangle_C=\lambda^2\frac{\partial}{\partial\lambda}
S_C[\lambda,t_q^*]
\label{propsfromfree}\eqno (\thesection.\arabic{equnum}) $$
and using the saddle-point equations. Notice that (\ref{propsfromfree}) and the
free energy relations (\ref{freerels}) and (\ref{freeCHrel}) imply, in
particular, that the matrix propagators of the 3-point, 4-point, even-even and
complex matrix models coincide with the appropriate redefinitions of the
coupling constants $\lambda$ and $t_q^*$. However, the equivalences of generic
observables in the various models will not necessarily hold. We shall return to
this point shortly.
Using (\ref{resexactHt=1}) and (\ref{bdryeq1}), the propagator (\ref{trNX2H})
can be written as
$$ \refstepcounter{equnum}
\left\langle\frac{\rm tr} NX^2\right\rangle_{{\cal
M}_3}=\lambda\left(\frac13-\frac{\eta^2}3+\eta\right)
\label{trNX2Hexpleta}\eqno (\thesection.\arabic{equnum}) $$
Expanding (\ref{trNX2Hexpleta}) using (\ref{etaregsol}) as a power series in
$\lambda$ yields
$$ \refstepcounter{equnum}
\left\langle\frac{\rm tr} NX^2\right\rangle_{{\cal
M}_3}=\lambda+\lambda^7+6\lambda^{13}+54\lambda^{19}+594\lambda^{25}+{\cal
O}(\lambda^{31})
\label{propexp}\eqno (\thesection.\arabic{equnum}) $$
We have verified, by an explicit Wick expansion of the matrix propagator (up to
and including the order shown in (\ref{propexp})) using the $3m$-sided polygon
constraints on the matrix traces (\ref{dualweights}), that (\ref{propexp})
indeed coincides with the perturbative expansion of the curvature matrix model
(\ref{partfn}) and thus correctly counts the planar 3-point fat graphs
consisting of only $3m$-sided polygons. It also agrees with the Wick expansions
of the even coordination number models above \cite{kaz1} and of the complex
curvature matrix model.
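The series (\ref{propexp}) can also be generated symbolically in a few lines:
iterating the boundary equation in the form $\eta=1+3u\eta^3$, $u=\lambda^6$,
produces the regular branch order by order, and expanding
(\ref{trNX2Hexpleta}) then reproduces the planar graph counting (a sympy
sketch):
\begin{verbatim}
import sympy as sp

u = sp.Symbol('u')                          # u = lambda^6
order = 5
eta = sp.Integer(1)                         # regular branch of 3u eta^3 = eta - 1
for _ in range(order):                      # each pass fixes one more order in u
    eta = sp.expand(1 + 3*u*eta**3)
    eta = sum(eta.coeff(u, k)*u**k for k in range(order + 1))

prop = sp.expand(sp.Rational(1, 3) - eta**2/3 + eta)
print([prop.coeff(u, k) for k in range(order)])   # -> [1, 1, 6, 54, 594]
\end{verbatim}
so that $\langle\frac{\rm tr}NX^2\rangle_{{\cal
M}_3}=\lambda(1+\lambda^6+6\lambda^{12}+54\lambda^{18}+594\lambda^{24}+\dots)$,
as in (\ref{propexp}).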
To examine the scaling behaviour of observables of the matrix model near the
critical point, we introduce a renormalized, continuum cosmological constant
$\Lambda$ and momentum $\Pi$ by
$$ \refstepcounter{equnum}
\lambda^6=\lambda_c^6(1-\Lambda)~~~~~~,~~~~~~\eta=\eta_c-\Pi/2
\label{contdefs}\eqno (\thesection.\arabic{equnum}) $$
where $\eta_c=\eta(\lambda=\lambda_c)=3/2$. We then approach the critical point
along the line $\Lambda,\Pi(\Lambda)\to0$, where the function $\Pi(\Lambda)$ is
found by substituting the definitions (\ref{contdefs}) into (\ref{etaregsol})
to get
$$ \refstepcounter{equnum}
\Pi(\Lambda)=-\frac12\left[\Xi^{1/3}+\frac{9(\Lambda_*+1)}{\Xi^{1/3}}-6+
i\sqrt3\left(\Xi^{1/3}-\frac{9(\Lambda_*+1)}{\Xi^{1/3}}\right)\right]
\label{PiLambda}\eqno (\thesection.\arabic{equnum}) $$
with
$$ \refstepcounter{equnum}
\Xi=27\left(\Lambda_*+1+\sqrt{-\Lambda_*^3-2\Lambda_*^2-\Lambda_*}\right)
{}~~~~~,~~~~~\Lambda_*=\frac{\Lambda}{1-\Lambda}
\label{Xi*def}\eqno (\thesection.\arabic{equnum}) $$
Substituting (\ref{contdefs}) and (\ref{PiLambda}) into (\ref{trNX2Hexpleta})
and expanding around $\Lambda=0$ yields after some algebra
$$ \refstepcounter{equnum}
\left\langle\frac{\rm tr} NX^2\right\rangle_{{\cal
M}_3}=\frac29\Lambda^{3/2}+\dots\equiv c\cdot\Lambda^{1-\gamma_{\rm str}}+\dots
\label{corrcrit}\eqno (\thesection.\arabic{equnum}) $$
where the dots denote terms which are less singular as $\Lambda\to0$. The
leading non-analytic behaviour in (\ref{corrcrit}) identifies the critical
string exponent of this random surface model as
$$ \refstepcounter{equnum}
\gamma_{\rm str}=-1/2
\label{gravstr}\eqno (\thesection.\arabic{equnum}) $$
so that the system in the continuum limit represents pure two-dimensional
quantum gravity \cite{fgz}. The same argument also applies to the 4-point and
even-even models with the appropriate redefinitions of cosmological constants
in (\ref{contdefs}). Thus the equal weighting of all vertices in these random
surface models leads to curvature matrix models in the same universality class
as the more conventional Hermitian one-matrix models of two-dimensional quantum
gravity. This agrees with recent numerical results in \cite{bct}.
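The square-root branch point responsible for (\ref{gravstr}) can be exhibited
directly: expanding the boundary equation (\ref{bdryeq1}) about the critical
point gives $\eta\simeq\frac32-\frac{\sqrt3}2\Lambda^{1/2}$, i.e.
$\Pi\to\sqrt{3\Lambda}$ as $\Lambda\to0$, which the following numpy sketch
confirms:
\begin{verbatim}
import numpy as np

lam_c = (2.0/9.0)**(1.0/3.0)
for Lam in (1e-2, 1e-4, 1e-6):              # renormalized cosmological constant
    lam = lam_c*(1.0 - Lam)**(1.0/6.0)      # from (contdefs)
    roots = np.roots([3.0*lam**6, 0.0, -1.0, 1.0])
    real = roots[np.abs(roots.imag) < 1e-6].real
    eta = np.min(real[real > 0.0])          # regular branch
    print(Lam, 2.0*(1.5 - eta)/np.sqrt(Lam))    # Pi/sqrt(Lam) -> sqrt(3)
\end{verbatim}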
\subsubsection{General Correlators}
We now turn to a more general discussion of the evaluation of observables in
the matrix models above. The relations between correlators involving the
external matrix $A$ are a bit more subtle than those represented by the contour
integrations in (\ref{trNX2qH}) which coincide in all four matrix models for
all $q$. Because of the simple line maps that exist between the ensembles
${\cal M}_3^*$, ${\cal M}_4^*$ and ${\cal M}_{\rm even}$, all expectation
values are the same in these models, i.e.
$$ \refstepcounter{equnum}
\frac14\left\langle\frac{\rm tr} N(\phi^\dagger\phi
A_3)^{3q}\right\rangle_{C^*}=\left\langle\frac{\rm tr}
N(XA_3)^{3q}\right\rangle_{{\cal M}_3^*}
=\left\langle\frac{\rm tr} N(XA_4)^{2q}\right\rangle_{{\cal M}_4^*}
=\left\langle\frac{\rm tr} N(XA^{(2)})^{2q}\right\rangle_{{\cal M}_{\rm even}}
\label{even4*corrrel}\eqno (\thesection.\arabic{equnum}) $$
Analytically, this equality follows from the fact that these correlators are
given by derivatives of the free energies of the matrix models with respect to
the weights $\tilde t_q^*$. The powers in (\ref{even4*corrrel}) can be
understood from the fact that the reflection symmetry $A^{(2)}\to-A^{(2)}$ of
the ${\cal M}_4$ and ${\cal M}_{\rm even}$ models (which restrict the non-zero
averages to $\langle\frac{\rm tr} N(XA^{(2)})^{2q}\rangle_{{\cal M}_4,{\cal M}_{\rm
even}}$) corresponds to the ${\fam\Bbbfam Z}_3$-symmetry $A^{(3)}\to\omega_3A^{(3)}$,
$\omega_3\in{\fam\Bbbfam Z}_3$, of the ${\cal M}_3$ model (which restricts the
non-vanishing observables to $\langle\frac{\rm tr} N(XA^{(3)})^{3q}\rangle_{{\cal
M}_3}$ and $\langle\frac{\rm tr} N(\phi^\dagger\phi A^{(3)})^{3q}\rangle_C$). In
terms of the coloured graphs of subsection 3.2, these discrete symmetries
correspond to permutations of the colours of graphs in each ensemble. It is
these symmetries that are ultimately responsible for the localization of the
Itzykson-Di Francesco character sum onto the appropriate even or mod 3
representations in each case.
The correlators in (\ref{even4*corrrel}) are also given by a simple contour
integration. It can be shown that \cite{kaz1a}
$$ \refstepcounter{equnum}
\frac1N\frac\partial{\partial\tilde
t_q^*}\log\chi_{\left\{\frac{h^{(0)}}3\right\}}(\bar
A)=\sum_{k=1}^{N/3}\frac{\chi_{\left\{\tilde h^{(0)}_k(-q)/3\right\}}(\bar
A)}{\chi_{\left\{h^{(0)}/3\right\}}(\bar A)}
\label{charderivsum}\eqno (\thesection.\arabic{equnum}) $$
where $\tilde h_k^{(0)}$ is defined in (\ref{tildehk}). From the character
expansion of section 2 we see that the left-hand side of (\ref{charderivsum})
arises from a derivative of the dual model free energy
$S_H^{(*)}[\lambda,\tilde t_q^*]$ with respect to the vertex weights $\tilde
t_q^*$, whereas the right-hand side is identical to (\ref{weightNid}) with
$q\to-q$. In the large-$N$ limit, we can therefore represent the dual model
correlators in (\ref{even4*corrrel}) by the contour integrations
$$ \refstepcounter{equnum}
\left\langle\frac{\rm tr} N(XA_3)^{3q}\right\rangle_{{\cal
M}_3^*}=-\frac{\lambda^q}q\oint_{\cal C}\frac{dh}{2\pi i}~~{\rm e}^{-q({\cal
H}_H(h)+{\cal F}(h))}
\label{3ptdualcorrs}\eqno (\thesection.\arabic{equnum}) $$
Although it is possible to determine the complete set of observables of the
dual matrix models in terms of the saddle-point solution at large-$N$, the
situation is somewhat more complicated for the correlators of the ${\cal M}_3$
and ${\cal M}_4$ ensembles. The above discussion illustrates to what extent the
large-$N$ saddle-point solution of the Itzykson-Di Francesco formula can be
used to represent the observables of the matrix model. Those which do admit
such a representation typically appear to be obtainable from derivatives of the
free energy of the model and thus correspond in the random surface
interpretation to insertions of marked loops on the surfaces. Thus, strictly
speaking, the natural observable to compute in the large-$N$ limit of the
3-point matrix model is the free energy (\ref{SHsaddlept}). In the next
subsection we shall show how this calculation carries through.
\subsubsection{Free Energy}
The natural observable to compute in the triangulation model is the large-$N$
(genus zero) free energy (\ref{SHsaddlept}) with $I_c=1$. Splitting up the
integration range and setting $\rho_H(h)=1$ on $[0,b]$, we find
$$ \refstepcounter{equnum}\oldfalse{\begin{array}{c}
S_H=\frac13\int_b^adh
{}~\rho_H(h)h\left[\log(\lambda^3h)-1\right]+\frac13\int_b^adh~\rho_H(h)
\left[h\log h-(h-b)\log(h-b)-b\right]\\
+\frac16\int_b^a\!{\int\!\!\!\!\!\!-}^a_{\!\!b}dh~dh'~\rho_H(h)
\rho_H(h')\log|h-h'|+\frac{b^2}6\left(\log
b-\frac32\right)+\frac{b^2-3/2}4\log\lambda\end{array}}
\label{freeints}\eqno (\thesection.\arabic{equnum}) $$
The spectral density is found by computing the discontinuity (\ref{disceq}) of
the weight resolvent function (\ref{resexactHt=1}) across the cut $[b,a]$,
which yields
$$ \refstepcounter{equnum}
\rho_H(h)=\frac1\pi\left[\arctan\left(\frac{2\sqrt{ab(a-h)(h-b)}}{(a+b)h-2ab}
\right)-2\arctan\left(\frac{\sqrt{(a-h)(h-b)}}{h+\sqrt{ab}}\right)\right]
{}~~~~,~~~h\in[b,a]
\label{rhoHexpl}\eqno (\thesection.\arabic{equnum}) $$
The double integral in the free energy (\ref{freeints}) can be simplified by
integrating up the saddle-point equation (\ref{saddlepteq}) for $h\in[b,a]$ to
get
$$ \refstepcounter{equnum}\oldfalse{\begin{array}{lll}
{\int\!\!\!\!\!\!-}_{\!\!b}^adh'~\rho_H(h')\log|h-h'|&=&h\left[1-\log(\lambda^3
h)\right]+(h-b)\log(h-b)-h\log h+\frac14\log\lambda\\&
&+{\int\!\!\!\!\!\!-}_{\!\!b}^adh'~\rho_H(h')\log(h'-b)+b\left[\log(\lambda^3b)-1\right]
+b\log b\end{array}}
\label{saddlepteqint}\eqno (\thesection.\arabic{equnum}) $$
Substituting (\ref{saddlepteqint}) into (\ref{freeints}) and integrating by
parts, we find after some algebra
$$ \refstepcounter{equnum}\oldfalse{\begin{array}{c}
S_H=-\frac16\int_b^adh~\frac{d\rho_H(h)}{dh}\left\{h\left[h\log
h-\frac{h}2\log(h-b)+\log(h-b)\right]-b\left(1-\frac
b2\right)\log(h-b)\right\}\\+\frac b3\log b\left(1-\frac
b2\right)+\frac{b}6\log\lambda-\frac b{12}\left(b+15\right)+\frac{\langle\tilde
h\rangle}6\left(\log\lambda-\frac32\right)\end{array}}
\label{SPdrho}\eqno (\thesection.\arabic{equnum}) $$
where we have introduced the reduced weight average $\langle\tilde
h\rangle=\int_b^adh~\rho_H(h)h=\langle h\rangle-b^2/2$, and from
(\ref{rhoHexpl}) we have
$$ \refstepcounter{equnum}
\frac{d\rho_H(h)}{dh}=\frac{h-2\sqrt{ab}}{\pi h\sqrt{(a-h)(h-b)}}~~~~~,~~~~~
h\in[b,a]
\label{drhoPdh}\eqno (\thesection.\arabic{equnum}) $$
The weight average $\langle h\rangle$ can be read off from (\ref{resexactHt=1})
to give
$$ \refstepcounter{equnum}
\langle h\rangle=-\eta^2/3+\eta-1/6
\label{havgpen}\eqno (\thesection.\arabic{equnum}) $$
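Both expressions are easy to check by quadrature (a numpy/scipy sketch; the
branch of the first arctangent in (\ref{rhoHexpl}) is taken so that $\rho_H$
is continuous on $[b,a]$, i.e. via atan2): one finds $\rho_H(b)=1$,
$\rho_H(a)=0$, the normalization $b+\int_b^adh~\rho_H(h)=1$, and the average
(\ref{havgpen}):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

lam = 0.5                                   # any lam below lam_c
roots = np.roots([3.0*lam**6, 0.0, -1.0, 1.0])
real = roots[np.abs(roots.imag) < 1e-6].real
eta = np.min(real[real > 0.0])
xi = (eta - 1.0)/3.0
a = (np.sqrt(eta) + np.sqrt(xi))**2
b = (np.sqrt(eta) - np.sqrt(xi))**2

def rho(h):                                 # spectral density (rhoHexpl)
    s = np.sqrt(np.maximum((a - h)*(h - b), 0.0))
    t1 = np.arctan2(2.0*np.sqrt(a*b)*s, (a + b)*h - 2.0*a*b)
    t2 = np.arctan(s/(h + np.sqrt(a*b)))
    return (t1 - 2.0*t2)/np.pi

print(rho(b + 1e-12), rho(a - 1e-12))       # -> 1, 0
print(b + quad(rho, b, a)[0])               # normalization -> 1
print(quad(lambda h: h*rho(h), b, a)[0] + 0.5*b*b,   # <h> over [0, a]
      -eta**2/3.0 + eta - 1.0/6.0)          # compare with (havgpen)
\end{verbatim}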
To evaluate the remaining logarithmic integrals in (\ref{SPdrho}), it is
convenient to change variables from $h\in[b,a]$ to $x\in[-1,1]$ with
$h=\frac12(a+b)+\frac12(a-b)x$. This leads to a series of integrals over
elementary algebraic forms and forms involving logarithmic functions. The
latter integrals can be computed by rewriting the integrations over
$x\in[-1,1]$ as contour integrals, blowing up the contours and then picking up
the contributions from the discontinuities across the cuts of the logarithms.
The relevant integrals are then found to be
$$ \refstepcounter{equnum}\oldfalse{\begin{array}{l}
I_0(r)=\int_{-1}^1dx~\frac{\log(1+rx)}{\sqrt{1-x^2}}=\pi\log\left(
\frac{1+\sqrt{1-r^2}}2\right)\\I_1(r)=\int_{-1}^1dx~\frac{x\log(1+rx)}
{\sqrt{1-x^2}}=\frac\pi
r\left(1-\sqrt{1-r^2}\right)\\I_2(r)=\int_{-1}^1dx~\frac{x^2\log(1+rx)}
{\sqrt{1-x^2}}=\frac12\left(I_0(r)-\frac{I_1(r)}r+\frac\pi2\right)\\J(r)=
\int_{-1}^1dx~\frac{\log(1+x)}{(1+rx)\sqrt{1-x^2}}
=-\pi{\int\!\!\!\!\!\!-}_{\!\!-\infty}^{-1}
\frac{dy}{(1+ry)\sqrt{y^2-1}}+\frac{\pi
\log\left(\frac{1-r}r\right)}{\sqrt{1-r^2}}
\\~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
=\frac\pi{\sqrt{1-r^2}}\log\left(\frac{1-r}{1+\sqrt{1-r^2}}\right)\end{array}}
\label{logintids}\eqno (\thesection.\arabic{equnum}) $$
where $0\leq r\leq1$. Using the above identities and the boundary conditions
(\ref{bdryeq1}), after some tedious algebra we arrive finally at the simple
expression
$$ \refstepcounter{equnum}
S_H(\eta)=\frac14\log\lambda+\frac16\log\eta+\frac{\eta^2}{36}-\frac{7\eta}{36}
\label{SPetafinal}\eqno (\thesection.\arabic{equnum}) $$
for the free energy of the triangulation model, where we have ignored an
overall numerical constant.
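The identities (\ref{logintids}) entering this derivation are straightforward
to confirm by quadrature (a scipy sketch; the value $r=0.6$ is arbitrary):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

r = 0.6
sq = np.sqrt(1.0 - r*r)
w = lambda x: 1.0/np.sqrt(1.0 - x*x)        # endpoint weight, integrable

I0 = quad(lambda x: np.log(1.0 + r*x)*w(x), -1, 1)[0]
I1 = quad(lambda x: x*np.log(1.0 + r*x)*w(x), -1, 1)[0]
I2 = quad(lambda x: x*x*np.log(1.0 + r*x)*w(x), -1, 1)[0]
J  = quad(lambda x: np.log(1.0 + x)*w(x)/(1.0 + r*x), -1, 1)[0]

print(I0, np.pi*np.log((1.0 + sq)/2.0))     # each pair agrees
print(I1, np.pi*(1.0 - sq)/r)
print(I2, 0.5*(I0 - I1/r + np.pi/2.0))
print(J,  np.pi/sq*np.log((1.0 - r)/(1.0 + sq)))
\end{verbatim}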
To examine the scaling behaviour of the free energy about the critical point,
we once again introduce a renormalized, continuum cosmological constant
$\Lambda$ and momentum $\Pi$ as in (\ref{contdefs}). From the boundary equation
$3\lambda^6\eta^3=\eta-1$ we can calculate derivatives of $\eta$ with respect
to the cosmological constant to get
$$ \refstepcounter{equnum}
\frac{\partial\eta}{\partial\Lambda}=-\frac{3\lambda_c^6\eta^4}{\eta-\eta_c}
\label{deriveta}\eqno (\thesection.\arabic{equnum}) $$
which we note diverges at the critical point. From (\ref{SPetafinal}) and
(\ref{deriveta}) the first two derivatives of the free energy are then found to
be
$$ \refstepcounter{equnum}
\frac{\partial S_H}{\partial\Lambda}=-\lambda_c^6\eta^3(\eta-2)/6
{}~~~~~,~~~~~\frac{\partial^2S_H}{\partial\Lambda^2}=2\lambda_c^{12}\eta^6
\label{freederivsL}\eqno (\thesection.\arabic{equnum}) $$
Both derivatives in (\ref{freederivsL}) are finite at the critical point.
Taking one more derivative yields a combination that does not cancel the
singular $(\eta-\eta_c)^{-1}$ part of (\ref{deriveta}), i.e. there is a third
order phase transition at the critical point. Substituting (\ref{contdefs})
into the boundary equation (\ref{bdryeq1}) gives $\Pi^2\sim\Lambda$ near
criticality $\Lambda\to0$ (see (\ref{PiLambda}),(\ref{Xi*def})), and so
substituting $\eta\sim\eta_c-\Lambda^{1/2}$ into (\ref{freederivsL}) and
expanding about $\Lambda=0$ yields
$$ \refstepcounter{equnum}
\frac{\partial^2S_H}{\partial\Lambda^2}=c\cdot\Lambda^{1/2}+\dots
\label{SHexpcrit}\eqno (\thesection.\arabic{equnum}) $$
which identifies the expected pure gravity string exponent (\ref{gravstr}).
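The third-order nature of the transition can also be seen numerically from
(\ref{SPetafinal}) alone (a finite-difference sketch, numpy assumed): the
second derivative tends to its critical value at a $\Lambda^{1/2}$ rate, so
the third derivative diverges as $\Lambda^{-1/2}$:
\begin{verbatim}
import numpy as np

lam_c = (2.0/9.0)**(1.0/3.0)

def S_H(Lam):                               # free energy (SPetafinal)
    lam = lam_c*(1.0 - Lam)**(1.0/6.0)
    roots = np.roots([3.0*lam**6, 0.0, -1.0, 1.0])
    real = roots[np.abs(roots.imag) < 1e-6].real
    eta = np.min(real[real > 0.0])          # regular branch
    return 0.25*np.log(lam) + np.log(eta)/6.0 + eta**2/36.0 - 7.0*eta/36.0

def d2S(Lam, eps=1e-5):                     # second finite difference in Lambda
    return (S_H(Lam + eps) - 2.0*S_H(Lam) + S_H(Lam - eps))/eps**2

for Lam in (0.04, 0.01, 0.0025):            # successive sqrt(Lambda) slopes
    c = (d2S(4.0*Lam) - d2S(Lam))/(np.sqrt(4.0*Lam) - np.sqrt(Lam))
    print(Lam, c)                           # -> a non-zero constant as Lam -> 0
\end{verbatim}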
\section{Incorporation of the Itzykson-Zuber Correlator}
In this section we shall give a more quantitative presentation of the
evaluation of observables in the curvature matrix models and how they reproduce
features of the triangulated random surface sum. This will also further
demonstrate the validity of the weight splitting that was assumed to hold in
the Itzykson-Di Francesco formula, i.e. that the leading order contributions to
the partition function indeed do localize at $N=\infty$ onto the even
representations of $GL(N,{\fam\Bbbfam C})$ that split symmetrically into mod 3
congruence classes. For this we shall present an explicit evaluation of the
Itzykson-Zuber correlator ${\cal F}(h)$ using (\ref{weightcont}). This will
amount to an evaluation of the large-$N$ limit of the generalized Schur
functions introduced in the previous section.
Notice first that expanding both sides of (\ref{weightcont}) in powers of $q$
and equating the constant terms leads to the identity
$$ \refstepcounter{equnum}
1=\oint_{\cal C}\frac{dh}{2\pi i}~\{{\cal H}_H(h)+{\cal F}(h)\}
\label{aneq}\eqno (\thesection.\arabic{equnum}) $$
which, along with the normalization $\int_0^adh~\rho_H(h)=1$ of the spectral
density, implies that
$$ \refstepcounter{equnum}
\oint_{\cal C}dh~{\cal F}(h)=0
\label{Fhanalytic}\eqno (\thesection.\arabic{equnum}) $$
Thus, at least for some range of the couplings $\tilde t_q^*$, the function
${\cal F}(h)$ will be analytic in a neighbourhood of the cut of ${\cal
H}_H(h)$. This will be true so long as the Itzykson-Zuber integral does not
undergo a phase transition which changes its analyticity features in the
large-$N$ limit. The equation (\ref{weightcont}) can be used to determine
${\cal F}(h)$ once the coupling constants of the dynamical triangulation are
specified. The simplest choice is a power law variation of the couplings as the
coordination numbers are varied,
$$ \refstepcounter{equnum}
q\tilde t_q^*=t^{q-1+p}~~~~~~;~~~~~~t\in{\fam\Bbbfam R}~,~p\in{\fam\Bbbfam Z}
\label{weightchoice}\eqno (\thesection.\arabic{equnum}) $$
so that each vertex of the triangulated surface interacts with a strength
proportional to $t$ (with proportionality constant $t^p$) with each of its
nearest neighbours.
This simple choice of vertex weights in the triangulation model allows us to
study more precisely how the curvature matrix model represents features of the
random surface sum, and to further demonstrate that the actual saddle-point
localization of the partition function is not onto some configuration of Young
tableau weights other than the mod 3 representations. It will also allow us to
examine how the observables of the model behave as the weights are varied in
this simple case, without the complexities that would appear in the analytic
form of the solution for other choices of coupling constants. For generic
$t\neq1$ (i.e. ${\cal F}(h)\neq0$), the saddle-point solution of the matrix
model must satisfy certain physical consistency conditions so that it really
does represent the (continuum) genus zero contribution to the random surface
sum (\ref{partgraphs}) with the choice of couplings (\ref{weightchoice}). Using
Euler's theorem $V-E+F=2$ for a spherical topology and the triangulation
relation
$$ \refstepcounter{equnum}
\sum_{v_q^*\in G_3}q=2E=3F
\label{triangrel}\eqno (\thesection.\arabic{equnum}) $$
it follows that the planar surface sum in (\ref{partgraphs}) is
$$ \refstepcounter{equnum}
Z^{(0)}_H(\lambda,t)=t^{2(p-1)}\sum_{G_3^{(0)}}c_{G_3^{(0)}}\left(
\lambda^3t^{p+1}\right)^{A(G_3^{(0)})}
\label{partsumexpl}\eqno (\thesection.\arabic{equnum}) $$
where the sum is over all planar fat-graphs $G_3^{(0)}$ of area
$A(G_3^{(0)})\propto F(G_3^{(0)})$, and the constant $c_{G_3^{(0)}}$ is
independent of the coupling constants $\lambda$ and $t$. The perturbative
expansion parameter is $\lambda^6t^{2(p+1)}$ and the critical point, where a
phase transition representing the continuum limit of the discretized random
surface model takes place, is reached by tuning the expansion parameter to the
radius of convergence of the power series (\ref{partsumexpl}). The critical
line $\lambda_c(t)$ thus obeys an equation of the form
$$ \refstepcounter{equnum}
\lambda_c(t)^6\cdot t^{2(p+1)}=~{\rm constant}
\label{critline}\eqno (\thesection.\arabic{equnum}) $$
The solution of the model for $t\neq1$ should therefore have the same physical
characteristics as that with $t=1$, since in the random surface interpretation
of the curvature matrix model the only effect of changing $t$ is to rescale the
cosmological constant of the random surface sum for $t=1$ as
$\lambda^3\to\lambda^3t^{p+1}$. This geometrical property should be reflected
in the large-$N$ solution of the matrix model with a non-vanishing
Itzykson-Zuber correlator.
As discussed in \cite{kaz1a}, it is possible to invert the equation
$$ \refstepcounter{equnum}
G(h)=~{\rm e}^{{\cal H}_H(h)+{\cal F}(h)}
\label{Gdef}\eqno (\thesection.\arabic{equnum}) $$
to obtain $h$ as a function of $G$, by changing variables in the contour
integral (\ref{weightcont}) from $h$ to $G$ to get
$$ \refstepcounter{equnum}
q\tilde t_q^*=\oint_{{\cal C}_G}\frac{dG}{2\pi iG}~h(G)G^q
\label{weightinv}\eqno (\thesection.\arabic{equnum}) $$
where the contour ${\cal C}_G$ encircles the cut $[G(b),0]$ with clockwise
orientation in the complex $G$-plane. Then
$$ \refstepcounter{equnum}
h(G)=1+\sum_{q=1}^\infty\frac{q\tilde t_q^*}{G^q}+\sum_{q=1}^\infty g_qG^q
\label{hG}\eqno (\thesection.\arabic{equnum}) $$
where the coefficients
$$ \refstepcounter{equnum}
g_q\equiv\oint_{{\cal C}_G}\frac{dG}{2\pi
iG}~h(G)G^{-q}=\lambda^{-q}\left\langle\frac{\rm tr}
N(XA_3)^{3q}\right\rangle_{{\cal M}_3^*}
\label{gq}\eqno (\thesection.\arabic{equnum}) $$
determine the analytic part of the function $h(G)$. The second equality in
(\ref{gq}) follows from (\ref{3ptdualcorrs}) and the constant term in the
Laurent series expansion (\ref{hG}) is unity because of the normalization of
the spectral density $\rho_H$ \cite{kaz1a}. In the general case, the solution
for $G(h)$ as determined from (\ref{hG}) will be multi-valued. It was shown in
\cite{kaz1a} that the first set of sheets of the Riemann surface of this
multi-valued function are connected along, and hence determined by, the cut
structures of $~{\rm e}^{{\cal F}(h)}$, which map the point $h=\infty$ to $G=0$.
These sheets are found by inverting (\ref{hG}) with $g_q=0$. The remaining
sheets are determined by the cuts of $~{\rm e}^{{\cal H}_H(h)}$ and are associated
with the positive powers of $G$ in (\ref{hG}).
The solution is extremely simple, however, for the choice of couplings
(\ref{weightchoice}), as then the inversion of the equation (\ref{hG}) with
$g_q=0$ yields
$$ \refstepcounter{equnum}
G_1(h)=t+\frac{t^p}{h-1}
\label{sheet1}\eqno (\thesection.\arabic{equnum}) $$
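The short calculation behind (\ref{sheet1}) is a geometric series: with the
weights (\ref{weightchoice}) and $g_q=0$, for $|G|>|t|$ the Laurent expansion
(\ref{hG}) sums to
$$
h(G)=1+\sum_{q=1}^\infty t^{q-1+p}G^{-q}=1+t^{p-1}\sum_{q=1}^\infty\left(
\frac tG\right)^q=1+\frac{t^p}{G-t}
$$
which upon solving for $G$ yields (\ref{sheet1}).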
This solution has a simple pole of residue $t^p$ at $h=1$, but no multivalued
branch cut structure. Thus we expect that the singularities of $~{\rm e}^{{\cal
F}(h)}$ will have a pole structure, rather than a cut structure. The remaining
sheets of the function $G(h)$ will be determined by the cut structure of
$~{\rm e}^{{\cal H}_H(h)}$. They are attached to the ``physical'' sheet $G_1(h)$, on
which the poles of $~{\rm e}^{{\cal F}(h)}$ and the cuts of $~{\rm e}^{{\cal H}_H(h)}$ lie,
by these cuts. Note that the analytical structure of the character as
determined by this part of the Laurent series in (\ref{hG}) is anticipated from
the Schur character formula (\ref{schurchar}),(\ref{schurpoly}).
With this observation we can in fact obtain a closed form expression for the
Itzykson-Zuber correlator in the large-$N$ limit, in contrast to the generic
case \cite{kaz1a} where in general one obtains only another discontinuity
equation such as (\ref{saddlepteq}). The resolvent function (\ref{resolv}) can
be written as
$$ \refstepcounter{equnum}
{\cal H}_H(h)=\log\left(\frac{h}{h-b}\right)+\tilde{\cal H}_H(h)
\label{Hsplit}\eqno (\thesection.\arabic{equnum}) $$
where $\tilde{\cal H}_H(h)=\int_b^adh'~\rho_H(h')/(h-h')$ is the reduced
resolvent function associated with the non-trivial part of the density
$\rho_H$, and it has a branch cut on the interval $[b,a]$. Using
(\ref{Fhanalytic}) it can be written as
$$ \refstepcounter{equnum}
\tilde{\cal H}_H(h)=\oint_{\tilde{\cal C}}\frac{d\tilde h}{2\pi i}~\frac{\log
G(\tilde h)}{h-\tilde h}=-\oint_{{\cal C}_G}\frac{dG}{2\pi
i}~\frac{h'(G)}{h-h(G)}\log G
\label{resnontr}\eqno (\thesection.\arabic{equnum}) $$
where the contour $\tilde{\cal C}$ encircles the cut $[b,a]$ of $\tilde{\cal
H}_H(h)$ and we have changed variables from $h$ to $G$ as above. The contour
integral (\ref{resnontr}) in the complex $G$-plane can be evaluated for
large-$h$, which, by analytical continuation, determines it for all $h$. We can
shrink the contour ${\cal C}_G$ down to an infinitesimal one ${\cal C}_0$
hugging both sides of the cut $[G(b),0]$. In doing so, from (\ref{hG}) it
follows that we pick up contributions from the solution (\ref{sheet1}) and the
extra pole at $G=t$ corresponding to the point $h=\infty$. Thus integrating by
parts we find that (\ref{resnontr}) can be written as
$$ \refstepcounter{equnum}
\tilde{\cal H}_H(h)=\log G_1(h)-\log t+\oint_{{\cal C}_0}\frac{dG}{2\pi
i}~\left\{\frac{\partial}{\partial G}\left[\log
G\log(h-h(G))\right]-\frac{1}{G}\log(h-h(G))\right\}
\label{resshrunk}\eqno (\thesection.\arabic{equnum}) $$
In the contour integral over ${\cal C}_0$ in (\ref{resshrunk}), the total
derivative in $G$ gives the discontinuity across the cut $[G(b),0]$, which is
$\log(h-b)$. The other term there evaluates the $G=0$ limit of $\log(h-h(G))$
determined by (\ref{hG}) and (\ref{sheet1}). Using (\ref{Hsplit}), we then have
$$ \refstepcounter{equnum}
{\cal H}_H(h)=\log G_1(h)+\log\left(\frac{h}{(h-1)t+t^p}\right)
\label{Hsoln}\eqno (\thesection.\arabic{equnum}) $$
Since there is only the single (physical) sheet determined by the singularity
structure of $~{\rm e}^{{\cal F}(h)}$, we have $\log G_1(h)={\cal F}(h)+{\cal
H}_H(h)$ on the physical sheet, and combined with (\ref{Hsoln}) we arrive at
the expression
$$ \refstepcounter{equnum}
{\cal F}(h)=\log\left(\frac{(h-1)t+t^p}{h}\right)
\label{izlargeN}\eqno (\thesection.\arabic{equnum}) $$
for the large-$N$ limit of the Itzykson-Zuber correlator. As mentioned above,
the fact that $~{\rm e}^{{\cal F}(h)}$ has only a simple pole of residue $t^p-t$ at
$h=0$ is because there are no other sheets below $G_1(h)$ connected by cuts of
$~{\rm e}^{{\cal F}(h)}$. This is opposite to the situation that occurs in the
Gaussian case ($A=0$ in (\ref{partfn})), where $~{\rm e}^{{\cal H}_H(h)}$ has a
simple pole of residue 1 at $h=1$ and there are no upper branches above
$G_1(h)$ connected by cuts of $~{\rm e}^{{\cal H}_H(h)}$ \cite{kaz1a}. It can be
easily verified, by blowing up the contour $\cal C$, using the asymptotic
boundary condition ${\cal H}_H(h)\sim1/h+{\cal O}(1/h^2)$ for large-$h$ and
computing the residue at $h=\infty$ in (\ref{weightcont}), that
(\ref{izlargeN}) consistently yields the weights (\ref{weightchoice}).
Furthermore, for $p=t=0$, (\ref{izlargeN}) reduces to the solution of
\cite{kaz1a} in the case where only the vertex weight $\tilde t_1^*$ is
non-zero, while for $t=1$ (i.e. $\bar A=\tilde A={\bf1}$), (\ref{izlargeN})
yields ${\cal F}(h)=0$, as expected from its definition.
Notice that for $p\neq1$, ${\cal F}(h)$ here has a logarithmic branch cut
between $h=0$ and $h=\bar t\equiv1-t^{p-1}$, so that strictly speaking the
solution (\ref{izlargeN}) is only valid for $\bar t\leq0$, i.e. $0\leq t\leq1$
for $p\leq0$ and $t\geq1$ for $p>1$ (where its cut does not overlap with the
cut $[0,a]$). Outside of this region the analytic structure of the
Itzykson-Zuber correlator can be quite different. For $p=1$, we have ${\cal
F}(h)=\log t$ and the only effect of the Itzykson-Zuber integral in the
saddle-point equation (\ref{saddlepteq}) is to rescale the cosmological
constant as $\lambda^3\to\lambda^3t^2$. This is expected directly from the
original matrix integral (\ref{partfn}), since for $p=1$ the vertex weights can
be represented as traces of the external matrix $A=t\cdot{\bf1}$, while for
$p\neq1$ the $GL(N,{\fam\Bbbfam C})$ characters can only be defined via the
generalized Schur functions (\ref{schurchar}),(\ref{schurpoly}). For $p\neq1$,
we shall now see that (\ref{izlargeN}) changes the analytic structure of the
large-$N$ solution of the curvature matrix model, but that the $t=1$ physical
characteristics (the pure gravity continuum limit) are unchanged.
The saddle-point equation (\ref{saddlepteq}) with (\ref{izlargeN}) is then
solved by replacing the $t=1$ resolvent ${\cal H}_H(h;\lambda^3,1)$ in
(\ref{resexactHt=1}) by
$$ \refstepcounter{equnum}
{\cal H}_H(h;\lambda^3,t)={\cal H}_H(h;\lambda^3,1)+2\oint_{\cal
C}\frac{ds}{2\pi
i}~\frac{1}{s-h}\sqrt{\frac{(h-a)(h-b)}{(s-a)(s-b)}}~\log\left(
\frac{(s-1)t+t^p}{s}\right)
\label{ressolnt}\eqno (\thesection.\arabic{equnum}) $$
and compressing the closed contour $\cal C$ in (\ref{ressolnt}) to the cut
$[\bar t,0]$. Note that since $\bar t\leq0$ the sign of the square root in
(\ref{ressolnt}) is negative along this cut. Working out the contour
integration in (\ref{ressolnt}) as before then gives
$$ \refstepcounter{equnum}\oldfalse{\begin{array}{l}
{\cal H}_H(h;\lambda^3,t)={\cal
H}_H(h;\lambda^3t^{p+1},1)\\~~~~~~~~~~+2\log\left\{\frac{h^2\left(h(a+b)-2ab-
\bar t(2h-a-b)-2\sqrt{(h-a)(h-b)(\bar t-a)(\bar t-b)}\right)}{(h-\bar
t)^2\left(h(a+b)-2ab-2\sqrt{ab(h-a)(h-b)}\right)}\right\}\end{array}}
\label{resexact}\eqno (\thesection.\arabic{equnum}) $$
The endpoints $a,b$ of the support of the spectral distribution function are
found just as before and now the boundary conditions (\ref{bdryeq1}) are
replaced by
$$ \refstepcounter{equnum}
\xi_t=\lambda^6t^{2(p+1)}\eta_t^3~~~~~,~~~~~1-\bar t=t^{p-1}=\eta_t-3\xi_t
\label{bdryeq1t}\eqno (\thesection.\arabic{equnum}) $$
where
$$ \refstepcounter{equnum}
\xi_t=\frac14\left(\sqrt{a-\bar t}-\sqrt{b-\bar
t}\right)^2~~~~~,~~~~~\eta_t=\frac14\left(\sqrt{a-\bar t}+\sqrt{b-\bar
t}\right)^2
\label{xitetatdef}\eqno (\thesection.\arabic{equnum}) $$
and we have assumed that $t\neq0$ \footnote{\baselineskip=12pt It can be easily
seen that for $p=t=0$ the saddle-point equations are not satisfied anywhere.
This simply reflects the fact that it is not possible to close a regular,
planar triangular lattice on the sphere. As discussed in \cite{kaz1,kaz1a}, in
the matrix model formulations one needs to study ``almost'' flat planar diagrams
in which positive curvature defects are introduced on the Feynman graphs to
close that triangular lattice on the sphere.}.
The boundary equations (\ref{bdryeq1t}) are identical to those of the $t=1$
model in (\ref{bdryeq1}) with the replacements $\xi\to\bar\xi_t\equiv
t^{1-p}\xi_t$, $\eta\to\bar\eta_t\equiv t^{1-p}\eta_t$ and
$\lambda^3\to\lambda^3t^{p+1}$. The weight average $\langle h\rangle_t$
corresponding to the $1/h^2$ coefficient of the asymptotic expansion of
(\ref{resexact}) is then
$$ \refstepcounter{equnum}
\langle
h\rangle_t=\frac12+t^{2(p-1)}\left(-\frac{\bar\eta_t^2}3+\bar\eta_t-\frac23
\right)
\label{htavg}\eqno (\thesection.\arabic{equnum}) $$
where we have used (\ref{bdryeq1t}). Substituting (\ref{htavg}) into
(\ref{trNX2H}) yields (\ref{trNX2Hexpleta}) with $\eta\to\bar\eta_t$ and an
additional overall factor of $t^{2(p-1)}$ which represents the overall factor
in (\ref{partsumexpl}) (the linear in $\lambda$ term is from the Gaussian
normalization of the partition function). Thus, although the precise analytical
form of the solution is different, the critical behaviour (and also the Wick
expansion) of the curvature matrix model for the choice of vertex weights
(\ref{weightchoice}) with $t\neq1$ is the same as that for $t=1$.
The correlators (\ref{trNX2qH}) will all have a structure similar to those at
$t=1$, as in (\ref{htavg}). The non-trivial analytical structure of the
saddle-point solution for $t\neq1$ is exemplified most in the function
(\ref{Gdef}), which in the present case is
$$ \refstepcounter{equnum}\oldfalse{\begin{array}{lll}
G(h;\lambda^3,t)&=&G(h;\lambda^3t^{p+1},1)\\&
&~\times\frac{th^3\left(h(a+b)-2ab-\bar t(2h-a-b)-2\sqrt{(h-a)(h-b)(\bar
t-a)(\bar t-b)}\right)^2}{(h-\bar
t)^3\left(h(a+b)-2ab-2\sqrt{ab(h-a)(h-b)}\right)^2}\\
&=&\frac{\left(h(a+b)-2ab-\bar t(2h-a-b)-2\sqrt{(h-a)(h-b)(\bar t-a)(\bar
t-b)}\right)^2}{\lambda^3t^p(\sqrt a+\sqrt b)^2(a-b)(h-\bar
t)^3\left(h(a+b)-2ab-2\sqrt{ab(h-a)(h-b)}\right)}\end{array}}
\label{Gexpl}\eqno (\thesection.\arabic{equnum}) $$
The inverse function $h(G)$ can be determined from (\ref{Gexpl}) using the
Lagrange inversion formula \cite{di}
$$ \refstepcounter{equnum}
h(G)=G+\sum_{k=1}^\infty\frac1{k!}\left(\frac\partial{\partial
G}\right)^{k-1}\varphi(G)^k
\label{lagrange}\eqno (\thesection.\arabic{equnum}) $$
where the function $\varphi$ is defined from (\ref{Gexpl}) by
$$ \refstepcounter{equnum}
\varphi(h)=h-G(h)
\label{varphidef}\eqno (\thesection.\arabic{equnum}) $$
Comparing with (\ref{hG}) for the choice of vertex weights
(\ref{weightchoice}), we can write down an expression for the generating
function of the dual moments of the triangulation model (or equivalently for
the 4-point and even-even models)
$$ \refstepcounter{equnum}
\sum_{q=1}^\infty\lambda^{-q}\left\langle\frac{\rm tr}
N(XA_3)^{3q}\right\rangle_{{\cal
M}_3^*}G^q=G-\frac{t^{p-1}G}{G-t}+\sum_{k=1}^\infty\frac1{k!}\left(\frac
\partial{\partial G}\right)^{k-1}\varphi(G)^k
\label{corrgenfn}\eqno (\thesection.\arabic{equnum}) $$
Because of the complicated structure of the function (\ref{Gexpl}), it does not
seem possible to write down a closed form for this generating function or
systematic expressions for the dual moments. Nonetheless, (\ref{corrgenfn})
does represent a formal solution for the observables of the triangulation
model.
The above critical behaviour, when a non-trivial Itzykson-Zuber correlator is
incorporated into the dynamical triangulation model, is anticipated from
(\ref{critline}) and thus yields a non-trivial verification of the assumptions
that went into the derivation of the large-$N$ limit of the Itzykson-Di
Francesco formula of section 2. The form of the function $G(h;\lambda^3,t)$ in
(\ref{Gexpl}) illustrates the analytical dependence of the saddle-point
solution on the vertex weights $\tilde t_q^*$. It also demonstrates how the
analytical, non-perturbative properties of the random surface sum
(\ref{partsumexpl}) change at $N=\infty$, although the perturbative expansion
of the free energy coincides with (\ref{partsumexpl}) and the physical
continuum limit (\ref{critline}) is unaltered. The discussion of this section
of course also applies to the 4-point and even-even models with the appropriate
redefinitions of coupling constants above, and also to the complex curvature
matrix model where now the incorporation of the Itzykson-Zuber correlator using
the saddle-point equation (\ref{saddlepteqC}) leads to the appropriate
rescaling of the cosmological constant $\lambda^{3/2}$ by $t^{(p+1)/4}$ as
predicted from the graphical arguments of subsection 3.3 (see
(\ref{complexweights})). The above derivation also suggests an approach to
studying phase transitions in the large-$N$ limit of the Itzykson-Zuber model,
as it shows in this explicit example the region where the analytic structure of
${\cal F}(h)$ changes (i.e. $\bar t>0$) and consequently the region wherein a
discontinuous change of the large-$N$ solution appears. This could prove
relevant to various other matrix models where the Itzykson-Zuber integral
appears \cite{mak,semsz}. The saddle-point solution above of the curvature
matrix model can nonetheless be trivially analytically continued to all
$t\in{\fam\Bbbfam R}$. This is expected since the random surface sum
(\ref{partsumexpl}) is insensitive to a phase transition in the Itzykson-Zuber
integral which appears in the large-$N$ solution of the matrix model only as a
manifestation of the character sum representation of the discretized surface
model.
\section{Complex Saddle Points of the Itzykson-Di Francesco Formula}
The curvature matrix models we have thus far studied have led to a unique,
stable saddle-point solution at large-$N$. From the point of view of the
Itzykson-Di Francesco formula of section 2, there is a crucial reason why this
feature has occurred, namely the cancellation of sign factors that appear in
expressions such as (\ref{char3}). The models we have studied have been
arranged so that there is an overall cancellation of such sign factors which
arise from the splitting of the Young tableau weights into the appropriate
congruence classes. When the weights are then assumed to distribute equally, the
resulting Vandermonde determinant factors stabilize the saddle-point and lead
to a unique real-valued solution for the free energy and observables of the
random matrix model. In this section we shall briefly discuss the problems
which arise when trying to solve the matrix models when the sign variation
terms in the character expansion formulas do not necessarily cancel each other
out.
The destabilization of the real saddle-point configuration of weights was
pointed out in \cite{kaz1} where it was shown that the configuration is {\it
complex} for the Hermitian one-matrix model with Penner potential \cite{di}
$$ \refstepcounter{equnum}
V_P(XA)=-\log({\bf1}-X)
\label{1matrixpen}\eqno (\thesection.\arabic{equnum}) $$
in (\ref{partfn}). The Itzykson-Di Francesco formula for this matrix model
follows from replacing ${\cal X}_3[h]$ by
$\chi_{\{h\}}(A)=\chi_{\{h\}}({\bf1})\propto\Delta[h]$ in (\ref{diformula}), so
that at $N=\infty$ we have
$$ \refstepcounter{equnum}
Z_P^{(0)}=c_N~\lambda^{-N^2/4}\sum_{h=\{h^e,h^o\}}\Delta[h^o]^2
\Delta[h^e]^2\prod_{i,j=1}^{N/2}
\left(h_i^o-h_j^e\right)~{\rm e}^{\frac12\sum_ih_i[\log(\frac{\lambda h_i}N)-1]}
\label{partpenner}\eqno (\thesection.\arabic{equnum}) $$
Now there is no problem with the decomposition of weights into the appropriate
congruence classes, but, as we shall see below, the rapid sign changes of the
mixed product over the even and odd weights destabilize the reality of the
saddle-point configuration of Young tableau weights. In the previous models
such mixed product factors did not pose any problem for the solution at
large-$N$ because they appeared in the denominators of the character expansions
and acted to make the more probable configurations of weights those with
identical distributions of even and odd weights, thus stabilizing the
saddle-point at $N=\infty$. In (\ref{partpenner}), however, the mixed product
$\prod_{i,j}(h_i^o-h_j^e)$ appears in the numerator and thus acts to make the
more probable configurations those with different distributions of even and odd
weights. Thus when a symmetric distribution over $h_i^e$ and $h_j^o$ in
(\ref{partpenner}) is assumed, this has the effect of destabilizing the
saddle-point leading to a complex-valued solution.
The matrix model with Penner potential (\ref{1matrixpen}) is equivalent to the
standard Hermitian one-matrix model for pure gravity \cite{fgz}, i.e. that with
potential $\frac14~{\rm tr}~X^4$ in (\ref{partfn}). Diagrammatically, a two-to-one
correspondence between planar Feynman graphs of these two matrix models exists
by splitting the 4-point vertices of the $X^4$ model as described in subsection
3.2 to give diagrams of the ``even-log" model with potential
$-\log({\bf1}-X^2)=-\log({\bf1}-X)-\log({\bf1}+X)$ (so that the face centers of
the $X^4$ model are mapped onto the vertices and face centers of the even-log
model). From the point of view of the Itzykson-Di Francesco formula, in the
character expansion (\ref{diformula}) for the $X^4$ model we replace ${\cal
X}_3[h]$ by ${\cal X}_4[h]$. The resulting partition function $Z_4$ is a sum
over mod 4 congruence classes of weights in which the distribution sums for the
classes $\mu=0,2$ and $\mu=1,3$ decouple from each other (so that even and odd
weights completely factorize) and each have the precise form at $N=\infty$ of
the partition function (\ref{partpenner}) \cite{kaz1}, i.e.
$Z_4^{(0)}=(Z_P^{(0)})^2$. This is just the graphical correspondence mentioned
above. Thus the Itzykson-Di Francesco formula at least reproduces correct
graphical equivalences in these cases.
To see how the complex saddle-points arise in the character expansion of the
Penner matrix model above, we assume that even and odd weights distribute
symmetrically in (\ref{partpenner}) and define a distribution function
$\rho_P(h)$ for the $N/2$ weights $h_i^e$. Varying the effective action in
(\ref{partpenner}) for the weights $h^e$ then leads to the large-$N$
saddle-point equation
$$ \refstepcounter{equnum}
{\int\!\!\!\!\!\!-}_{\!\!b}^adh'~\frac{\rho_P(h')}{h-h'}=-\frac13\log(\lambda
h)-\log\left(\frac h{h-b}\right)~~~~~,~~~~~h\in[b,a]
\label{pensaddlepteq}\eqno (\thesection.\arabic{equnum}) $$
The corresponding resolvent function ${\cal H}_P(h)$ can be determined by the
contour integration in (\ref{ressoln}) just as before using
(\ref{pensaddlepteq}) and we find after some algebra
$$ \refstepcounter{equnum}
{\cal H}_P(h)=\frac13\log\left\{\frac{(\sqrt a-\sqrt
b)^2(a-b)}{\lambda}\frac{h\left(h+\sqrt{ab}+\sqrt{(h-a)(h-b)}\right)^2}{
\left(h(a+b)-2ab+2\sqrt{ab(h-a)(h-b)}\right)^3}\right\}
\label{respen}\eqno (\thesection.\arabic{equnum}) $$
Expanding (\ref{respen}) for large-$h$ then leads to the boundary conditions
$$ \refstepcounter{equnum}
\xi=\frac{3}{5}\left(\eta-1\right)~~~~~,~~~~~
\left(\frac53\right)^3\lambda^2\eta^5=\left(\eta-1\right)^3
\label{bdryeqpen}\eqno (\thesection.\arabic{equnum}) $$
Consider the structure of the solutions to the boundary conditions
(\ref{bdryeqpen}). Again, the Wick expansion of the original matrix integral is
analytic about $\lambda=0$, and it can be shown that the free energy is
analytic about $\eta(\lambda=0)=1$. We should therefore choose the branches of
the quintic equation in (\ref{bdryeqpen}) which are regular at $\lambda=0$.
There are three solutions which obey this analyticity condition and they are
given by the iterative relations
$$ \refstepcounter{equnum}
\eta_n=1+\frac{5}{3}\omega_3^n\lambda^{2/3}(\eta_n)^{5/3}~~~~~,~~~~~n=0,1,2
\label{recxreg}\eqno (\thesection.\arabic{equnum}) $$
where $\omega_3\in{\fam\Bbbfam Z}_3$ is a non-trivial cube root of unity. The
remaining two solutions behave near $\lambda=0$ as $\eta\sim\pm\lambda^{-1}$.
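Iterating (\ref{recxreg}) once about $\eta=1$ makes the small-$\lambda$
behaviour of the regular branches explicit,
$$
\eta_n=1+\frac53\,\omega_3^n\lambda^{2/3}+\frac{125}{27}\,\omega_3^{2n}
\lambda^{4/3}+{\cal O}(\lambda^2)
$$
so that each regular branch, and in particular the real one $\eta_0$,
generates a power series in $\lambda^{2/3}$.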
The discrete ${\fam\Bbbfam Z}_3$-symmetry of the regular saddle-point solutions
(\ref{recxreg}) seems to be related to the fact that the Schwinger-Dyson field
equations of this matrix model determine the function $G_P(h)=~{\rm e}^{{\cal
H}_P(h)}$ as the solution of a third-order algebraic equation \cite{kaz1}
$$ \refstepcounter{equnum}
\lambda h^3G_P^3-\lambda
h^2(1+h)G_P^2+\left[\frac89-h+\frac1{648\lambda}\left(1-\sqrt{1-12\lambda}
\right)(1-12\lambda)\right]hG_P+h^2=0
\eqno (\thesection.\arabic{equnum}) $$
Initially, the endpoints $a,b$ of the support of the spectral density lie on
the positive real axis, so that one expects that only the real branch $\eta_0$
is a valid solution of the matrix model. However, the perturbative
expansion parameter of the free energy $S_P(\eta_0)$ would then be
$\lambda^{2/3}$. It is easy to see by a Wick expansion of the original matrix
integral that the genus zero expansion parameter is in fact $\lambda^2$.
Furthermore, one
can analyse the analytic properties of the solutions to the quintic boundary
equation in (\ref{bdryeqpen}), and even determine the critical point
$\lambda_c$ which in this case is the point where the two real and positive
roots coalesce. For $\lambda>\lambda_c$ all three roots which are
analytic about $\lambda=0$ become complex-valued.
This critical behaviour is similar
to that discussed for the cubic boundary equation (\ref{bdryeq1}) in subsection
3.4, and so apparently lies in the same universality class as the earlier
models.
However, the
critical value $\lambda_c$ determined this way does not agree with known
results \cite{fgz,akm}, so that the free energy $S_P$ determined from the
character expansion does not count the Feynman graphs of the Penner
model correctly.
The structure of complex saddle-points arises in many other matrix models. For
example,
for the quartic-type Penner
potential
$$ \refstepcounter{equnum}
V_P^{(4)}(XA)=-\log({\bf1}-X^4)
\label{pen4pot}\eqno (\thesection.\arabic{equnum}) $$
the boundary conditions determining the
endpoints $a,b$ of the support of the distribution function are
$$ \refstepcounter{equnum}
\xi=\frac{3}{7}\left(\eta-1\right)~~~~~,~~~~~
\left(\frac73\right)^3\lambda^4\eta^7=\left(\eta-1\right)^3
\label{bdryC}\eqno (\thesection.\arabic{equnum}) $$
Again, there are three regular solutions of (\ref{bdryC}) at $\lambda=0$
and the real root leads to an expansion in $\lambda^{4/3}$, whereas
the Wick expansion can be explicitly carried out and one
finds that the perturbative expansion parameter is $\lambda^4$.
All the models we have studied for which the matrix model saddle point
does not reproduce the graphical expansion share the feature that the
constraint of regularity at $\lambda=0$ yields
multiple solutions of the endpoint equations. Choosing the real root
(or any other single root) leads to the wrong solution
of the matrix model. Thus it appears that the saddle-point
should be defined by some kind of
analytical continuation that extends the support of the spectral density
$\rho_P$
into the complex plane and takes proper account of the multiple root
structure.
It would be interesting to resolve these problems and determine the general
technique for dealing with such complex saddle-points. It would also be
interesting to discover any connection between this saddle-point
destabilization
and the
well-known occurrence of (single-branch) complex saddle points for the
eigenvalue distributions in generalized Penner models \cite{akm}.
\section{Conclusions}
In this paper we have shown that the character expansion techniques developed
in \cite{kaz1,kaz1a} can be applied to odd potentials. We have demonstrated
that the splitting of weights into congruence classes other than those of the
even representations of $GL(N,{\fam\Bbbfam C})$ leads to a proper solution of the
matrix model, provided that one writes the weight distribution that appears in
the character expansion over the appropriate congruence elements. The
Itzykson-Di Francesco formula then correctly reproduces relations between
different models.
{}From a mathematical perspective, the results of this paper raise some
questions concerning the large-$N$ limit of the Itzykson-Di Francesco formula.
For instance, a random surface model with a discrete ${\fam\Bbbfam Z}_p$-symmetry
corresponding to, say, a $p$-colouring of its graphs will be described by a
curvature matrix model with the ${\fam\Bbbfam Z}_p$-symmetry $A\to\omega_pA$,
$\omega_p\in{\fam\Bbbfam Z}_p$. This symmetry will be reflected in the Itzykson-Di
Francesco expansion as a localization of the group character sum onto mod $p$
congruence class representations of $GL(N,{\fam\Bbbfam C})$. The appropriate solution
of the model at $N=\infty$ will then involve resumming the Young tableau
weights over the mod $p$ congruence classes and assuming that the even-odd
decomposition factorizes symmetrically over these classes. However, it is not
immediately clear why such a symmetry assumption of the character expansion at
large-$N$ gives the appropriate solution of the discretized planar surface
theory (although a mapping onto a complex matrix model indicates how this
should work). At this stage there seems to be a mysterious ``hidden'' symmetry
at play which makes the large-$N$ group theoretical approach to solving these
random surface models work. Furthermore, other intriguing features of the
Itzykson-Di Francesco formula, such as the appearance of phase transitions in
the Itzykson-Zuber integral represented through the large-$N$ limit of
generalized Schur functions, are purely a result of the character expansion
representation and correctly conspire to yield the proper solution of the
random surface model. It would be interesting to put all of these features into
some systematic framework for dealing with curvature matrix models in general.
\begin{figure}
\unitlength=0.90mm
\linethickness{0.4pt}
\begin{picture}(150.00,70.00)(0,10)
\small
\put(70.00,15.00){\line(1,0){50}}
\put(70.00,15.00){\line(1,2){25}}
\put(120.00,15.00){\line(-1,2){25}}
\put(70.00,15.00){\line(-1,-1){4}}
\put(120.00,15.00){\line(1,-1){4}}
\put(95.00,65.00){\line(0,1){4}}
\put(94.00,15.00){\line(-1,2){12}}
\put(96.00,15.00){\line(1,2){12}}
\put(83.00,41.00){\line(1,0){24}}
\put(81.00,15.00){\line(-1,2){5.5}}
\put(83.00,15.00){\line(1,2){5.5}}
\put(76.50,28.00){\line(1,0){11}}
\put(107.00,15.00){\line(-1,2){5.5}}
\put(109.00,15.00){\line(1,2){5.5}}
\put(102.50,28.00){\line(1,0){11}}
\put(94.00,41.00){\line(-1,2){5.5}}
\put(96.00,41.00){\line(1,2){5.5}}
\put(89.50,54.00){\line(1,0){11}}
\put(74.50,15.00){\line(-1,2){2.25}}
\put(76.50,15.00){\line(1,2){2.25}}
\put(73.25,21.50){\line(1,0){4.5}}
\put(87.50,15.00){\line(-1,2){2.25}}
\put(89.50,15.00){\line(1,2){2.25}}
\put(86.25,21.50){\line(1,0){4.5}}
\put(100.50,15.00){\line(-1,2){2.25}}
\put(102.50,15.00){\line(1,2){2.25}}
\put(99.25,21.50){\line(1,0){4.5}}
\put(113.50,15.00){\line(-1,2){2.25}}
\put(115.50,15.00){\line(1,2){2.25}}
\put(112.25,21.50){\line(1,0){4.5}}
\put(81.00,28.00){\line(-1,2){2.25}}
\put(83.00,28.00){\line(1,2){2.25}}
\put(79.75,34.50){\line(1,0){4.5}}
\put(107.00,28.00){\line(-1,2){2.25}}
\put(109.00,28.00){\line(1,2){2.25}}
\put(105.75,34.50){\line(1,0){4.5}}
\put(87.50,41.00){\line(-1,2){2.25}}
\put(89.50,41.00){\line(1,2){2.25}}
\put(86.25,47.50){\line(1,0){4.5}}
\put(100.50,41.00){\line(-1,2){2.25}}
\put(102.50,41.00){\line(1,2){2.25}}
\put(99.25,47.50){\line(1,0){4.5}}
\put(94.00,54.00){\line(-1,2){2.25}}
\put(96.00,54.00){\line(1,2){2.25}}
\put(92.75,60.50){\line(1,0){4.5}}
\end{picture}
\begin{description}
\small
\baselineskip=12pt
\item[Figure 5:] An example of a regular fractal-type graph that appears in the
dynamical triangulation.
\end{description}
\end{figure}
{}From a physical point of view, there are many random surface models which are
best dealt with using the dynamical triangulations studied in this paper, and
it would be interesting to exploit the relationships with the even coordination
number models to study the properties of these theories. For instance, one
sub-ensemble of the random surface sum (\ref{partgraphs}) is the collection of
regular fractal-like graphs (Fig. 5) which were shown in \cite{hw} to dominate,
in the high-temperature limit, the surface sum for two-dimensional quantum
gravity coupled to a large number of Ising spins when restricted to
two-particle irreducible Feynman diagrams. These fractal graphs can be
characterized by 3-point graphs $G_3$ where only the dual coordination numbers
$$ \refstepcounter{equnum}
q^*_k=3\cdot2^k~~~~~~,~~~~~~k\geq0
\label{fractalq}\eqno (\thesection.\arabic{equnum}) $$
are permitted. Here $k$ is the order of the fractal construction obtained
inductively by replacing each 3-point vertex of an order $k-1$ fractal graph
with a triangle\footnote{\baselineskip=12pt Note that the total number of
triangle sides along each outer side of the fractal-like structure of a single,
order $k+1$ fractal graph is $2^k-1$. Thus to be able to close the set of
3-point graphs of dual coordination numbers (\ref{fractalq}) on a spherical
topology (corresponding to $N=\infty$ in the matrix model), one needs to glue
an order $k$ and an order $k+1$ fractal graph together along their three
external corner legs (see Fig. 5) so that the valence of the dual vertices of
the faces joining the two fractal structures will coincide with
(\ref{fractalq}).}. The ensemble of fractal graphs corresponds to a branched
polymer phase of two-dimensional quantum gravity \cite{polymer}. We have shown
that when the $3m$-sided polygons in (\ref{partgraphs}) are weighted with a
power law variation with the number of sides, the continuum limit of the model
lies in the pure gravitational phase. The curvature matrix model (\ref{partfn})
with the dual vertex weights arranged as discussed in this paper can thus serve
as an explicit model for the transition from a theory of pure random surfaces
(associated with central charge $D<1$) to a model of branched polymers
(associated with $D\geq1$). This might help in locating the critical dimension
$D_c\geq1$ where the precise branched polymer transition takes place.
\newpage
| 2024-02-18T23:39:43.897Z | 1996-12-18T14:47:41.000Z | algebraic_stack_train_0000 | 234 | 19,902 |
|
\section*{Figure captions}
\begin{itemize}
\item[FIG. 1.] Characteristic oscillations in the
photon number distribution of dynamically
displaced number states for $\omega_L t_{in} = 250$,
$\Delta/\omega_L = 0.2$ and
$|n\rangle = |10\rangle$. In (a) and (b) the quantum limit, Eq. (\ref{8}),
is displayed for different values of $\alpha \equiv g/\hbar\omega_L$.
In (c) and (d) the semi-classical limit, Eq. (\ref{9}), is shown for
different values of $\Omega/\omega_L \equiv 2\sqrt{n}\alpha$.
The full line shows the Poisson distribution with $\bar{n} = 10$.
\item[FIG. 2.] Coherent destruction of tunneling monitored in the
cavity field. Displayed is Eq. (\ref{9}) for $l=n$,
$\Delta/\omega_L = 0.2$ as a function
of $\omega_L t_{in}$: upper figure $2\Omega/\omega_L = 2$,
lower figure $2\Omega/\omega_L = 2.3$. The first root of
$J_0(2\Omega/\omega_L)$ occurs at $2\Omega/\omega_L \approx 2.405$.
The effect is exhibited by a decrease
of the amplitude modulation in the cavity mode oscillations.
\end{itemize}
\end{multicols}
\end{document}
|
\section{Introduction}
Wavefront sensing, preservation and/or correction \Note{is essential in many optical systems, including in astronomy with low intensity point-like sources of rays, tightly focussed medium-intensity laser beams in microscopy and imaging, and for the delivery without aberrations of high-power laser beams for materials processing \cite{marois_DirectImagingMultiple_2008,rueckel2006adaptive,cizmar_SituWavefrontCorrection_2010,mauch2012adaptive}. Implicit in this is the understanding that most optical processes are phase rather than intensity dominant, thus phase and wavefront knowledge is paramount \cite{soldevila2018phase}.} \Note{It may be useful to point out that unlike object reconstruction by digital holography \cite{park2018quantitative} or computational imaging \cite{edgar2018principles}, here there is no object, no structured illumination, and no reference beam - it is the primary beam itself that must be probed and analysed by some in-line and preferably real-time device. Often the outcome of such a wavefront measurement is a means to correct it, perhaps by adaptive optics.} Such wavefront sensing techniques rely on the ability to measure the phase of light which can only be indirectly inferred from intensity measurements. Methods to do so include ray tracing schemes, intensity measurements at several positions along the beam path, pyramid sensors, interferometric approaches, computational approaches, the use of non-linear optics, computer generated holograms (CGHs), meta-materials and polarimetry \cite{navarro1999laser,almoro2006complete,chamot2006adaptive,velghe2005wave,yang2018generalized,bruning2013comparative,huang2015real,borrego2011wavefront,kamali2018review,dudley2014all,Ruelas2018,changhai2011performance,shin2018reference,baek2018high}. Perhaps the most well-known is the Shack-Hartmann wavefront sensor \cite{lane1992wave,vohnsen2018hartmann}. Its popularity stems from the simplicity of the configuration as well as the fact that the output can easily be used to drive an adaptive optical loop for wavefront correction. More recently a modal approach to beam analysis has been demonstrated \cite{flamm2013all,litvin2012azimuthal,schulze2012wavefront,schulze2013reconstruction,litvin2011poynting,schulze2013measurement,liu2013free,godin2014reconstruction}. Using both hard-coded CGHs and digital holograms on spatial light modulators (SLMs) (see \cite{forbes2016creation} for a review), the technique was shown to be highly versatile and accurate. These approaches to wavefront-sensing and corrections still suffer from slow refresh rates, often limited to 100s of Hz, are usually expensive (especially for non-visible applications), and are limited both in terms of spatial resolution and operational wavelength-range.
In this work we demonstrate a wavefront-sensor that is broadband (spanning over 1000 nm, from the visible to the mid-IR), fast (with a refresh rate in the kHz range), and inexpensive (100s of US dollars). We achieve this by building our wavefront-sensor around a digital micro-mirror device (DMD) and leveraging the advantages of the modal decomposition technique. This enables the rapid production of reconstructed intensity and phase-maps with an ``unlimited'' resolution, even though the employed detector is a single-pixel ``bucket-detector''. \Note{We demonstrate the technique using both a visible and NIR laser programmatically deteriorated with aberrations typical of moderately distorted beams, e.g., as would be experienced with thermally distorted high-power laser beams, propagation through a moderately turbulent atmosphere, and optically distorted beams due to tight focusing or large apertures. We demonstrate excellent wavefront reconstruction with measurement rates of 4000 Hz, fast enough to be considered real-time for most practical applications.}
\section{Background theory}
\label{sec:theory}
\noindent For the aid of the reader we briefly introduce the notion of wavefront and phase, outlining how it may be extracted by a modal decomposition approach.
\subsection{Wavefront and phase}
\noindent The wavefront of an optical field is defined as the continuous surface that is normal to the time average direction of energy propagation, i.e., normal to the time average Poynting vector $\mathbf{P}$
\begin{equation}
w(\mathbf{r},z)\perp\mathbf{P}(\mathbf{s},z),
\label{eq:wf1}
\end{equation}
where $z$ denotes the position of the measurement plane. The ISO standards define the wavefront more generally as the continuous surface that minimizes the power density weighted deviations of the direction of its normal vectors to the direction of energy flow in the measurement plane
\begin{equation}
\int\int|\mathbf{P}|\left|\frac{\mathbf{P}_t}{|\mathbf{P}|}-\nabla_t w\right|^2dA\rightarrow\mathrm{min},
\label{eq:wf2}
\end{equation}
where $\mathbf{P}_t=[P_x,\,P_y,\,0]'$. What remains then is to find the Poynting vector $\mathbf{P}$; this is computable from the knowledge of the optical field by
\begin{equation}
\mathbf{P}(\mathbf{s})=\frac{1}{2}\Re{\left[\frac{i}{\omega\epsilon_0}\epsilon^{-1}(\mathbf{s})[\nabla\times\mathbf{U}(\mathbf{s})]\times\mathbf{U}^\ast(\mathbf{s})\right]},
\label{eq:poynting1wf}
\end{equation}
where $\Re$ denotes the real component, for vector fields $\mathbf{U}$, and by
\begin{equation}
\mathbf{P}(\mathbf{s})=\frac{\epsilon_0\omega}{4}\left[i(U\nabla U^\ast-U^\ast\nabla U)+2k|U|^2\mathbf{e}_z\right]
\label{eq:poynting2wf}
\end{equation}
for scalar fields $U$, where $\omega$ is the angular frequency, $\epsilon_0$ the vacuum permittivity, $\epsilon$ the permittivity distribution. In the simple case of scalar, i.e. linearly polarized beams, the wavefront is equal to the phase distribution $\Phi(\mathbf{s})$ of the beam except for a proportionality factor
\begin{equation}
w(\mathbf{s})=\frac{\lambda}{2\pi}\Phi(\mathbf{s}) = \frac{\lambda}{2\pi}\text{arg}\{U(\mathbf{s})\},
\label{eq:wf3}
\end{equation}
where $\lambda$ is the wavelength. It is important to note that this expression is only valid so long as there are no phase jumps or phase singularities, because the wavefront is always considered to be a continuous surface. Nevertheless, this facilitates easy extraction of the wavefront by a phase measurement.
From these expressions it is clear that if the optical field is completely known then the wavefront may readily be inferred. Here we outline how to do this by a modal expansion into a known basis, commonly referred to as modal decomposition.
\subsection{Modal decomposition}
\label{sec:modalDecomp}
\noindent Any unknown field, $U(\mathbf{s})$, can be written in terms of an orthonormal basis set, $\Psi_n(\mathbf{s})$,
\begin{equation}
\label{eq:U}
U(\mathbf{s}) = \sum_{n=1}^{\infty} c_n \Psi_n(\mathbf{s}) = \sum_{n=1}^{\infty} |c_n| e^{i\phi_n} \Psi_n(\mathbf{s}),
\end{equation}
with complex weights $c_n = |c_n| e^{i\phi_n}$ where $|c_n|^2$ is the power in mode $\Psi_n(\mathbf{s})$ and $\phi_n$ is the inter-modal phase, satisfying $ \sum_{n=1}^{\infty} |c_n|^2 = 1.$ Thus, if the complex coefficients can be found then the optical field and its wavefront can be reconstructed, usually requiring only a small number of measurements, especially in the case of common aberrations. Note that the resolution at which the wavefront may be inferred is not determined by the resolution of the detector. In other words, whereas only a few complex numbers are measured, the reconstructed resolution is determined by the resolution of the basis functions, which are purely computational.
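As a minimal numerical sketch of this point (not part of our experimental code; NumPy is assumed and the function name is ours), the coherent sum in Eq.~(\ref{eq:U}) can be evaluated on an arbitrarily fine grid once the coefficients are known:
\begin{verbatim}
import numpy as np

def reconstruct(coeffs, basis):
    """Coherent sum of Eq. (6): U = sum_n c_n Psi_n.

    coeffs : measured complex weights c_n = |c_n| exp(i phi_n)
    basis  : basis functions Psi_n sampled on any grid, however fine
    """
    U = sum(c * Psi for c, Psi in zip(coeffs, basis))
    # for scalar fields the wavefront then follows from Eq. (5):
    # w = (wavelength / (2 * np.pi)) * np.angle(U)
    return U
\end{verbatim}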
The unknown modal coefficients, $c_n$, can be found by the inner product
\begin{equation}
\label{eq:mdoverlap}
c_n = \braket{\Psi_n|U} = \int \Psi_n^*(\mathbf{s}) U(\mathbf{s}) d\mathbf{s},
\end{equation}
\noindent where we have exploited the ortho-normality of the basis, namely
\begin{equation}
\braket{\Psi_n|\Psi_m} = \int \Psi_n^*(\mathbf{s}) \Psi_m(\mathbf{s}) d\mathbf{s} = \delta_{nm}.
\end{equation}
This may be achieved experimentally using a lens to execute an optical Fourier transform, $\mathfrak{F}$. Accordingly we apply the convolution theorem
\begin{equation}
\label{eq:convtheorem}
\mathfrak{F}\{f(\mathbf{s})g(\mathbf{s})\} = F(\mathbf{k}) * G(\mathbf{k}) = \int F(\mathbf{k})G(\mathbf{s}-\mathbf{k}) d\mathbf{k}
\end{equation}
\noindent to the product of the incoming field modulated with a transmission function, $T_n(\mathbf{s})$, that is the conjugate of the basis function, namely,
\begin{equation}
W_0(\mathbf{s}) = T_n(\mathbf{s}) U(\mathbf{s}) = \Psi_n^*(\mathbf{s}) U(\mathbf{s}),
\end{equation}
\noindent to find the new field at the focal plane of the lens as
\begin{equation}
\label{eq:mdlensft}
W_f(\mathbf{s}) = A_0~\mathfrak{F} \{W_0(\mathbf{s}) \} = A_0 \int \Psi_n^*(\mathbf{k}) U(\mathbf{s} - \mathbf{k}) d\mathbf{k}
\end{equation}
Here $A_0 = \exp(i4\pi f/ \lambda)/(i\lambda f)$ where $f$ is the focal length of the lens and $\lambda$ the wavelength of the light. If we set $\mathbf{s} = \mathbf{0}$, which experimentally is the on-axis (origin) intensity in the Fourier plane, then Eq.~(\ref{eq:mdlensft}) becomes
\begin{equation}
W_f(\mathbf{0}) = A_0 \int \Psi_n^*(\mathbf{k}) U(\mathbf{k}) d\mathbf{k}
\end{equation}
\noindent which is the desired inner product of Eq.~(\ref{eq:mdoverlap}). Therefore we can find our modal weightings from an intensity measurement of
\begin{equation}
I_n = |W_f(\mathbf{0})|^2 = |A_0|^2 |\braket{\Psi_n | U}|^2 = |c_n|^2.
\end{equation}
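A discretised stand-in for this optical inner product (a sketch assuming the field is available on a sampled grid; in the experiment the overlap is of course performed optically by the hologram and lens) could read:
\begin{verbatim}
import numpy as np

def modal_powers(U, basis, dA):
    """Modal decomposition, Eqs. (7) and (13): I_n = |<Psi_n|U>|^2.

    U     : unknown field sampled on a grid (2D complex array)
    basis : list of basis functions Psi_n on the same grid
    dA    : area element dx * dy of the grid
    """
    c = np.array([np.sum(np.conj(Psi) * U) * dA for Psi in basis])
    I = np.abs(c) ** 2
    return I / I.sum()   # normalised so that sum_n I_n = 1
\end{verbatim}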
This is not yet sufficient to reconstruct the wavefront of the field as the inter-modal phases are also needed. The inter-modal phases $\Delta \phi_n$ for the modes $\Psi_n$ cannot be measured directly, however, it is possible to calculate them in relation to an arbitrary reference mode $\Psi_{\mathrm{ref}}$. This is achieved with two additional measurements, in which the unknown field is overlapped with the superposition of the basis functions \cite{flamm2009,schulze2013reconstruction}, effectively extracting the relative phases from the interference of the modes. Thus, in addition to performing a modal decomposition with a set of pure basis functions, $\Psi_n$, we perform an additional modal decomposition with each mode and a reference, described by the transmission functions
\begin{equation}
\label{eq:Tcos}
T^{\mathrm{cos}}_n (\mathbf{s}) = \frac{\left[ \Psi^*_{\mathrm{ref}}(\mathbf{s}) + \Psi^*_{n}(\mathbf{s}) \right]}{\sqrt{2}}
\end{equation}
\noindent and
\begin{equation}
\label{eq:Tsin}
T^{\mathrm{sin}}_n (\mathbf{s}) = \frac{\left[ \Psi^*_{\mathrm{ref}}(\mathbf{s}) + i\Psi^*_{n}(\mathbf{s}) \right]}{\sqrt{2}}.
\end{equation}
It is worth noting that, while in principle one measurement is sufficient for an inter-modal phase, two ensure that the phase value is not ambiguous. If the resulting intensity measurements are $I^{\mathrm{cos}}_n$ and $I^{\mathrm{sin}}_n$, then the inter-modal phase can be found from
\begin{equation}
\label{eq:intermodalPhase}
\Delta \phi_n = - \arctan \left[ \frac{2I^{\mathrm{sin}}_n - I_n - I_{\mathrm{ref}}}{2I^{\mathrm{cos}}_n - I_n - I_{\mathrm{ref}}} \right] \in [-\pi, \pi].
\end{equation}
Importantly, in order to reduce the error in the estimation of the inter-modal phase, the reference mode should return an intensity comparatively high to the average intensity of the other modes in the basis.
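In code form, Eq.~(\ref{eq:intermodalPhase}) is a direct transcription (a sketch; the four intensities are the measured, re-scaled signals, and the two-argument arctangent resolves the quadrant so that $\Delta\phi_n\in[-\pi,\pi]$):
\begin{verbatim}
import numpy as np

def intermodal_phase(I_n, I_ref, I_cos, I_sin):
    """Inter-modal phase of mode n relative to the reference, Eq. (16)."""
    return -np.arctan2(2.0 * I_sin - I_n - I_ref,
                       2.0 * I_cos - I_n - I_ref)
\end{verbatim}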
In the present context, the transmission functions are implemented as computer generated holograms (CGHs), and displayed on a DMD spatial light modulator. As a note, the amplitudes of the respective transmission functions are normalized to satisfy the condition that the encoded transmission function, $\widetilde{T_n}$, is $|\widetilde{T}_n| \in [0,1]$. As a result, generated or detected modes are still orthogonal but are no longer orthonormal, with deleterious effects for modal decomposition \cite{flamm2013all}. It has been shown that it is paramount to re-scale the measured intensities before normalising the measurements for $\sum_n I_n = 1$ \cite{flamm2013all}. This correction must be done for each CGH in the system by simply multiplying in the additional factors, with the equations below for a single CGH:
\begin{equation}
\label{eq:orthonormCA}
I_n = I_{\mathrm{meas.}} \braket{\widetilde{T}_n^{\mathrm{CA}}|\widetilde{T}_n^{\mathrm{CA}}}^{-1}
\end{equation}
\begin{equation}
\label{eq:orthonormPO}
I_n = I_{\mathrm{meas.}} |\widetilde{T}_n^{\mathrm{PO}}|^{-1}
\end{equation}
\noindent where $I_{\mathrm{meas.}}$ is the measured intensity which is re-scaled to result in $I_n$, depending on whether a Complex-Amplitude (CA) or a Phase Only (PO) CGH is used.\\
In order to encode the phase and amplitude of the desired transmission functions for implementation with a binary amplitude DMD, the following conditioning of the hologram is required~\cite{brown1966, lee1979}
\begin{equation}
\label{eq:Tdmd}
\widetilde{T}_n(\mathbf{s}) = \frac{1}{2} + \frac{1}{2} \mathrm{sign} \left[ \cos{(p(\mathbf{s}))} + \cos(q(\mathbf{s})) \right],
\end{equation}
where
\begin{equation}
p(\mathbf{s}) = \arg (T_n (\mathbf{s})) + \phi_g (\mathbf{s})
\end{equation}
\begin{equation}
q(\mathbf{s}) = \arcsin\!\!\left(\frac{|T_n(\mathbf{s})|}{|T_n(\mathbf{s})|_{max}}\right)
\end{equation}
and $T_n$ is the desired function to be encoded (for example Eqs.~(\ref{eq:Tcos}), (\ref{eq:Tsin}) and (\ref{eq:LG})) and $\phi_g$ is a linear phase ramp which defines the period and angle of the resulting grating. The target field will occur in the first order diffraction spot. \Note{Due to the nature of a binary amplitude-only hologram, the efficiency is low in comparison to a phase-only hologram on a SLM. Efficiencies on the order of 1.5\% are expected, but this issue can be mitigated by using a sensitive detector, or seen as a benefit if higher incoming laser powers are expected \cite{Mirhosseini2013}.}\\
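As an illustrative sketch of this encoding (the function name is ours; the grating period and angle enter through $\phi_g$ exactly as in the text):
\begin{verbatim}
import numpy as np

def lee_hologram(T, phi_g):
    """Binary DMD hologram encoding the complex function T, Eq. (19).

    T     : desired complex transmission function (2D array)
    phi_g : linear phase ramp defining the carrier grating
    """
    p = np.angle(T) + phi_g                      # phase term p(s), Eq. (20)
    q = np.arcsin(np.abs(T) / np.abs(T).max())   # amplitude term q(s), Eq. (21)
    return 0.5 + 0.5 * np.sign(np.cos(p) + np.cos(q))  # mirror states 0/1
\end{verbatim}
The returned array of zeros and ones maps directly onto the off/on states of the micromirrors, and the target field appears in the first diffraction order as described above.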
In this work we use the Laguerre-Gaussian (LG) basis as our expansion with basis functions in two indices given as \cite{Kogelnik1966}
\begin{equation}
\label{eq:LG}
\begin{aligned}
\Psi^{\mathrm{LG}}_{p,\ell}(r,\theta) = \sqrt{\frac{2p!}{\pi(p + |\ell|)!}}
\left(\frac{r\sqrt{2}}{w_0} \right)^{|\ell|}
\!\!L_p^{|\ell|} \!\!\left(\frac{2r^2}{w^2_0}\right)
\exp \!\!\left(-\frac{r^2}{w^2_0} \right)
\exp (-i\ell\theta)
\end{aligned}
\end{equation}
\noindent where $w_0$ is the Gaussian beam waist and $L_p^{|\ell|} (\cdot)$ is the generalised Laguerre polynomial with azimuthal index $\ell$ and radial index $p$. While the choice of basis is arbitrary there is always an optimal basis to minimise the number of modes in the expansion. For example, if the measured mode has a rectangular shape then it is likely that the Hermite-Gaussian basis will be more suitable as it will require fewer terms in Eq.~(\ref{eq:U}) for an accurate reconstruction.
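For reference, a direct transcription of Eq.~(\ref{eq:LG}) (SciPy assumed; the function name is ours):
\begin{verbatim}
import numpy as np
from scipy.special import genlaguerre, factorial

def lg_mode(r, theta, p, ell, w0):
    """Laguerre-Gaussian basis function LG_{p,ell} of Eq. (22)."""
    norm = np.sqrt(2.0 * factorial(p) / (np.pi * factorial(p + abs(ell))))
    return (norm * (np.sqrt(2.0) * r / w0) ** abs(ell)
            * genlaguerre(p, abs(ell))(2.0 * r**2 / w0**2)
            * np.exp(-r**2 / w0**2)
            * np.exp(-1j * ell * theta))
\end{verbatim}
Together with the sketches above, this suffices to simulate the full measurement chain: sample the basis on a grid, form the transmission functions of Eqs.~(\ref{eq:Tcos}) and (\ref{eq:Tsin}), and recover the $|c_n|$ and $\Delta\phi_n$.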
\section{Experimental setup and methodology}
\label{sec:expsetup}
\begin{figure}[t]
\centering
\includegraphics[width=0.88\linewidth]{setupfig}
\caption{Schematic representation of the experimental setup showing (a) mode (aberration) creation using a SLM and (b) modal decomposition using a DMD. When used as a wavefront measurement tool, part (a) would not be present and the incoming beam would shine directly onto the DMD. As an illustrative example, a modal decomposition of a defocus aberration to $\mathrm{LG}_{\ell=0}^{p\in[0,3]}$ is shown in (c), with modal weightings above each mode.}
\label{fig:expSetup}
\end{figure}
A schematic of the experimental setup is shown in Fig.~\ref{fig:expSetup}. We show a modal decomposition set-up which includes a DMD to display the CGH (the transmission function in Sec.~\ref{sec:theory}), a Fourier lens and a pinhole with a photo-detector to measure the on-axis intensity for the inner product outcome. In order to select the on-axis mode for the modal decomposition, the photodiode can be either fibre-coupled (using a single-mode fibre) or paired with a precision pin-hole ($5~\mu$m).
In this work we tested two DMD devices. The first, a DLP6500FYE device ($1920\times1080$ mirrors, $6.5~\mu$m pitch, and a refresh rate of 9.5~kHz), whose larger chip is on the one hand useful in displaying high order modes, but on the other hand is more affected by strain-induced curvature of the micromirror chip. Consequently, the results in this paper were primarily produced using the second device, a DLP3000, due to its smaller and thus optically flatter chip. This model has $608\times684$ mirrors (7.6~$\mu$m pitch, arranged in a diamond pattern) and a refresh rate of 4~kHz when switching through on-board memory patterns.
We imposed a known primary aberration onto an incoming Gaussian beam and directed it towards the DMD wavefront sensor. For tests in the visible ($\lambda = 635$ nm) a camera was used as the detector and the intensity at the origin (``single pixel'') was used, while for the NIR ($\lambda = 1550$ nm) a single-mode fibre-coupled InGaAs photodiode was used. A custom transimpedance amplifier converted the photodiode current into a voltage that was then measured by the 12~bit Analogue-to-Digital Converter (ADC) of an Arduino Due microcontroller, and sent to a computer. In order to operate the DMD at its fastest rate, the holograms were loaded onto its on-board flash memory.
\section{Reconstruction results}
\label{sec:results}
\subsection{Modal decomposition verification}
\label{subsec:verify}
In order to verify our wavefront sensor, a modal decomposition was performed on prepared Laguerre-Gaussian modes with $\ell\in[-3,3]$ and $p\in[0,3]$. Each mode was generated and a modal decomposition was performed for modal weights and inter-modal phases, with the results shown in Fig.~\ref{fig:sanityTests}. As expected, the azimuthal modal decomposition ($p=0$, $\ell\in[-5,5]$) at both wavelengths shows limited crosstalk, and thus a relatively linear measured intensity.
The inter-modal phase measurement was verified by generating a beam made from a superposition of two LG$_{\ell=\pm1}^{p=0}$ modes with a known phase shift between them, as in Eq.~(\ref{eq:lgpm1super}). The reference mode was chosen as the $\ell=-1$ mode
\begin{equation}
\label{eq:lgpm1super}
T_n(\mathbf{s}) = \Psi^{\mathrm{LG}}_{\ell=-1}(\mathbf{s}) + e^{i\phi} \Psi^{\mathrm{LG}}_{\ell=1}(\mathbf{s}),
\end{equation}
where $T_n$ is the encoded transmission function and $\phi$ is the programmed inter-modal phase between the two modes.
As shown in Fig.~\ref{fig:sanityTests}, both the visible and NIR inter-modal phase tests are largely correct within experimental error. Here, the measurements were repeated ten times as the phase reconstruction was found to be sensitive to noise, as indicated by the shaded error regions in the figure. The error for the NIR measurements was found to be negatively affected by the performance of our custom transimpedance amplifier used to sample the intensities from the photodiode.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{expverify}
\caption{(a) Experimental modal decomposition verification of modal amplitudes, $|c_n|$, of the experimental setup where each mode is generated and subsequently detected for both azimuthal ($\ell$) and radial ($p$) modes. (b) Verification of the inter-modal phase measurement, $\phi_n$, where a superposition of $\mathrm{LG}_{\ell=\pm 1}^{p=0}$ with a specific inter-modal phase was programmed and measured for both wavelengths. The slight crosstalk and phase error are caused by deformations of the DMD surface.}
\label{fig:sanityTests}
\end{figure}
\subsection{Wavefront measurements}
\label{subsec:results}
Figure~\ref{fig:visResults} shows the reconstruction results for visible wavelengths with astigmatism and trefoil aberrations as examples. For both cases the measured wavefront is remarkably similar to the programmed aberration. A NIR wavefront measurement is shown to the right of Fig.~\ref{fig:visResults}, and is also found to be in excellent agreement with the simulation. The slight difference in ``flatness'' of the measured wavefront with respect to the simulated one was attributed to errors in the inter-modal phase measurements.
\section{Discussion}
For both the visible and NIR tests, the primary cause for inaccuracy is the inter-modal phase measurement. This is consistent with the verification tests in Fig.~\ref{fig:sanityTests}, where the inter-modal phase error was also more prominent than the intensity decomposition error. This is due to noise in the intensity measurements, mainly caused by displacements of the beam during the modal decomposition as a result of air-flow in the laboratory, and to some extent by the compounding of errors in Eq.~(\ref{eq:intermodalPhase}).
\Note{A simple error analysis reveals that the percentage error in the phase scales as $4 \Delta I/|I_n^{\Psi} - I_n|$, where $I_n^\Psi$ is the signal in the cosine or sine holograms, $I_n^{\mathrm{cos}}$ or $I_n^{\mathrm{sin}}$, and $\Delta I$ is the error due to the detector. Consequently, the phase error will be negligible for modes of reasonable power, since $\Delta I$ can be made very small while $I_n$ is high. On the other hand, the phase error can be large for modes of low modal power content (small $I_n$). Fortuitously, our approach by its very definition weights the modes according to modal power, so it is the low-power modes that are least important in the reconstruction process. The use of a higher-resolution ADC will result in more accurate reconstructions, since the systematic error component of $\Delta I$ will be reduced. For example, 16 and 24~bit ADCs have dynamic ranges of 96~dB and 145~dB respectively, which corresponds to nanowatt intensity measurement accuracy for incoming beams in the hundreds of milliwatt range. Taking this as a typical case, we find a percentage error in the phase of order $10^{-6}$. Provided a suitable photodiode is used, these sensitivities are achievable at both visible and NIR wavelengths.}
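To make this scaling concrete, the following Python sketch propagates a fixed detector noise floor through a representative three-measurement phase recovery. The cosine/sine hologram combinations and the noise level used here are illustrative assumptions, not the exact form of Eq.~(\ref{eq:intermodalPhase}).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def recovered_phase(rho0, rhon, phi, dI):
    # Intensities for the bare, cosine and sine superposition
    # holograms (assumed standard forms), each with detector
    # noise of rms dI added.
    I0, In = rho0**2, rhon**2
    Icos = 0.5*(I0 + In) + rho0*rhon*np.cos(phi)
    Isin = 0.5*(I0 + In) + rho0*rhon*np.sin(phi)
    I0, In, Icos, Isin = (I + dI*rng.standard_normal()
                          for I in (I0, In, Icos, Isin))
    return np.arctan2(2*Isin - I0 - In, 2*Icos - I0 - In)

phi_true = 0.7                    # programmed inter-modal phase (rad)
for rhon in (1.0, 0.1, 0.01):     # strong, weak, very weak mode
    trials = [recovered_phase(1.0, rhon, phi_true, 1e-4) - phi_true
              for _ in range(2000)]
    print(f"modal amplitude {rhon:5.2f}: "
          f"rms phase error {np.std(trials):.1e} rad")
\end{verbatim}
As expected from the scaling above, the recovered phase is essentially exact for a strong mode and degrades rapidly as the modal power decreases.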
\Note{In addition, the accuracy of the reconstructed wavefront depends on the number of modes used for the decomposition and on the complexity of the aberration, as described in Sec.~\ref{sec:theory}. A higher-order Zernike aberration requires more modes to reconstruct than a lower-order one. It has been shown that very complex phase structures can be mapped with only a few modes, often fewer than 10 \cite{schulze2012wavefront,schulze2013reconstruction,litvin2011poynting,schulze2013measurement}. Further, in many practical applications (such as thermal aberrations of high-power laser beams or optical aberrations of delivered beams) only a few lower-order aberrations are required to describe the beam. This is true even for low to moderate turbulence, where the first few Zernike terms describe most of the observed wavefront error. We can understand this by remembering that the rms wavefront error scales with the square of the Zernike coefficients (the sum of the squared coefficients, to be precise), so that small coefficients become negligible. However, in very strong scattering media such as tissue, or in very strong turbulence where scintillation is experienced, we would expect our technique to require many modes for an accurate reconstruction, with a high error due to low modal powers. Our interest is in real-time analysis for real-time correction, and in such cases correction itself would be equally problematic.}
\Note{The resolution of the DMD and the size of the incoming beam set an upper limit on the number of modes that can be tested; for an SLM with $1920\times1080$ resolution, this is on the order of hundreds of modes \cite{Rosales-Guzman2017}. We can expect similar performance from a DMD. The radius of an LG mode is given by $w_0\sqrt{2p + |\ell| + 1}$ and so, for instance, with $w_0=0.5$~mm and a DLP3000 DMD, which has a minimum dimension of 608 pixels with a pitch of 7.6~$\mu$m, an LG mode with $\ell=5$ and $p=5$ will fill the DMD. This is equivalent to more than $60$ modes, whereas fewer than $10$ modes were needed for accurate wavefront reconstruction in this work.}
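As a sanity check on this estimate, a few lines of Python (assuming the stated beam waist and DMD geometry) count the LG modes whose radius fits within the device:
\begin{verbatim}
import numpy as np

w0 = 0.5e-3                    # beam waist (m), as quoted above
pitch, npix = 7.6e-6, 608      # DLP3000 minimum dimension
half_width = 0.5 * npix * pitch

modes = [(l, p) for p in range(6) for l in range(-5, 6)
         if w0*np.sqrt(2*p + abs(l) + 1) <= half_width]
print(len(modes))              # 66 modes up to l = 5, p = 5
\end{verbatim}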
\begin{figure}[t]
\centering
\includegraphics[width=0.88\linewidth]{reconresults}
\caption{Simulated and measured (reconstructed) wavefront measurements for visible and NIR wavelengths of two aberration examples with an inset intensity comparison for the trefoil case. The differences in the intensity of the inset images are due to camera sensitivity.}
\label{fig:visResults}
\end{figure}
One of the benefits of our technique is the potential for real-time wavefront reconstruction. A camera was used for the visible measurements, so the decomposition was simply scripted at low speed ($\approx$60~Hz hologram rate), whereas for the NIR tests a photodiode was used, which allowed for faster rates. Initial NIR tests were performed in a similar, scripted manner, but in a further test we loaded the holograms into the DMD's frame buffer and took measurements at the maximum refresh rate of 4~kHz. The results were identical to those of the ``slow'' scripted version, demonstrating that wavefront measurements can be performed rapidly using this method.
Given that multiple modal decomposition measurements are required to reconstruct a single wavefront, it is pertinent to elaborate on the achievable wavefront measurement rates of this technique. Different applications require different wavefront measurement rates: thermal aberrations typically evolve slowly, over time frames of seconds, while moderate atmospheric turbulence changes at rates of hundreds of Hz \cite{Greenwood1977}.
Table~\ref{tab:rates} shows calculated wavefront reconstruction rates (wavefronts per second) for several different mode-sets. The maximum number of measurements required for the approach in this paper is $3N-2$, where $N$ is the total number of modes in the set. We see that even assuming many modes on a low-speed device we are able to perform wavefront sensing at video frame rates, whereas for realistic mode sets on better devices the rate is of the order of hundreds to thousands of Hz, fast enough to be considered real-time in most applications. A possible future improvement to the measurement algorithm could make use of compressive sensing techniques and a more targeted measurement regime, thus requiring fewer measurements and resulting in even faster wavefront sensing.
\begin{table}[]
\centering
\caption{Resulting wavefront measurement rate (Hz) for different DMD refresh rates and mode-sets. Larger mode sets will result in higher wavefront reconstruction accuracy.}
\label{tab:rates}
\begin{tabular}{r|c|c|c}
& $\mathrm{LG}_{p\in[0,3]}^{\ell\in[-3,3]}$ & $\mathrm{LG}_{p\in[0,5]}^{\ell\in[-5,5]}$ & $\mathrm{LG}_{p=0}^{\ell\in[-5,5]}$ \\
\hline
4 kHz & 48 & 20 & 129 \\
9.5 kHz & 115 & 48 & 306 \\
32 kHz & 390 & 163 & 1032
\end{tabular}
\end{table}
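The entries of Table~\ref{tab:rates} follow directly from the $3N-2$ measurement count; a minimal Python sketch reproducing them:
\begin{verbatim}
def wavefront_rate(refresh_hz, n_modes):
    # 3N - 2 hologram measurements are needed per wavefront
    return refresh_hz // (3*n_modes - 2)

mode_sets = {"p<=3, |l|<=3": 4*7,   # 28 modes
             "p<=5, |l|<=5": 6*11,  # 66 modes
             "p=0,  |l|<=5": 11}
for refresh in (4e3, 9.5e3, 32e3):
    rates = {k: int(wavefront_rate(refresh, n))
             for k, n in mode_sets.items()}
    print(f"{refresh/1e3:4.1f} kHz:", rates)
\end{verbatim}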
\Note{Finally we point out that the advantage of the modal approach to wavefront sensing is that it simultaneously provides all the required information to infer numerous physical properties of the laser beam, including the Poynting vector, orbital angular momentum density, laser beam quality factor ($M^2$), modal structure and so on, making our DMD modal decomposition approach highly versatile.}
\section{Conclusion}
We have demonstrated a fast, broadband and inexpensive wavefront sensor built around a DMD spatial light modulator. Owing to the modal decomposition technique employed, the resolution of the reconstructed wavefronts is not determined by the resolution of the detector, which is a spatially non-resolved photodiode. Instead, it depends solely on the resolution of the basis functions, which are purely computational. These advantages allow high-resolution wavefront sensing in real time. We expect that devices based on this novel approach will be invaluable for wavefront sensing at NIR wavelengths, where other approaches are either too challenging or too expensive.
\section*{Funding}
EPSRC Centre for Doctoral Training in Intelligent Sensing and Measurement (EP/L016753/1); EPSRC QuantIC (EP/M01326X/1); ERC TWISTS (340507).
\section{Introduction}\label{intro}
Carbon dioxide (CO$_2$) is a greenhouse gas that contributes greatly to global warming. As the use of carbon-based fuel is a primary source of energy, it is desirable to develop technologies for efficient capture and sequestration of CO$_2$ produced from such sources. Significant efforts have been carried out to study adsorption of CO$_2$ on different materials including complicated structures such as covalent organic frameworks \cite{Zeng2016, Lan2010} and metal organic frameworks \cite{Zhang2016, Saha2010}. In this respect, CO$_2$ adsorption on boron clusters and surfaces offers an interesting alternative \cite{Sun2014PCCP, Sun2014JPCC} which deserves further investigation.
Boron, like its neighboring element carbon, possesses a remarkable variety of structures that could be of use in a wide range of applications \cite{Zhang2012, Endo2001, Carter2014}. Bulk boron polymorphs are mainly composed of 3D-icosahedral B$_{12}$ cage structures as basic building blocks \cite{Bullett1982, Perkins1996}, while small boron clusters prefer planar-type aromatic/antiaromatic structures \cite{Zhai2003, Sergeeva2014}.
In fact, neutral and charged clusters B$_{n}^{(+,-)}$, with ${n \leq 15}$, have been predicted theoretically \cite{Boustani1997, Ricca1996, Kiran2005,Tai2010}, and confirmed experimentally (or by combined experimental and theoretical studies) \cite{Zhai2003, Oger2007, Tai2010, Romanescu2012}, to be planar or quasiplanar. For ${{n} > 15}$, competing low-energy isomers start to occur, in particular for the positively charged clusters B$_{16}^+$ to B$_{25}^+$, which were reported to have ring-type structures, based on mobility measurements \cite{Oger2007}. On the other hand, the negatively charged B$_{n}^-$ clusters have been shown to systematically conserve planar-like structures up to at least ${{n}=25}$ by joint photoelectron spectroscopy and quantum chemistry calculations \cite{SerAveZha11,PiaLiRom12, PopPiaLi13,PiaPopLi14}. Moreover, the neutral B$_{16}$ and B$_{17}$ clusters are found to display planar-type geometries based on vibrational spectroscopy studies \cite{Romanescu2012}; in this case, the smallest 3D-like (tubular) structure was suggested to occur for B$_{20}$ \cite{Kiran2005PNAS}. Recently, B$_{27}^-$, B$_{30}^-$, B$_{35}$, B$_{35}^-$, B$_{36}$ and B$_{36}^-$ clusters have been discovered to possess quasiplanar geometries through combined experimental and theoretical studies \cite{PiaHiLi14, Li2014JACS, Li2014Ange, Li2015JCP}, while the B$_{40}$ cluster has been observed to occur with a fullerene structure \cite{Zhai2014}. Such quasiplanar clusters can be viewed as embryos for the formation of 2D boron sheets (borophenes) \cite{PiaHiLi14, Li2014JACS}. Several borophene polymorphs and boron nanotubes have been theoretically predicted \cite{Yang2008, Quandt2005, XWu2012, Weir1992} and also experimentally grown \cite{Ciuparu2004, Weir1992, Patel2015, Mannix2015}.
Previous computational studies have revealed an interestingly strong CO$_2$ adsorption behavior on some theoretical models of surfaces of solid $\alpha$-B$_{12}$ and $\gamma$-B$_{28}$ \cite{Sun2014PCCP} and relatively strong CO$_2$ binding energies on B$_{40}$ and B$_{80}$ fullerenes \cite{Dong2015, Gao2015, Sun2014JPCC}. For the most common boron planar type of clusters, as well as for 2D-boron sheets, however, chemical binding of CO$_2$ was theoretically predicted so far only in the case of chemically engineered systems, namely for charged transition metal (TM)-atom centered boron-ring clusters, TM\textendash B$_{8-9}^-$ \cite{Wang2015}, and for Ca-, Sc- coated boron sheets \cite{Tai2013}.
In the current work, we show the existence of strong chemical binding of the CO$_2$ molecule to the aromatic/antiaromatic planar-type B$_{n}$ clusters (${n}=10~\textrm{to}~13$). By means of first-principles calculations and by varying the initial position of CO$_2$, we identify various chemisorbed and physisorbed configurations. We find that strong chemisorption occurs for all four clusters when the adsorbed CO$_2$ molecule is in the plane of the cluster, close to its edge, and that the strongest adsorption energy reaches 1.6~eV in the case of B$_{12}$. For B$_{11}$ and B$_{13}$, adsorption with dissociated CO$_2$ is also found to occur at some edge sites. We rationalize the mechanism of the strong adsorption as due to the strong and matching planar character of the frontier orbitals of both the cluster and the bent CO$_2$ molecule, together with the favorable redistribution of the excess electronic charge at the edges of the cluster in the presence of the dipole moment of the bent CO$_2$.
\section{Methodology and systems}\label{method}
\subsection{Computational details}
All calculations were carried out using the first-principles plane-wave pseudopotential density functional theory (DFT) method, as implemented in the Quantum ESPRESSO package \cite{Giannozzi2009}. The spin-polarized Perdew-Burke-Ernzerhof (PBE) \cite{Perdew1996} exchange-correlation functional within the generalized gradient approximation (GGA) was employed. We used scalar-relativistic Vanderbilt ultrasoft pseudopotentials \cite{Vanderbilt1990} generated from the following atomic configurations: $2s^{2}2p^{1}$ for B, $2s^{2}2p^{2}$ for C and $2s^{2}2p^{4}$ for O. A non-linear core correction was included in the B pseudopotential. We employed a cubic supercell with sides of 21~\AA\ for all calculations, to avoid spurious interactions between periodic images of the cluster. A 1~$\times$~1~$\times$~1 Monkhorst-Pack \textbf{k}-point mesh was used with a Gaussian level smearing of 0.001~Ry. The threshold for electronic convergence was set to 10$^{-7}$~Ry, and structures were optimized until the forces on each atom were below 10$^{-4}$~Ry/a.u.
The CO$_{2}$ adsorption energy ($E_\textrm{ads}$) on the B clusters was computed as \cite{Sun2014JPCC}:
\begin{equation} \label{eq:E_ads}
E_\textrm{ads}=E_{\textrm{B}_{n}-\textrm{CO$_{2}$}}-{E}_{\textrm{B}_{n}}-{E}_\textrm{CO$_{2}$},
\end{equation}
\noindent where $E_{\textrm{B}_n-\textrm{CO}_2}$ is the total energy of the atomically relaxed system consisting of the B$_{n}$ cluster and adsorbed CO$_{2}$ molecule, $E_{\textrm{B}_n}$ is the total energy of the isolated (relaxed) B$_{n}$ cluster, and $E_{\textrm{CO}_{2}}$ is the total energy of the CO$_2$ molecule in the gas phase. Convergence tests for the plane-wave expansion of the electronic orbitals indicated that changing the kinetic energy cut-off from 64~Ry to 96~Ry resulted in $E_\textrm{ads}$ changes within 1~meV. We used the former wave-function cut-off, together with a 384-Ry cut-off for the augmentation charge density, in all calculations reported here.
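For concreteness, the Python sketch below evaluates Eq.~(\ref{eq:E_ads}), converting from the Rydberg units of the plane-wave code to eV. The total energies shown are placeholder values chosen only so that their difference reproduces the B$_{12}$ result of Table~\ref{tab:E_ads}; they are not our actual computed totals.
\begin{verbatim}
RY_TO_EV = 13.605693   # 1 Ry in eV

def adsorption_energy(E_cluster_co2, E_cluster, E_co2):
    # Eq. (1): negative values indicate binding
    return (E_cluster_co2 - E_cluster - E_co2) * RY_TO_EV

# placeholder totals (Ry), labeled hypothetical; their difference
# reproduces E_ads = -1.60 eV for B12-CO2
print(adsorption_energy(-114.80459, -77.28221, -37.40479))
\end{verbatim}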
\subsection{Geometry and relative stability of the B$_\textrm{10-13}$ clusters}
The initial structural configurations of the boron clusters were constructed based on previous work \cite{Tai2010} that catalogued the stable structures of B$_{n}$ clusters (for ${n \leq 13}$). We performed structural optimization, resulting in the lowest-energy cluster geometries and bond lengths shown in Fig.~\ref{fig:bondlengths}, which are consistent with the results of Ref.~\cite{Tai2010}. It can be seen that the B$_{10}$ and B$_{12}$ clusters exhibit quasiplanar structures, while the B$_{11}$ and B$_{13}$ clusters have planar geometries. Moreover, the B$_{12}$ and B$_{13}$ clusters are characterized by three inner atoms that are compactly bound, forming an inner triangle. The longest B\textendash B bonds of $\geq$1.8~\AA\ in these clusters belong to B$_{11}$ and B$_{13}$, and form a square configuration within the cluster (see Fig.~\ref{fig:bondlengths}). Among the B$_n$ clusters studied, B$_{12}$ is energetically the most stable, with a binding energy of 5.37~eV/atom (calculated binding energies are given in the Supplementary material, Part I: Table~S1).
\begin{figure}[!h]
\centering\includegraphics[width=0.7\linewidth]{bondlengths.png}
\caption{Obtained optimized structures of B$_{n}$ clusters (${n}=10-13$). Specific B\textendash B bond lengths, in \AA, are also indicated for each cluster. Insets show the side view of the cluster, demonstrating that B$_{11}$ and B$_{13}$ clusters exhibit planar structures, while B$_{10}$ and B$_{12}$ are quasiplanar with some of the atoms displaced by 0.31 and 0.34~\AA\ from the cluster plane for B$_{10}$ and B$_{12}$ clusters, respectively.}
\label{fig:bondlengths}
\end{figure}
\section{Results and discussions}\label{results}
\subsection{Chemisorption of CO$_2$ on B$_{n}$ clusters}
We considered different initial configurations of CO$_2$ relative to the B$_{n}$ clusters including various adsorption sites and orientations of the molecule. We found strong chemisorption of the CO$_2$ molecule along the contour edges of the B$_{n}$ clusters. In addition, physisorption states of the CO$_2$ molecule were observed at larger distances from the cluster or when placed on top of the B-cluster plane. The strong binding of CO$_2$, with adsorption energies between $-1.6$ and $-1$~eV, is a common feature for all four B$_{n}$-clusters.
Figure~\ref{fig:init_final} shows the obtained optimized configurations of the B$_{n}$\textendash CO$_{2}$ systems characterized by the strongest adsorption energy ($E_\textrm{ads}$, shown in Table~\ref{tab:E_ads}), together with their corresponding initial configurations, shown as insets. The strongest adsorption overall was found for the B$_{12}$\textendash CO$_{2}$ system, with a chemisorption energy of $-1.60$~eV, followed by close values of about $-1.4$~eV for B$_{11}$ and B$_{13}$, and a somewhat weaker, but still robust, chemisorption on B$_{10}$. Besides similarly strong values of the CO$_{2}$ adsorption energy, all four B$_{n}$\textendash CO$_2$ systems share common features regarding the adsorption geometry. Thus, for all clusters, chemisorption occurs when the CO$_2$ molecule is initially placed near the edge sites and in-plane with respect to the B$_{n}$ cluster (see the insets in Fig.~\ref{fig:init_final}). Furthermore, the final configurations indicate that chemisorbed B$_{n}$\textendash CO$_2$ systems tend to keep a planar geometry. The CO$_2$ molecule bends by a similar angle of $\sim$122\textdegree~for all B clusters considered as it chemisorbs on the cluster. It should be noted that this angle corresponds to the equilibrium geometry predicted theoretically for the negatively charged CO$_2$ molecule \cite{GutBarCom98}. Following the formation of a C\textendash B and an O\textendash B bond (with lengths of $\sim$1.6 and $\sim$1.4~\AA, respectively), the O\textendash C bond lengths of the molecule (initially 1.18~\AA) also elongate asymmetrically, to $\sim$1.2~\AA\ (for the O\textendash C bond further away from the cluster) and to $\sim$1.5~\AA\ (for the O\textendash C bond that is linked to the B cluster). Distances between the B atoms to which the O and C atoms are bound (denoted B$^{(1)}$ and B$^{(2)}$ respectively, in Fig.~\ref{fig:init_final}, with the binding O denoted O$^{(1)}$) increase by $0.3~-~0.7$~\AA\ with respect to their bond lengths in the isolated clusters. Other edge chemisorption sites were also found for all four clusters (with $E_\textrm{ads}$~$<$~$-1.10$~eV).
\begin{figure}[!h]
\centering\includegraphics[width=0.7\linewidth]{init_final.png}
\caption{Obtained optimized structures of CO$_2$ with B$_{n}$ clusters for the strongest adsorption, where B, C and O atoms are shown in grey, yellow and red, respectively. The distances between the cluster and molecule are given in angstroms. Insets represent initial positions prior to the interaction of the CO$_2$ molecule with B clusters, with the molecule placed in the cluster plane at less than 2~\AA\ distance from the cluster. Boron bonds shorter than 2~\AA\ are represented by rods.}
\label{fig:init_final}
\end{figure}
\begin{table}[!h]
\centering
\caption{Strongest adsorption energies (in eV) obtained for the relaxed configurations with adsorbed CO$_2$ molecule on the B$_{n}$, ${n}=10-13$, clusters and with dissociated molecule (CO + O) in the cases of B$_{11}$ and B$_{13}$ (second line). The adsorption energies correspond to the final configurations shown in Figs. \ref{fig:init_final} and \ref{fig:dissoc}. The adsorption energy of the dissociated CO$_2$, $E_\textrm{ads}^\textrm{dissociated}$, was obtained using Eq.~\ref{eq:E_ads}.}
\begin{tabular}{l c c c c}
\hline
& \textbf{B$_\textrm{10} $} & \textbf{B$_\textrm{11} $} & \textbf{B$_\textrm{12} $} & \textbf{B$_\textrm{13} $}\\
\hline
$E_\textrm{ads}$ (eV) & $-1.11$ & $-1.42$ & $-1.60$ & $-1.43$\\
\hline
$E_\textrm{ads}^\textrm{dissociated}$ (eV) & --- & $-2.19$ & --- & $-1.66$\\
\hline
\end{tabular}
\label{tab:E_ads}
\end{table}
Dissociation of CO$_2$ was also observed for the B$_{11}$ and B$_{13}$ clusters at some specific B sites, wherein some of the B\textendash B bonds broke in order for the dissociated O and C\textendash O fragments to bind to the (deformed) cluster, as shown in Fig.~\ref{fig:dissoc}. For the B$_{11}$ and B$_{13}$ clusters with dissociated CO$_2$, the chemisorption energies ($E_\textrm{ads}^\textrm{dissociated}$) are $-2.19$~eV and $-1.66$~eV, respectively.
We also found physisorbed CO$_2$ configurations, with physisorption energies ranging from $-11$ to $-30$~meV for distances between 3.5 and 4~\AA\ from the B$_{n}$ cluster (measured as the smallest interatomic separation). The physisorption configurations include the CO$_2$ molecule placed above the cluster plane, or in the cluster plane with the C atom farther from the cluster and the O atoms in or out of the plane (as shown in Fig.~\ref{fig:physi_correct} for the case of B$_{12}$). An example describing the in-plane physisorption and chemisorption states of CO$_2$ on the B$_{12}$ cluster is given in Part II of the Supplementary material.
\begin{figure}[!h]
\centering\includegraphics[width=0.7\linewidth]{dissoc.png}
\caption{Obtained optimized structures of the CO$_2$ molecule adsorbing on (a) B$_{11}$ and (b) B$_{13}$ clusters where dissociation of the molecule occurs. Insets show the initial position prior to the interaction with the molecule placed in the cluster plane at a distance of less than 2~\AA\ from the cluster.}
\label{fig:dissoc}
\end{figure}
\begin{figure}[!h]
\centering\includegraphics[width=0.4\linewidth]{physi_correct.png}
\caption{Representative image of a typical physisorption state of CO$_2$ molecule on B$_{12}$ cluster obtained when the molecule is initially placed near an edge atom of the cluster, and rotated 90\textdegree ~out of the cluster plane. The CO$_2$ molecule maintains its linear structure as it moves away from the cluster.}
\label{fig:physi_correct}
\end{figure}
The binding energies we have found here for the chemisorbed CO$_2$ molecule on the neutral, metal-free planar-type clusters (in the range 1.1 \textendash~1.6 eV for B$_{10-13}$) are significantly larger than previously obtained for 3D-type cluster structures ($\sim$0.4~eV for B$_{40}$ and $\sim$0.8~eV for B$_{80}$ \cite{Sun2014JPCC, Dong2015, Gao2015}). To the best of our knowledge, this is the first study that provides evidence of the strong chemical binding of the CO$_2$ molecule to planar-type B clusters, although good adsorption was theoretically found for a few diatomic molecules on selected B clusters \cite{ValFarTab15, SunWanLi13, SloKanPan10}. The CO$_2$ binding energies to B$_{11-13}$ we obtained are also larger than those reported for the chemically engineered TM\textendash B$_\textrm{8-9}^-$ clusters and for the metallated/charged fullerenes ($1.1-1.2$~eV) \cite{Wang2015,Dong2015, Gao2015}. We note that previous studies have indicated that charging the Boron fullerenes or engineered TM\textendash B$_\textrm{8-9}$ negatively tends to enhance the adsorption of CO$_2$ \cite{Gao2015, Wang2015}, which suggests that even stronger adsorption could be obtained for B$_{n}^-$ planar clusters.
Furthermore, we expect the strong bonding character obtained here for CO$_2$ to B$_\textrm{10-13}$ to persist for the larger planar-type B clusters. In fact, we have examined the binding properties of CO$_2$ to a semi-infinite Boron $\alpha$-sheet \cite{TanIsm07,note_sheet} and also found chemisorption with ${E_\textrm{ads}\approx-0.3}~\textrm{eV}$ and a similar type of CO$_2$ adsorption geometry (including the $\sim$122\textdegree~O$^{(1)}$\textendash C\textendash O bond angle) at the edge of the Boron sheet \cite{note_sheet}. The latter may be viewed as the edge of a planar B$_{N}$ cluster in the limit of large $N$.
Finally, we stress that the large chemisorption energy we find is a robust feature of the system that persists even in the presence of Hubbard on-site interactions that are implemented via GGA + U calculations \cite{note_U}. The interactions provided by U increase the CO$_2$ HOMO-LUMO gap (next section), and are actually found to enhance the adsorption strength (binding energy) of the CO$_2$ molecule to the B clusters.
\subsection{Electronic properties of the distorted and undistorted isolated systems}
In order to better understand the strong chemisorption of CO$_2$ on all considered B planar-type clusters, we have examined the atomic-orbital-resolved density of states of the isolated clusters and bent molecule, focusing on the atoms participating in the formation of the chemisorption bonds. As we have seen in the previous section, the CO$_2$ bond angle changes from 180\textdegree~(free molecule) to approximately 122\textdegree~in the chemisorbed geometry, which is indicative of a negative charging of the molecule. Moreover, the bending itself of the CO$_2$ molecule significantly modifies its electronic spectrum, and in particular considerably reduces its HOMO-LUMO gap \cite{Tai2013b}. In fact, an important point to note is that, when the molecule bends, the previously degenerate (from linear CO$_2$) highest-occupied and lowest-unoccupied $\pi$ states of the molecule both split into in-plane and out-of-plane orbitals, leaving exclusively O and C $2p$-related in-plane molecular orbitals as the frontier orbitals of the 122\textdegree - bent molecule (see Supplementary material, Fig.~S2 and also Fig.~\ref{fig:pdos_mo}(a)).
The splitting, after the molecule bends, of the lowest-unoccupied $\pi$ ($p_{z}$,$p_{y}$) level, in particular, is very large (3.7~eV) compared to the HOMO level splitting (0.4~eV, Fig.~S2) and the overall HOMO-LUMO gap also drastically decreases (by 6.6~eV in our calculations, Fig.~S2(b) and Fig.~\ref{fig:pdos_mo}(a)) with respect to the linear molecule (Fig.~S2(a)). Figure~\ref{fig:pdos_mo}(a) shows, for the resulting bent CO$_2$, the in-plane components of the $2p$-related O$^{(1)}$ and C projected density of states (PDOS) along the B\textendash C bond direction ($p_{y}$ component) and perpendicular to it ($p_{x}$ component). The corresponding molecular orbitals for the levels closest to the gap are also displayed in the figure. As can be seen, the bent CO$_2$ molecule has fully planar-type HOMO and LUMO states (denoted as H$_\textrm{A1}$ and L$_\textrm{A1}$ in Fig.~\ref{fig:pdos_mo}), in strong contrast with the linear CO$_2$ molecule (Fig.~S2(a)). The PDOS in Fig.~\ref{fig:pdos_mo}(a) also shows that, while the HOMO of the bent molecule retains very strong O$^{(1)}$ and C $2p_{y}$-orbital character, the LUMO exhibits both a strong $2p_{y}$ component and a substantial $2p_{x}$ component (both antibonding) from the O$^{(1)}$ and C atoms.
In Fig.~\ref{fig:pdos_mo}(b), we display, for the isolated B$_{12}$ cluster, the same type of $p_{x}$ and $p_{y}$ in-plane components of the density of states projected on the $2p$-orbitals of B$^{(1)}$ and B$^{(2)}$ atoms (the $2p_{z}$ component is shown in the Supplementary material, Fig.~S3). Such in-plane states are the ones which may interact/hybridize with the frontier orbitals of the bent CO$_2$. In Fig.~\ref{fig:pdos_mo}(b), we also display for the levels closest to the HOMO-LUMO gap and having the highest in-plane PDOS, the corresponding molecular orbitals. These states are characterized by lobes protruding over the cluster's edge within the cluster plane.
It can be observed from Fig.~\ref{fig:pdos_mo}(b) (and comparison with the full $p$-state PDOS in Fig.~S3(b)) that there is an especially large density of in-plane orbitals of the {\it{peripheral B atoms}} (B$^\textrm{(1)}$ and B$^\textrm{(2)}$) in the upper (2 to 3~eV) region of the cluster occupied-state spectrum. We note that previous calculations indicated that the B clusters which we are considering have in total in the occupied spectrum only 3 to 4 $p_{z}$-type (out-of-plane) molecular orbitals \cite{Zubarev2007}, delocalized over all cluster atoms, which is also what we find. The high density of in-plane $p_{x}$ and $p_{y}$ orbitals from peripheral (B$^\textrm{(1)}$ and B$^\textrm{(2)}$) atoms in the top (2 to 3~eV) part of the cluster occupied-state spectrum is a feature common to all four clusters considered in this work.
The in-plane molecular states of the cluster in the energy region [$-5$~eV, $-1$~eV], in Fig.~\ref{fig:pdos_mo}(b), strongly contribute to the electronic charge density of the cluster along its contour edge. In Fig.~\ref{fig:B12distortedchargedens}, we display the electronic charge density of the isolated B$_{12}$ cluster with the distorted geometry as in the adsorbed B$_{12}$\textendash CO$_2$ system. The electronic charge distribution is similar to that of the free/undistorted B$_{12}$ cluster (Fig.~S1 in Supplementary material); it is largely concentrated at the contour edges of the cluster. This inhomogeneous electronic distribution makes the contour edges negatively charged and leaves the inner B atoms with a reduced electron density. These properties are observed in all four clusters investigated here (Fig.~S1 in the Supplementary material).
\begin{figure}[H]
\centering\includegraphics[width=0.7\linewidth]{pdos_mo.png}
\caption{Atomic $2p_{x}$ and $2p_{y}$ projected density of states (PDOS) of the isolated bent CO$_2$ molecule (a) and B$_{12}$ cluster (b) summed over the two atoms directly involved in the chemisorption bonds in the configuration shown in Fig.~\ref{fig:init_final}c, i.e., O$^{(1)}$ and C in panel (a) and B$^\textrm{(1)}$ and B$^\textrm{(2)}$ in panel (b). The bent molecule and cluster are also shown with the corresponding $\hat{{x}}$ and $\hat{{y}}$ directions: the $y$-direction is aligned with the B\textendash C bond and the $x$-axis is perpendicular to it, still remaining in the plane of the cluster. Some of the occupied, $E < 0$, and empty, $E > 0$, states (probability density with orbital phase change) of the bent CO$_2$ molecule and of the B$_{12}$ cluster are shown next to their respective PDOS. The isosurface level is set to 0.001 $e$~\AA$^{-3}$.}
\label{fig:pdos_mo}
\end{figure}
\begin{figure}[!h]
\centering\includegraphics[width=0.35\linewidth]{B12distortedChargeDens.png}
\caption{Electronic charge density contour plot calculated for an isolated B$_{12}$ cluster with the same distorted atomic structure as in the adsorbed B$_{12}$\textendash CO$_2$ system. The distortion occurs mostly at the atoms which take part in the binding (the bottom two B atoms in the plot). It can be seen that the electronic charge density is systematically largest at the cluster contour edges, leaving an extended positively charged area in the central part of the cluster. One can also observe that the adsorption of the CO$_2$ molecule causes the cluster to lose its three-fold symmetry.}
\label{fig:B12distortedchargedens}
\end{figure}
\subsection{Discussion of the chemisorption mechanism}
To identify the dominant CO$_2$ molecular orbital involved in the chemisorption, we examined the differential charge density, i.e., the difference between the charge density of the chemisorbed B$_{n}$\textendash CO$_2$ system and that of the isolated B$_{n}$ and CO$_2$. In Fig.~\ref{fig:chargediff}, we present differential charge-density isosurfaces illustrating the electronic charge difference associated with the chemisorption of CO$_2$ on B$_{12}$. The shape of the energy-gain isosurface in the region of the CO$_2$ molecule has strong similarities with the probability density isosurface of the LUMO of the bent CO$_2$ molecule (refer to L$_\textrm{A1}$ of Fig.~\ref{fig:pdos_mo}). The LUMO CO$_2$ orbital will interact with some planar high-energy occupied molecular orbital(s) of the cluster (in Fig.~\ref{fig:pdos_mo}(b)) and, based on the probability densities of the molecular orbitals of the interacting B$_{12}$\textendash CO$_2$ system (the highest occupied states are shown in Fig.~S4 in the Supplementary material), we find that the L$_\textrm{A1}$ molecular orbital of CO$_2$ interacts (hybridizes) predominantly with the H$_\textrm{B3}$ molecular orbital of the cluster (see Fig.~\ref{fig:pdos_mo}(b)). These molecular orbitals have lobes protruding from the edges of the cluster/molecule with substantial orbital overlap suggesting that strong interaction between cluster and molecule can take place in this region.
\begin{figure}[!h]
\centering\includegraphics[width=0.4\linewidth]{chargediff.png}
\caption{Differential electron density isosurface ($\Delta$$\rho$) for the B$_{12}$\textendash CO$_2$ system (see text). Gray color represents electron deficient region ($\Delta$$\rho < 0$), while orange denotes electron rich region ($\Delta$$\rho > 0$) with respect to the isolated B$_{12}$ cluster and CO$_2$ molecule. A large electron rich region can be observed for the adsorbed CO$_2$ molecule, indicating that CO$_2$ acquired excess electrons becoming effectively negatively charged. The isosurface level is set to 0.004 $e$~\AA$^{-3}$. It can be observed that the overall shape of the electron-gain (orange) differential charge density isosurface in the region of the CO$_2$ molecule resembles that of probability density of the LUMO of bent CO$_2$ (refer to L$_\textrm{A1}$ of Fig.~\ref{fig:pdos_mo}).}
\label{fig:chargediff}
\end{figure}
From Fig.~\ref{fig:chargediff} it can be inferred that the CO$_2$ molecule gained excess negative charge from the B cluster. We performed a L\"owdin charge analysis which, although it cannot provide exact values of the atomic charges in a hybridized system and is basis dependent, is useful for identifying charging trends. Thus, the C atom (binding with B$^\textrm{(2)}$) gained $0.27~e$ and the O atom (binding with B$^\textrm{(1)}$) gained $0.04~e$, while only a very small charge transfer is seen for the other O atom ($\sim$ 0.001~$e$). A similar total amount of L\"owdin charge was lost by the B cluster. Strong charge transfer between B structures and the chemisorbed CO$_2$ molecule has been reported earlier and related to Lewis acid-base interactions \cite{Sun2014PCCP,Sun2014JPCC,Wang2015}. The electronic charge transfer from the edges of the cluster (with excess negative charge) to the molecule can also be rationalized by noting that the bent CO$_2$, in contrast to the linear molecule, has a substantial net dipole moment: 0.724~ea$_0$ \cite{MorHay79}. The positive end of the dipole is closer to the B cluster and the negative end farther away, facilitating the interaction with the edge sites of the B cluster that exhibit a higher electronic density.
In addition to the strong chemisorption of the full CO$_2$ molecule on B clusters, we also found cases where the molecule dissociated into C\textendash O and O fragments (Fig.~\ref{fig:dissoc}), each of which is bound separately to the B cluster, having typical bond lengths of 1.2 and 1.5~\AA\ (for both B$_{11}$ and B$_{13}$), respectively. The dissociation is attributed to the presence of longer bond lengths and lower charge density of the clusters, together with the specific choice of adsorption sites closest to the long B\textendash B bonds. The dissociation of the molecule takes place at B\textendash B edges where the charge density of the cluster is relatively low (Fig.~S1 in the Supplementary material) and the B atoms have less bonding with other B atoms. Both B$_{11}$ and B$_{13}$ clusters have considerably smaller HOMO-LUMO gap values than the other two clusters which do not display dissociative adsorption (Table~S1 in Supplementary material). The smaller gap indicates higher chances of interaction between the cluster and molecular states, allowing also more varied types of adsorption configurations, as we observe in our calculations.
\section{Conclusion}
We investigated the adsorption of CO$_2$ on B$_{n}$ ($n=10-13$) clusters by using first-principles density-functional theory. These clusters have been predicted theoretically and confirmed experimentally to have planar or quasiplanar geometries. We obtained different chemisorbed and physisorbed configurations depending on the initial position of the CO$_2$ molecule. In particular, the chemisorption is obtained for an in-plane position of the molecule close to the cluster contour edges, with adsorption, thus, at the cluster edge sites. CO$_2$ chemisorbs strongly to all four clusters considered, while the strongest CO$_2$ binding energy, amounting to 1.6~eV, is calculated for B$_{12}$. The CO$_2$ chemisorption energies we found for the B$_{10-13}$ clusters are considerably larger than previously obtained for the neutral B$_{80}$ and B$_{40}$ fullerene-type clusters. To the best of our knowledge, this is the first time such strong chemical binding of CO$_2$ to the planar-type B clusters is evidenced. The CO$_2$ binding energies to B$_{11-13}$ we obtained are also larger than previously reported for the chemically engineered TM\textendash B$_{8-9}^-$ clusters and doped/charged B fullerenes. We explain the strong chemisorption by the planarity of the B clusters which are characterized by a high density of protruding occupied in-plane molecular-orbital states near the cluster gap, associated with peripheral B atoms, and excess electronic charge at the cluster edges. These properties facilitate binding with the bent CO$_2$ molecule, which has exclusively in-plane frontier orbitals and a non-vanishing dipole moment.
\section{Acknowledgements}\label{acknowledgements}
This work was funded by the UP System Enhanced Creative Work and Research Grant ECWRG 2018-1-009. A.B.S.-P. is grateful to the Abdus Salam International Centre for Theoretical Physics (ICTP) and the OPEC Fund for International Development (OFID) for the OFID-ICTP postgraduate fellowship under the ICTP/IAEA Sandwich Training Educational Programme, and to the Philippines Commission on Higher Education (CHEd) for the Faculty Development Program (FacDev)\textendash Phase II.
\bibliographystyle{unsrtnat}
\section{Introduction}
The three-body problem is one of the oldest problems in classical dynamics that continues to throw up surprises. It has challenged scientists from Newton's time to the present. It arose in an attempt to understand the Sun's effect on the motion of the Moon around the Earth. This was of much practical importance in marine navigation, where lunar tables were necessary to accurately determine longitude at sea (see Box 1). The study of the three-body problem led to the discovery of the planet Neptune (see Box 2), it explains the location and stability of the Trojan asteroids and has furthered our understanding of the stability of the solar system \cite{laskar}. Quantum mechanical variants of the three-body problem are relevant to the Helium atom and water molecule \cite{gutzwiller-book}.
\begin{center}
\begin{mdframed}
{\bf Box 1:} The {\bf Longitude Act} (1714) of the British Parliament offered \pounds 20,000 for a method to determine the longitude at sea to an accuracy of half a degree. This was important for marine navigation at a time of exploration of the continents. In the absence of accurate clocks that could function at sea, a lunar table along with the observed position of the Moon was the principal method of estimating the longitude. Leonhard Euler\footnote{Euler had gone blind when he developed much of his lunar theory!}, Alexis Clairaut and Jean-Baptiste d'Alembert competed to develop a theory accounting for solar perturbations to the motion of the Moon around the Earth. For a delightful account of this chapter in the history of the three-body problem, including Clairaut's explanation of the annual $40^\circ$ rotation of the lunar perigee (which had eluded Newton), see \cite{bodenmann-lunar-battle}. Interestingly, Clairaut's use of Fourier series in the three-body problem (1754) predates their use by Joseph Fourier in the analysis of heat conduction!
\end{mdframed}
\end{center}
\begin{center}
\begin{mdframed}
{\bf Box 2: Discovery of Neptune:} The French mathematical astronomer Urbain Le Verrier (1846) was intrigued by discrepancies between the observed and Keplerian orbits of Mercury and Uranus. He predicted the existence of Neptune (as was widely suspected) and calculated its expected position based on its effects on the motion of Uranus around the Sun (the existence and location of Neptune was independently inferred by John Adams in Britain). The German astronomer Johann Galle (working with his graduate student Heinrich d'Arrest) discovered Neptune within a degree of Le Verrier's predicted position on the very night that he received the latter's letter. It turned out that both Adams' and Le Verrier's heroic calculations were based on incorrect assumptions about Neptune, they were extremely lucky to stumble upon the correct location!
\end{mdframed}
\end{center}
The three-body problem admits many `regular' solutions, such as the collinear and equilateral periodic solutions of Euler and Lagrange as well as the more recently discovered figure-8 solution. On the other hand, it can also display chaos, as serendipitously discovered by Poincar\'e. Though a general solution in closed form is not known, Sundman, while studying binary collisions, discovered an exceptionally slowly converging series representation of solutions in fractional powers of time.
The importance of the three-body problem goes beyond its application to the motion of celestial bodies. As we will see, attempts to understand its dynamics have led to the discovery of many phenomena (e.g., abundance of periodic motions, resonances (see Box 3), homoclinic points, collisional and non-collisional singularities, chaos and KAM tori) and techniques (e.g., Fourier series, perturbation theory, canonical transformations and regularization of singularities) with applications across the sciences. The three-body problem provides a context in which to study the development of classical dynamics as well as a window into several areas of mathematics (geometry, calculus and dynamical systems).
\begin{center}
\begin{mdframed}
{\bf Box 3: Orbital resonances:} The simplest example of an orbital resonance occurs when the periods of two orbiting bodies (e.g., Jupiter and Saturn around the Sun) are in a ratio of small whole numbers ($T_S/T_J \approx 5/2$). Resonances can enhance their gravitational interaction and have both stabilizing and destabilizing effects. For instance, the moons Ganymede, Europa and Io are in a stable $1:2:4$ orbital resonance around Jupiter. The Kirkwood gaps in the asteroid belt are probably due to the destabilizing resonances with Jupiter. Resonances among the natural frequencies of a system (e.g., Keplerian orbits of a pair of moons of a planet) often lead to difficulties in naive estimates of the effect of a perturbation (say of the moons on each other).
\end{mdframed}
\end{center}
\section{Review of the Kepler problem}
As preparation for the three-body problem, we begin by reviewing some key features of the two-body problem. If we ignore the non-zero size of celestial bodies, Newton's second law for the motion of two gravitating masses states that
\begin{equation}
m_1 \ddot {\bf r}_1 = \al \frac{ ({\bf r}_2 - {\bf r}_1)}{|{\bf r}_1 - {\bf r}_2|^3} \quad \text{and} \quad
m_2 \ddot {\bf r}_2 = \al \frac{ ({\bf r}_1 - {\bf r}_2)}{|{\bf r}_1 - {\bf r}_2|^3}.
\label{e:two-body-newton-ode}
\end{equation}
Here, $\al = G m_1 m_2$ measures the strength of the gravitational attraction and dots denote time derivatives. This system has six degrees of freedom, say the three Cartesian coordinates of each mass ${\bf r}_1 = (x_1,y_1,z_1)$ and ${\bf r}_2 = (x_2,y_2,z_2)$. Thus, we have a system of 6 nonlinear (due to division by $|{\bf r}_1-{\bf r}_2|^3$), second-order ordinary differential equations (ODEs) for the positions of the two masses. It is convenient to switch from ${\bf r}_1$ and ${\bf r}_2$ to the center of mass (CM) and relative coordinates
\begin{equation}
{\bf R} = \frac{m_1 {\bf r}_1 + m_2 {\bf r}_2}{m_1 + m_2} \quad \text{and} \quad
{\bf r} = {\bf r}_2 - {\bf r}_1.
\end{equation}
In terms of these, the equations of motion become
\begin{equation}
M \ddot {\bf R} = 0 \quad \text{and} \quad m \ddot {\bf r} = - \frac{\al}{|{\bf r}|^3} {\bf r}.
\end{equation}
Here, $M = m_1 + m_2$ is the total mass and $m = m_1 m_2/M$ the `reduced' mass. An advantage of these variables is that in the absence of external forces the CM moves at constant velocity, which can be chosen to vanish by going to a frame moving with the CM. The motion of the relative coordinate ${\bf r}$ decouples from that of ${\bf R}$ and describes a system with three degrees of freedom ${\bf r} = (x,y,z)$. Expressing the conservative gravitational force in terms of the gravitational potential $V = - \alpha/|{\bf r}|$, the equation for the relative coordinate ${\bf r}$ becomes
\begin{equation}
\dot {\bf p} \equiv m \ddot {\bf r} = - {\bf \nabla}_{\bf r} V = - \left(\dd{V}{x}, \dd{V}{y}, \dd{V}{z} \right)
\end{equation}
where ${\bf p} = m \dot {\bf r}$ is the relative momentum. Taking the dot product with the `integrating factor' $\dot {\bf r} = (\dot x, \dot y, \dot z)$, we get
\begin{equation}
m \dot {\bf r} \cdot \ddot {\bf r} = \fr{d}{dt}\left(\frac{1}{2} m \dot {\bf r}^2 \right) = - \left( \dd{V}{x} \; \dot x + \dd{V}{y} \; \dot y + \dd{V}{z} \; \dot z \right) = - \fr{dV}{dt},
\label{e:energy-kepler-cm-frame}
\end{equation}
which implies that the energy $E \equiv \frac{1}{2} m \dot {\bf r}^2 + V$ or Hamiltonian $\frac{{\bf p}^2}{2m} + V$ is conserved. The relative angular momentum ${\bf L} = {\bf r} \times m \dot {\bf r} = {\bf r} \times {\bf p}$ is another constant of motion as the force is central\footnote{The conservation of angular momentum in a central force is a consequence of rotation invariance: $V = V(|{\bf r}|)$ is independent of polar and azimuthal angles. More generally, Noether's theorem relates continuous symmetries to conserved quantities.}: $\dot {\bf L} = \dot {\bf r} \times {\bf p} + {\bf r} \times \dot {\bf p} = 0 + 0$. The constancy of the direction of ${\bf L}$ implies planar motion in the CM frame: ${\bf r}$ and ${\bf p}$ always lie in the `ecliptic plane' perpendicular to ${\bf L}$, which we take to be the $x$-$y$ plane with origin at the CM (see Fig.~\ref{f:lrl-vector}). The Kepler problem is most easily analyzed in plane-polar coordinates ${\bf r} = (r, \tht)$ in which the energy $E = \frac{1}{2} m \dot r^2 + V_{\rm eff}(r)$ is the sum of a radial kinetic energy and an effective potential energy $V_{\rm eff} = \fr{L_z^2}{2 m r^2} + V(r)$. Here, $L_z = m r^2 \dot \tht$ is the vertical component of angular momentum and the first term in $V_{\rm eff}$ is the centrifugal `angular momentum barrier'. Since ${\bf L}$ (and therefore $L_z$) is conserved, $V_{\rm eff}$ depends only on $r$. Thus, $\tht$ does not appear in the Hamiltonian: it is a `cyclic' coordinate. Conservation of energy constrains $r$ to lie between `turning points', i.e., zeros of $E - V_{\rm eff}(r)$ where the radial velocity $\dot r$ momentarily vanishes. One finds that the orbits are Keplerian ellipses for $E < 0$ along with parabolae and hyperbolae for $E \geq 0$: $r(\tht) = \rho(1 + \eps \cos \tht)^{-1}$ \cite{goldstein,hand-finch}. Here, $\rho = L_z^2/m\al$ is the radius of the circular orbit corresponding to angular momentum $L_z$, $\eps$ the eccentricity and $E = - \frac{\al}{2\rho} (1 - \eps^2)$ the energy.
\begin{figure}[h]
\center
\includegraphics[width=8cm]{lrl-vector.pdf}
\caption{\footnotesize Keplerian ellipse in the ecliptic plane of motion showing the constant LRL vector ${\bf A}$. The constant angular momentum ${\bf L}$ points out of the ecliptic plane.}
\label{f:lrl-vector}
\end{figure}
In addition to $E$ and ${\bf L}$, the Laplace-Runge-Lenz (LRL) vector ${\bf A} = {\bf p} \times {\bf L} - m \al \: \hat r$ is another constant of motion. It points along the semi-major axis from the CM to the perihelion and its magnitude determines the eccentricity of the orbit. Thus, we have $7$ conserved quantities: energy and three components each of ${\bf L}$ and ${\bf A}$. However, a system with three degrees of freedom has a six-dimensional phase space (space of coordinates and momenta, also called the state space) and if it is to admit continuous time evolution, it cannot have more than 5 independent conserved quantities. The apparent paradox is resolved once we notice that $E$, ${\bf L}$ and ${\bf A}$ are not all independent; they satisfy two relations\footnote{Wolfgang Pauli (1926) derived the quantum mechanical spectrum of the Hydrogen atom using the relation between $E, {\bf L}^2$ and ${\bf A}^2$ before the development of the Schr\"odinger equation. Indeed, if we postulate circular Bohr orbits which have zero eccentricity (${\bf A} = 0$) and quantized angular momentum ${\bf L}^2 = n^2 \hbar^2$, then $E_n = - \fr{m \al^2 }{2 \hbar^2 n^2}$ where $\al = e^2/4 \pi \epsilon_0$ is the electromagnetic analogue of $G m_1 m_2$.}:
\begin{equation}
{\bf L} \cdot {\bf A} = 0 \quad \text{and} \quad E = \frac{{\bf A}^2 - m^2 \alpha^2}{2 m {\bf L}^2}.
\end{equation}
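These conservation laws are easy to verify numerically. A minimal Python sketch (in units where $m = \al = 1$) integrates the planar Kepler problem and prints the drift in $E$, $L_z$ and the in-plane components of ${\bf A}$ along the orbit:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

m = alpha = 1.0

def rhs(t, s):                      # s = (x, y, px, py)
    x, y, px, py = s
    r3 = (x*x + y*y)**1.5
    return [px/m, py/m, -alpha*x/r3, -alpha*y/r3]

s0 = [1.0, 0.0, 0.0, 1.2]           # bound orbit: E = -0.28 < 0
sol = solve_ivp(rhs, (0, 50), s0, rtol=1e-10, atol=1e-12)

x, y, px, py = sol.y
r = np.hypot(x, y)
E  = (px**2 + py**2)/(2*m) - alpha/r
Lz = x*py - y*px
Ax = py*Lz - m*alpha*x/r            # A = p x L - m*alpha*rhat
Ay = -px*Lz - m*alpha*y/r
for name, q in [("E", E), ("Lz", Lz), ("Ax", Ax), ("Ay", Ay)]:
    print(name, f"{q[0]:+.6f}", "drift", f"{np.ptp(q):.1e}")
\end{verbatim}
All four quantities are constant to the accuracy of the integrator, and for this orbit one may check that they satisfy the relation $E = ({\bf A}^2 - m^2\al^2)/(2m{\bf L}^2)$ quoted above.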
Newton used the solution of the two-body problem to understand the orbits of planets and comets. He then turned his attention to the motion of the Moon around the Earth. However, lunar motion is significantly affected by the Sun. For instance, ${\bf A}$ is {\it not} conserved and the lunar perigee rotates by $40^\circ$ per year. Thus, he was led to study the Moon-Earth-Sun three-body problem.
\section{The three-body problem}
We consider the problem of three point masses ($m_a$ with position vectors ${\bf r}_a$ for $a = 1,2,3$) moving under their mutual gravitational attraction. This system has 9 degrees of freedom, whose dynamics is determined by 9 coupled second order nonlinear ODEs:
\begin{equation}
m_a \fr{d^2{\bf r}_a}{dt^2} = \sum_{b \neq a} G m_a m_b \fr{{\bf r}_b-{\bf r}_a}{|{\bf r}_b-{\bf r}_a |^3} \quad \text{for} \quad a = 1,2 \; \text{and} \; 3.
\label{e:newtonian-3body-ODE}
\end{equation}
As before, the three components of momentum ${\bf P} = \sum_a m_a \dot {\bf r}_a$, three components of angular momentum ${\bf L} = \sum_a {\bf r}_a \times {\bf p}_a$ and energy
\begin{equation}
E = \frac{1}{2} \sum_{a=1}^3 m_a \dot {\bf r}_a^2 - \sum_{a < b} \frac{G m_a m_b}{|{\bf r}_a - {\bf r}_b|} \equiv T + V
\end{equation}
furnish $7$ independent conserved quantities. Lagrange used these conserved quantities to reduce the above equations of motion to 7 first order ODEs (see Box 4).
\begin{center}
\begin{mdframed}
{\bf Box 4: Lagrange's reduction from 18 to 7 equations:} The 18 phase space variables of the 3-body problem (components of ${\bf r}_1, {\bf r}_2, {\bf r}_3, {\bf p}_1, {\bf p}_2, {\bf p}_3$) satisfy 18 first order ordinary differential equations (ODEs) $\dot {\bf r}_a = {\bf p}_a$, $\dot {\bf p}_a = -{\bf \nabla}_{{\bf r}_a} V$. Lagrange (1772) used the conservation laws to reduce these ODEs to a system of 7 first order ODEs. Conservation of momentum determines 6 phase space variables comprising the location ${\bf R}_{\rm CM}$ and momentum ${\bf P}$ of the center of mass. Conservation of angular momentum ${\bf L} = \sum {\bf r}_a \times {\bf p}_a$ and energy $E$ lead to 4 additional constraints. By using one of the coordinates as a parameter along the orbit (in place of time), Lagrange reduced the three-body problem to a system of $7$ first order nonlinear ODEs.
\end{mdframed}
\end{center}
{\bf Jacobi vectors} (see Fig.~\ref{f:jacobi-coords}) generalize the notion of CM and relative coordinates to the 3-body problem \cite{Rajeev}. They are defined as
\begin{equation}
\label{e:jacobi-coord}
{\bf J}_1 = {\bf r}_2 - {\bf r}_1, \quad {\bf J}_2 = {\bf r}_3 - \fr{m_1 {\bf r}_1 + m_2 {\bf r}_2}{m_1+m_2} \quad \text{and} \quad {\bf J}_3 = \fr{m_1 {\bf r}_1 + m_2 {\bf r}_2 + m_3 {\bf r}_3}{m_1 + m_2 +m_3}.
\end{equation}
${\bf J}_3$ is the coordinate of the CM, ${\bf J}_1$ the position vector of $m_2$ relative to $m_1$ and ${\bf J}_2$ that of $m_3$ relative to the CM of $m_1$ and $m_2$. A nice feature of Jacobi vectors is that the kinetic energy $T = \frac{1}{2} \sum_{a = 1,2,3} m_a \dot {\bf r}_a^2$ and moment of inertia $I = \sum_{a = 1,2,3} m_a {\bf r}_a^2$, regarded as quadratic forms, remain diagonal\footnote{A quadratic form $\sum_{a,b} r_a Q_{ab} r_b$ is diagonal if $Q_{ab} = 0$ for $a \ne b$. Here, $\ov{M_1} = \ov{m_1} + \ov{m_2}$ is the reduced mass of the first pair, $\ov{M_2} = \ov{m_1+m_2}+\ov{m_3}$ is the reduced mass of $m_3$ and the ($m_1$, $m_2$) system and $M_3 = m_1 + m_2 + m_3$ the total mass.}:
\begin{equation}
\label{e:jacobi-coord-ke-mom-inertia}
T = \frac{1}{2} \sum_{1 \leq a \leq 3} M_a \dot {\bf J}_a^2 \quad \text{and} \quad I = \sum_{1 \leq a \leq 3} M_a {\bf J}_a^2.
\end{equation}
What is more, just as the potential energy $- \al/|{\bf r}|$ in the two-body problem is a function only of the relative coordinate ${\bf r}$, here the potential energy $V$ may be expressed entirely in terms of ${\bf J}_1$ and ${\bf J}_2$:
\begin{equation}
V = - \frac{G m_1 m_2}{|{\bf J}_1|} - \frac{G m_2 m_3}{|{\bf J}_2 - \mu_1 {\bf J}_1|} - \frac{G m_3 m_1}{|{\bf J}_2 + \mu_2 {\bf J}_1|} \quad \text{where} \quad \mu_{1,2} = \frac{m_{1,2}}{m_1 + m_2}.
\label{e:jacobi-coord-potential}
\end{equation}
Thus, the components of the CM vector ${\bf J}_3$ are cyclic coordinates in the Hamiltonian $H = T + V$. In other words, the center of mass motion ($\ddot {\bf J}_3 = 0$) decouples from that of ${\bf J}_1$ and ${\bf J}_2$.
An instantaneous configuration of the three bodies defines a triangle with masses at its vertices. The moment of inertia about the center of mass $I_{\rm CM} = M_1 {\bf J}_1^2 + M_2 {\bf J}_2^2$ determines the size of the triangle. For instance, particles suffer a triple collision when $I_{\rm CM} \to 0$ while $I_{\rm CM} \to \infty $ when one of the bodies flies off to infinity.
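The diagonal forms in Eq.~(\ref{e:jacobi-coord-ke-mom-inertia}) are easily checked numerically; the short Python sketch below compares $T$ and $I$ computed in Cartesian and Jacobi variables for arbitrary masses, positions and velocities:
\begin{verbatim}
import numpy as np

m1, m2, m3 = 1.0, 2.0, 3.0
M3 = m1 + m2 + m3
M1 = m1*m2/(m1 + m2)            # 1/M1 = 1/m1 + 1/m2
M2 = (m1 + m2)*m3/M3            # 1/M2 = 1/(m1+m2) + 1/m3

rng = np.random.default_rng(1)
r1, r2, r3 = rng.standard_normal((3, 3))   # positions
v1, v2, v3 = rng.standard_normal((3, 3))   # velocities

def jacobi(a, b, c):
    return (b - a,
            c - (m1*a + m2*b)/(m1 + m2),
            (m1*a + m2*b + m3*c)/M3)

J1, J2, J3 = jacobi(r1, r2, r3)
V1, V2, V3 = jacobi(v1, v2, v3)   # time derivatives (linear map)

T_cart = 0.5*(m1*v1@v1 + m2*v2@v2 + m3*v3@v3)
T_jac  = 0.5*(M1*V1@V1 + M2*V2@V2 + M3*V3@V3)
I_cart = m1*r1@r1 + m2*r2@r2 + m3*r3@r3
I_jac  = M1*J1@J1 + M2*J2@J2 + M3*J3@J3
print(abs(T_cart - T_jac), abs(I_cart - I_jac))  # both ~ 1e-15
\end{verbatim}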
\begin{figure}[h]
\center
\includegraphics[width=6cm]{jacobi-vectors-resonance.pdf}
\caption{\footnotesize Jacobi vectors ${\bf J}_1, {\bf J}_2$ and ${\bf J}_3$ for the three-body problem. {\bf O} is the origin of the coordinate system while CM$_{12}$ is the center of mass of particles 1 and 2.}
\label{f:jacobi-coords}
\end{figure}
\section{Euler and Lagrange periodic solutions}
The planar three-body problem is the special case where the masses always lie on a fixed plane. For instance, this happens when the CM is at rest ($\dot {\bf J}_3 = 0$) and the angular momentum about the CM vanishes (${\bf L}_{\rm CM} = M_1 {\bf J}_1 \times \dot {\bf J}_1 + M_2 {\bf J}_2 \times \dot {\bf J}_2 = 0$). In 1767, the Swiss scientist Leonhard Euler discovered simple periodic solutions to the planar three-body problem where the masses are always collinear, with each body traversing a Keplerian orbit about their common CM. The line through the masses rotates about the CM with the ratio of separations remaining constant (see Fig.~\ref{f:euler-periodic}). The Italian/French mathematician Joseph-Louis Lagrange rediscovered Euler's solution in 1772 and also found new periodic solutions where the masses are always at the vertices of equilateral triangles whose size and angular orientation may change with time (see Fig.~\ref{f:lagrange-periodic}). In the limiting case of zero angular momentum, the three bodies move toward/away from their CM along straight lines. These implosion/explosion solutions are called Lagrange homotheties.
\begin{figure}
\centering
\begin{subfigure}[t]{3in}
\centering
\includegraphics[width=5cm]{euler-three-body.pdf}
\caption{\footnotesize Masses traverse Keplerian ellipses with one focus at the CM.}
\label{f:euler-periodic}
\end{subfigure}
\quad
\begin{subfigure}[t]{3in}
\centering
\includegraphics[width=3cm]{euler-soln-eq-mass.pdf}
\caption{\footnotesize Two equal masses $m$ in a circular orbit around a third mass $M$ at their CM.}
\label{f:euler-eq-mass}
\end{subfigure}
\caption{\footnotesize Euler collinear periodic solutions of the three-body problem. The constant ratios of separations are functions of the mass ratios alone.}
\label{f:three-body-periodic}
\end{figure}
It is convenient to identify the plane of motion with the complex plane $\mathbb{C}$ and let the three complex numbers $z_{a=1,2,3}(t)$ denote the positions of the three masses at time $t$. E.g., the real and imaginary parts of $z_1$ denote the Cartesian components of the position vector ${\bf r}_1$ of the first mass. In Lagrange's solutions, $z_a(t)$ lie at vertices of an equilateral triangle while they are collinear in Euler's solutions. In both cases, the force on each body is always toward the common center of mass and proportional to the distance from it. For instance, the force on $m_1$ in a Lagrange solution is
\begin{equation}
{\bf F}_1 = G m_1 m_2 \fr{{\bf r}_2 - {\bf r}_1}{|{\bf r}_2 - {\bf r}_1|^3} + G m_1 m_3 \fr{{\bf r}_3 - {\bf r}_1}{|{\bf r}_3 - {\bf r}_1|^3} = \fr{Gm_1}{d^3} \left( m_1 {\bf r}_1 + m_2 {\bf r}_2 + m_3 {\bf r}_3 - M_3 {\bf r}_1 \right)
\end{equation}
where $d = |{\bf r}_2 - {\bf r}_1| = |{\bf r}_3 - {\bf r}_1|$ is the side-length of the equilateral triangle and $M_3 = m_1 + m_2 + m_3$. Recalling that ${\bf r}_{\rm CM} = (m_1 {\bf r}_1 + m_2 {\bf r}_2 + m_3 {\bf r}_3)/M_3,$ we get
\begin{equation}
{\bf F}_1 = \fr{Gm_1}{d^3} M_3 \left( {\bf r}_{\rm CM} - {\bf r}_1 \right) \equiv G m_1 \delta_1 \fr{{\bf r}_{\rm CM} - {\bf r}_1}{|{\bf r}_{\rm CM} - {\bf r}_1|^3}
\end{equation}
where $\delta_1 = M_3 |{\bf r}_{\rm CM} - {\bf r}_1|^3/d^3$ is a function of the masses alone\footnote{Indeed, ${\bf r}_{\rm CM} - {\bf r}_1 = \left(m_2 ({\bf r}_2-{\bf r}_1) + m_3 ({\bf r}_3 - {\bf r}_1) \right)/M_3 \equiv \left( m_2 {\bf b} + m_3 {\bf c} \right)/ M_3$ where ${\bf b}$ and ${\bf c}$ are two of the sides of the equilateral triangle of length $d$. This leads to $|({\bf r}_{\rm CM}-{\bf r}_1)/d| = \sqrt{m_2^2 + m_3^2 + m_2 m_3 }/M_3$ which is a function of masses alone. }. Thus, the equation of motion for $m_1$,
\begin{equation}
m_1 \ddot {\bf r}_1 = G m_1 \delta_1 \fr{{\bf r}_{\rm CM} - {\bf r}_1}{|{\bf r}_{\rm CM} - {\bf r}_1|^3},
\end{equation}
takes the same form as in the two-body Kepler problem (see Eq.~\ref{e:two-body-newton-ode}). The same applies to $m_2$ and $m_3$. So if $z_a(0)$ denote the initial positions, the curves $z_a(t) = z(t) z_a(0)$ are solutions of Newton's equations for three bodies provided $z(t)$ is a Keplerian orbit for an appropriate two-body problem. In other words, each mass traverses a rescaled Keplerian orbit about the common center of mass. A similar analysis applies to the Euler collinear solutions as well: the locations of the masses are determined by the requirement that the force on each one is toward the CM and proportional to the distance from it (see Box 5 on central configurations).
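As a quick sanity check of this identity, the following Python snippet (a minimal sketch; the masses and the side length are arbitrary illustrative choices) verifies numerically that in an equilateral configuration the net Newtonian force on each body is $G m_a M_3 ({\bf r}_{\rm CM} - {\bf r}_a)/d^3$, i.e., directed toward the CM and proportional to the distance from it.
\begin{verbatim}
import numpy as np

G = 1.0
m = np.array([1.0, 2.0, 3.0])              # arbitrary masses
d = 1.5                                    # side length of the triangle
r = d * np.array([[0.0, 0.0],
                  [1.0, 0.0],
                  [0.5, np.sqrt(3) / 2]])  # equilateral triangle
M3 = m.sum()
r_cm = (m[:, None] * r).sum(axis=0) / M3

for a in range(3):
    F = np.zeros(2)                        # net Newtonian force on body a
    for b in range(3):
        if b != a:
            F += G * m[a] * m[b] * (r[b] - r[a]) / np.linalg.norm(r[b] - r[a])**3
    F_pred = G * m[a] * M3 * (r_cm - r[a]) / d**3
    assert np.allclose(F, F_pred)
print("forces point toward the CM, proportional to distance")
\end{verbatim}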
\begin{figure}
\centering
\includegraphics[width=6cm]{lagrange-three-body.pdf}
\caption{\footnotesize Lagrange's periodic solution with three bodies at vertices of equilateral triangles. The constant ratios of separations are functions of the mass ratios alone.}
\label{f:lagrange-periodic}
\end{figure}
\begin{center}
\begin{mdframed}
{\bf Box 5: Central configurations:} Three-body configurations in which the acceleration of each particle points towards the CM and is proportional to its distance from the CM (${\bf a}_b= \om^2 ({\bf r}_{\rm CM} - {\bf r}_b)$ for $b = 1,2,3$) are called `central configurations'. A central configuration rotating at angular speed $\om$ about the CM automatically satisfies the equations of motion (\ref{e:newtonian-3body-ODE}). Euler collinear and Lagrange equilateral configurations are the only central configurations in the three-body problem. In 1912, Karl Sundman showed that motions ending in triple collisions asymptotically approach central configurations.
\end{mdframed}
\end{center}
\section{Restricted three-body problem}
The restricted three-body problem is a simplified version of the three-body problem where one of the masses $m_3$ is assumed much smaller than the primaries $m_1$ and $m_2$. Thus, $m_1$ and $m_2$ move in Keplerian orbits which are not affected by $m_3$. The Sun-Earth-Moon system provides an example where we further have $m_2 \ll m_1$. In the planar circular restricted three-body problem, the primaries move in fixed circular orbits around their common CM with angular speed $\Omega = (G (m_1 + m_2)/d^3 )^{1/2}$ given by Kepler's third law and $m_3$ moves in the same plane as $m_1$ and $m_2$. Here, $d$ is the separation between primaries. This system has $2$ degrees of freedom associated with the planar motion of $m_3$, and therefore a 4-dimensional phase space just like the planar Kepler problem for the reduced mass. However, unlike the latter which has three conserved quantities (energy, $z$-component of angular momentum and direction of the Laplace-Runge-Lenz (LRL) vector) and is exactly solvable, the planar restricted three-body problem has only one known conserved quantity, the `Jacobi integral', which is the energy of $m_3$ in the co-rotating (non-inertial) frame of the primaries:
\begin{equation}
E = \left[ \frac{1}{2} m_3 \dot r^2 + \frac{1}{2} m_3 r^2 \dot \phi^2 \right] - \frac{1}{2} m_3 \Om^2 r^2 - G m_3 \left( \fr{m_1}{r_1} + \fr{m_2}{r_2} \right) \equiv T + V_{\rm eff}.
\end{equation}
Here, $(r,\phi)$ are the plane polar coordinates of $m_3$ in the co-rotating frame of the primaries with origin located at their center of mass while $r_1$ and $r_2$ are the distances of $m_3$ from $m_1$ and $m_2$ (see Fig.~\ref{f:restricted-3body-setup}). The `Roche' effective potential $V_{\rm eff}$, named after the French astronomer \'Edouard Albert Roche, is a sum of centrifugal and gravitational energies due to $m_1$ and $m_2$.
\begin{figure}[h]
\center
\includegraphics[width=5cm]{restricted-three-body.pdf}
\caption{\footnotesize The secondary $m_3$ in the co-rotating frame of primaries $m_1$ and $m_2$ in the restricted three-body problem. The origin is located at the center of mass of $m_1$ and $m_2$ which coincides with the CM of the system since $m_3 \ll m_{1,2}$.}
\label{f:restricted-3body-setup}
\end{figure}
A system with $n$ degrees of freedom needs at least $n$ constants of motion to be exactly solvable\footnote{A Hamiltonian system with $n$ degrees of freedom is exactly solvable in the sense of Liouville if it possesses $n$ independent conserved quantities in involution, i.e., with vanishing pairwise Poisson brackets (see Boxes 6 and 10).}. For the restricted 3-body problem, Henri Poincar\'e (1889) proved the nonexistence of any conserved quantity (other than $E$) that is analytic in small mass ratios ($m_3/m_2$ and $(m_3+m_2)/m_1$) and orbital elements (${\bf J}_1$, $M_1 \dot {\bf J}_1$, ${\bf J}_2$ and $M_2 \dot {\bf J}_2$) \cite{diacu-holmes,musielak-quarles,barrow-green-poincare-three-body}. This was an extension of a result of Heinrich Bruns who had proved in 1887 the nonexistence of any new conserved quantity algebraic in Cartesian coordinates and momenta for the general three-body problem \cite{whittaker}. Thus, roughly speaking, Poincar\'e showed that the restricted three-body problem is not exactly solvable. In fact, as we outline in \S\ref{s:delaunay-hill-poincare}, he discovered that it displays chaotic behavior.
{\noindent \bf Euler and Lagrange points}\footnote{Lagrange points $L_{1-5}$ are also called libration (literally, balance) points.} (denoted $L_{1-5}$) of the restricted three-body problem are the locations of a third mass ($m_3 \ll m_1, m_2$) in the co-rotating frame of the primaries $m_1$ and $m_2$ in the Euler and Lagrange solutions (see Fig.~\ref{f:euler-lagrange-points}). Their stability would allow an asteroid or satellite to occupy a Lagrange point. Euler points $L_{1,2,3}$ are saddle points of the Roche potential while $L_{4,5}$ are maxima (see Fig.~\ref{f:effective-potential}). This suggests that they are all unstable. However, $V_{\rm eff}$ does not include the effect of the Coriolis force since it does no work. A more careful analysis shows that the Coriolis force stabilizes $L_{4,5}$. It is a bit like a magnetic force which does no work but can stabilize a particle in a Penning trap. Euler points are always unstable\footnote{Stable `Halo' orbits around Euler points have been found numerically.} while the Lagrange points $L_{4,5}$ are stable to small perturbations iff $(m_1+m_2)^2 \geq 27 m_1 m_2$ \cite{symon}. More generally, in the unrestricted three-body problem, the Lagrange equilateral solutions are stable iff
\begin{equation}
(m_1 + m_2 + m_3)^2 \geq 27(m_1 m_2 + m_2 m_3 + m_3 m_1).
\end{equation}
The above criterion due to Edward Routh (1877) is satisfied if one of the masses dominates the other two. For instance, $L_{4,5}$ for the Sun-Jupiter system are stable and occupied by the Trojan asteroids.
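These statements are easy to explore numerically. The sketch below (with illustrative values $G = d = 1$ and a mass ratio roughly appropriate to the Sun-Jupiter system) locates the collinear point $L_1$ by root-finding along the co-rotating $x$-axis and evaluates Routh's criterion in the limit $m_3 \to 0$.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

G, d = 1.0, 1.0
m1, m2 = 1.0, 0.001                # roughly the Sun-Jupiter mass ratio
Omega2 = G * (m1 + m2) / d**3      # Kepler's third law
x1, x2 = -m2 * d / (m1 + m2), m1 * d / (m1 + m2)   # primaries, CM at origin

def dVeff_dx(x):
    # x-derivative of the Roche potential per unit mass on the axis y = 0
    return (G * m1 * (x - x1) / abs(x - x1)**3
            + G * m2 * (x - x2) / abs(x - x2)**3 - Omega2 * x)

L1 = brentq(dVeff_dx, x1 + 1e-6, x2 - 1e-6)   # L1 lies between the primaries
print("L1 at x =", L1)

# Routh's criterion for stability of the equilateral points (m3 -> 0)
print("L4/L5 stable:", (m1 + m2)**2 >= 27 * m1 * m2)
\end{verbatim}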
\begin{figure}[h]
\center
\includegraphics[width=5cm]{L-points.pdf}
\caption{\footnotesize The positions of Euler $(L_{1,2,3})$ and Lagrange $(L_{4,5})$ points when $m_1 \gg m_2 \gg m_3$. $m_2$ is in an approximately circular orbit around $m_1$. $L_3$ is almost diametrically opposite to $m_2$ and a bit closer to $m_1$ than $m_2$ is. $L_1$ and $L_2$ are symmetrically located on either side of $m_2$. $L_4$ and $L_5$ are equidistant from $m_1$ and $m_2$ and lie on the circular orbit of $m_2$.}
\label{f:euler-lagrange-points}
\end{figure}
\begin{figure}[h]
\center
\includegraphics[width=8cm]{effective-potential-restricted-3-body-v1.pdf}
\caption{\footnotesize Level curves of the Roche effective potential energy $V_{\rm eff}$ of $m_3$ in the co-rotating frame of the primaries $m_1$ and $m_2$ in the circular restricted three-body problem for $G = 1$, $m_1 = 15, m_2 = 10$ and $m_3 = 0.1$. Lagrange points $L_{1-5}$ are at critical points of $V_{\rm eff}$. The trajectory of $m_3$ for a given energy $E$ must lie in the Hill region defined by $V_{\rm eff}(x,y) \leq E$. E.g., for $E=-6$, the Hill region is the union of two neighborhoods of the primaries and a neighborhood of the point at infinity. The lobes of the $\infty$-shaped level curve passing through $L_1$ are called Roche's lobes. The saddle point $L_1$ is like a mountain pass through which material could pass between the lobes.}
\label{f:effective-potential}
\end{figure}
\section{Planar Euler three-body problem}
Given the complexity of the restricted three-body problem, Euler (1760) proposed the even simpler problem of a mass $m$ moving in the gravitational potential of two {\it fixed} masses $m_1$ and $m_2$. Initial conditions can be chosen so that $m$ always moves on a fixed plane containing $m_1$ and $m_2$. Thus, we arrive at a one-body problem with two degrees of freedom and energy
\begin{equation}
E = \frac{1}{2} m \left(\dot x^2 + \dot y^2 \right) -\frac{\mu_1}{r_1} - \frac{\mu_2}{r_2}.
\label{e:euler-three-body-energy}
\end{equation}
Here, $(x,y)$ are the Cartesian coordinates of $m$, $r_a$ the distances of $m$ from $m_a$ and $\mu_a = G m_a m$ for $a = 1,2$ (see Fig.~\ref{f:elliptic-coordinates}). Unlike in the restricted three-body problem, here the rest-frame of the primaries is an inertial frame, so there are no centrifugal or Coriolis forces. This simplification allows the Euler three-body problem to be exactly solved.
Just as the Kepler problem simplifies in plane-polar coordinates $(r, \tht)$ centered at the CM, the Euler 3-body problem simplifies in an elliptical coordinate system $(\xi, \eta)$. The level curves of $\xi$ and $\eta$ are mutually orthogonal confocal ellipses and hyperbolae (see Fig.~\ref{f:elliptic-coordinates}) with the two fixed masses at the foci $2f$ apart:
\begin{equation}
x = f \: \cosh\xi \: \cos\eta \quad \text{and} \quad
y = f \: \sinh\xi \: \sin\eta.
\label{e:elliptical-coordinates-transformation}
\end{equation}
Here, $\xi$ and $\eta$ are like the radial distance $r$ and angle $\tht$, whose level curves are mutually orthogonal concentric circles and radial rays. The distances of $m$ from $m_{1,2}$ are $r_{1,2}= f (\cosh \xi \mp \cos \eta)$.
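The quoted distances are easily verified symbolically. The SymPy sketch below (which places $m_1$ at the focus $(+f,0)$ and $m_2$ at $(-f,0)$) checks that the squared distances to the foci agree with $f^2(\cosh\xi \mp \cos\eta)^2$; since $\cosh\xi \geq 1 \geq \cos\eta$, taking square roots gives $r_{1,2}$ as stated.
\begin{verbatim}
import sympy as sp

xi, eta, f = sp.symbols('xi eta f', positive=True)
x = f * sp.cosh(xi) * sp.cos(eta)
y = f * sp.sinh(xi) * sp.sin(eta)

r1sq = (x - f)**2 + y**2           # squared distance to the focus (+f, 0)
r2sq = (x + f)**2 + y**2           # squared distance to the focus (-f, 0)

d1 = r1sq - f**2 * (sp.cosh(xi) - sp.cos(eta))**2
d2 = r2sq - f**2 * (sp.cosh(xi) + sp.cos(eta))**2
print(sp.simplify(d1.rewrite(sp.exp)))   # -> 0
print(sp.simplify(d2.rewrite(sp.exp)))   # -> 0
\end{verbatim}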
\begin{figure}[h]
\center
\includegraphics[width=6cm]{elliptic-coordinates-2.pdf}
\caption{\footnotesize Elliptical coordinate system for the Euler 3-body problem. Two masses are at the foci $(\pm f,0)$ of an elliptical coordinate system with $f=2$ on the $x$-$y$ plane. The level curves of $\xi$ and $\eta$ (confocal ellipses and hyperbolae) are indicated. }
\label{f:elliptic-coordinates}
\end{figure}
The above confocal ellipses and hyperbolae are Keplerian orbits when a single fixed mass ($m_1$ or $m_2$) is present at one of the foci $(\pm f,0)$. Remarkably, these Keplerian orbits survive as orbits of the Euler 3-body problem. This is a consequence of Bonnet's theorem, which states that if a curve is a trajectory in two separate force fields, it remains a trajectory in the presence of both. If $v_1$ and $v_2$ are the speeds of the Keplerian trajectories when only $m_1$ or $m_2$ was present, then $v = \sqrt{v_1^2 + v_2^2}$ is the speed when both are present.
Bonnet's theorem however does not give us all the trajectories of the Euler 3-body problem. More generally, we may integrate the equations of motion by the method of separation of variables in the Hamilton-Jacobi equation (see \cite{mukunda-hamilton} and Boxes 6, 7 \& 8). The system possesses {\it two} independent conserved quantities: energy and Whittaker's constant \footnote{When the primaries coalesce at the origin ($f \to 0$), Whittaker's constant reduces to the conserved quantity ${\bf L}^2$ of the planar 2-body problem.} \cite{gutzwiller-book, whittaker}
\begin{equation}
w = {\bf L}_1 \cdot {\bf L}_2 + 2 m f \left( -\mu_1\cos\theta_1 + \mu_2\cos\theta_2 \right) = m^2 r_1^2 \, r_2^2 \; \dot \tht_1 \dot \tht_2 + 2f m \left( -\mu_1\cos\theta_1 + \mu_2\cos\theta_2 \right).
\label{e:whittakers-constant}
\end{equation}
Here, $\tht_a$ are the angles between the position vectors ${\bf r}_a$ and the positive $x$-axis and ${\bf L}_{1,2} = m r_{1,2}^2 \dot \tht_{1,2} \hat z$ are the angular momenta about the two force centers (Fig.~\ref{f:elliptic-coordinates}). Since $w$ is conserved, it Poisson commutes with the Hamiltonian $H$. Thus, the planar Euler 3-body problem has two degrees of freedom and two conserved quantities in involution. Consequently, the system is integrable in the sense of Liouville.
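The conservation of $w$ can also be checked numerically. The sketch below (with arbitrary illustrative values of $m$, $f$, $\mu_{1,2}$ and the initial condition, and with $m_1$ placed at $(+f,0)$ and $m_2$ at $(-f,0)$, consistent with $r_{1,2} = f(\cosh\xi \mp \cos\eta)$) integrates the planar equations of motion and monitors the drift of $E$ and $w$, which should remain at the level of the integration tolerance.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

m, f, mu1, mu2 = 1.0, 1.0, 1.0, 0.5                # illustrative values
c1, c2 = np.array([f, 0.0]), np.array([-f, 0.0])   # fixed centers m1, m2

def rhs(t, s):
    r, v = s[:2], s[2:]
    d1, d2 = r - c1, r - c2
    a = (-mu1 * d1 / np.linalg.norm(d1)**3
         - mu2 * d2 / np.linalg.norm(d2)**3) / m
    return np.concatenate([v, a])

def E_and_w(s):
    r, v = s[:2], s[2:]
    d1, d2 = r - c1, r - c2
    r1, r2 = np.linalg.norm(d1), np.linalg.norm(d2)
    E = 0.5 * m * v @ v - mu1 / r1 - mu2 / r2
    L1 = m * (d1[0] * v[1] - d1[1] * v[0])   # angular momentum about m1
    L2 = m * (d2[0] * v[1] - d2[1] * v[0])   # angular momentum about m2
    w = L1 * L2 + 2 * m * f * (-mu1 * d1[0] / r1 + mu2 * d2[0] / r2)
    return E, w

s0 = np.array([0.3, 1.2, 0.7, 0.1])
sol = solve_ivp(rhs, (0.0, 50.0), s0, rtol=1e-10, atol=1e-12)
E0, w0 = E_and_w(s0)
Et, wt = E_and_w(sol.y[:, -1])
print("drifts:", abs(Et - E0), abs(wt - w0))
\end{verbatim}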
More generally, in the three-dimensional Euler three-body problem, the mass $m$ can revolve (non-uniformly) about the line joining the force centers ($x$-axis) so that its motion is no longer confined to a plane. Nevertheless, the problem is exactly solvable as the equations admit three independent constants of motion in involution: energy, Whittaker's constant and the $x$ component of angular momentum \cite{gutzwiller-book}.
\begin{center}
\begin{mdframed}
{\bf Box 6: Canonical transformations:} We have seen that the Kepler problem is more easily solved in polar coordinates and momenta $(r, \tht, p_r, p_\tht)$ than in Cartesian phase space variables $(x, y, p_x, p_y)$. This change is an example of a canonical transformation (CT). More generally, a CT is a change of canonical phase space variables $({\bf q}, {\bf p}) \to ({\bf Q} ({\bf p}, {\bf q}, t), {\bf P}({\bf p}, {\bf q}, t))$ that preserves the form of Hamilton's equations. For one degree of freedom, Hamilton's equations $\dot q = \dd{H}{p}$ and $\dot p = -\dd{H}{q}$ become $\dot Q = \dd{K}{P}$ and $\dot P = -\dd{K}{Q}$ where $K(Q,P,t)$ is the new Hamiltonian (for a time independent CT, the old and new Hamiltonians are related by substitution: $H(q,p) = K(Q(q,p),P(q,p))$). The form of Hamilton's equations is preserved provided the basic Poisson brackets do not change i.e.,
\begin{equation}
\{ q,p \} = 1, \;\; \{ q,q \} = \{ p,p\} = 0 \quad \Rightarrow \quad \{ Q,P \} = 1, \;\; \{ Q,Q \} = \{ P,P \} = 0.
\end{equation}
Here, the Poisson bracket of two functions on phase space $f(q,p)$ and $g(q,p)$ is defined as
\begin{equation}
\{ f(q,p), g(q,p) \} = \dd{f}{q} \dd{g}{p} - \dd{f}{p} \dd{g}{q}.
\end{equation}
For one degree of freedom, a CT is simply an area and orientation preserving transformation of the $q$-$p$ phase plane. Indeed, the condition $\{ Q, P \} = 1$ simply states that the Jacobian determinant $J = \det \left( \dd{Q}{q}, \dd{Q}{p} \;\vert\; \dd{P}{q}, \dd{P}{p} \right) = 1$ so that the new area element $dQ \, dP = J \, dq \, dp$ is equal to the old one. A CT can be obtained from a suitable generating function, say of the form $S(q,P,t)$, in the sense that the equations of transformation are given by partial derivatives of $S$:
\begin{equation}
p = \dd{S}{q}, \quad Q = \dd{S}{P} \quad \text{and} \quad K = H + \dd{S}{t}.
\end{equation}
For example, $S = qP$ generates the identity transformation ($Q = q$ and $P = p$) while $S = - qP$ generates a rotation of the phase plane by $\pi$ ($Q = -q$ and $P = -p$).
\end{mdframed}
\end{center}
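As a small illustration of Box 6, the following SymPy snippet verifies that the familiar passage from Cartesian to polar phase-space variables, with $p_r = (x p_x + y p_y)/r$ and $p_\tht = x p_y - y p_x$, preserves the basic Poisson brackets and is therefore canonical.
\begin{verbatim}
import sympy as sp

x, y, px, py = sp.symbols('x y p_x p_y', real=True)

def pb(F, G):
    # Poisson bracket {F, G} computed in the Cartesian variables
    return sum(sp.diff(F, q) * sp.diff(G, p) - sp.diff(F, p) * sp.diff(G, q)
               for q, p in [(x, px), (y, py)])

r = sp.sqrt(x**2 + y**2)
theta = sp.atan2(y, x)
pr = (x * px + y * py) / r
ptheta = x * py - y * px

print(sp.simplify(pb(r, pr)), sp.simplify(pb(theta, ptheta)))   # 1 1
for F, G in [(r, theta), (r, ptheta), (theta, pr), (pr, ptheta)]:
    print(sp.simplify(pb(F, G)))                                # all 0
\end{verbatim}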
\begin{center}
\begin{mdframed}
{\bf Box 7: Hamilton-Jacobi equation:} The Hamilton-Jacobi (HJ) equation is an alternative formulation of Newtonian dynamics. Let $i = 1, \ldots, n$ label the degrees of freedom of a mechanical system. Cyclic coordinates $q^i$ (i.e., those that do not appear in the Hamiltonian $H({\bf q},{\bf p},t)$ so that $\partial H/ \partial q^i = 0$) help to understand Newtonian trajectories, since their conjugate momenta $p_i$ are conserved ($\dot p_i = -\dd{H}{q^i} = 0$). If all coordinates are cyclic, then each of them evolves linearly in time: $q^i(t) = q^i(0) + \dd{H}{p_i} t$. Now time-evolution is {\it even simpler} if $\dd{H}{p_i} = 0$ for all $i$ as well, i.e., if $H$ is independent of both coordinates and momenta! In the HJ approach, we find a CT from old phase space variables $({\bf q}, {\bf p})$ to such a coordinate system $({\bf Q},{\bf P})$ in which the new Hamiltonian $K$ is a constant (which can be taken to vanish by shifting the zero of energy). The HJ equation is a nonlinear, first-order partial differential equation for Hamilton's principal function $S({\bf q},{\bf P},t)$ which generates the canonical transformation from $({\bf q},{\bf p})$ to $({\bf Q},{\bf P})$. As explained in Box 6, this means $p_i = \dd{S}{q^i}$, $Q^j = \dd{S}{P_j}$ and $K = H + \dd{S}{t}$. Thus, the HJ equation \begin{equation}
H\left({\bf q}, \dd{S}{{\bf q}},t \right) + \dd{S}{t} = 0
\end{equation}
is simply the condition for the new Hamiltonian $K$ to vanish. If $H$ is time-independent, we may `separate' the time-dependence of $S$ by writing $S({\bf q},{\bf P},t) = W({\bf q},{\bf P}) - Et$ where the `separation constant' $E$ may be interpreted as energy. Thus, the time independent HJ-equation for Hamilton's characteristic function $W$ is
\begin{equation}
H\left({\bf q},\frac{\partial W}{\partial {\bf q}}\right) = E.
\label{e:time-indep-HJ}
\end{equation}
E.g., for a particle in a potential $V({\bf q})$, it is the equation $\ov{2m}\left( \fr{\partial W}{\partial {\bf q}}\right)^2 + V({\bf q}) = E$. By solving (\ref{e:time-indep-HJ}) for $W$, we find the desired canonical transformation to the new conserved coordinates ${\bf Q}$ and momenta ${\bf P}$. By inverting the relation $(q,p) \mapsto (Q,P)$ we find $(q^i(t),p_j(t))$ given their initial values. $W$ is said to be a {\it complete integral} of the HJ equation if it depends on $n$ constants of integration, which may be taken to be the new momenta $P_1, \ldots, P_n$. When this is the case, the system is said to be integrable via the HJ equation. However, it is seldom possible to find such a complete integral. In favorable cases, {\it separation of variables} can help to solve the HJ equation (see Box 8).
\end{mdframed}
\end{center}
\begin{center}
\begin{mdframed}
{\bf Box 8:} {\bf Separation of variables:} In the planar Euler 3-body problem, Hamilton's characteristic function $W$ depends on the two `old' elliptical coordinates $\xi$ and $\eta$. The virtue of elliptical coordinates is that the time-independent HJ equation can be solved by separating the dependence of $W$ on $\xi$ and $\eta$: $W(\xi, \eta) = W_1(\xi) + W_2(\eta)$. Writing the energy (\ref{e:euler-three-body-energy}) in elliptical coordinates (\ref{e:elliptical-coordinates-transformation}) and using $p_\xi = W_1'(\xi)$ and $p_\eta = W_2'(\eta)$, the time-independent HJ equation (\ref{e:time-indep-HJ}) becomes
\begin{equation}
E = \frac{W_1'(\xi)^2 + W_2'(\eta)^2 - 2mf(\mu_1+\mu_2)\cosh\xi -2mf(\mu_1-\mu_2)\cos\eta}{2mf^2(\cosh^2\xi-\cos^2\eta)}.
\end{equation}
Rearranging,
\begin{equation}
W_1'^2 - 2Emf^2\cosh^2\xi - 2mf(\mu_1+\mu_2)\cosh\xi = -W_2'^2 -2Emf^2\cos^2\eta + 2mf(\mu_1-\mu_2)\cos\eta.
\end{equation}
Since the LHS and RHS are functions only of $\xi$ and $\eta$ respectively, they must both be equal to a `separation constant' $\al$. Thus, the HJ partial differential equation separates into a pair of decoupled ODEs for $W_1(\xi)$ and $W_2(\eta)$. The latter may be integrated using elliptic functions. Note that Whittaker's constant $w$ (\ref{e:whittakers-constant}) may be expressed as $w = - 2 m f^2 E - \al$.
\end{mdframed}
\end{center}
\section{Some landmarks in the history of the 3-body problem}
\label{s:delaunay-hill-poincare}
The importance of the three-body problem lies in part in the developments that arose from attempts to solve it \cite{diacu-holmes,musielak-quarles}. These have had an impact all over astronomy, physics and mathematics.
Can planets collide, be ejected from the solar system or suffer significant deviations from their Keplerian orbits? This is the question of the stability of the solar system. In the $18^{\rm th}$ century, Pierre-Simon Laplace and J. L. Lagrange obtained the first significant results on stability. They showed that to first order in the ratio of planetary to solar masses ($M_p/M_S$), there is no unbounded variation in the semi-major axes of the orbits, indicating stability of the solar system. Sim\'eon Denis Poisson extended this result to second order in $M_p/M_S$. However, in what came as a surprise, the Romanian Spiru Haretu (1878) overcame significant technical challenges to find secular terms (growing linearly and quadratically in time) in the semi-major axes at third order! This was an example of a perturbative expansion, where one expands a physical quantity in powers of a small parameter (here the semi-major axis was expanded in powers of $M_p/M_S \ll 1$). Haretu's result however did not prove instability as the effects of his secular terms could cancel out (see Box 9 for a simple example). But it effectively put an end to the hope of proving the stability/instability of the solar system using such a perturbative approach.
The development of Hamilton's mechanics and its refinement in the hands of Carl Jacobi was still fresh when the French dynamical astronomer Charles Delaunay (1846) began the first extensive use of canonical transformations (see Box 6) in perturbation theory \cite{gutzwiller-three-body}. The scale of his hand calculations is staggering: he applied a succession of 505 canonical transformations to a $7^{\rm th}$ order perturbative treatment of the three-dimensional elliptical restricted three-body problem. He arrived at the equation of motion for $m_3$ in Hamiltonian form using $3$ pairs of canonically conjugate orbital variables (3 angular momentum components, the true anomaly, longitude of the ascending node and distance of the ascending node from perigee). He obtained the latitude and longitude of the moon in trigonometric series of about $450$ terms with secular terms (see Box 9) eliminated. It wasn't till 1970-71 that Delaunay's heroic calculations were checked and extended using computers at the Boeing Scientific Laboratories \cite{gutzwiller-three-body}!
The Swede Anders Lindstedt (1883) developed a systematic method to approximate solutions to nonlinear ODEs when naive perturbation series fail due to secular terms (see Box 9). The technique was further developed by Poincar\'e. Lindstedt assumed the series to be generally convergent, but Poincar\'e soon showed that they are divergent in most cases. Remarkably, nearly 70 years later, Kolmogorov, Arnold and Moser showed that in many of the cases where Poincar\'e's arguments were inconclusive, the series are in fact convergent, leading to the celebrated KAM theory of integrable systems subject to small perturbations (see Box 10).
\begin{center}
\begin{mdframed}
{\bf Box 9: Poincar\'e-Lindstedt method:} The Poincar\'e-Lindstedt method is an approach to finding series solutions to a system such as the anharmonic oscillator $\ddot x + x + g x^3 = 0$, which for small $g$, is a perturbation of the harmonic oscillator $m \ddot x + k x = 0$ with mass $m = 1$ and spring constant $k = 1$. The latter admits the periodic solution $x_0(t) = \cos t$ with initial conditions $x(0) = 1$, $\dot x(0) = 0$. For a small perturbation $0 < g \ll 1$, expanding $x(t) = x_0(t) + g x_1(t) + \cdots$ in powers of $g$ leads to a linearized equation for $x_1(t)$
\begin{equation}
\label{e:x1-lindstedt}
\ddot x_1 + x_1 + \cos^3 t = 0.
\end{equation}
However, the perturbative solution
\begin{equation}
x(t) = x_0 + g x_1 + {\cal O}(g^2) = \cos t + g \left[ \ov{32} (\cos 3t - \cos t) - \fr{3}{8} t \sin t \right] + {\cal O}(g^2)
\end{equation}
is unbounded due to the linearly growing {\it secular} term $(-3/8)t \sin t$. This is unacceptable as the energy $E =\frac{1}{2} \dot x^2 + \frac{1}{2} x^2 + \ov{4} g x^4$ must be conserved and the particle must oscillate between turning points of the potential $V = \frac{1}{2} x^2 + \fr{g}{4} x^4$. The Poincar\'e-Lindstedt method avoids this problem by looking for a series solution of the form
\begin{equation}
x(t) = x_0(\tau) + g \tl x_1(\tau) + \cdots
\end{equation}
where $\tau = \om t$ with $\om = 1 + g \om_1 + \cdots$. The constants $\om_1, \om_2, \cdots$ are chosen to ensure that the coefficients of the secular terms at order $g, g^2, \cdots$ vanish. In the case at hand we have
\begin{equation}
x(t) = \cos (t + g \om_1 t) + g \tl x_1(t) + {\cal O}(g^2)
= \cos t + g \tl {\tl x}_1(t) + {\cal O}(g^2) \quad \text{where} \quad \tl {\tilde x}_1(t) = \tl x_1(t) - \om_1 t \sin t.
\end{equation}
$\tl {\tilde x}_1$ satisfies the same equation (\ref{e:x1-lindstedt}) as $x_1$ did, leading to
\begin{equation}
\tl x_1(t) = \ov{32} (\cos 3t - \cos t) + \left(\om_1 - \fr{3}{8} \right) t \sin t.
\end{equation}
The choice $\om_1 = 3/8$ ensures cancellation of the secular term at order $g$, leading to the approximate bounded solution
\begin{equation}
x(t) = \cos \left(t + \frac{3}{8} g t \right) + \frac{g}{32} \left(\cos 3t - \cos t \right) + {\cal O}\left(g^2 \right).
\end{equation}
\end{mdframed}
\end{center}
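A quick numerical comparison makes the improvement vivid. The sketch below (with the illustrative value $g = 0.1$) integrates the anharmonic oscillator and compares the naive first-order expansion, whose secular term grows linearly, with the Poincar\'e-Lindstedt approximation.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

g = 0.1
t = np.linspace(0.0, 100.0, 4000)

# numerical solution of x'' + x + g x^3 = 0 with x(0) = 1, x'(0) = 0
sol = solve_ivp(lambda t, s: [s[1], -s[0] - g * s[0]**3],
                (t[0], t[-1]), [1.0, 0.0], t_eval=t,
                rtol=1e-10, atol=1e-12)
x_num = sol.y[0]

# naive first-order expansion: secular term -(3/8) g t sin t
x_naive = np.cos(t) + g * ((np.cos(3*t) - np.cos(t)) / 32
                           - 3 * t * np.sin(t) / 8)
# Poincare-Lindstedt: renormalized frequency omega = 1 + 3g/8
x_pl = np.cos((1 + 3*g/8) * t) + g * (np.cos(3*t) - np.cos(t)) / 32

print("max error, naive    :", np.abs(x_num - x_naive).max())
print("max error, Lindstedt:", np.abs(x_num - x_pl).max())
\end{verbatim}
The naive error grows without bound, while the Lindstedt error stays small, limited only by the neglected ${\cal O}(g^2)$ frequency correction.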
\begin{center}
\begin{mdframed}
{\bf Box 10: Action-angle variables and invariant tori:} Time evolution is particularly simple if all the generalized coordinates $\tht^j$ are cyclic so that their conjugate momenta $I_j$ are conserved: $\dot I_j = - \dd{H}{\tht^j} = 0$. A Hamiltonian system with $n$ degrees of freedom is integrable in the sense of Liouville if it admits $n$ canonically conjugate ($\{ \tht^j, I_k \} = \del^j_k$\footnote{The Kronecker symbol $\del^j_k$ is equal to one for $j = k$ and zero otherwise}) pairs of phase space variables $(\tht^j, I_j)$ with all the $\tht^j$ cyclic, so that its Hamiltonian depends only on the momenta, $H = H({\bf I})$. Then the `angle' variables $\tht^j$ evolve linearly in time $(\tht^j(t) = \tht^j(0) + \om^j \: t)$ while the momentum or `action' variables $I_j$ are conserved. Here, $\om^j = \dot \tht^j = \dd{H}{I_j}$ are $n$ constant frequencies. Typically the angle variables are periodic, so that the $\tht^j$ parametrize circles. The common level sets of the action variables $I_j = c_j$ are therefore a family of tori that foliate the phase space. Recall that a torus is a Cartesian product of circles. For instance, for one degree of freedom, $\tht^1$ labels points on a circle $S^1$ while for 2 degrees of freedom, $\tht^1$ and $\tht^2$ label points on a 2-torus $S^1 \times S^1$ which looks like a vada or doughnut. Trajectories remain on a fixed torus determined by the initial conditions. Under a sufficiently small and smooth perturbation $H({\bf I}) + g H'({\bf I}, {\vec \tht})$, Andrei Kolmogorov, Vladimir Arnold and J\"urgen Moser showed that some of these `invariant' tori survive provided the frequencies $\om^i$ are sufficiently `non-resonant' or `incommensurate' (i.e., their integral linear combinations do not get `too small').
\end{mdframed}
\end{center}
George William Hill was motivated by discrepancies in lunar perigee calculations. His celebrated paper on this topic was published in 1877 while working with Simon Newcomb at the American Ephemeris and Nautical Almanac\footnote{Simon Newcomb's project of revising all the orbital data in the solar system established the missing $42''$ in the $566''$ centennial precession of Mercury's perihelion. This played an important role in validating Einstein's general theory of relativity.}. He found a new family of periodic orbits in the circular restricted (Sun-Earth-Moon) 3-body problem by using a frame rotating with the Sun's angular velocity instead of that of the Moon. The solar perturbation to lunar motion around the Earth results in differential equations with periodic coefficients. He used Fourier series to convert these ODEs to an infinite system of linear algebraic equations and developed a theory of infinite determinants to solve them and obtain a rapidly converging series solution for lunar motion. He also discovered new `tight binary' solutions to the 3-body problem where two nearby masses are in nearly circular orbits around their center of mass CM$_{12}$, while CM$_{12}$ and the far away third mass in turn orbit each other in nearly circular trajectories.
The French mathematician/physicist/engineer Henri Poincar\'e began by developing a qualitative theory of differential equations from a global geometric viewpoint of the dynamics on phase space. This included a classification of the types of equilibria (zeros of vector fields) on the phase plane (nodes, saddles, foci and centers, see Fig.~\ref{f:zeroes-classification}). His 1890 memoir on the three-body problem was the prize-winning entry in King Oscar II's $60^{\rm th}$ birthday competition (for a detailed account see \cite{barrow-green-poincare-three-body}). He proved the divergence of series solutions for the 3-body problem developed by Delaunay, Hugo Gyld\'en and Lindstedt (in many cases) and the convergence of Hill's infinite determinants. To investigate the stability of 3-body motions, Poincar\'e defined his `surfaces of section' and a discrete-time dynamics via the `return map' (see Fig.~\ref{f:poincare-return-map}). A Poincar\'e surface $S$ is a two-dimensional surface in phase space transversal to trajectories. The first return map takes a point $q_1$ on $S$ to $q_2$, which is the next intersection of the trajectory through $q_1$ with $S$. Given a saddle point $p$ on a surface $S$, he defined its stable and unstable spaces $W_s$ and $W_u$ as points on $S$ that tend to $p$ upon repeated forward or backward applications of the return map (see Fig.~\ref{f:homoclinic-points}). He initially assumed that $W_s$ and $W_u$ on a surface could not intersect and used this to argue that the solar system is stable. This assumption turned out to be false, as he discovered with the help of Lars Phragm\'en. In fact, $W_s$ and $W_u$ can intersect transversally on a surface at a homoclinic point\footnote{Homoclinic refers to the property of being `inclined' both forward and backward in time to the same point.} if the state space of the underlying continuous dynamics is at least three-dimensional. What is more, he showed that if there is one homoclinic point, then there must be infinitely many accumulating at $p$. Moreover, $W_s$ and $W_u$ fold and intersect in a very complicated `homoclinic tangle' in the vicinity of $p$. This was the first example of what we now call chaos. Chaos is usually manifested via an extreme sensitivity to initial conditions (exponentially diverging trajectories with nearby initial conditions).
\begin{figure}
\centering
\begin{subfigure}[t]{3cm}
\centering
\includegraphics[width=3cm]{centre.pdf}
\caption{\footnotesize center}
\label{f:centre}
\end{subfigure}
\quad
\begin{subfigure}[t]{3cm}
\centering
\includegraphics[width=3cm]{node.pdf}
\caption{\footnotesize (stable) node}
\label{f:node}
\end{subfigure}
\begin{subfigure}[t]{3cm}
\centering
\includegraphics[width=2.1cm]{spiral.pdf}
\caption{\footnotesize (unstable) focus}
\label{f:focus}
\end{subfigure}
\quad
\begin{subfigure}[t]{3cm}
\centering
\includegraphics[width=3cm]{saddle.pdf}
\caption{\footnotesize saddle}
\label{f:saddle}
\end{subfigure}
\quad
\caption{\footnotesize Poincar\'e's classification of zeros of a vector field (equilibrium or fixed points) on a plane. (a) Center is always stable with oscillatory motion nearby, (b,c) nodes and foci (or spirals) can be stable or unstable and (d) saddles are unstable except in one direction.}
\label{f:zeroes-classification}
\end{figure}
\begin{figure}[h]
\center
\includegraphics[width=4cm]{poincare-return-map.pdf}
\caption{\footnotesize A Poincar\'e surface $S$ transversal to a trajectory is shown. The trajectory through $q_1$ on $S$ intersects $S$ again at $q_2$. The map taking $q_1$ to $q_2$ is called Poincar\'e's first return map.}
\label{f:poincare-return-map}
\end{figure}
\begin{figure}[h]
\center
\includegraphics[width=4cm]{homoclinic-points.pdf}
\caption{\footnotesize The saddle point $p$ and its stable and unstable spaces $W_s$ and $W_u$ are shown on a Poincar\'e surface through $p$. The points at which $W_s$ and $W_u$ intersect are called homoclinic points, e.g., $h_0,$ $h_1$ and $h_{-1}$. Points on $W_s$ (or $W_u$) remain on $W_s$ (or $W_u$) under forward and backward iterations of the return map. Thus, the forward and backward images of a homoclinic point under the return map are also homoclinic points. In the figure $h_0$ is a homoclinic point whose image is $h_1$ on the segment $[h_0,p]$ of $W_s$. Thus, $W_u$ must fold back to intersect $W_s$ at $h_1$. Similarly, if $h_{-1}$ is the backward image of $h_0$ on $W_u$, then $W_s$ must fold back to intersect $W_u$ at $h_{-1}$. Further iterations produce an infinite number of homoclinic points accumulating at $p$. The first example of a homoclinic tangle was discovered by Poincar\'e in the restricted 3-body problem and is a signature of its chaotic nature.}
\label{f:homoclinic-points}
\end{figure}
When two gravitating point masses collide, their relative speed diverges and solutions to the equations of motion become singular at the collision time $t_c$. More generally, a singularity occurs when either a position or velocity diverges in finite time. The Frenchman Paul Painlev\'e (1895) showed that binary and triple collisions are the only possible singularities in the three-body problem. However, he conjectured that non-collisional singularities (e.g. where the separation between a pair of bodies goes to infinity in finite time) are possible for four or more bodies. It took nearly a century for this conjecture to be proven, culminating in the work of Donald Saari and Zhihong Xia (1992) and Joseph Gerver (1991) who found explicit examples of non-collisional singularities in the $5$-body and $3n$-body problems for $n$ sufficiently large \cite{saari}. In Xia's example, a particle oscillates with ever growing frequency and amplitude between two pairs of tight binaries. The separation between the binaries diverges in finite time, as does the velocity of the oscillating particle.
The Italian mathematician Tullio Levi-Civita (1901) attempted to avoid singularities and thereby `regularize' collisions in the three-body problem by a change of variables in the differential equations. For example, the ODE for the one-dimensional Kepler problem $\ddot x = - k/x^2$ is singular at the collision point $x=0$. This singularity can be regularized\footnote{Solutions which could be smoothly extended beyond collision time (e.g., the bodies elastically collide) were called regularizable. Those that could not were said to have an essential or transcendent singularity at the collision.} by introducing a new coordinate $x = u^2$ and a reparametrized time $ds = dt/u^2$, which satisfy the nonsingular oscillator equation $u''(s) = E u/2$ with conserved energy $E = (2 u'^2 - k)/u^2$ (primes denoting $s$-derivatives). Such regularizations could shed light on near-collisional trajectories (`near misses') provided the differential equations remain physically valid\footnote{Note that the point particle approximation to the equations for celestial bodies of non-zero size breaks down due to tidal effects when the bodies get very close.}.
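This change of variables can be verified symbolically. In the SymPy sketch below, we substitute $x = u^2$ and $dt = u^2\,ds$ into $\ddot x = -k/x^2$ and check that the result vanishes identically once the oscillator equation $u'' = Eu/2$ and the energy relation $u'^2 = (Eu^2 + k)/2$ are imposed.
\begin{verbatim}
import sympy as sp

s, k, E = sp.symbols('s k E')
u = sp.Function('u')

x = u(s)**2
xdot = sp.diff(x, s) / u(s)**2               # d/dt = (1/u^2) d/ds
xddot = sp.diff(xdot, s) / u(s)**2

eom = xddot + k / x**2                       # should vanish on solutions
eom = eom.subs(sp.Derivative(u(s), (s, 2)), E * u(s) / 2)
eom = eom.subs(sp.Derivative(u(s), s)**2, (E * u(s)**2 + k) / 2)
print(sp.simplify(eom))                      # -> 0
\end{verbatim}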
The Finnish mathematician Karl Sundman (1912) began by showing that binary collisional singularities in the 3-body problem could be regularized by a reparametrization of time, $s = |t_1-t|^{1/3}$ where $t_1$ is the binary collision time \cite{siegel-moser}. He used this to find a {\it convergent} series representation (in powers of $s$) of the general solution of the 3-body problem in the absence of triple collisions\footnote{Sundman showed that for non-zero angular momentum, there are no triple collisions in the three-body problem.}. The possibility of such a convergent series had been anticipated by Karl Weierstrass in proposing the 3-body problem for King Oscar's 60th birthday competition. However, Sundman's series converges exceptionally slowly and has not been of much practical or qualitative use.
The advent of computers in the $20^{\rm th}$ century allowed numerical investigations into the 3-body (and more generally the $n$-body) problem. Such numerical simulations have made possible the accurate placement of satellites in near-Earth orbits as well as our missions to the Moon, Mars and the outer planets. They have also facilitated theoretical explorations of the three-body problem including chaotic behavior, the possibility for ejection of one body at high velocity (seen in hypervelocity stars \cite{hypervelocity-stars}) and quite remarkably, the discovery of new periodic solutions. For instance, in 1993, Cristopher Moore discovered the zero angular momentum figure-8 `choreography' solution. It is a stable periodic solution with bodies of equal masses chasing each other on an $\infty$-shaped trajectory while separated equally in time (see Fig.~\ref{f:figure-8}). Alain Chenciner and Richard Montgomery \cite{montgomery-notices-ams} proved its existence using an elegant geometric reformulation of Newtonian dynamics that relies on the variational principle of Euler and Maupertuis.
\begin{figure}[h]
\center
\includegraphics[width=5cm]{figure-8.pdf}
\caption{\footnotesize Equal-mass zero-angular momentum figure-8 choreography solution to the 3-body problem. A choreography is a periodic solution where all masses traverse the same orbit separated equally in time.}
\label{f:figure-8}
\end{figure}
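Reproducing the figure-8 numerically takes only a few lines. The sketch below uses the commonly quoted equal-mass initial conditions (with $G = m = 1$); integrating for one period $T \approx 6.3259$ should return all positions and velocities close to their initial values.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

G = 1.0
m = np.ones(3)
# commonly quoted figure-8 initial conditions (G = m = 1)
r0 = np.array([[ 0.97000436, -0.24308753],
               [-0.97000436,  0.24308753],
               [ 0.0,         0.0       ]])
v3 = np.array([-0.93240737, -0.86473146])
v0 = np.array([-v3 / 2, -v3 / 2, v3])

def rhs(t, s):
    r = s[:6].reshape(3, 2)
    a = np.zeros((3, 2))
    for i in range(3):
        for j in range(3):
            if i != j:
                d = r[j] - r[i]
                a[i] += G * m[j] * d / np.linalg.norm(d)**3
    return np.concatenate([s[6:], a.ravel()])

T = 6.32591398                       # approximate period
s0 = np.concatenate([r0.ravel(), v0.ravel()])
sol = solve_ivp(rhs, (0.0, T), s0, rtol=1e-11, atol=1e-12)
print("return error:", np.abs(sol.y[:, -1] - s0).max())
\end{verbatim}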
\section{Geometrization of mechanics}
Fermat's principle in optics states that light rays extremize the optical path length $\int n({\bf r}(\tau)) \: d\tau$ where $n({\bf r})$ is the (position dependent) refractive index and $\tau$ a parameter along the path\footnote{The optical path length $\int n({\bf r}) \, d\tau$ is proportional to $\int d\tau/\la$, which is the geometric length in units of the local wavelength $\la({\bf r}) = c/n({\bf r}) \nu$. Here, $c$ is the speed of light in vacuum and $\nu$ the constant frequency.}. The variational principle of Euler and Maupertuis (1744) is a mechanical analogue of Fermat's principle \cite{lanczos}. It states that the curve that extremizes the abbreviated action $\int_{{\bf q}_1}^{{\bf q}_2} {\bf p}\cdot d{\bf q}$ holding energy $E$ and the end-points ${\bf q}_1$ and ${\bf q}_2$ fixed has the same shape as the Newtonian trajectory. By contrast, Hamilton's principle of extremal action (1835) states that a trajectory going from ${\bf q}_1$ at time $t_1$ to ${\bf q}_2$ at time $t_2$ is a curve that extremizes the action\footnote{The action is the integral of the Lagrangian $S = \int_{t_1}^{t_2} L({\bf q},\dot {\bf q}) \: dt$. Typically, $L = T - V$ is the difference between kinetic and potential energies.}.
It is well-known that the trajectory of a free particle (i.e., subject to no forces) moving on a plane is a straight line. Similarly, trajectories of a free particle moving on the surface of a sphere are great circles. More generally, trajectories of a free particle moving on a curved space (Riemannian manifold $M$) are geodesics (curves that extremize length). Precisely, for a mechanical system with configuration space $M$ and Lagrangian $L = \frac{1}{2} m_{ij}({\bf q}) \dot q^i \dot q^j$, Lagrange's equations $\DD{p_i}{t} = \dd{L}{q^i}$ are equivalent to the geodesic equations with respect to the `kinetic metric' $m_{ij}$ on $M$\footnote{A metric $m_{ij}$ on an $n$-dimensional configuration space $M$ is an $n \times n$ matrix at each point ${\bf q} \in M$ that determines the square of the distance ($ds^2 = \sum_{i,j = 1}^n m_{ij} dq^i dq^j$) from ${\bf q}$ to a nearby point ${\bf q} + d {\bf q}$. We often suppress the summation symbol and follow the convention that repeated indices are summed from $1$ to $n$.}:
\begin{equation}
m_{ij} \: \ddot q^j(t) = - \frac{1}{2} \left(m_{ji,k} + m_{ki,j} - m_{jk,i} \right) \dot q^j(t) \: \dot q^k(t).
\label{e:Lagrange-eqns-kin-metric-and-V}
\end{equation}
Here, $m_{ij,k} = \partial m_{ij}/\partial q^k$ and $p_i = \dd{L}{\dot q^i} = m_{ij}\dot q^j$ is the momentum conjugate to coordinate $q^i$. For instance, the kinetic metric ($m_{rr} = m$, $m_{\tht \tht} = m r^2$, $m_{r \tht} = m_{\tht r} = 0$) for a free particle moving on a plane may be read off from the Lagrangian $L = \frac{1}{2} m (\dot r^2 + r^2 \dot \tht^2)$ in polar coordinates, and the geodesic equations shown to reduce to Lagrange's equations of motion $\ddot r = r \dot \tht^2$ and $d(m r^2 \dot \tht)/dt = 0$.
Remarkably, the correspondence between trajectories and geodesics continues to hold even in the presence of conservative forces derived from a potential $V$. Indeed, trajectories of the Lagrangian $L = T - V = \frac{1}{2} m_{ij}({\bf q}) \dot q^i \dot q^j - V({\bf q})$ are {\it reparametrized}\footnote{The shapes of trajectories and geodesics coincide but the Newtonian time along trajectories is not the same as the arc-length parameter along geodesics.} geodesics of the Jacobi-Maupertuis (JM) metric $g_{ij} = (E- V({\bf q})) m_{ij}({\bf q})$ on $M$ where $E = T + V$ is the energy. This geometric formulation of the Euler-Maupertuis principle (due to Jacobi) follows from the observation that the square of the metric line element
\begin{equation}
ds^2 = g_{ij} dq^i dq^j = (E-V) m_{ij} dq^i dq^j = \frac{1}{2} m_{kl} \fr{dq^k}{dt} \fr{dq^l}{dt} m_{ij} dq^i dq^j = \frac{1}{2} \left( m_{ij} \dot q^i dq^j \right)^2 = \ov{2} ({\bf p} \cdot d{\bf q})^2,
\end{equation}
so that the extremization of $\int {\bf p} \cdot d{\bf q}$ is equivalent to the extremization of arc length $\int ds$. Loosely, the potential $V({\bf q})$ on the configuration space plays the role of an inhomogeneous refractive index. Though trajectories and geodesics are the same curves, the Newtonian time $t$ along trajectories is in general different from the arc-length parameter $s$ along geodesics. They are related by $\DD{s}{t} = \sqrt{2} (E-V)$ \cite{govind-himalaya}.
This geometric reformulation of classical dynamics allows us to assign a local curvature to points on the configuration space. For instance, the Gaussian curvature $K$ of a surface at a point (see Box 11) measures how nearby geodesics behave (see Fig.~\ref{f:geodesic-separation}): they oscillate if $K > 0$ (as on a sphere), diverge exponentially if $K < 0$ (as on a hyperboloid) and linearly separate if $K = 0$ (as on a plane). Thus, the curvature of the Jacobi-Maupertuis metric defined above furnishes information on the stability of trajectories. Negativity of curvature leads to sensitive dependence on initial conditions and can be a source of chaos.
\begin{center}
\begin{mdframed}
{\bf Box 11: Gaussian curvature:} Given a point $p$ on a surface $S$ embedded in three dimensions, a normal plane through $p$ is one that is orthogonal to the tangent plane at $p$. Each normal plane intersects $S$ along a curve whose best quadratic approximation at $p$ is called its osculating circle. The principal radii of curvature $R_{1,2}$ at $p$ are the maximum and minimum radii of osculating circles through $p$. The Gaussian curvature $K(p)$ is defined as $1/R_1 R_2$ and is taken positive if the centers of the corresponding osculating circles lie on the same side of $S$ and negative otherwise.
\end{mdframed}
\end{center}
\begin{figure}
\centering
\begin{subfigure}[t]{5cm}
\centering
\includegraphics[width=4cm]{geodesic-planar.pdf}
\caption{\footnotesize Nearby geodesics on a plane ($K = 0$) separate linearly.}
\label{f:planar-geodesics}
\end{subfigure}
\quad
\begin{subfigure}[t]{5cm}
\centering
\includegraphics[width=2.3cm]{geodesic-spherical.pdf}
\caption{\footnotesize Distance between neighboring geodesics on a sphere ($K > 0$) oscillates.}
\label{f:spherical-geodesics}
\end{subfigure}
\quad
\begin{subfigure}[t]{5cm}
\centering
\includegraphics[width=4cm]{geodesic-hyperbolic.pdf}
\caption{\footnotesize Geodesics on a hyperbolic surface ($K < 0$) deviate exponentially}
\label{f:hyperbolic-geodesics}
\end{subfigure}
\caption{\footnotesize Local behavior of nearby geodesics on a surface depends on the sign of its Gaussian curvature $K$.}
\label{f:geodesic-separation}
\end{figure}
In the planar Kepler problem, the Hamiltonian (\ref{e:energy-kepler-cm-frame}) in the CM frame is
\begin{equation}
H = \fr{p_x^2+p_y^2}{2m} - \fr{\al}{r} \quad \text{where} \quad \al = GMm > 0 \;\;\text{and} \;\; r^2 = x^2 + y^2.
\end{equation}
The corresponding JM metric line element in polar coordinates is $ds^2 = m\left(E+\fr{\al}{r}\right)\left(dr^2+r^2d\theta^2\right)$. Its Gaussian curvature $K = -\fr{E \al}{2m(\al + Er)^3}$ has a sign opposite to that of energy everywhere. This reflects the divergence of nearby hyperbolic orbits and oscillation of nearby elliptical orbits. Despite negativity of curvature and the consequent sensitivity to initial conditions, hyperbolic orbits in the Kepler problem are not chaotic: particles simply fly off to infinity and trajectories are quite regular. On the other hand, negativity of curvature without any scope for escape can lead to chaos. This happens with geodesic motion on a compact Riemann surface\footnote{ A compact Riemann surface is a closed, oriented and bounded surface such as a sphere, a torus or the surface of a pretzel. The genus of such a surface is the number of handles: zero for a sphere, one for a torus and two or more for higher handle-bodies. Riemann surfaces with genus two or more admit metrics with constant negative curvature.} with constant negative curvature: most trajectories are very irregular.
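The quoted curvature follows from the standard formula $K = -(2\la)^{-1}\nabla^2 \ln\la$ for a conformally flat metric $ds^2 = \la\,(dr^2 + r^2 d\tht^2)$. The SymPy sketch below checks this (the positivity assumptions on the symbols merely help the simplifier; the identity itself is algebraic).
\begin{verbatim}
import sympy as sp

r, E, alpha, m = sp.symbols('r E alpha m', positive=True)

lam = m * (E + alpha / r)            # conformal factor of the JM metric
f = sp.log(lam)
laplacian = sp.diff(f, r, 2) + sp.diff(f, r) / r   # flat radial Laplacian
K = sp.simplify(-laplacian / (2 * lam))

K_quoted = -E * alpha / (2 * m * (alpha + E * r)**3)
print(sp.simplify(K - K_quoted))     # -> 0
\end{verbatim}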
\section{Geometric approach to the planar 3-body problem}
We now sketch how the above geometrical framework may be usefully applied to the three-body problem. The configuration space of the planar 3-body problem is the space of triangles on the plane with masses at the vertices. It may be identified with six-dimensional Euclidean space ($\mathbb{R}^6$) with the three planar Jacobi vectors ${\bf J}_{1,2,3}$ (see (\ref{e:jacobi-coord}) and Fig.~\ref{f:jacobi-coords}) furnishing coordinates on it. A simultaneous translation of the position vectors of all three bodies ${\bf r}_{1,2,3} \mapsto {\bf r}_{1,2,3} + {\bf r}_0$ is a symmetry of the Hamiltonian $H = T+V$ of Eqs. (\ref{e:jacobi-coord-ke-mom-inertia},\ref{e:jacobi-coord-potential}) and of the Jacobi-Maupertuis metric
\begin{equation}
\label{e:jm-metric-in-jacobi-coordinates-on-c3}
ds^2 = \left( E - V({\bf J}_1, {\bf J}_2) \right) \sum_{a=1}^3 M_a \: |d{\bf J}_a|^2.
\end{equation}
This is encoded in the cyclicity of ${\bf J}_3$. Quotienting by translations allows us to define a center of mass configuration space $\mathbb{R}^4$ (the space of centered triangles on the plane with masses at the vertices) with its quotient JM metric. Similarly, rotations ${\bf J}_a \to \colvec{2}{\cos \tht & -\sin \tht}{\sin \tht & \cos \tht} {\bf J}_a$ for $a = 1,2,3$ are a symmetry of the metric, corresponding to rigid rotations of a triangle about a vertical axis through the CM. The quotient of ${\mathbb{R}}^4$ by such rotations is the {\it shape space} ${\mathbb{R}}^3$, which is the space of congruence classes of centered oriented triangles on the plane. Translations and rotations are symmetries of any central inter-particle potential, so the dynamics of the three-body problem in any such potential admits a consistent reduction to geodesic dynamics on the shape space ${\mathbb{R}}^3$. Interestingly, for an {\it inverse-square} potential (as opposed to the Newtonian `$1/r$' potential)
\begin{equation}
V = -\sum_{a < b} \fr{G m_a m_b}{|{\bf r}_a - {\bf r}_b|^2} = -\fr{G m_1 m_2}{|{\bf J}_1|^2} - \fr{G m_2 m_3}{|{\bf J}_2 - \mu_1 {\bf J}_1|^2} - \fr{G m_3 m_1}{|{\bf J}_2+\mu_2 {\bf J}_1|^2} \quad \text{with} \quad
\mu_{1,2}= \frac{m_{1,2}}{m_1 + m_2},
\end{equation}
the zero-energy JM metric (\ref{e:jm-metric-in-jacobi-coordinates-on-c3}) is also invariant under the scale transformation ${\bf J}_a \to \la {\bf J}_a$ for $a = 1,2$ and $3$ (see Box 12 for more on the inverse-square potential and for why the zero-energy case is particularly interesting). This allows us to further quotient the shape space ${\mathbb{R}}^3$ by scaling to get the shape sphere ${\mathbb{S}}^2$, which is the space of similarity classes of centered oriented triangles on the plane\footnote{Though scaling is not a symmetry for the Newtonian gravitational potential, it is still useful to project the motion onto the shape sphere.}. Note that collision configurations are omitted from the configuration space and its quotients. Thus, the shape sphere is topologically a $2$-sphere with the three binary collision points removed. In fact, with the JM metric, the shape sphere looks like a `pair of pants' (see Fig.~\ref{f:horn-shape-sphere}).
\begin{figure}
\centering
\begin{subfigure}[t]{3in}
\centering
\includegraphics[width=5cm]{horn-shape-sphere.pdf}
\caption{\footnotesize The negatively curved `pair of pants' metric on the shape sphere ${\mathbb{S}}^2$.}
\label{f:horn-shape-sphere}
\end{subfigure}
\quad
\begin{subfigure}[t]{3in}
\centering
\includegraphics[width=5cm]{round-shape-sphere-resonance.pdf}
\caption{\footnotesize Locations of Lagrange, Euler and collision points on a geometrically {\it unfaithful} depiction of the shape sphere ${\mathbb{S}}^2$.
The negative curvature of ${\mathbb{S}}^2$ is indicated in Fig.~\ref{f:horn-shape-sphere}. Syzygies are instantaneous configurations where the three bodies are collinear (eclipses).}
\label{f:round-shape-sphere}
\end{subfigure} \caption{\footnotesize `Pair of pants' metric on shape sphere and Lagrange, Euler and collision points.}
\label{f:shape-sphere}
\end{figure}
For equal masses and $E=0$, the quotient JM metric on the shape sphere may be put in the form
\begin{equation}
\label{e:jm-metric-zero-energy-shape-sphere}
ds^2 = Gm^3 h(\eta,\xi_2) \left(d\eta^2+\sin^2 2\eta \;d\xi_2^2\right).
\end{equation}
Here, $0 \le 2 \eta \le \pi$ and $0 \le 2 \xi_2 \le 2 \pi$ are polar and azimuthal angles on the shape sphere ${\mathbb{S}}^2$ (see Fig.~\ref{f:round-shape-sphere}). The function $h$ is invariant under the above translations, rotations and scalings and therefore a function on ${\mathbb{S}}^2$. It may be written as $v_1 + v_2 + v_3$ where $v_1 = I_{\rm CM}/(m |{\bf r}_2 - {\bf r}_3|^2)$ etc., are proportional to the inter-particle potentials \cite{govind-himalaya}. As shown in Fig.~\ref{f:horn-shape-sphere}, the shape sphere has three cylindrical horns that point toward the three collision points, which lie at an infinite geodesic distance. Moreover, this equal-mass, zero-energy JM metric (\ref{e:jm-metric-zero-energy-shape-sphere}) has negative Gaussian curvature everywhere except at the Lagrange and collision points where it vanishes. This negativity of curvature implies geodesic instability (nearby geodesics deviate exponentially) as well as the uniqueness of geodesic representatives in each `free' homotopy class, when they exist. The latter property was used by Montgomery \cite{montgomery-notices-ams} to establish uniqueness of the `figure-8' solution (up to translation, rotation and scaling) for the inverse-square potential. The negativity of curvature on the shape sphere for equal masses extends to negativity of scalar curvature\footnote{Scalar curvature is an average of the Gaussian curvatures in the various tangent planes through a point} on the CM configuration space for both the inverse-square and Newtonian gravitational potentials \cite{govind-himalaya}. This could help to explain instabilities and chaos in the three-body problem.
\begin{center}
\begin{mdframed}
{\bf Box 12:} {\bf The inverse-square potential} is somewhat simpler than the Newtonian one due to the behavior of the Hamiltonian $H = \sum_a {\bf p}_a^2/2m_a - \sum_{a < b} G m_a m_b/|{\bf r}_a -{\bf r}_b|^2$ under scale transformations ${\bf r}_a \to \la {\bf r}_a$ and ${\bf p}_a \to \la^{-1} {\bf p}_a$: $H(\la {\bf r}, \la^{-1} {\bf p}) = \la^{-2} H({\bf r}, {\bf p})$ \cite{Rajeev}. The infinitesimal version ($\la \approx 1$) of this transformation is generated by the dilatation operator $D = \sum_a {\bf r}_a \cdot {\bf p}_a$ via Poisson brackets $\{{\bf r}_a, D \} = {\bf r}_a$ and $\{{\bf p}_a, D \} = - {\bf p}_a$. Here, the Poisson bracket between coordinates and momenta are $\{ r_{ai}, p_{bj} \} = \del_{ab} \del_{ij}$ where $a,b$ label particles and $i,j$ label Cartesian components. In terms of Poisson brackets, time evolution of any quantity $f$ is given by $\dot f = \{ f, H \}$. It follows that $\dot D = \{ D, H \} = 2 H$, so scaling is a symmetry of the Hamiltonian (and $D$ is conserved) only when the energy vanishes. To examine long-time behavior we consider the evolution of the moment of inertia in the CM frame $I_{\rm CM} = \sum_a m_a {\bf r}_a^2$ whose time derivative may be expressed as $\dot I = 2D$. This leads to the Lagrange-Jacobi identity $\ddot I = \{\dot I, H \} = \{2D, H \} = 4 E$ or $I = I(0) + \dot I(0) \: t + 2E \: t^2$. Hence when $E > 0$, $I \to \infty$ as $t \to \infty$ so that bodies fly apart asymptotically. Similarly, when $E < 0$ they suffer a triple collision. When $E = 0$, the sign of $\dot I(0)$ controls asymptotic behavior leaving open the special case when $E = 0$ and $\dot I(0) = 0$. By contrast, for the Newtonian potential, the Hamiltonian transforms as $H(\la^{-2/3} {\bf r}, \la^{1/3} {\bf p}) = \la^{2/3} H({\bf r}, {\bf p})$ leading to the Lagrange-Jacobi identity $\ddot I = 4E - 2V$. This is however not adequate to determine the long-time behavior of $I$ when $E < 0$.
\end{mdframed}
\end{center}
\section{Introduction} \label{sec:intro}
Gamma-ray bursts (GRBs) are the brightest explosions in the universe and are characterized by highly variable emission.
Gamma-ray bursts can be divided into short gamma-ray bursts (SGRBs) and long gamma-ray bursts (LGRBs) based on the bimodal distribution of the observed durations in the BATSE era \citep{kouveliotou1993identification}.
Long bursts originate in star-forming regions of galaxies and are observed in association with the core-collapse supernovae of massive stars \citep{woosley1993gamma,fruchter2006long}.
Short bursts are located in regions of low star formation in their host galaxies, and they are believed to originate from the mergers of compact binaries \citep{eichler1989nucleosynthesis,narayan1992gamma,gehrels2005short,fong2009hubble,leibler2010stellar,fong2013locations,berger2014short}.
The duration of the burst ($T_{90}$) is often inconsistent across energy ranges and different instruments \citep{qin2012comprehensive}, so other characteristics of GRBs are important to consider.
Such characteristics include the spectral lag between photons of different energies \citep{norris2000connection,norris2006short}, the correlation between $E_{\gamma, {\rm iso}}$ and $E_{p,z}$ \citep{amati2002intrinsic}, and the star formation rate of the host galaxy \citep{li2016comparative}.
However, there are still some special cases, such as long-duration bursts with short burst characteristics \citep{gehrels2006new} and short-duration bursts with long burst characteristics \citep{zhang2021peculiarly,ahumada2021discovery}.
All these issues are intertwined, so it is challenging to interpret all observed properties in a consistent picture.
The confusion stems from our lack of understanding of the central engine and dissipation mechanisms.
Studying the power density spectra of the prompt emission of gamma-ray bursts can help to solve these puzzles \citep{beloborodov1998self,dichiara2013average}.
While most observations and theories indicate that gamma-ray bursts do not repeat,
some work has attempted to explore the possibility of quasi-periodic oscillation (QPO) signals in gamma-ray bursts \citep{dichiara2013search,zhang2016central,tarnopolski2021comprehensive}.
In this work, we investigated possible QPOs and repetitive behaviors in the prompt emission of gamma-ray bursts observed by Fermi.
We use a Bayesian test, building on the procedure outlined in \cite{vaughan2010bayesian}, to identify and confirm periodic or quasi-periodic oscillations in the presence of red noise.
To identify possible repetitive events, we compute the autocorrelation function, following the approach of \cite{paynter2021evidence}.
We found one notable case, GRB 201104A, for which we considered the possibility that the apparent repetition is an artifact of gravitational lensing.
Furthermore, we extend the spectral analysis to Bayesian inference of lens and non-lens models.
We believe that this procedure should be taken into account in future research to certify gravitationally lensed gamma-ray bursts.
The paper is organized as follows:
In Section \ref{sec:search_method}, we describe the search method we use to identify repeating events and QPOs in Fermi's gamma-ray bursts.
In Section \ref{sec:obs_ana}, we present observations of GRB 201104A and a detailed analysis of its properties.
In Section \ref{sec:lens}, we performed Bayesian inferences under the lensing hypothesis, considering both light curves and spectrum data.
In Section \ref{sec:sd}, we summarize our results with some discussion.
\section{Comprehensive search} \label{sec:search_method}
In modern astrophysics, the analysis of time series is an essential tool.
We used two time-series methods, the autocorrelation function and the power density spectrum, to search for possible QPOs and repetitive behaviors in the current Fermi-GBM data \citep{meegan2009fermi}.
\subsection{Autocorrelation function} \label{sec:ACF}
Signal autocorrelation can be used to measure the time delay between two temporally overlapping signals.
Such a repetition may be intrinsic to the source, or it may be due to gravitational lensing.
The standard autocorrelation function (ACF) is defined as follows:
\begin{equation}
C(k) = \frac{\sum_{t=0}^{N-k} (I_t-\overline I)(I_{t+k}-\overline I)}
{\sum_{t=0}^{N} (I_t-\overline I)^2}.
\label{eq:correlation}
\end{equation}
To fit the ACF sequence, we apply a Savitzky-Golay filter, yielding the smoothed sequence $F(k)$.
The values of the window length and the order of the polynomial are set to 101 and 3, respectively \citep{paynter2021evidence}.
The dispersion ($\sigma$) between the ACF and the fit $F(k)$ is
\begin{equation}
\sigma^2 = \frac{1}{N}\sum_{k=0}^{N} [C(k) - F(k)]^2,
\label{eq:dispersion}
\end{equation}
where $N$ is the total number of bins. As usual,
we identify the $3\sigma$ outliers as our candidates.
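For concreteness, this search step can be sketched in a few lines of Python (the binned light curve {\tt counts} is a hypothetical input; the Savitzky-Golay filter is taken from {\tt scipy}):
\begin{verbatim}
# Compute C(k), smooth it with a Savitzky-Golay filter F(k),
# and flag the 3-sigma outliers as candidate time delays.
import numpy as np
from scipy.signal import savgol_filter

def acf(counts):
    x = counts - counts.mean()
    n = len(x)
    denom = np.dot(x, x)
    return np.array([np.dot(x[:n - k], x[k:]) / denom
                     for k in range(n)])

def acf_outliers(counts, window=101, polyorder=3, nsigma=3.0):
    c = acf(counts)
    f = savgol_filter(c, window_length=window, polyorder=polyorder)
    sigma = np.sqrt(np.mean((c - f) ** 2))
    return np.where(c - f > nsigma * sigma)[0]  # candidate lags
\end{verbatim}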
\subsection{Power density spectra} \label{sec:PDS}
To identify possible quasi-periods in the time series data, red noise must be modeled in order to assess the significance of any possible period.
For the above purpose, we developed a procedure based on \cite{vaughan2005simple,vaughan2010bayesian,vaughan2013random} and \cite{covino2019gamma}.
Also, we refer to \cite{beloborodov1998self,guidorzi2012average,guidorzi2016individual,dichiara2013average,dichiara2013search} for details on analyzing power density spectra in gamma-ray bursts.
Power density spectra (PDS) are derived by discrete Fourier transformation and normalized according to \cite{leahy1983searches}.
We consider two models to fit the PDS \citep{guidorzi2016individual}, the first is a single power-law function plus white noise called PL model,
\begin{equation}
S_{\rm PL}(f) = N\,f^{-\alpha} + B,
\end{equation}
where $N$ is a normalization factor and $f$ is the sampled frequency; its lower limit is $1/T$, where $T$ is the duration of the time series.
The upper limit of $f$ is the Nyquist frequency, $1/(2\delta_t)$, where $\delta_t$ is the time bin size of the data.
The white-noise level $B$ is expected to be 2, the mean of a $\chi^2_2$ distribution, for pure Poissonian variance in the Leahy normalization.
In some GRBs, a break in the power-law continuum is evident. We therefore consider a second model, called BPL,
\begin{equation}
S_{\rm BPL}(f) = N\,\Big[1 + \Big(\frac{f}{f_{\rm b}}\Big)^{\alpha}\Big]^{-1} + B,
\end{equation}
which has one additional parameter, the break frequency $f_{\rm b}$.
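A minimal sketch of the Leahy-normalized PDS and of the two continuum models (function and variable names are ours):
\begin{verbatim}
import numpy as np

def leahy_pds(counts, dt):
    # P = 2 |FFT|^2 / N_tot  (Leahy et al. 1983 normalization)
    n_tot = counts.sum()
    power = 2.0 * np.abs(np.fft.rfft(counts)) ** 2 / n_tot
    freqs = np.fft.rfftfreq(len(counts), d=dt)
    return freqs[1:], power[1:]    # drop the zero-frequency term

def model_pl(f, norm, alpha, b):
    return norm * f ** (-alpha) + b

def model_bpl(f, norm, alpha, f_b, b):
    return norm / (1.0 + (f / f_b) ** alpha) + b
\end{verbatim}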
We employ a Bayesian inference \citep{thrane2019introduction,van2021bayesian} approach for parameter estimation and model selection by using the nested sampling algorithm Dynesty \citep{speagle2020dynesty,skilling2006nested,higson2019dynamic} in {\tt Bilby} \citep{ashton2019bilby}.
The maximum likelihood function we use is called $Whittle$ likelihood function \citep{vaughan2010bayesian}.
After we have the posterior distribution of the model parameters, we calculate the global significance of every frequency in the PDS according to $T_{\rm R} = \max_j R_j$, where $R_j = 2P_j/S_j$, $P_j$ is the simulated or observed PDS at the $j$-th frequency, and $S_j$ is the best-fit PDS model.
This method selects the maximum deviation from the continuum for each simulated PDS.
The observed $T_\text{R}$ values are compared to the simulated distribution and significance is assessed directly.
The corrections for the multiple trials performed were included in the analysis because the same procedure was applied to the simulated as well as to the real data.
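A sketch of this significance assessment, assuming posterior samples of the continuum parameters are already available from the nested-sampling run (all names are placeholders):
\begin{verbatim}
import numpy as np

def whittle_loglike(pds, model):
    # Whittle likelihood for a chi^2_2-distributed (Leahy) periodogram
    return -np.sum(np.log(model) + pds / model)

def t_r(pds, model):
    return np.max(2.0 * pds / model)

def global_significance(obs_pds, freqs, model_func, posterior_samples):
    # Fraction of simulated periodograms whose T_R exceeds the
    # observed one; the same statistic is applied to the real and
    # the fake data, so the multiple-trial correction is built in.
    rng = np.random.default_rng()
    theta_best = np.median(posterior_samples, axis=0)
    t_obs = t_r(obs_pds, model_func(freqs, *theta_best))
    t_sim = []
    for theta in posterior_samples:
        s = model_func(freqs, *theta)
        fake = s * rng.chisquare(2, size=len(freqs)) / 2.0
        t_sim.append(t_r(fake, s))
    return np.mean(np.asarray(t_sim) >= t_obs)
\end{verbatim}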
\section{Observation and Data analysis}\label{sec:obs_ana}
We analyzed the gamma-ray bursts in the Fermi GBM Burst Catalog \citep{gruber2014fermi,bhat2016third,von2014second,von2020fourth} using the above method.
In total, we examined the PDS of 248 short bursts and 920 long bursts (divided at $T_{90}=3$\,s) with peak counts greater than 50 in a 64\,ms time bin.
Among these samples, we did not find a quasi-periodic signal with a significance exceeding 3 $\sigma$, but among the candidates of the ACF check, we found an interesting sample, which is GRB 201104A. The basic observation information and data reduction are as follows.
The Fermi-GBM team reported the detection of a possible long burst, GRB 201104A (trigger 626140861.749996 / 201104001) \citep{2020GCN.28823....1F}.
At the same time, Fermi-LAT \citep{atwood2009large} also detected high-energy photons from this source with high significance \citep{2020GCN.28828....1O}.
In addition, we searched for observations by other telescopes through the GCN \footnote{https://gcn.gsfc.nasa.gov},
but no X-ray or optical data from other telescopes are available.
We present a further analysis of GRB 201104A observed by Fermi instruments in this work.
Fermi-GBM \citep{meegan2009fermi} has 12 sodium iodide (NaI) detectors and 2 bismuth germanate (BGO) detectors.
According to the pointing directions and count rates of the detectors, we selected one NaI detector (n8) and one BGO detector (b1).
The Fermi-GBM data are processed with {\tt GBM Data Tools} \citep{GbmDataTools}, which allows users to easily customize the analysis.
We performed a standard unbinned maximum likelihood analysis for GRB \footnote{https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools} using {\tt Fermitools} \footnote{https://github.com/fermi-lat/Fermitools-conda/wiki} (version 2.0.8),
and we determined the probability of each photon originating from this source.
In Figure \ref{fig:1} (a), we present the GBM and LAT light curves for several energy bands.
It appears to be a long burst with three episodes and a weak extended component.
The ACF and PDS results are shown in panels (b) and (c) of Figure \ref{fig:1}.
A double peak in the ACF means that the light curve resembles itself after two different time shifts, and there is also a peak in the PDS at a frequency of 0.3 Hz.
Although the significance of this quasi-periodicity is not high, we analyze each possible episode in detail in the following section.
\subsection{Spectral analysis}\label{sec:spec_ana}
In order to confirm the duration of this burst, we recalculated the $T_{90}$ \citep{koshut1996systematic} of the GBM n8 detector in the energy range of 50-300 keV.
We then used the Bayesian block technique \citep{scargle2013studies} to determine the time interval of this burst (see the left of Figure \ref{fig:2}).
We divide the main burst [$T_0-0.1$\,s, $T_0+8.3$\,s] into three episodes:
Episode a [$T_0-0.1$\,s, $T_0+2.7$\,s], Episode b [$T_0+2.7$\,s, $T_0+5.5$\,s], and Episode c [$T_0+5.5$\,s, $T_0+8.3$\,s].
We perform both time-integrated and time-resolved spectral analyses of GRB 201104A, and the specific time interval is shown in Table \ref{tab:tab1}.
For each time interval, we extract the corresponding source spectra,
background spectra, and instrumental response files following the procedure described in Section \ref{sec:specinf} and in \cite{GbmDataTools}.
In general, the energy spectrum of a GRB can be fitted by an empirical smoothly joined broken power-law function (the so-called ``Band'' function; \citealt{band1993batse}).
The Band function takes the form of
\begin{equation}
N(E)=
\begin{cases}
A(\frac{E}{100\,{\rm keV}})^{\alpha}{\rm exp}{(-\frac{E}{E_0})}, \mbox{if $E<(\alpha-\beta)E_{0}$ }\\
A[\frac{(\alpha-\beta)E_0}{100\,{\rm keV}}]^{(\alpha-\beta)}{\rm exp}{(\beta-\alpha)}(\frac{E}{100\,{\rm keV}})^{\beta},
\mbox{if $E > (\alpha-\beta)E_{0}$}
\end{cases}
\label{eq:band}
\end{equation}
where \emph{A} is the normalization constant, \emph{E} is the energy in units of keV, $\alpha$ is the low-energy photon spectral index, $\beta$ is the high-energy photon spectral index, and \emph{E$_{0}$} is the break energy of the spectrum.
The peak energy in the $\nu F_\nu$ spectrum is called $E_{p}$, which is equal to $E_{0}\times(\alpha+2)$.
In addition, when the count rate of high-energy photons is relatively low, the high-energy photon spectral index $\beta$ often cannot be constrained.
In this case, we consider using a cutoff power-law (CPL) function,
\begin{equation}
{ N(E)=A(\frac{E}{100\,{\rm keV}})^{\alpha}{\rm exp}(-\frac{E}{E_c}) },
\end{equation}
where \emph{$\alpha$} is the power law photon spectral index, \emph{E$_{c}$} is the break energy in the spectrum,
and the peak energy $E_{p}$ is equal to $E_{c}\times(2+\alpha)$.
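For reference, the two photon models can be written compactly as follows (a sketch; in the actual fits the models are folded with the instrumental responses):
\begin{verbatim}
import numpy as np

def band(e, amp, alpha, beta, e0):
    # Band function; e in keV, pivot at 100 keV
    e = np.asarray(e, dtype=float)
    ebreak = (alpha - beta) * e0
    low = amp * (e / 100.0) ** alpha * np.exp(-e / e0)
    high = (amp * (ebreak / 100.0) ** (alpha - beta)
            * np.exp(beta - alpha) * (e / 100.0) ** beta)
    return np.where(e < ebreak, low, high)

def cpl(e, amp, alpha, ec):
    # Cutoff power law
    e = np.asarray(e, dtype=float)
    return amp * (e / 100.0) ** alpha * np.exp(-e / ec)

# Peak energy of the nu F_nu spectrum:
# E_p = (2 + alpha) * E_0 (Band)  or  E_p = (2 + alpha) * E_c (CPL)
\end{verbatim}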
In our joint spectral fitting analysis of Fermi-GBM and Fermi-LAT,
we use the $pgstat$ statistic \footnote{https://heasarc.gsfc.nasa.gov/xanadu/xspec/manual/\\XSappendixStatistics.html} in Bayesian inference.
The process used is the same as that described in Section \ref{sec:PDS}.
The difference is in the model and likelihood function, as well as the additional process of folding the model and the instrumental response file.
The fitting results of the spectrum are presented in Table \ref{tab:tab1}, and the evolution of the time-resolved spectrum is illustrated in Figure \ref{fig:2} (b).
Evidently, the spectral evolution does not follow the \textit{intensity-tracking} pattern \citep{golenetskii1983correlation,lu2012comprehensive}.
Furthermore, the posterior parameters of the spectral analysis of Episode a and Episode b are very similar, which will be further analyzed in Section \ref{sec:lens}.
\subsection{Amati correlation}\label{sec:amati}
Using the posterior parameters of the spectral analysis in Section \ref{sec:spec_ana}, we attempt to classify GRB 201104A with the Amati correlation \citep{amati2002intrinsic,zhang2009discerning}.
We calculate the isotropic equivalent energy $E_{\gamma, {\rm iso}}$ with the cosmological parameters \emph{H$_{0}$} = $\rm 69.6~km\,s^{-1}\,Mpc^{-1}$, $\Omega_{\rm m}= 0.29$,
and $\Omega_{\rm \Lambda}= 0.71$.
Due to the lack of a measured redshift, we calculate $E_{\gamma, {\rm iso}}$ and $E_{p,z}$ for different redshifts (from 0.01 to 5).
The star symbols of different colors in Figure \ref{fig:4} (a) display the results obtained for different redshifts for each episode,
and they all clearly depart from the LGRB (type II) region.
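The redshift tracks in Figure \ref{fig:4} (a) can be sketched as follows (using {\tt astropy}; the bolometric fluence and observed $E_p$ below are placeholders rather than the measured values, and a flat cosmology with the quoted $H_0$ and $\Omega_{\rm m}$ is assumed):
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=69.6, Om0=0.29)   # Omega_Lambda ~ 0.71

def amati_track(s_bolo, ep_obs, zs):
    # s_bolo: bolometric fluence [erg/cm^2]; ep_obs: observed E_p [keV]
    e_iso, e_pz = [], []
    for z in zs:
        dl = cosmo.luminosity_distance(z).to(u.cm).value
        e_iso.append(4.0 * np.pi * dl ** 2 * s_bolo / (1.0 + z))
        e_pz.append(ep_obs * (1.0 + z))
    return np.array(e_iso), np.array(e_pz)

# e.g. amati_track(s_bolo=1e-6, ep_obs=200.0,
#                  zs=np.linspace(0.01, 5.0, 100))
\end{verbatim}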
\subsection{$T_{90}$-related distributions}\label{sec:t90dis}
Additionally, we examine other $T_{90}$-related distributions to determine the characteristics of each time period.
\cite{minaev2020p} proposed a new classification scheme combining the correlation of $E_{\gamma, {\rm iso}}$ and $E_{p,z}$ and the bimodal distribution of $T_{90}$.
The parameter $EH$ was proposed to characterize the Amati correlation,
\begin{equation}\label{key}
EH = \dfrac{(E_{p,z}/100{\rm keV})}{(E_{\gamma,iso}/10^{52}{\rm erg})^{0.4}}.
\end{equation}
The $T_{90,z}$ -- $EH$ trajectories calculated for different redshifts (0.001 -- 5) for each episode of GRB 201104A are shown in Figure \ref{fig:4} (b).
Using the trigger list given by the Fermi GBM Burst Catalog \citep{gruber2014fermi,bhat2016third,von2014second,von2020fourth},
we collected the $E_p$ of each burst and calculated the hardness ratio (HR),
which is the ratio of the observed counts in the 50 -- 300 keV band to the counts in the 10 -- 50 keV band \citep{goldstein2017ordinary}.
In Figure \ref{fig:4} (c) and (d), we plot the $E_p$ and HR of each episode of GRB 201104A together with the other bursts of the catalog,
and fit the distributions using a two-component Gaussian mixture model with {\tt scikit-learn} \citep{scikit-learn}.
When the extended emission is not considered, each episode moves toward the SGRB (type I) group in this classification.
\subsection{Spectral lag}\label{sec:spec_lag}
In most GRBs, there is a lag between different energy bands, which is called the spectral lag.
A cross-correlation function (CCF) can be used to quantify such an effect since the pulse peaks at different energy bands are delayed.
The method is widely used to calculate spectral lag \citep{band1997gamma,ukwatta2010spectral}.
We calculated the CCF function for GRB 201104A in different energy bands from $T_0$-1 to $T_0$+10 s (see the left of Figure \ref{fig:5}),
and calculated the peak value of CCF after polynomial fitting.
By using Monte Carlo simulations, we can estimate the uncertainty of lags \citep{ukwatta2010spectral} (see the right of Figure \ref{fig:5}).
In general, LGRBs exhibit a relatively significant spectral delay \citep{norris2000connection,gehrels2006new}, but not for short bursts \citep{norris2006short}.
Besides, a fraction of short GRBs even show negative lags \citep{yi2006spectral}.
The spectral lags we obtain for GRB 201104A are all negative, and the spectral lag between the 10 -- 20 keV and 250 -- 300 keV bands is $-0.097 \pm 0.083$\,s.
Compared with the statistical studies of the spectral lags of LGRBs and SGRBs \citep{bernardini2015comparing},
the spectral lags of GRB 201104A make it appear more like a short burst.
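A minimal sketch of this lag estimate (the binning, fit window, and polynomial degree here are assumptions; uncertainties follow from Monte Carlo resampling of the light curves, not shown):
\begin{verbatim}
import numpy as np

def ccf_lag(soft, hard, dt, half_width=10, deg=4):
    # Cross-correlate two equal-length, binned light curves and
    # locate the CCF peak with a polynomial fit around the maximum.
    ns, nh = soft - soft.mean(), hard - hard.mean()
    ccf = np.correlate(nh, ns, mode="full")
    ccf /= np.sqrt(np.dot(ns, ns) * np.dot(nh, nh))
    lags = (np.arange(len(ccf)) - (len(ns) - 1)) * dt
    i0 = int(np.argmax(ccf))
    sel = slice(max(i0 - half_width, 0), i0 + half_width + 1)
    coef = np.polyfit(lags[sel], ccf[sel], deg)
    grid = np.linspace(lags[sel][0], lags[sel][-1], 1001)
    return grid[np.argmax(np.polyval(coef, grid))]
\end{verbatim}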
\section{Lensing hypothesis}\label{sec:lens}
To explain the similar spectra of Episode a and Episode b, together with the long duration but short-burst characteristics,
we propose that Episode b is a lensed image of Episode a, while Episode c is relatively soft and weak extended emission,
which is why no repetition of it has been observed.
In a gravitational lensing system, photons that travel the longer path arrive first,
because the shorter path passes through a deeper part of the gravitational potential well of the lens, where the time dilation is stronger.
The flux is lower for the photons arriving later than for those arriving earlier.
A lensed GRB will therefore show at least one early pulse followed by a weaker pulse.
The time delay between these two pulses is determined by the mass of the gravitational lens.
For lensing of a point mass, we have \citep{krauss1991new,narayan1992determination,mao1992gravitational}
\begin{equation}
(1+z_\text{l})M_l = \frac{c^3\Delta t}{2G}\left(\frac{r-1}{\sqrt{r}} +\ln r\right)^{-1},
\label{eq:mass_redshift}
\end{equation}
where $\Delta t$ is the time delay, $r$ is the ratio of the fluxes of the two pulses, and $(1+z_\text{l})M_l$ is the redshifted lens mass.
With the measured $\Delta t$ and $r$, it is straightforward to calculate the redshifted mass $(1+z_\text{l})M_l$.
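For illustration, Equation (\ref{eq:mass_redshift}) can be evaluated directly (the numbers in the example are placeholders, not the measured values for GRB 201104A):
\begin{verbatim}
import numpy as np
from scipy import constants as const

M_SUN = 1.989e30   # kg

def redshifted_lens_mass(dt, r):
    # (1+z_l) M_l in solar masses, for delay dt [s], flux ratio r > 1
    factor = (r - 1.0) / np.sqrt(r) + np.log(r)
    return const.c ** 3 * dt / (2.0 * const.G) / factor / M_SUN

# e.g. redshifted_lens_mass(dt=2.8, r=2.0) -> ~2e5 solar masses
\end{verbatim}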
Using Bayesian inference on both the light-curve and energy-spectrum data, we estimate the parameters and compare the lens and non-lens models.
\subsection{Light curve inference}\label{sec:lcinf}
\cite{paynter2021evidence} developed a Python package called {\tt PyGRB} to create light-curves from either pre-binned data or time-tagged photon-event data.
We extend the analysis to the Fermi data as well \citep{wang2021grb}.
Here the same method as in Section \ref{sec:PDS} is used to obtain the posterior distributions of the parameters.
Bayesian evidence ($\mathcal{Z}$) is calculated for model selection and can be expressed as
\begin{equation}
\mathcal{Z} = \int \mathcal{L}(d|\theta) \pi(\theta) d\theta,
\end{equation}
where $\theta$ is the model parameters, and $\pi(\theta)$ is the prior probability.
For TTE data from various instruments, the photon counting obeys a Poisson process and
the likelihood $\ln\mathcal{L}$ for Bayesian inference takes the form of
\begin{align}
\ln {\cal L}(\vec{N}|\theta) = & \sum_i
\ln{\cal L}(N_i|\theta) \nonumber\\
= & \sum_i \Big[ N_i\ln\Big(\delta t_i B + \delta t_i S(t_i|\theta)\Big) \nonumber\\
& - \Big(\delta t_i B + \delta t_i S(t_i|\theta)\Big) -\ln(N_i!) \Big],
\end{align}
where $N_i$ stands for observed photon count in each time bin, and the model predicted photon count consists of the background count $\delta t_i B$ and the signal count $\delta t_i S(t_i|\theta)$.
Note that the differences of $\mathcal{Z}$ among models are important for our purpose.
Usually the light curve of a gamma-ray burst is a pulse shape of fast-rising exponential decay (FRED),
\begin{equation}
S(t|\Delta,A,\tau,\xi) = A \exp \left[ - \xi \left( \frac{t - \Delta}{\tau} + \frac{\tau}{t-\Delta} \right) \right],
\end{equation}
where $\Delta$ is the start time of pulse, $A$ is the amplitude factor, $\tau$ is the duration parameter of pulse, and $\xi$ is the asymmetry parameter used to adjust the skewness of the pulse.
Through different FRED functions, we define different light curve models $S(t_i|\theta)$ to describe whether the pulses are lensed images or not,
the lensing and null scenarios as
\begin{align}
S_\text{lens}(t|\theta_\text{lens}) = &S(t|\Delta,A,\tau,\xi) \nonumber\\
&+ r^{-1} \cdot S(t|\Delta+\Delta_t,A,\tau,\xi) + B,
\end{align}
\begin{align}
S_\text{non-lens}(t|\theta_\text{non-lens}) =& S(t|\Delta_1,A_1,\tau_1,\xi_1) \nonumber\\
&+ S(t|\Delta_1+\Delta_t,A_2,\tau_2,\xi_2) + B.
\end{align}
For the lens model, $r$ is the flux ratio between the two pulses (see Equation (\ref{eq:mass_redshift})) and \emph{B} is a constant background parameter.
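A sketch of these pulse models and of the Poisson log-likelihood is given below (the function names are ours; in practice these are wrapped in a {\tt Bilby} likelihood object for the nested-sampling run):
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

def fred(t, delta, amp, tau, xi):
    # FRED pulse, defined for t > delta
    s = np.zeros_like(t, dtype=float)
    m = t > delta
    dt = t[m] - delta
    s[m] = amp * np.exp(-xi * (dt / tau + tau / dt))
    return s

def rate_lens(t, delta, amp, tau, xi, dt_lens, r, bkg):
    # Lensed image: same shape, delayed by dt_lens, scaled by 1/r
    return (fred(t, delta, amp, tau, xi)
            + fred(t, delta + dt_lens, amp / r, tau, xi) + bkg)

def rate_nonlens(t, d1, a1, tau1, xi1, dt2, a2, tau2, xi2, bkg):
    return (fred(t, d1, a1, tau1, xi1)
            + fred(t, d1 + dt2, a2, tau2, xi2) + bkg)

def poisson_loglike(counts, rate, bin_width):
    mu = rate * bin_width
    return np.sum(counts * np.log(mu) - mu - gammaln(counts + 1.0))
\end{verbatim}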
The ratio of the $\mathcal{Z}$ for two different models is called as the Bayes factor (BF) and the logarithm of the Bayes factor reads
\begin{align}
\ln\text{BF}^\text{lens}_\text{non-lens} = \ln({\cal Z}_\text{lens}) - \ln({\cal Z}_\text{non-lens}) .
\end{align}
As a statistically rigorous measure for model selection,
$\ln{\rm BF} > 8$ indicates ``strong evidence'' in favor of one hypothesis over the other \citep{thrane2019introduction}.
This method was used to analyze the time series of Episode a and Episode b.
We masked the light curve after Episode b with a Poissonian background in order to exclude the influence of the other time periods.
In Table \ref{tab:tab2}, the results of the Bayesian inference based on the light curves of the NaI and BGO detectors are presented.
\subsection{Pearson correlation coefficient}\label{sec:Pearson}
The pulse shape we observe in reality does not correspond to a simple FRED function, and if multiple FRED functions are used to construct the model,
Bayesian inference will become significantly more difficult.
Calculating the Pearson correlation coefficient is a simple and effective method of time series analysis.
Pearson correlation coefficients were calculated for Episode a and Episode b for two different energy bands (see Figure \ref{fig:6}).
In the NaI detector energy band (50 -- 300 keV), the results are $r=0.61$, $p=1.24\times10^{-5}$, and in the BGO detector energy band (300 -- 40000 keV), the results are $r=0.43$, $p=3.80\times10^{-3}$.
These results suggest a moderate but significant correlation between Episodes a and b.
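In practice this reduces to a single library call (the two equal-length, binned light curves are hypothetical inputs):
\begin{verbatim}
from scipy.stats import pearsonr

def episode_correlation(counts_a, counts_b):
    r, p = pearsonr(counts_a, counts_b)  # coefficient and p-value
    return r, p
\end{verbatim}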
\subsection{Spectral inference}\label{sec:specinf}
Under the lensed gamma-ray burst hypothesis, in addition to requiring the same shape of the light curves, the consistency of the spectra must also be considered.
The cumulative hardness comparison in different energy bands is a simple but statistically powerful methodology \citep{mukherjee2021hardness}.
Such a method has been used in several works \citep{wang2021grb,veres2021fermi,lin2021search} as one of the indicators for confirming lensed GRBs.
We propose a procedure that considers spectral fitting with Bayesian inference for lens and non-lens models in order to achieve this goal.
Typically, the detector response files of GBM contain one or more detector response matrices (DRMs) encoding the energy dispersion and calibration of incoming photons at various energies to recorded energy channels \citep{GbmDataTools}.
The matrix also encodes the effective area of the detector as a function of energy relative to the source to which the detector is pointing.
Due to the strong angular dependence of the response (and the degree of angular dependence varies with energy), the effective area can fluctuate significantly.
Therefore, we should select the DRM that corresponds to the period we are interested in.
By interpolating the time series of DRMs, we obtain the DRM at the central time of the interval of interest and generate the corresponding response file.
It is relatively simple to obtain the response file of LAT, which is generated using {\tt gtrspgen} in {\tt Fermitools}.
Since GRBs are relatively short in duration, the accumulation of LAT background counts is negligible.
To obtain the GBM background file, we use polynomial fitting for the two periods before and after the burst, and then interpolate to obtain the background of the selected time.
However, although background and instrument responses do not differ significantly in the same GRB event, it is necessary to consider these variations when searching for lensing effects in different GRB events.
The likelihood function used in the spectral inference is the $pgstat$ mentioned in Section \ref{sec:spec_ana}.
We use a Band function (Equation \ref{eq:band}) and a ratio parameter $r$ to construct the lens model,
\begin{align}
N_\text{lens}(E|\theta_\text{lens}) =& N_\text{Band1}(E|\alpha,\beta,E_0,A) \nonumber \\
&+ N_\text{Band2}(E|\alpha,\beta,E_0,A \cdot r^{-1}).
\end{align}
The non-lens model is composed of two independent Band functions,
\begin{align}
N_\text{non-lens}(E|\theta_\text{non-lens}) = &N_\text{Band1}(E|\alpha_1,\beta_1,E_{0,1},A_1) \nonumber \\
&+ N_\text{Band2}(E|\alpha_2,\beta_2,E_{0,2},A_2).
\end{align}
It should be noted that $N_{\rm Band1}$ and $N_{\rm Band2}$ are folded with the response files of Episode a and Episode b, respectively.
We use the above models to fit the spectra of the two episodes simultaneously.
The method of model comparison is consistent with Section \ref{sec:lcinf}.
As shown in Table \ref{tab:tab2}, the result of the spectral inference ($\ln{\rm BF}= 8.21$) favors the hypothesis that the spectra of these two time periods differ only in the normalization constant (see Figure \ref{fig:7}).
Indeed, gamma-ray bursts with nearly identical spectra in different episodes are extremely rare.
\section{SUMMARY and Discussion}\label{sec:sd}
In this work, we investigated Fermi's gamma-ray bursts for possible QPOs and repetitive behavior events, as well as performed a detailed analysis of burst 201104A.
Our findings are the following:
\begin{itemize}
\item Following the current analysis method, there is no significant QPO signal above 3\,$\sigma$ in the light curves of the 248 short bursts and 920 long bursts selected.
However, some GRBs exhibit repetitive behavior, such as GRB 201104A.
\item Episodes a and b of GRB 201104A show similar temporal evolution, as well as similar posterior parameters in the spectral fits.
According to the Amati correlation diagram, each episode is closer to the group of SGRBs.
Classifying it as a short burst is also favored by the $T_{90}$-related distributions when the extended emission component is not taken into account.
The spectral lags between different energy bands are negative or close to zero, which is in accordance with SGRB characteristics.
Consequently, GRB 201104A is a long-duration burst with characteristics of SGRBs.
\item The Bayesian inferences of the light curve do not fully support the lens model.
Nevertheless, the spectral inference result supports the lens model, at least showing that the spectra of these two episodes are very consistent.
\end{itemize}
A long-duration burst with the characteristics of a short burst can be explained very naturally by the gravitational-lensing scenario, although such an alignment is a rare coincidence.
Alternatively, this event may simply be a very special occurrence comprising at least two intrinsically similar episodes.
In other words, there is a repetition mechanism in GRB, like the central engine with memory that \cite{zhang2016central} once proposed.
This type of burst has many features characteristic of an SGRB,
and if a Li-Paczynski macronova (also known as a kilonova) \citep{li1998transient} could be detected,
it would undoubtedly be the strongest evidence for an SGRB of compact-binary origin \citep{yang2015possible,jin2015light}.
Furthermore, our proposed procedure is essentially consistent with the method proposed by \cite{mukherjee2021hardness} for comparing the cumulative hardness in different energy bands.
In this procedure, we use the Bayes factor to quantify the consistency of the spectra.
Compared with pure time-series analysis, the advantage is that instruments with low count rates can be included in the Bayesian inference, such as the spectra of high-energy photons detected by Fermi-LAT.
For certifying different GRB events as gravitational lensing events, this procedure could take into account the effects of instrumental responses as well as background.
A current limitation of this procedure is that the episodes used for spectral inference must be selected in advance.
Therefore, this procedure will only give a posterior distribution of the ratio $r$ without a time delay caused by gravitational lensing.
We can set the time delay and time window as free parameters to solve this problem.
Since each model calculation must take into account the change in the spectral fitting file, the calculation time will greatly increase.
Our future work will optimize this step in order to search for gravitationally lensed GRBs.
Currently, the most complete procedure is to use both time series and spectrum data to perform Bayesian inference for lens and non-lens models.
\section*{Acknowledgments}
We thank S. Covino and Yi-Zhong Fan for their important help with this work.
We appreciate Zi-Min Zhou for his helpful suggestions.
We acknowledge the use of the Fermi archive's public data.
This work is supported by NSFC under grant No. 11921003.
\software{\texttt{Matplotlib} \citep{Hunter:2007}, \texttt{Numpy} \citep{harris2020array}, \texttt{scikit-learn} \citep{scikit-learn},
\texttt{bilby} \citep{ashton2019bilby}, \texttt{GBM Data Tools} \citep{GbmDataTools}, \texttt{Fermitools}}
\section{Introduction}
\label{sec1}
Non-orthogonal multiple access (NOMA) is currently viewed as one of the key physical layer (PHY) technologies to meet the requirements of 5G network evolutions as well as those of Beyond 5G wireless networks. Since 2013, there has been a vast amount of literature on the subject, and using information theoretic arguments it was shown that NOMA can increase capacity over conventional orthogonal multiple access (OMA). Also, while other NOMA schemes have been proposed and investigated, the vast majority of the NOMA literature has been devoted to the so-called Power-domain NOMA (PD-NOMA), which is based on imposing a power imbalance between user signals and detecting these signals using a successive interference cancellation (SIC) receiver (see, e.g., \cite{YS,ZD,LD,ZDING,XLEI,SY,MS}). The specialized literature basically divides NOMA into two major categories, one being Power-domain NOMA and the other Code-domain NOMA, although there are approaches in some other domains (see, e.g., \cite{HSLETTER}). It is also worth mentioning the recently revived NOMA-2000 concept \cite{HSMAGAZINE,HSA,AME,AAX} that is based on using two sets of orthogonal signal waveforms \cite{IC,MV}. This technique does not require any power imbalance between user signals and falls in the Code-domain NOMA category due to the fact that the signal waveforms in one of the two sets are spread in time or in frequency.
Well before the recent surge of literature on NOMA during the past decade, the concept of multiple-input multiple-output (MIMO) was generalized to Multi-User MIMO (MU-MIMO), where the multiple antennas on the user side are not employed by the same user, but instead they correspond to multiple users \cite{QH,AM,SK,XC,BF}. For example, considering the uplink of a cellular system, two users with a single antenna each that are transmitting signals to the base station (BS) equipped with multiple antennas form an MU-MIMO system. Note that MU-MIMO is also known as Virtual MIMO as it can be seen from the titles of \cite{SK,XC,BF}. Naturally, the question arises as to what relationship PD-NOMA has with the MU-MIMO concept and how it compares to it in terms of performance. When the users in MU-MIMO share the same time and frequency resources simultaneously, the system becomes a NOMA scheme, and the only difference from PD-NOMA in this case is that user signals inherently have different powers in PD-NOMA while no power difference is involved in MU-MIMO. Therefore, this type of MU-MIMO can be referred to as Power-Balanced NOMA or Equal-Power NOMA.
In this paper, focusing on the uplink, we give a unified presentation of PD-NOMA and MU-MIMO by introducing a power imbalance parameter in the system model and we optimize this parameter to minimize the average bit error probability (ABEP) for a given total transmit power by the users. In the considered system scenario, the channels are uncorrelated Rayleigh fading channels, which is a typical assumption for the cellular uplink. The study reveals a very important result. Specifically, it was found that the minimum value of ABEP is achieved with zero power imbalance, i.e., when equal average powers are received by the BS from each of the users. This means that the power imbalance, which is the basic principle of PD-NOMA, is actually undesirable when the fading channels are uncorrelated.
The paper is organized as follows. In Section \ref{sec2}, we briefly recall the principle of PD-NOMA and MU-MIMO and give a unified system model which covers both of these techniques. Next, in Section \ref{sec3}, we optimize the power imbalance parameter in this model to minimize the ABEP, and we show that the optimum corresponds to the case of perfect power balance, as in MU-MIMO. In Section \ref{sec4}, we report the results of computer simulations confirming the theoretical findings of Section \ref{sec3}. Finally, Section \ref{sec5} summarizes our conclusions and points out some future research directions.
\section{Power Domain NOMA and Multi-User MIMO}
\label{sec2}
\subsection{Power-Domain NOMA}
The principle of PD-NOMA is to transmit user signals on the same time and frequency resource blocks by assigning different powers to them. On the receiver side, a SIC receiver is employed to detect the user signals. Fig. \ref{Fig:Illustration of PD NOMA} shows the concept of a 2-user PD-NOMA uplink. User 1 in this figure transmits a high signal power shown in blue, and User 2 transmits a low signal power shown in red. Assuming that the path loss is the same for both users, these signals arrive to the BS with the same power imbalance, and the BS detects them using the well-known SIC concept. Specifically, it first detects the strong User 1 signal, and then, it subtracts the interference of this signal on the weak User 2 signal in order to detect the latter signal.
\begin{figure}[htbp]
\centering
\includegraphics[width=3.0 in]{Fig1}
\caption{Illustration of Power-domain NOMA uplink with two users.}
\label{Fig:Illustration of PD NOMA}
\end{figure}
The concept of superposing a strong user signal and a weak user signal and their detection using SIC has long been described in textbooks (see, e.g., \cite{DT}) in order to show that orthogonal multiple access is not optimal and that a higher capacity can be achieved by going to non-orthogonal multiple access. The NOMA literature, which basically started in 2013 directly followed this concept, and the resulting scheme was dubbed PD-NOMA as it assigns different powers to users. Note that although the SIC receiver is the main receiver which appears in the literature, receivers based on maximum-likelihood (ML) detection were also investigated in some recent papers \cite{JS,HS}, where higher performance was reported at the expense of an increased complexity.
\subsection{Multi-User MIMO}
Multi-User MIMO is the terminology given to a MIMO system when the multiple antennas on the user side do not correspond to a single user. For example, a cellular system in which a BS equipped with multiple antennas communicating with a number of single-antenna users forms an MU-MIMO system. Note that an MU-MIMO system is a simple OMA scheme if the users do not make simultaneous use of the same time and frequency resources, but when these resources are shared simultaneously it becomes a NOMA scheme, and this is what we consider here. Also note that with respect to conventional point-to-point MIMO links, MU-MIMO is what orthogonal frequency-division multiple access (OFDMA) is to orthogonal frequency-division multiplexing (OFDM). While OFDM and MIMO refer to point-to-point transmission links, OFDMA and MU-MIMO designate multiple access techniques based on the same principles. As in the PD-NOMA outlined in the previous subsection, here too we will focus on the uplink of a 2-user MU-MIMO system. In fact, Fig. 1 can also be used to describe this MU-MIMO system if the user signals have equal power and the BS is equipped with multiple antennas, which is always the case in state-of-the-art cellular networks. Therefore, the MU-MIMO we consider here is an equal-power NOMA system, in which the SIC receiver is not appropriate for signal detection. For both PD-NOMA and MU-MIMO, the optimum receiver is in fact the ML receiver, which makes its decisions by minimizing the Euclidean distance from the received noisy signal. In PD-NOMA with a large power imbalance between user signals, the SIC receiver essentially provides the ML detector performance, but this concept is not applicable to a power-balanced MU-MIMO system, because detection of one of the signals in the presence of interference from the other signal will lead to an excessive error rate.
\subsection{Unified System Model}
We now give a unified simple model for the uplinks in 2-user PD-NOMA and MU-MIMO systems with one antenna on the user side and two antennas on the BS side. Omitting the time index (which is not needed), the signals received by the first antenna and the second antenna of the BS are respectively given by:
\begin{align}
\label{eq1}
r_{1}=\sqrt{\alpha }h_{11}x_{1}+\sqrt{1-\alpha }h_{12}x_{2}+w_{1}
\end{align}
and
\begin{align}
\label{eq2}
r_{2}=\sqrt{\alpha }h_{21}x_{1}+\sqrt{1-\alpha }h_{22}x_{2}+w_{2}
\end{align}
where $x_1$ and $x_2$ are the symbols transmitted by the first and the second user, respectively, $\alpha$ is the power imbalance factor ($1/2\le\alpha<1$) with $\alpha=1/2$ corresponding to MU-MIMO, $h_{ij}$ designates the response of the channel between user $j$ and receive antenna $i$, and $w_1$ and $w_2$ are the additive white Gaussian noise (AWGN) terms. The channels are assumed to be unity-variance uncorrelated Rayleigh fading channels, and with this assumption the power imbalance at the receiver is identical to that imposed at the user side. In practice the users are randomly distributed within the cell, and the signal of each user is subjected to a path loss that depends on its distance to the BS. But in these situations, an appropriate power control can be used to impose a power imbalance factor $\alpha$ at the receiver, and the model above remains valid.
Let us combine equations (\ref{eq1}) and (\ref{eq2}) and write the vector equation:
\begin{align}
\label{eq3}
R=HX+W
\end{align}
where $R=\begin{pmatrix}
r_{1} \\r_{2}
\end{pmatrix}, H=\begin{pmatrix}
h_{11}&h_{12} \\
h_{21}&h_{22} \\
\end{pmatrix}, X=\begin{pmatrix}
\sqrt{\alpha }x_{1} \\\sqrt{1-\alpha }x_{2}
\end{pmatrix}$, and $W=\begin{pmatrix}
w_{1} \\w_{2}
\end{pmatrix}$.
The ML receiver makes its decisions by minimizing the Euclidean distance metric $||R-HX||^2$ over all values of the symbol vector $X$. For a constellation size of $M$, this involves the computation of $M^2$ metrics and their comparisons in order to find the minimum value. Obviously, the ML receiver complexity is higher than that of the SIC receiver, which only involves the computation of 2$M$ metrics \cite{HS}.
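For illustration, a brute-force sketch of this ML detector, enumerating all $M^2$ candidate symbol pairs (QPSK with unit average symbol energy is assumed):
\begin{verbatim}
import numpy as np
import itertools

QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2.0)

def ml_detect(r_vec, h_mat, alpha, constellation):
    # Minimize ||R - H X||^2 over all M^2 candidate pairs (x1, x2)
    best, best_metric = None, np.inf
    for x1, x2 in itertools.product(constellation, repeat=2):
        x = np.array([np.sqrt(alpha) * x1,
                      np.sqrt(1.0 - alpha) * x2])
        metric = np.sum(np.abs(r_vec - h_mat @ x) ** 2)
        if metric < best_metric:
            best, best_metric = (x1, x2), metric
    return best
\end{verbatim}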
\section{Optimizing the Power Imbalance}
\label{sec3}
The problem we address now is to determine the optimum power imbalance factor $\alpha$ leading to the smallest ABEP for a given total transmit power by the two users. The ABEP is usually evaluated by first evaluating the pairwise error probability (PEP), which corresponds to detecting a different codeword (symbol vector) from the transmitted one. Once the PEP is evaluated for all error events, it is used to derive the ABEP through weighing, summing, and averaging over all codewords. Note that an error event occurs when at least one of the symbols in the transmitted codeword is detected in error. In the case at hand, the error event implies that a symbol error occurs for at least one of the users. The ABEP can be upper bounded using the well-known union bound:
\begin{align}
\label{eq4}
ABEP\leqslant \frac{1}{M^{2}}\sum _{X}\sum _{\hat{X}\neq X}\frac{N(X,\hat{X})}{2\log_{2}(M)}P(X\to\hat{X} )
\end{align}
where $P(X\to\hat{X})$ denotes the PEP corresponding to the transmission of codeword $X$ and detection of codeword $\hat{X}\neq X$, and $N(X,\hat{X})$ denotes the number of bits in error corresponding to that error event. The parameter $M$ is the number of constellation points, and consequently the denominator $2\log_{2}(M)$ in these sums is the number of bits per transmitted codeword. The sum indexed $\hat{X}\neq X$ represents the ABEP conditional on the transmission of a codeword $X$. Finally, the sum indexed $X$ and the division by $M^2$ are for averaging the conditional bit error probabilities with respect to the transmitted codewords. Note that Eq. (\ref{eq4}) is written assuming that both users transmit symbols from the same constellation, and this assumption is made in the following analysis.
Using the pairwise error probability, we will now use a simple technique to demonstrate that the optimum NOMA scheme corresponding to the system model in Subsection II.C is in fact the NOMA scheme that is perfectly balanced in terms of transmit power. An upper bound on the PEP for spatial multiplexing type 2x2-MIMO systems operating on uncorrelated Rayleigh fading channels is given in [20, p. 79] as:
\begin{align}
\label{eq5}
P(X\to \hat{X})\leq \begin{bmatrix}
\frac{1}{1+\frac{1}{4N_{0}} \begin{Vmatrix}
X-\hat{X}\end{Vmatrix}^{2}}\end{bmatrix}^{2}
\end{align}
In this equation, $1/N_{0}$ is the signal-to-noise ratio (the bit energy being normalized by one), and referring back to the definition of $X$, the vector $X-\hat{X}$ is given by:
\begin{align}
\label{eq6}
X-\hat{X}=\begin{pmatrix}
\sqrt{\alpha }(x_{1}-\hat{x}_{1}) \\\sqrt{1-\alpha }(x_{2}-\hat{x}_{2})
\end{pmatrix}
\end{align}
where $\hat{x}_{1}$ (resp. $\hat{x}_{2}$) denotes the decision made on symbol $x_{1}$ (resp. $x_{2}$).
Let us consider now two symmetric error events $E_1$ and $E_2$ defined as $(x_1-\hat{x}_{1}=u,x_2-\hat{x}_{2}=v )$ and $(x_1-\hat{x}_{1}=v,x_2-\hat{x}_{2}=u )$, respectively, and without any loss of generality, assume $|u|>|v|$ . The squared Euclidean norm which appears in the denominator on the right-hand side of eqn. (\ref{eq5}) can be written as:
\begin{align}
\label{eq7}
\begin{Vmatrix}
X-\hat{X}\end{Vmatrix}_{E_{1}}^{2}=\alpha \begin{vmatrix}
u\end{vmatrix}^{2}+(1-\alpha)\begin{vmatrix}
v\end{vmatrix}^{2}
\end{align}
for error event $E_1$, and
\begin{align}
\label{eq8}
\begin{Vmatrix}
X-\hat{X}\end{Vmatrix}_{E_{2}}^{2}=\alpha \begin{vmatrix}
v\end{vmatrix}^{2}+(1-\alpha)\begin{vmatrix}
u\end{vmatrix}^{2}
\end{align}
for error event $E_2$.
Note that for $\alpha=1/2$, which corresponds to power-balanced NOMA (or MU-MIMO), we have:
\begin{align}
\label{eq9}
\begin{Vmatrix}
X-\hat{X}\end{Vmatrix}_{\alpha =1/2}^{2}=\frac{1}{2} (\begin{vmatrix}
u\end{vmatrix}^{2}+\begin{vmatrix}
v\end{vmatrix}^{2})
\end{align}
in both error events.
Using (\ref{eq7})-(\ref{eq9}), we can write:
\begin{align}
\begin{split}
\label{eq10}
&\begin{Vmatrix}
X-\hat{X}\end{Vmatrix}_{E_{1}}^{2}-\begin{Vmatrix}
X-\hat{X}\end{Vmatrix}_{\alpha=1/2}^{2} \\&=\alpha \begin{vmatrix}
u\end{vmatrix}^{2}+(1-\alpha)\begin{vmatrix}
v\end{vmatrix}^{2}-\frac{1}{2} (\begin{vmatrix}
u\end{vmatrix}^{2}+\begin{vmatrix}
v\end{vmatrix}^{2}) \\&=(\alpha-1/2)\begin{vmatrix}
u\end{vmatrix}^{2}+(1-\alpha-1/2)\begin{vmatrix}
v\end{vmatrix}^{2} \\&=(\alpha-1/2)(\begin{vmatrix}
u\end{vmatrix}^{2}-\begin{vmatrix}
v\end{vmatrix}^{2})>0
\end{split}
\end{align}
and
\begin{align}
\begin{split}
\label{eq11}
&\begin{Vmatrix}
X-\hat{X}\end{Vmatrix}_{E_{2}}^{2}-\begin{Vmatrix}
X-\hat{X}\end{Vmatrix}_{\alpha=1/2}^{2} \\&=\alpha \begin{vmatrix}
v\end{vmatrix}^{2}+(1-\alpha)\begin{vmatrix}
u\end{vmatrix}^{2}-\frac{1}{2} (\begin{vmatrix}
u\end{vmatrix}^{2}+\begin{vmatrix}
v\end{vmatrix}^{2}) \\&=(\alpha-1/2)\begin{vmatrix}
v\end{vmatrix}^{2}+(1-\alpha-1/2)\begin{vmatrix}
u\end{vmatrix}^{2} \\&=-(\alpha-1/2)(\begin{vmatrix}
u\end{vmatrix}^{2}-\begin{vmatrix}
v\end{vmatrix}^{2})<0
\end{split}
\end{align}
Referring back to the PEP upper bound in (\ref{eq5}), the squared distances $||X-\hat{X}||_{E_{1}}^{2}$ and $||X-\hat{X}||_{E_{2}}^{2}$ corresponding to the two considered error events are symmetric with respect to the squared distance $||X-\hat{X}||_{\alpha=1/2}^{2}$, and exploiting the convexity of the function $\big[\frac{1}{1+z}\big]^{2}$ of the nonnegative argument $z = \begin{Vmatrix} X-\hat{X}\end{Vmatrix}^{2}/4N_{0}$, we can write (by Jensen's inequality):
\begin{align}
\begin{split}
\label{eq12}
&2\begin{bmatrix}
\frac{1}{1+\frac{1}{4N_{0}}\begin{Vmatrix}
X-\hat{X}\end{Vmatrix}_{\alpha=1/2 }^{2}}\end{bmatrix}^{2}\\
&< \begin{bmatrix}
\frac{1}{1+\frac{1}{4N_{0}}\begin{Vmatrix}
X-\hat{X}\end{Vmatrix}_{E_{1}}^{2}}\end{bmatrix}^{2}+\begin{bmatrix}
\frac{1}{1+\frac{1}{4N_{0}}\begin{Vmatrix}
X-\hat{X}\end{Vmatrix}_{E_{2}}^{2}}\end{bmatrix}^{2}
\end{split}
\end{align}
This means that with $\alpha\neq1/2$ corresponding to PD-NOMA, while the error event $E_1$ leads to a lower PEP than in the power-balanced case, error event $E_2$ leads to a higher PEP, and the sum of the two PEPs is smaller in the power-balanced case. Note that this property holds for all pairs of symmetric error events in which $|u|^2\neq|v|^2$, and the PEP corresponding to error events with $|u|^2=|v|^2$ is independent of the power imbalance factor $\alpha$. Consequently, by averaging the PEP over all error events determined by the signal constellation, we find that the smallest average error probability is achieved when the NOMA scheme is perfectly balanced in terms of power.
We will now illustrate this property of the error events using the QPSK signal constellation assuming a power imbalance factor $\alpha=0.9$ (which corresponds to a power imbalance of 9.5 dB) and $1/N_0=100$, which corresponds to an SNR of 20 dB. Note that the PEP in this constellation is independent of the transmitted codeword, and therefore we can assume that the transmitted codeword is ($x_1=1+j,x_2=1+j$ ), and examine the 15 possible error events corresponding to the transmission of this codeword. Table \ref{TABLE1} shows the value of $||X-\hat{X}||^2$ as well as the PEP for $\alpha=0.5$ and $\alpha=0.9$ corresponding to the 15 error events, denoted ($E_1,E_2,...,E_{15}$).
\begin{table}
\caption{Comparison of Power-Balanced NOMA ($\alpha=0.5$) and PD-NOMA with $\alpha=0.9$ in terms of PEP corresponding to different error events.}
\Large
\begin{center}
\resizebox{\linewidth}{!}{
\renewcommand\arraystretch{0.8}
\begin{tabular}{cccccc}
\toprule
Error Event&$x_{1}-\hat{x}_{1}$&$x_{2}-\hat{x}_{2}$&$||X-\hat{X}||^{2}$&PEP($\alpha=0.5$)&PEP($\alpha=0.9$)\\
\midrule
$E_{1}$&$2$&$0$&$4\alpha$&$3.84\times10^{-4}$&$1.2\times10^{-4}$\\
$E_{2}$&$0$&$2$&$4(1-\alpha)$&$3.84\times10^{-4}$&$\textcolor{red}{8.1\times10^{-3}}$\\
$E_{3}$&$2j$&$0$&$4\alpha$&$3.84\times10^{-4}$&$1.2\times10^{-4}$\\
$E_{4}$&$0$&$2j$&$4(1-\alpha)$&$3.84\times10^{-4}$&$\textcolor{red}{8.1\times10^{-3}}$\\
$E_{5}$&$2$&$2$&$4$&$10^{-4}$&$10^{-4}$\\
$E_{6}$&$2$&$2j$&$4$&$10^{-4}$&$10^{-4}$\\
$E_{7}$&$2j$&$2$&$4$&$10^{-4}$&$10^{-4}$\\
$E_{8}$&$2j$&$2j$&$4$&$10^{-4}$&$10^{-4}$\\
$E_{9}$&$2+2j$&$0$&$8\alpha$&$10^{-4}$&$3\times10^{-5}$\\
$E_{10}$&$0$&$2+2j$&$8(1-\alpha)$&$10^{-4}$&$\textcolor{red}{2.3\times10^{-3}}$\\
$E_{11}$&$2+2j$&$2$&$4(1+\alpha)$&$4.3\times10^{-5}$&$2.7\times10^{-5}$\\
$E_{12}$&$2$&$2+2j$&$4(2-\alpha)$&$4.3\times10^{-5}$&$8.1\times10^{-5}$\\
$E_{13}$&$2+2j$&$2j$&$4(1+\alpha)$&$4.3\times10^{-5}$&$2.7\times10^{-5}$\\
$E_{14}$&$2j$&$2+2j$&$4(2-\alpha)$&$4.3\times10^{-5}$&$8.1\times10^{-5}$\\
$E_{15}$&$2+2j$&$2+2j$&$8$&$2.5\times10^{-5}$&$2.5\times10^{-5}$\\
\bottomrule
\end{tabular}
\end{center}
\label{TABLE1}
\end{table}
The table shows that error event $E_1$ leads to a slightly higher PEP with $\alpha=0.5$, but with $\alpha=0.9$ the symmetric error event $E_2$ leads to a PEP that is higher by more than an order of magnitude. The same observation holds for the error events ($E_3,E_4$). Next, error events ($E_5,E_6,E_7,E_8$) lead to the same PEP of $10^{-4}$ for both values of $\alpha$. Proceeding further, while error event $E_9$ leads to a higher PEP for $\alpha=0.5$, its symmetric event $E_{10}$ gives a PEP that is more than an order of magnitude higher for $\alpha=0.9$. Finally, the error events ($E_{11},E_{12},E_{13},E_{14},E_{15}$) lead to PEP values in the range of $10^{-5}-10^{-4}$, with a small difference between the two values of $\alpha$. To get an upper bound on the ABEP, we compute the sum of the 15 PEPs after weighting them by the corresponding numbers of bits, and we divide it by 4 because each codeword in QPSK carries 4 bits (2 bits per user). By doing this, we get an ABEP upper bound of $8\times10^{-4}$ for the power-balanced case ($\alpha=0.5$) and of $5\times10^{-3}$ for PD-NOMA with $\alpha=0.9$, the latter being dominated by the PEP values indicated in red in the last column of the table, which correspond to error events ($E_{2},E_{4},E_{10}$). This result shows that compared to $\alpha=0.5$, the power imbalance factor $\alpha=0.9$ increases the ABEP by almost an order of magnitude. Higher values of $\alpha$ incurred higher BER degradations that are not shown in Table \ref{TABLE1}.
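The entries of Table \ref{TABLE1} and the quoted ABEP bounds can be reproduced with a few lines of code (a sketch; the Gray bit mapping is our assumption, chosen to be consistent with the bit counts in the table):
\begin{verbatim}
import numpy as np
import itertools

N0 = 1.0 / 100.0                 # SNR = 20 dB
qpsk = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
bits = {1 + 1j: (0, 0), 1 - 1j: (0, 1),
        -1 + 1j: (1, 0), -1 - 1j: (1, 1)}

def pep_bound(u, v, alpha):
    # PEP upper bound with ||X - Xhat||^2 = alpha|u|^2 + (1-alpha)|v|^2
    d2 = alpha * abs(u) ** 2 + (1.0 - alpha) * abs(v) ** 2
    return (1.0 / (1.0 + d2 / (4.0 * N0))) ** 2

x = 1 + 1j                       # transmitted symbol of both users
for alpha in (0.5, 0.9):
    abep = 0.0
    for x1h, x2h in itertools.product(qpsk, repeat=2):
        if (x1h, x2h) == (x, x):
            continue             # not an error event
        nbits = (sum(p != q for p, q in zip(bits[x], bits[x1h]))
                 + sum(p != q for p, q in zip(bits[x], bits[x2h])))
        abep += nbits * pep_bound(x - x1h, x - x2h, alpha) / 4.0
    print(alpha, abep)  # close to the 8e-4 and 5e-3 bounds quoted
\end{verbatim}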
\section{Simulation Results}
\label{sec4}
Using the QPSK and 16QAM signal formats, a simulation study was performed to evaluate the influence of the power imbalance factor on the bit error rate (BER) of PD-NOMA and confirm the theoretical findings reported in the previous section. Following the mathematical description of Section \ref{sec3}, we considered an uplink with two users transmitting toward a BS on two uncorrelated Rayleigh fading channels. Also as described in Section \ref{sec3}, the receiver employs ML detection and assumes that the channel state information (CSI) is perfectly known.
The simulation results are reported in Fig. \ref{Fig:QPSK} for QPSK and in Fig. \ref{Fig:16QAM} for 16QAM. As can be seen in both figures, the best performance results are obtained with $\alpha=1/2$, and they degrade as this parameter is increased. With $\alpha=0.9$, corresponding to a power imbalance of 9.5 dB, the SNR degradation with respect to the power-balanced case at the BER of $10^{-3}$ is about 4.5 dB in QPSK and 3 dB in 16QAM. With $\alpha=0.95$, corresponding to a power imbalance of 12.8 dB, the degradation is about 7 dB in QPSK and 6 dB in 16QAM. With $\alpha=0.98$, corresponding to a power imbalance of 16.9 dB, the degradation increases to 11.5 dB in QPSK and 9.5 dB in 16QAM. Finally, with $\alpha=0.99$, corresponding to a power imbalance of 19.95 dB, the degradation is as high as 14 dB in QPSK and 12.5 dB in 16QAM. These results confirm the theoretical finding of the previous section that the best performance in memoryless NOMA is achieved when the user signals have perfect power balance at the receiver. This is actually what Multi-User MIMO (or Virtual MIMO) with power control does, and this technique became popular in wireless networks almost a decade before the surge of interest in Power-domain NOMA. Also note that the QPSK results of Fig. \ref{Fig:QPSK} and those of Table \ref{TABLE1} confirm that the ABEP upper bound given by eqn. (\ref{eq4}) is very tight. Indeed, the upper bound derived from Table \ref{TABLE1} gives a BER of $8\times10^{-4}$ for $\alpha=0.5$ and $5\times10^{-3}$ for $\alpha=0.9$, while the simulation results with $E_b/N_0$ = 20 dB give a BER of $5\times10^{-4}$ for $\alpha=0.5$ and $3\times10^{-3}$ for $\alpha=0.9$.
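For completeness, a Monte Carlo sketch of this simulation, reusing {\tt ml\_detect} and {\tt QPSK} from the sketch in Section \ref{sec2} (the SNR normalization below, with unit-energy symbols, is an assumption):
\begin{verbatim}
import numpy as np

def ber(alpha, snr_db, ntrials=100000, seed=0):
    rng = np.random.default_rng(seed)
    n0 = 10.0 ** (-snr_db / 10.0)
    gray = [(0, 0), (0, 1), (1, 0), (1, 1)]
    nerr = 0
    for _ in range(ntrials):
        i1, i2 = rng.integers(4, size=2)
        x = np.array([np.sqrt(alpha) * QPSK[i1],
                      np.sqrt(1.0 - alpha) * QPSK[i2]])
        h = (rng.standard_normal((2, 2))
             + 1j * rng.standard_normal((2, 2))) / np.sqrt(2.0)
        w = np.sqrt(n0 / 2.0) * (rng.standard_normal(2)
                                 + 1j * rng.standard_normal(2))
        d1, d2 = ml_detect(h @ x + w, h, alpha, QPSK)
        j1 = int(np.argmin(np.abs(QPSK - d1)))
        j2 = int(np.argmin(np.abs(QPSK - d2)))
        nerr += sum(p != q for p, q in zip(gray[i1], gray[j1]))
        nerr += sum(p != q for p, q in zip(gray[i2], gray[j2]))
    return nerr / (4.0 * ntrials)

# e.g. ber(0.5, 20.0) vs. ber(0.9, 20.0) reproduces the trend of
# Fig. 2: the power-balanced case yields the lowest BER.
\end{verbatim}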
\begin{figure}[htbp]
\centering
\includegraphics[width=3.0 in]{qpsk}
\caption{BER performance of PD-NOMA with QPSK and different values of the power imbalance.}
\label{Fig:QPSK}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=3.0 in]{16qam}
\caption{BER performance of PD-NOMA with 16QAM and different values of the power imbalance.}
\label{Fig:16QAM}
\end{figure}
\section{Conclusions}
\label{sec5}
This paper analyzed the power imbalance factor between user signals on the uplink of a 2-user Power-domain NOMA system assuming uncorrelated Rayleigh fading channels and a maximum-likelihood receiver for signal detection. The results revealed that for a given total transmit power by the users, the minimum value of the average bit error probability is achieved when the power imbalance is zero, i.e., when the powers received from the two users are identical, as in Multi-User MIMO with perfect power control. This finding calls into question the PD-NOMA principle when the channels are uncorrelated. The principle of transmitting a strong user signal and a weak user signal in the same frequency band and at the same time instants, along with their detection using a SIC receiver, has long been used to demonstrate that non-orthogonal multiple access leads to a higher capacity than orthogonal multiple access. The research community later used this principle to design NOMA without questioning whether the power imbalance is needed. Our analysis has shown that it is actually better, from an average bit error rate point of view, to have a perfect balance between the received signal powers. Our study was made for a 2-user NOMA uplink and assuming that the same signal constellation is used by both users. Extensions to more than 2 users and to NOMA systems where the users employ different constellations remain as topics for further studies.
\section{INTRODUCTION}
The properties of MBE-grown nanostructures are determined by the complex interplay between the characteristics of the materials used and the growth conditions. In particular, the quality of the interfaces between the barrier and QW layers strongly impacts the optical and electrical quality of nanostructures.
The realistic transition between the barrier and the quantum well material differs from the idealized image of a perfectly cut interface both in the lateral and in the growth direction. The main concern related to the latter is intermixing, i.e., smoothing of the band profile due to unintentional migration of the chemical elements across the interface. Even in the case of negligible intermixing, the interface can still exhibit lateral imperfections in the form of a variation of the interface position. The variation of the position may be related to single monolayer steps \cite{deveaud1984observation}, but it can also have a larger extent for rough interfaces \cite{Gaj_1994_PRB,regreny1987growth,grieshaber1994rough,Kossacki_1995_SSC}.
In the latter case, irregularities of the interface introduce severe local deformation. Therefore, magnetic ions near the interface experience different strain values than ions placed in the center of the QW. The distribution of the deformation of the crystal lattice at the sites of magnetic ions placed at various positions in the nanostructure can affect the distribution of the ions' spin-lattice relaxation (SLR) time.
The magnetic ion relaxation can also be affected by interactions with carriers. The presence of the carrier gas increases the SLR rate \cite{Scherbakov_2001_PRB,konig2003epr}, due to effective energy transfer mediation between the ion and the lattice. In addition, the distribution of carriers may vary across the nanostructure. Consequently, a non-trivial SLR time distribution across the nanostructure may also be observed.
All of the above shows that determining the detailed distribution of properties such as local deformation or distribution of carriers is essential for better design and
fabrication of devices combining electronic and spin effects. Moreover, this implies that one can utilize the control of the spatial distribution of the carrier gas density to adjust the properties of spin-lattice dynamics for spintronics applications.
Studying such properties essentially requires a local approach, while techniques operating on mean values (such as Electron Paramagnetic Resonance -- EPR) are of limited use. Even the optical techniques, which are usually considered local, do not exhibit spatial resolution high enough to distinguish the signal originating from different depths of the QW.
\begin{figure*}
\centering
\includegraphics{Studnie_odbicia.pdf}
\caption{ (a) Schematic picture of the QWs samples used in this work. Manganese ions in the QW are distributed in various sections along the growth axis - in either of the two sides of the QW near the interfaces with the barrier layer (samples UW1498 and UW1499), in the center of the well (UW1500), and in the entire QW (UW1501); (b) The schematic presentation of the spatial probability density of the first three heavy hole states in the model sample -- UW1501; (c) Reflectance spectra of studied samples measured at zero magnetic field.}
\label{odbicia}
\end{figure*}
Here we exploit the fact that by tailoring the doping profile of the magnetic ions, we can restrict their presence to selected parts of the structure. The hyperfine structure of the energy levels of each ion is sensitive to the deformation of the lattice at its site.
Therefore, the deformation present in this part of the QW might be studied using the absorption of resonant microwave radiation with an applied magnetic field \cite{Lambe_1960_PR,wolos2004optical,Bogucki_2022_PRB}. This absorption can be detected either directly or indirectly by exploiting the fact that the optical properties of the studied material change when the paramagnetic resonance occurs. Thus, the resonance can be detected as a change of optical response, which is the essence of the optically detected magnetic resonance~(ODMR)~\cite{Gisbergen_1993_PRB,Ivanov_2001_APPA,Tolmachev_2020_N,Shornikova_2020_AN}. The ODMR technique allows studying observed effects locally in the area probed optically with micrometer resolution and only in the volume where the magnetic ions are introduced. The ODMR is particularly suitable for low-dimensional systems. Spatial selectivity, optical spatial resolution, and high sensitivity make ODMR a perfect technique for performing measurements on carriers and excitons coupled to the magnetic ions in nanostructures. The ODMR technique is especially useful for systems with large exchange interaction between magnetic ions and photocarriers – a shining example of such a system is (Cd,Mn)Te in the diluted regime. More generally, the diluted magnetic semiconductors (DMS) exhibit the giant Zeeman effect \cite{Gaj_1994_PRB}, which connects the shift of the excitonic energy with the magnetization of the system of magnetic ions.
Resonant microwave (MW) absorption between Mn$^{2+}$ energy levels leads to a decrease in magnetization (evidenced by the decrease of the giant Zeeman shift), which can also be described in terms of an increased effective spin temperature.
In this work, we combine magnetooptical measurements and ODMR technique to study interfaces between (Cd,Mg)Te barriers and (Cd,Mn)Te QWs. In contrast to previous studies, the optical detection in our ODMR experiment is based on reflectivity measurements, which allows us to monitor the behavior of not only the ground state exciton but also higher excited states.
The ODMR technique lets us examine the properties of the structures locally, in the volume where the magnetic ions are incorporated. This local approach, in conjunction with the unique design of the samples, is beneficial for studying the distribution of the deformation and SLR time along the growth axis of the QW. Restricting the Mn$^{2+}$ ion incorporation to the specific parts of the QW allows probing the mentioned properties locally -- near the interfaces and in the center of the well. By exploiting different excitonic states, we investigate other parts of the structure by varying the overlap with the magnetic ions. We also use time-resolved ODMR to study spin-lattice relaxation of the manganese ions incorporated in different positions along the growth axis.
\section{SAMPLES AND EXPERIMENTAL SETUP}
The samples containing single quantum wells were grown by molecular beam epitaxy on a semi-insulating GaAs substrate with a 3.5\,$\mu$m CdTe buffer layer. The 20\,nm (Cd,Mn)Te QWs are surrounded by (Cd,Mg)Te barriers with a magnesium content of about 45\%. The barrier underneath the QW is 2\,$\mu$m thick, while the top barrier is 50\,nm. The manganese content is equal to about 0.5\%. This amount of Mn$^{2+}$ was chosen to ensure a sufficient giant Zeeman effect, but negligible direct ion-ion interactions and a small bandgap offset (less than 8\,meV) \cite{Gaj_2010_book}. The samples were designed so that manganese ions are placed at different positions along the growth axis, as shown in~Fig.~\ref{odbicia}(a,b). In order to verify that all the produced samples actually follow the above layout, we characterized them using reflectance in a magnetic field.
For the measurements, the samples were mounted on a holder designed specifically for ODMR experiments \cite{Patent_EN_2021}. The microwave radiation was provided to the samples with a microstrip antenna. Using an antenna has two main advantages over a microwave cavity setup. First, the holder gives easy optical access both for photoluminescence (PL) and reflectance measurements, preserving the favored relative orientations of the microwave magnetic and electric fields and the external magnetic field. Second, it allows us to provide microwave radiation in a wide range of frequencies, from 10\,GHz up to about 50\,GHz, without any specific tuning or changing the cavity. All of the above allows measuring the paramagnetic resonance in a wide range of magnetic fields.
During measurements, the samples were immersed in pumped liquid helium (T$\approx$1.7\,K) in a magneto-optical cryostat with superconducting coils providing the magnetic field values up to 3\,T. The whole microwave setup was tested in a wide range of MW frequencies and amplitudes. The best antenna performance was selected by measuring the ODMR signal on a reference sample \cite{Bogucki_2022_PRB}.
We used MW radiation and light illumination in the pulsed mode to avoid unwanted temperature drifts and other slow setup-related disturbances. The reflectance spectra were obtained using a filtered supercontinuum laser as a light source. The impinging light was chopped into pulses with an acousto-optic modulator (AOM) triggered by the signal of the MW generator (Agilent E8257D). The relative delay between MW and light pulses was varied in order to get temporal profiles of the ODMR signal. The width of pulses was a few ms for MW pulses and tens of $\mu$s for light pulses. The latter determines the resolution of transients. The pulse timing was controlled with a resolution of about 10\,ns.
By keeping constant delays between the pulses, we performed time-integrated ODMR measurements.
The pulsed approach allowed us to correct for non-resonant thermal drift occurring during long experimental runs. At each data point, two cases were measured: with the light and MW excitation pulses in-phase (i.e. overlapping in time) and out-of-phase (i.e. with no overlap in time). The difference in the intensity of the signal (as defined in Sec. \ref{sec:odmr}) between these two situations gave us the ODMR signal robust against small temperature drifts.
\section{EXPERIMENTAL RESULTS}
\subsection{Magnetooptical measurements}
\begin{figure*}
\centering
\includegraphics{UW1498_bezMW_2.png}
\caption{(a) Reflectance spectrum measured for different magnetic fields for sample UW1498; E1HH$i$ denotes excitonic states, $\sigma+$/$\sigma-$ denotes detection polarization; (b) Giant Zeeman splitting of the different excitonic complexes observed for sample UW1498 vs. the magnetic field.}
\label{UW1498_ref_bezMW}
\end{figure*}
The magnesium content of the barrier layers was verified using standard reflectance measurements. We observed the barrier exciton at around 2.6\,eV, which corresponds to a magnesium content of 46\% \cite{Waag_1993_JoCG}. The value of the manganese content was determined by fitting the modified Brillouin function \cite{Gaj_2010_book,Gaj_1994_PRB,gaj1979relation} to the exciton giant Zeeman shift obtained for the reference sample UW1501 (with Mn$^{2+}$ ions present in the whole QW). The Mn$^{2+}$ content was confirmed to be equal to 0.5\%.
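This fitting step can be illustrated with a short numerical sketch. The script below assumes the standard modified Brillouin description of the giant Zeeman splitting in (Cd,Mn)Te with the exchange constant $N_0(\alpha-\beta)\approx1100$\,meV taken from the literature; the data points are hypothetical digitized values of the kind obtained from reflectance maps, not the actual measurements.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

MU_B, K_B = 5.788e-2, 8.617e-2      # Bohr magneton (meV/T), k_B (meV/K)
N0AB, S, G = 1100.0, 2.5, 2.0       # N0(alpha-beta) in meV; Mn2+ spin and g

def brillouin(J, y):
    # Brillouin function B_J(y); y is kept away from 0 to avoid 0/0
    y = np.where(np.abs(np.asarray(y, float)) < 1e-9, 1e-9, y)
    a, b = (2*J + 1)/(2*J), 1/(2*J)
    return a/np.tanh(a*y) - b/np.tanh(b*y)

def zeeman_split(B, x_eff, T0, T=1.7):
    # modified Brillouin function with effective temperature T + T0
    y = 5*G*MU_B*B/(2*K_B*(T + T0))
    return x_eff*N0AB*S*brillouin(S, y)

# hypothetical digitized splittings (T, meV) for the reference sample UW1501
B_data = np.array([0.25, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
dE_data = np.array([2.8, 5.3, 8.9, 10.9, 12.0, 12.6, 13.2])

(x_eff, T0), _ = curve_fit(zeeman_split, B_data, dE_data, p0=[0.005, 1.0])
print(f"x_eff = {100*x_eff:.2f}%, T0 = {T0:.2f} K")
\end{verbatim}
For the values above, the fit returns $x_{\rm eff}$ close to the nominal 0.5\% manganese content.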
The reflectance spectra of each studied QW exhibit several distinct features related to excitonic complexes of electrons and holes at different QW sub-bands, see~Fig.~\ref{odbicia}(c). The most pronounced feature at the lowest energy, formed by the lowest-energy electron and heavy-hole subbands, corresponds to the excitonic state denoted as E1HH1. The energy of this exciton is determined by several factors, such as the width of the QW (the same for all the samples), but also the overlap between the excitonic wavefunction and the incorporated Mn$^{2+}$ ions, since the bandgap of (Cd,Mn)Te is larger than that of pure CdTe \cite{Gaj_2010_book}. For samples with magnetic ions placed only on the sides of the QW, the overlap between the exciton wavefunction and the magnetic ions gives a small addition to the ground-state energy. In contrast, in the case of the sample with ions placed in the center of the QW, the overlap is significantly larger and results in a correspondingly larger increase of the ground-state energy. Finally, manganese ions present in the whole well raise the QW transition energy by about 8\,meV, which is in agreement with the 0.5\% Mn content determined from the giant Zeeman effect \cite{Gaj_2010_book}. Thus, the excitonic states in all the samples are shifted towards higher energies in comparison to a QW without any Mn$^{2+}$ ions. For all samples, features corresponding to a number of excited states (E1HH$i$, $i=2-4$) are less pronounced but still visible. It is worth noting that we can observe the transition E1HH2, which should be forbidden in a perfectly symmetric QW. Its visibility suggests an asymmetry of the well potential \cite{cibert1993piezoelectric,vanelle1996ultrafast}, observed even for the samples UW1500 and UW1501, which were designed with a symmetric Mn-doping profile.
Figure~\ref{UW1498_ref_bezMW}(a) shows an example map of the reflectance spectrum measured in a magnetic field applied in the Faraday configuration. Features corresponding to excitonic complexes -- the ground-state neutral exciton and a number of excited states -- are well pronounced. The observed excitonic transitions exhibit different behavior in the magnetic field. In the samples with the magnetic ions present near the interfaces, the higher states have a larger overlap between the wavefunction and the magnetic ions. This leads to a stronger coupling to the source of magnetization than for the ground state, causing a more pronounced Zeeman splitting, see~Fig.~\ref{UW1498_ref_bezMW}(b). This result shows qualitative agreement with the model and further confirms that the manufactured samples are structured in accordance with our design. However, for states higher than HH2, the behavior in the magnetic field becomes more complicated as heavy holes are mixed with light holes. The first light-hole state is expected to be about 10\,meV above the HH1 state. This splitting is caused by the deformation of the QW created by the lattice mismatch with the barrier \cite{Bogucki_2022_PRB}.
Additionally, Fig.~\ref{odbicia}(c) shows that samples UW1498 and UW1499, with Mn$^{2+}$ incorporated at the two opposite interfaces, have slightly different energies of the E1HH1 exciton. In the ideal case they should be symmetric and have the same optical properties. However, the energy positions of the excitonic states differ here, which suggests a different overlap between the carriers and the Mn$^{2+}$ volume. Those differences are relatively small but clearly visible and become even more pronounced in the magnetic field. The Zeeman splitting of the exciton ground state in sample UW1499 (with manganese ions near the top interface) is almost twice as high as the splitting in sample UW1498 (bottom interface), see~Fig.~\ref{Zeeman_inter}(a). The Zeeman splittings for UW1500 and UW1501 are even higher due to the higher wavefunction overlaps.
\begin{figure}
\centering
\includegraphics{Electrostatic_pot.pdf}
\caption{(a) Splitting in the magnetic field of the E1HH1 state measured for different samples at 1.7\,K; (b) The overlap between the excitonic ground state and the magnetic-ion volume obtained from the measurement of the Zeeman shift vs. values calculated with consideration of the electrostatic potential caused by the presence of a hole gas of density equal to $0.13\times10^{11}$\,cm$^{-2}$ with the spatial distribution given by the probability density of the heavy-hole ground state.}
\label{Zeeman_inter}
\end{figure}
The different overlaps between the magnetic-ion volume and the excitonic states, derived both from the Zeeman splittings and from the zero-field energies, along with the presence of the E1HH2 transition, suggest an asymmetry of the QW potential. At least two effects can explain the discrepancy between the Zeeman splittings in nominally symmetric samples: an asymmetry of the top and bottom interfaces or a built-in electric field.
The first one corresponds to the non-abrupt, intermixed character of the interfaces. The formation process of interfaces depends on material properties and growth conditions. The QW asymmetry is caused by one of the intermixing mechanisms -- segregation: the growing material is dragged towards the outer layers, along the growth axis \cite{grieshaber1996magneto,Gaj_1994_PRB}. Thus, magnesium from the barrier material is mixed into the QW material at the first interface and pushed out at the second one. This shift of composition moves the potential in the first part of the well upwards, shifting the carrier wavefunctions away from the first interface.
The second effect corresponds to unintentional p-type doping. The presented QWs are located near the surface of the sample (the top barrier is 50\,nm thick). In such a case, the surface states act as acceptors, causing the presence of a hole gas in the QW \cite{Maslana_2003_APL}. However, in the reflectance spectrum we do not observe any indication of charged excitonic features below the neutral-exciton ground-state energy. From that fact we can conclude that the carrier density in the QW is low for the studied samples -- i.e. estimated below 10$^{11}$\,cm$^{-2}$ \cite{kossacki1999neutral,kossacki2003optical,Kossacki_2004_PRB} -- yet still significant. The positive sign of the charge carriers was confirmed using the spin singlet-triplet transition in low magnetic fields as in \cite{Lopion_2020_JEM,Kossacki_2004_PRB} (not shown).
The presence of the carrier gas (holes) in the QW and negatively charged states at the sample's surface results in an electrostatic field in the QW and the top barrier. This field modifies the wavefunctions of the carriers confined in the QW. The modification is most pronounced for heavy holes due to their high effective mass. The hole gas builds a potential that drags the hole wavefunctions towards the top barrier and the electron wavefunctions towards the bottom barrier. Consequently, the wavefunction overlap with the manganese volume is higher in the sample with ions located near the top barrier, where the modified potential is deeper (UW1499).
We analyzed the plausibility of the above scenarios using numerical simulations. We found that the change of the potential corresponding to the intermixed interfaces is insufficient to explain observed differences in the Zeeman splitting when taking reasonable intermixing coefficients below 1\,nm \cite{Gaj_1994_PRB}. At the same time, calculations for the scenario with the electric field involved show excellent agreement with the measured data when assuming carrier density equal to 0.13$\times$10$^{11}$\,cm$^{-2}$.
The calculated overlap was defined as the integral of the state probability density over the part of the QW where Mn$^{2+}$ ions are present. Assuming a 100\% overlap between the HH1 wavefunction and the magnetic layer for sample UW1501, which exhibits a Zeeman splitting of 13.2\,meV at 3\,T, we can calculate the overlap between HH1 and the magnetic layer for the other samples. This overlap is obtained as the ratio of the experimentally determined Zeeman splitting of the analyzed sample to that of sample UW1501, see~Fig.~\ref{Zeeman_inter}(b). We can compare those values with the calculated HH1--Mn$^{2+}$ overlaps. For sample UW1501, where manganese ions are present in the entire well, this overlap is equal to almost 1; for the other samples, the overlap is correspondingly lower. Finally, we conclude that the observed differences of the giant Zeeman shift between samples UW1499 and UW1498 correspond to the electric potential built into the samples.
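The calculated overlaps themselves reduce to one-dimensional integrals of the heavy-hole probability density. The sketch below evaluates them for an infinite-square-well approximation of the 20\,nm QW; the region boundaries are illustrative assumptions only. Note that this flat-well model by construction gives identical overlaps for the two side-doped samples, so reproducing the measured UW1498/UW1499 asymmetry requires including the electrostatic potential discussed above.
\begin{verbatim}
import numpy as np

L = 20.0  # QW width, nm

def density(z, n=1):
    # |psi_n|^2 of the n-th state of an infinite square well
    return (2.0/L)*np.sin(n*np.pi*z/L)**2

def overlap(z1, z2, n=1):
    # integral of |psi_n|^2 over the Mn-doped region [z1, z2]
    z = np.linspace(z1, z2, 2001)
    return np.trapz(density(z, n), z)

regions = {"side (UW1498/UW1499)": (0.0, 5.0),   # illustrative widths
           "center (UW1500)":      (7.5, 12.5),
           "whole QW (UW1501)":    (0.0, 20.0)}
for name, (z1, z2) in regions.items():
    print(f"{name}: HH1 {overlap(z1, z2):.2f}, HH3 {overlap(z1, z2, 3):.2f}")
\end{verbatim}
The output also illustrates why the excited states probe the interfaces more efficiently: for the side region the HH3 overlap exceeds the HH1 overlap by about a factor of three.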
\begin{figure*}
\centering
\includegraphics{UW1498_zMW.png}
\caption{(a) Reflectance spectrum measured for different magnetic fields in the presence of microwave radiation. In the resonant magnetic field (here 0.57\,T for 15.85\,GHz) an energy shift is observed for all excitonic complexes (e.g., see the dashed rectangle); (b) Reflectance spectra collected in a magnetic field of 0.57\,T with resonant microwave radiation (15.85\,GHz) with pulses in the ON and OFF positions; insets represent two different excitonic states -- the ground state E1HH1 and one of the excited states, E1HH3; (c) Normalized ODMR signal measured for all visible excitonic complexes of sample UW1498.}
\label{UW1498_ref_zMW}
\end{figure*}
\subsection{ODMR measurements \label{sec:odmr}}
Figure~\ref{UW1498_ref_zMW}(a) shows the reflectance spectra measured versus the magnetic field under microwave radiation. All the excitonic lines in the reflectance spectrum exhibit sensitivity to the applied microwave radiation for magnetic resonance fields (near $\pm$0.5~T). We observe that in the magnetic resonance fields, all the features are shifted towards the energies corresponding to lower absolute values of the magnetic field, see a dashed rectangle in Fig.~\ref{UW1498_ref_zMW}(a). Furthermore, the shift of the excited states under microwave radiation is more pronounced than the shift of the ground state, see~Fig.~\ref{UW1498_ref_zMW}(b). That effect corresponds to a larger change of the Zeeman shift for excited complexes.
We define the ODMR signal as the difference in the energy position of the excitonic line in the reflectance spectrum for microwave and light pulses overlapping (ON) and shifted (OFF). A representative ODMR signal as a function of the magnetic field for a fixed microwave frequency is presented in~Fig.~\ref{UW1498_ref_zMW}(c). The shape of the signal versus magnetic field obtained for all the excitonic states visible in the single QW is the same, see Fig.~\ref{shape}(a). This fact clearly shows that all the complexes are probing an ensemble of magnetic ions with the same properties.
\begin{figure}
\centering
\includegraphics{interfejsy_sym_Mn.pdf}
\caption{(a) Normalized ODMR signal vs. magnetic field extracted from E1HH1 state for all measured samples (b) ODMR signal vs. magnetic field measured for the UW1498 QW with the simulated ODMR signal. The red spiked curve shows calculated magnetic transitions without any line broadening. The blue curve is a result of the convolution of the red-lined spectrum with the Gaussian kernel. (c) Compilation of the simulated shapes of ODMR signal as a function of magnetic field for different values of the spin Hamiltonian D parameter.}
\label{shape}
\end{figure}
The main factor that determines the overall shape of the ODMR signal as a function of the magnetic field at low temperatures is the deformation of the crystal lattice surrounding the probed magnetic ions~\cite{Bogucki_2022_PRB}. The lattice mismatch between the layers of the structures results in a nonzero strain-induced axial-symmetry spin parameter D of the Hamiltonian of the Mn$^{2+}$ ion \cite{Qazzaz_1995_SSC}. Angle-resolved ODMR measurements can determine the value of the spin Hamiltonian parameter D. We compare the measured ODMR spectra with the numerically obtained positions and intensities of absorption lines based on the Mn$^{2+}$ spin Hamiltonian, as in Ref.~\cite{Bogucki_2022_PRB}. The finite linewidth is obtained using a phenomenological broadening approach. In figure~\ref{shape}(b) the experimental ODMR signal vs. magnetic field measured for sample UW1498 is shown along with the calculated one. Each of the 30 transition lines (marked in red) is convoluted with a Gaussian kernel of FWHM = 34\,mT (blue).
We satisfactorily reproduce the measured ODMR spectra for a D parameter value of 1250\,neV, which is the value corresponding to the nominal lattice mismatch in the case of a (Cd,Mg)Te/(Cd,Mn)Te QW with an Mg content of about 46\% \cite{Bogucki_2022_PRB}. It is worth mentioning that the shape of the ODMR signal vs. magnetic field is sensitive to changes of the D parameter, as presented in~figure~\ref{shape}(c). The agreement with the D corresponding to the nominal lattice mismatch suggests that the strain is distributed equally along the growth axis of the QW layer.
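A minimal version of such a simulation, using first-order perturbation theory for $B$ parallel to the strain axis instead of the full numerical diagonalization, is sketched below. The hyperfine constant is a literature-scale value for Mn$^{2+}$ in CdTe, and the line weights include only the $|\langle m_S\!-\!1|S_-|m_S\rangle|^2$ factor; both are simplifying assumptions.
\begin{verbatim}
import numpy as np

GAMMA = 27.99   # g*mu_B/h for g = 2.00, GHz/T
D     = 0.302   # axial parameter, GHz (1250 neV)
A     = -0.17   # assumed Mn2+ hyperfine constant in CdTe, GHz
F_MW  = 15.85   # microwave frequency, GHz
FWHM  = 0.034   # Gaussian broadening, T

# resonance fields of the allowed transitions (Delta mS = 1, Delta mI = 0):
# h f = g mu_B B + D (2 mS - 1) + A mI, with mS the upper electronic level
mS = np.arange(-1.5, 3.0, 1.0)
mI = np.arange(-2.5, 3.0, 1.0)
MS, MI = np.meshgrid(mS, mI)
B_res = (F_MW - D*(2*MS - 1) - A*MI)/GAMMA
w = 35/4 + MS - MS**2               # S(S+1) - mS(mS - 1)
print(f"{B_res.size} lines from {B_res.min():.3f} to {B_res.max():.3f} T")

# stick spectrum convolved with the Gaussian kernel
B = np.linspace(0.45, 0.70, 2001)
sigma = FWHM/(2*np.sqrt(2*np.log(2)))
signal = sum(wi*np.exp(-(B - bi)**2/(2*sigma**2))
             for bi, wi in zip(B_res.ravel(), w.ravel()))
\end{verbatim}
The 30 resonance fields cluster around $hf/(g\mu_{\rm B})\approx0.57$\,T, and the $\pm4D$ fine-structure spread of about 40\,mT sets the overall width of the simulated line, which is why its shape is so sensitive to the D parameter.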
\begin{figure}
\centering
\includegraphics{delaye1_x.pdf}
\caption{Normalized time-resolved ODMR signal for the E1HH1 and E1HH3 excitonic states for samples (a) UW1500 and (b) UW1499 at the resonant frequency f\,=\,10.15~GHz. The observed ODMR amplitude relaxation time is the same regardless of the excitonic state used for probing.}
\label{delaye1}
\end{figure}
\begin{figure}
\centering
\includegraphics{delaye2_x.pdf}
\caption{Time-resolved ODMR signal amplitude for samples UW1499 and UW1500 extracted from E1HH1 state for different resonance frequencies: (a) 10.15\,GHz and (b) 41.9\,GHz.}
\label{delaye2}
\end{figure}
\subsection{Time-resolved ODMR measurements}
We have performed time-resolved ODMR measurements on each of the presented samples for all visible excitonic complexes. For all the samples, the SLR time measured on the higher excitonic states has the same value as that obtained on the ground state. Examples of the temporal profiles are shown in~Fig.~\ref{delaye1}(a,b).
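A minimal sketch of how an SLR time can be extracted from such a profile, assuming a single-exponential decay and using a hypothetical profile rather than the measured data, is given below.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def decay(t, A0, tau):
    # single-exponential relaxation of the ODMR amplitude
    return A0*np.exp(-t/tau)

# hypothetical temporal profile: delay in ms, normalized ODMR amplitude
t = np.array([0.0, 0.2, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0])
a = np.array([1.0, 0.83, 0.63, 0.40, 0.25, 0.16, 0.06, 0.03])

(A0, tau), _ = curve_fit(decay, t, a, p0=[1.0, 1.0])
print(f"SLR time tau = {tau:.2f} ms")
\end{verbatim}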
As expected, the SLR time depends on the magnetic field (resonant MW frequency), see~Fig.~\ref{delaye2}(a,b). We observe a significant shortening of the SLR time for higher magnetic fields, where the differences between samples become imperceptible.
The analysis of measured SLR times as a function of magnetic field for different samples reveals non-trivial effects. In~figure~\ref{delaye2}(a), temporal profiles measured for 10.15\,GHz resonance for samples UW1500 and UW1499 are compared. This frequency corresponds to the paramagnetic resonance in a magnetic field equal to approximately 0.36\,T. In this case, we observe a significant difference in the SLR times. The situation is noticeably different when it comes to measurements in higher magnetic fields -- temporal profiles obtained for 41.9\,GHz resonance (about 1.5\,T) are shown in~Fig.~\ref{delaye2}(b). Here, the SLR times are the same for both samples.
A summary of the measured SLR times for all the samples and magnetic fields is presented in~figure~\ref{gorycogram}. For all analyzed samples, a maximum value of the SLR time is observed around 0.6\,T. At higher magnetic fields the SLR times decrease and follow the same curve for all the samples. However, the measurements at lower magnetic fields show notable differences between the QWs. In particular, the samples with Mn$^{2+}$ ions placed in the center of the QW (UW1500 and UW1501) have significantly shorter SLR times than the samples with ions placed at the QW interfaces. This finding suggests an essential role of the Mn$^{2+}$--HH1 probability density overlap in the relaxation mechanisms at lower magnetic fields.
\section{Discussion}
\begin{figure}
\centering
\includegraphics{gorycogram.pdf}
\caption{SLR time versus the magnetic field. Black arrows mark the magnetic fields for which temporal profiles are presented in~Fig.~\ref{delaye2}(a) and (b); the shadowed area in the background is a guide for the eye. As the magnetic field increases, the variation of the SLR times between the samples decreases; (inset) mean overlap of the HH1 probability density with the manganese volume vs. mean SLR time for magnetic fields below 0.6\,T.}
\label{gorycogram}
\end{figure}
The observed change of behavior of the SLR time in the magnetic field suggests two regimes with different leading mechanisms causing SLR relaxation. In the high-field regime ($B>1\,$T), we find that the relaxation time is shorter upon increasing the magnetic field. Indeed, the increase of the SLR rate of Mn$^{2+}$ with the magnetic field is known for manganese ions in different systems: bulk crystals, quantum wells, and quantum dots \cite{Strutz_1992_PRL, Scherbakov_2001_PRB,Goryca_2015_PRB,strutz1993principles}.
In this regime, spin relaxation is caused mainly by phonon-related phenomena, and its rate is determined by the increasing density of states of phonons at the energy corresponding to the Zeeman splitting.
Consistently, we observed the same SLR times for all studied samples for the magnetic fields higher than 1\,T.
However, for lower magnetic fields other mechanisms take the leading role. The relaxation time is shortened by carrier-induced spin-flip scattering, as demonstrated by Scherbakov~et~al.~\cite{Scherbakov_2001_PRB, Konig_2000_PRB} for the electron gas. Our samples (no intentional doping) are naturally p-type. Therefore, in the presented case we observe an effect analogous to that in intentionally n-doped samples, but here the relaxation acceleration is caused by the hole gas.
For magnetic fields below 0.6\,T, we observe differences in the SLR times between the studied samples. Those differences are correlated with the mean overlap between the probability density of the HH1 state and the manganese-ion volume, see the inset of~Fig.~\ref{gorycogram}. The observed differences between the studied samples can be explained in terms of the carrier-gas density and ion-carrier interactions. The magnetic ions incorporated in the center of the UW1500 sample have a higher overlap with the carrier-gas wavefunctions than the ions on the sides in UW1499 or UW1498. This possibly makes the sample UW1500 more sensitive to the effects related to free carriers than the other samples. Moreover, the asymmetry between samples UW1499 and UW1498 discussed in the previous part can also affect the SLR time in the way we observe it. Namely, the relaxation for UW1498 is slower, as the overlap of the holes and ions for this sample is lower due to the asymmetry caused by the electric field.
\section{Conclusions}
Our work shows that the ODMR technique is a powerful tool for studying the distribution of the strain in the nanostructures along the growth axis. It also reveals interactions between magnetic ions and charge carriers. Both strain and interactions with carriers can affect SLR, which may be essential for future applications. With the ODMR technique, we performed measurements on the series of samples with Mn$^{2+}$ ions incorporated in the different regions of the QWs -- on the sides -- near the barrier layers, and in the center. In addition, we found that we can probe the different layers of the well along the growth axis using the excited excitonic states.
We do not see any indications that the deformation of the crystal lattice is changing along the growth axis. Therefore, we conclude that the growth of the QW layers is homogeneous.
However, the carriers play an essential role in the presented samples. Even if the hole density in the QWs seems to be low, the introduced electric field affects the shape of the confined wavefunctions and results in a consequent change of their overlap with the magnetic ions. We observe noteworthy differences between the SLR times across the series of samples when we perform measurements in weak magnetic fields. We conclude that the overlap between the magnetic ions and the hole gas is the main factor varying across this series of QWs. Effects related to free charges are more pronounced for low magnetic fields, which is in agreement with previous studies \cite{Scherbakov_2001_PRB, Konig_2000_PRB}.
This causes the variation of the SLR time of the ions within the asymmetric QW and is a promising feature that can be utilized in future applications. We conclude that it is possible to tune the SLR time by changing the electric field.
\section{Acknowledgements}
This work was supported by the Polish National Science Centre under decisions DEC-2016/23/B/ST3/03437, DEC-2015/18/E/ST3/00559, and DEC-2020/38/E/ST3/00364. One of us (P.K.) has been supported by the ATOMOPTO project (TEAM programme of the Foundation for Polish Science, co-financed by the EU within the ERDF).
\section{Introduction}
In the last decades, research on solitary waves has become one of the rapidly developing areas of mathematics and physics. Solitons were shown to exist in a wide variety of processes in optics, atomic physics, plasma physics, biophysics, fluid and solid mechanics, etc. \cite{Scott2005,Kharif2009,Boechler2010,Kevrekidis2008,Peyrard2004}.
In general a soliton is a localized wave that is formed due to the balance of nonlinear and dispersive effects and propagates with constant shape and speed.
Investigation of strain solitons propagating in the bulk of a solid body (referred to as an elastic waveguide or simply a waveguide throughout this paper) is important, on the one hand, in view of the potential unexpected generation of such waves in structural elements, which can cause their damage, and, on the other hand, it is promising for applications in nondestructive testing.
Strictly speaking, a strain soliton can be formed only in an absolutely elastic solid where mechanical wave energy is conserved, while real solid materials, especially polymers, exhibit viscous properties which lead to energy dissipation and attenuation of strain waves. However, under certain conditions (e.g. when the viscosity is small) strain solitary waves can demonstrate soliton-like behavior, which makes them an important object of study.
A bulk strain solitary wave is a long, trough-shaped elastic wave which is formed under favorable conditions from a short and powerful impact, usually in the form of a shock wave.
Unlike linear or shock waves, this wave does not exhibit significant decay in homogeneous waveguides and can propagate for much longer distances almost without shape transformation.
Although major characteristics and behavior of formed strain solitary waves in different waveguides were studied sufficiently well (see e.g. \cite{TP2008,jap2010,jap2012,APL2014} and references therein), the details of shock wave transformation into a long solitary wave are not completely clear yet. This is partially due to significant difference in the initial and final wave shapes and amplitudes, their rapid change and a number of physical processes involved that complicates the experimental investigation of the solitary wave formation process.
Optical techniques, in particular holographic interferometry and digital holography, proved to be quite suitable and informative in the analysis of long bulk elastic waves in transparent waveguides (e.g. \cite{TP2008,WaMot2017,ApplSci2022}). While classical holographic interferometry with recording on high-resolution photographic materials required the utilization of pulsed lasers and optical reconstruction of holograms, recent progress in fast-response global-shutter-assisted digital cameras has allowed for simplification of the experimental procedure and the usage of continuous-wave (CW) lasers for the recording of wave patterns, with computer-based hologram reconstruction. In turn, the processing of recorded digital holograms and data retrieval required the development of a number of specific numerical algorithms, in particular those of phase unwrapping \cite{lee2014single, goldstein1988satellite, wang2019one} and filtering \cite{uzan2013speckle, shevkunov2020hyperspectral, katkovnik2015sparse}. On the other hand, such short powerful disturbances as shock waves are scarcely resolvable by holographic techniques, which allow mostly for visualization of wave patterns \cite{Takayama2019}.
Nonlinear elastic properties of the waveguide material play a key role in solitary wave formation and evolution. However, viscoelastic properties turn out to be quite essential as well. In this paper we demonstrate how the initial shock wave transforms into the long solitary wave in the waveguide, taking into account the viscoelastic effects, which suppress the formation of a number of short waves.
The paper is organized as follows. First we describe the experimental procedure applied for generation and visualization of the process of solitary wave formation in a polymer waveguide.
Then we present experimental results showing evolution of the wave pattern while travelling along the waveguide. In further sections we describe the developed three-dimensional model of nonlinear viscoelastic solid and its simplification for a one-dimensional model. Numerical modeling of the process of shock wave evolution into the solitary wave is presented in the next section. And we finalize the paper with discussion of the results obtained and conclusions.
\section{Experimental procedure}
We have previously shown that strain solitary waves can be formed in waveguides made of glassy polymers (polystyrene, polymethylmethacrylate and polycarbonate) from an initial shock wave produced in water in the vicinity of the waveguide input (see \cite{TP2008,TPL2011} and references therein).
In current experiments the shock wave was produced by explosive evaporation of the metallic foil by a pulsed Nd:YAG laser Spitlight 600 (Innolas, Germany) with pulse duration of 7 ns and pulse energy of 0.4 J at the wavelength of 532 nm.
The shock wave evolution into the strain solitary wave was analyzed in a bar-shaped waveguide, $10\times10$ mm in cross-section, 600 mm long made of transparent polystyrene (PS).
Observation and monitoring of wave patterns in the bulk of the waveguide was performed by digital holographic technique based on registration of 2D distributions of phase shift of the recording wave front induced by variations of waveguide thickness and density formed by the waves under study.
Holograms were recorded by a fast electro-optical camera Nanogate 24 (Nanoscan, Russia) with adjustable gain and exposure down to 10 ns. Synchronization of the laser pulse and camera shutter was provided by AM300 Dual Arbitrary Generator (Rohde$\&$Schwarz).
The short exposure time provided by the recording camera allowed for sufficiently fast registration of interference patterns, so that the displacement of the elastic wave propagating in the waveguide during the exposure time did not exceed the effective pixel size with regard to the reduction factor of the optical system.
The exposure time in our experiments was within the range of 70 -- 120 nanoseconds and was slightly varied to obtain high-quality holograms. The optical schematic of the experimental setup
is shown in Fig. \ref{FigSetup}. Radiation from a probe semiconductor CW laser emitting at 532 nm was spatially filtered and divided into reference and object beams; the object beam was then expanded by a telescopic system to a diameter of 65 mm. A translation stage allowed us to shift the cuvette with the waveguide in the direction perpendicular to the object beam and to record wave patterns in different areas of the waveguide as shown in the inset in Fig. \ref{FigSetup}. For the sake of direct comparison the sequential shifted fields of view overlapped for about 15 mm, so that areas in Fig. \ref{FigSetup} are about 50 mm long.
The typical width of interference fringes in the hologram was adjusted to cover 8 -- 12 pixels of the camera matrix, that provided optimal performance of the reconstruction procedure of recorded holograms using the least squares algorithm \cite{liebling2004complex}.
\begin{figure}[h!]
\centering
\includegraphics[width=15cm]{SolitonSetupWaveMotion_cr.png}
\caption{Optical schematic of the setup used for strain wave detection by means of digital holography. L are lenses, M are mirrors, BS are beamsplitters. Schematic of waveguide areas is shown in the inset.}
\label{FigSetup}
\end{figure}
To compensate for distortions caused by various factors, including those due to the non-ideal shape of the waveguide, the phase shift distribution introduced by the elastic waves into the probe wave front was determined as the difference between the reconstructed phase images of the waveguide in the presence and absence of the strain wave.
We note that the sidewalls of the polystyrene bar were slightly distorted and the probe wavefront that passed through it had a somewhat cylindrical shape, that is clearly seen in the examples of digital holograms shown in Fig. \ref{Figstrainwaverec}.
Since strain waves of high amplitude can introduce a phase shift noticeably exceeding 2$\pi$ radians, the obtained phase distribution corresponding to the strain wave was unwrapped \cite{ghiglia1998two} (see Fig. \ref{Figstrainwaverec}). If a strain wave is uniform along the Y-axis, its one-dimensional profile can be found by data averaging along the Y-axis. Note that in some cases it is impossible to reconstruct phase shift distributions in the vicinity of the waveguide edges because of their curvature.
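A minimal sketch of the unwrapping and averaging steps on a synthetic, noiseless phase map is given below; real holograms with speckle noise and shadowed edges require the robust algorithms cited above, so the simple \texttt{numpy} unwrapping is used here only for illustration.
\begin{verbatim}
import numpy as np

x = np.linspace(0, 50, 500)                      # mm, along the bar
y = np.linspace(0, 10, 100)                      # mm, across the bar
X, _ = np.meshgrid(x, y)
true_phase = 12.0*np.exp(-((X - 25.0)/8.0)**2)   # smooth wave, peak > 2*pi
wrapped = np.angle(np.exp(1j*true_phase))        # wrapped to (-pi, pi]

unwrapped = np.unwrap(np.unwrap(wrapped, axis=1), axis=0)
profile = unwrapped.mean(axis=0)                 # Y-averaged 1D contour
print(f"peak phase {profile.max():.1f} rad (true value 12.0)")
\end{verbatim}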
The solitary wave parameters can be obtained from phase distributions: the wave width is determined as the distance between zero levels of phase shift and wave amplitude can be calculated as \cite{pump-probe2018}:
\begin{equation}
A = \frac{\phi\lambda_{\rm rec}}{2{\pi}h[(n-1)\nu+C(1-2\nu)]},
\end{equation}
where $\phi$ is the maximal phase shift, $\lambda_{\rm rec}$ is the recording light wavelength, $h$ is the bar thickness, $n$ and $\nu$ are, respectively, the refractive index and Poisson's coefficient of the bar material. The dimensionless coefficient C describes the dependence between the local density variations and refractive index: $\Delta n = C \Delta \rho/\rho$. According to the Lorenz-Lorentz equation C can be estimated for polystyrene as $C \approx n - 1$ with the precision of about
8\% \cite{vedam1976variation}.
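For orientation, the amplitude formula above can be evaluated directly. The sketch below uses the literature refractive index of polystyrene, $n\approx1.59$, the Poisson's ratio $\nu=0.38$ adopted later in the modelling, and the peak phase shift measured in area II (Table~\ref{Tab1}).
\begin{verbatim}
import numpy as np

def strain_amplitude(phi, h=10e-3, lam_rec=532e-9, n=1.59, nu=0.38):
    # amplitude formula with the Lorenz-Lorentz estimate C ~ n - 1
    C = n - 1.0
    return phi*lam_rec/(2*np.pi*h*((n - 1)*nu + C*(1 - 2*nu)))

print(f"A = {strain_amplitude(13.56):.2e}")   # peak of area II, about 3e-4
\end{verbatim}
The resulting strain amplitude of about $3\times10^{-4}$ is typical of bulk solitary waves in glassy polymers.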
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{Strain_wave_rec2.pdf}
\caption{Procedure of strain wave reconstruction from recorded digital holograms.}
\label{Figstrainwaverec}
\end{figure}
Note that for recording of long complex wave patterns the available field of view may be insufficient. In these cases a synthetic large field of view is required for reliable observation of wave patterns. Such a synthetic field of view can be obtained by stitching together several wave profiles collected by reconstructing and processing of a set of phase images of the wave pattern at sequential time moments (see \cite{pump-probe2018} for details).
\section{Experimental results}
We have previously shown \cite{TP2008} that a shock wave in water produced by laser evaporation of the metallic foil consists of a short ($\sim$ 0.1 \textmu m) powerful compression peak followed by a relatively long ($\sim$ 1 mm) rarefaction area. When entering the waveguide this wave produces a complex disturbance and relatively quickly loses its energy.
However within the first couple of centimeters of the waveguide the initial shock wave is still too narrow to be resolved spatially and too powerful to be reconstructed in terms of phase shift of the recording wavefront (see \cite{TP2008,GKS2019} for details).
Figure \ref{FigEvolutionDH} presents a set of phase shift distributions characterizing the evolution of the shock wave in the first two neighboring areas from the bar input (areas I and II as shown in Fig.~\ref{FigSetup}). In each area phase distributions were recorded at different delays between the laser pulse and the camera shutter, allowing for monitoring of the wave evolution within the field of view. As can be seen in Fig.~\ref{FigEvolutionDH}, at the beginning of the waveguide, in the first area, the wave pattern is irregular, representing a remainder of the initial shock wave followed by a long disturbance propagating with lower velocity. The wave pattern is nonuniform along the Y axis, which does not allow for obtaining a reliable averaged contour. In the second area the wave pattern becomes more regular. The remainder of the shock wave outran the general disturbance; it attenuated noticeably but is still visible. The long disturbance became more uniform, however it still has some relatively sharp peaks on its fronts. The plot of the Y-averaged phase shift in this disturbance is shown in Fig. \ref{contours}a.
\begin{figure}[h!]
\centering
\includegraphics[width=15.5cm]{StrainWaveEvolution.png}
\caption{Phase distributions demonstrating wave patterns in the first (left) and second (right) areas of the polystyrene bar at different delays between the moments of shock wave generation and strain wave detection.}
\label{FigEvolutionDH}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=15.5cm]{Areas34.png}
\caption{Phase distributions demonstrating wave patterns in the third (left) and fourth (right) areas of the polystyrene bar at different delays between the moments of shock wave generation and strain wave detection.}
\label{IV-V-phase}
\end{figure}
Figure \ref{IV-V-phase} demonstrates sets of phase shift distributions characterizing wave patterns in the III-rd (100 -- 165 mm from the bar input) and IV-th (150 -- 215 mm) areas of the bar. The corresponding Y-averaged phase profiles are plotted in Fig. \ref{contours}.
As can be seen in Fig. \ref{IV-V-phase}, the leading shock wave attenuated completely, and the recorded phase shift distributions became smooth, with no abrupt changes, and sufficiently uniform along the Y axis, which allowed for estimation of the following wave parameters: amplitude, slopes of the leading and trailing edges, full width at half maximum $L_{\rm FWHM}$, and width at the $1/e$ level $L_{\rm exp}$. The parameters of the waves recorded in areas I, II, III and IV are summarized in Table \ref{Tab1}.
As can be seen from Table \ref{Tab1}, while travelling along the waveguide the wave attenuated: its amplitude decreased and its width increased. At the same time, the leading edge showed a tendency to become flatter relative to the trailing edge, since the ratio $\alpha_{\rm front}/\alpha_{\rm rear}$ gradually decreased with the propagation distance.
This behavior is confirmed by our previous experiments on monitoring of solitary wave evolution at longer distances, see \cite{TP2008}, where we demonstrated that the solitary wave still exists at distances of several tens of centimeters and that its attenuation coefficient of about 0.005 cm$^{-1}$ in polystyrene suggests attenuation by a factor of $e$ at a distance of about 2 m.
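The parameters listed in Table~\ref{Tab1} can be extracted from the Y-averaged contours with a few lines of code. The sketch below estimates the edge slopes from the extreme gradients, which is adequate only for clean single-peak profiles; it is tested on a symmetric $\mathrm{sech}^2$ contour, for which both slopes coincide.
\begin{verbatim}
import numpy as np

def wave_parameters(x, phi):
    i0 = np.argmax(phi)
    def width(level):                      # full width at a given level
        idx = np.where(phi >= level*phi[i0])[0]
        return x[idx[-1]] - x[idx[0]]
    dphi = np.gradient(phi, x)
    return {"phi_max": phi[i0],
            "L_FWHM": width(0.5),
            "L_exp": width(1.0/np.e),
            "alpha_front": -dphi[i0:].min(),  # leading edge (wave moves +x)
            "alpha_rear": dphi[:i0].max()}    # trailing edge

x = np.linspace(0, 60, 1200)               # mm
phi = 10.2/np.cosh((x - 30.0)/9.0)**2      # test contour, rad
print({k: round(v, 2) for k, v in wave_parameters(x, phi).items()})
\end{verbatim}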
\begin{figure}[h!]
\centering
\includegraphics[width=16cm]{SolitonsPlots+.png}
\caption{Y-averaged contours of the strain waves recorded in three areas of the polystyrene bar: (a) II, (b) III, (c) IV. Waves move from left to right. The coordinate X originates from the beginning of each measurement area.}
\label{contours}
\end{figure}
\begin{table}
\centering
\caption{\label{Tab1}Parameters of strain waves recorded in the I-st, II-nd, III-rd and IV-th areas of the polystyrene bar.}
\begin{tabular}{c|ccccc}
Area & \textbf{$\phi_{\rm max}$} & \textbf{$\alpha_{\rm front}$} & \textbf{$\alpha_{\rm rear}$} & \textbf{$L_{\rm FWHM}$} & \textbf{$L_{\rm exp}$}\\
& (rad) & (rad/mm) & (rad/mm) & (mm) & (mm)\\
\hline
I & 17.74 & 0.82 & & & \\
II & 13.56$\pm$0.6 & 0.59$\pm$0.03 & 1.01$\pm$0.06 & 21.5$\pm$0.7 & \\
III & 10.24$\pm$0.4 & 0.37$\pm$0.02 & 0.73$\pm$0.04 & 24.9$\pm$0.5 & 29.2$\pm$0.7 \\
IV & 9.82$\pm$0.3 & 0.35$\pm$0.02 & 0.72$\pm$0.04 & 28.5$\pm$0.5 & 34.0$\pm$0.7 \\
\end{tabular}
\end{table}
\section{Three-dimensional model of nonlinear viscoelastic body}
The experiments showed the initial steps of solitary wave formation, i.e. the decay of the short strain wave generated by the shock wave in the water cuvette and the formation of a long strain wave behind the short one. This process requires a theoretical explanation, which we start in this section by describing the full three-dimensional model of the bar. The polymeric material which the bar is made of exhibits viscoelastic and nonlinear properties, which we include in the model.
In our work we consider solid bodies which undergo small but finite strains. These strains are described by the Green-Lagrange finite strain tensor $\mathcal{E}$ with the nonlinear component which we refer to as geometric nonlinearity:
\begin{equation}\label{eq:strain}
\mathcal{E}_{\alpha\beta} = \frac12 \left(\frac{\partial U_\alpha}{\partial r_\beta} + \frac{\partial U_\beta}{\partial r_\alpha} + \frac{\partial U_\gamma}{\partial r_\beta} \frac{\partial U_\gamma}{\partial r_\alpha} \right).
\end{equation}
Here, the Einstein notation is used for the Greek indices, $U_\alpha$ is the displacement along the axis $\alpha$ and $r$ denotes material coordinate vector (Lagrangian approach).
The elastic response of a body is defined by the potential energy density $\Pi$, which we take in the form of a truncated power series in strain components with all possible quadratic terms corresponding to the mechanically linear material and all cubic terms that are the next order corrections and which we refer to as physical nonlinearity:
\begin{equation}\label{eq:pot_en}
\Pi = {\frac{\lambda+2\mu}{2}I_1^2(\mathcal{E}) - 2\mu I_2(\mathcal{E}) + \frac{l + 2m}{3}I_1^3(\mathcal{E}) - 2m I_1(\mathcal{E}) I_2(\mathcal{E}) + n I_3(\mathcal{E})}.
\end{equation}
Here $\lambda$ and $\mu$ are the Lam\'e (linear) elastic moduli, $l$, $m$ and $n$ are the Murnaghan (nonlinear) elastic moduli and $I_1(\mathcal{E}) = \operatorname{tr}\mathcal{E}$, $I_2(\mathcal{E}) = [(\operatorname{tr}\mathcal{E})^2 - \operatorname{tr}\mathcal{E}^2]/2$, and $I_3(\mathcal{E}) = \det \mathcal{E}$ denote invariants of the strain tensor~\cite{Samsonov2001}. This expression for the potential energy density has the most general form for the small but finite strains in an isotropic material. This model should not be confused with hyperelastic models (e.g., Neo-Hookean, Mooney-Rivlin, etc.) which are designed to describe large strains in rubber-like materials.
The elastodynamics of a body is defined by the stationary point of the action functional which is the time integral of the Lagrangian $L$:
\begin{equation}\label{eq:lagr}
L = {\int_\Omega \mathcal{L} dV - \int_{\partial\Omega}U_\alpha F_\alpha dS,}
\end{equation}
where $F$ is the external boundary stress, $\Omega$ is the volume of the body, $\mathcal{L} = K - \Pi$ is the Lagrangian volume density, $K = \frac12\rho \dot U_\alpha \dot U_\alpha$ is the kinetic energy density and $\rho$ is the material density. The unknown displacements $U_\alpha$ have to satisfy the Euler-Lagrange equations for the action functional to be at its stationary point
\begin{equation}
\frac{\partial}{\partial t} \frac{\partial\mathcal{L}}{\partial\dot U_\alpha} + \frac{\partial}{\partial r_\beta} \frac{\partial\mathcal{L}}{\partial(\partial_\beta U_\alpha)} - \frac{\partial\mathcal{L}}{\partial U_{\alpha}} = 0,
\end{equation}
where dot denotes the time derivative and $\partial_\beta$ denotes the derivative with respect to $r_\beta$. These equations yield the equations of motion which have to be complemented by the boundary conditions as follows:
\begin{gather}
\label{eq:dyn}
\rho \ddot{U}_\alpha = {\frac{\partial P_{\alpha\beta}}{\partial r_\beta}, \quad r\in\Omega}, \\
\label{eq:bc}
P_{\alpha\beta} n_\beta = F_\alpha, \quad r\in\partial\Omega,
\end{gather}
where $n$ is the surface normal vector and the first Piola-Kirchhoff stress tensor $P_{\alpha\beta}$ is defined as
\begin{equation}
P_{\alpha\beta} = \frac{\partial\Pi}{\partial(\partial_{\beta}U_\alpha)} = \left(\delta_{\alpha\gamma} + \partial_{\gamma} U_\alpha\right) \left(S^{\rm lin}_{\gamma\beta} + S^{\rm nl}_{\gamma\beta}\right). \label{eq:P}
\end{equation}
Here linear and nonlinear second Piola-Kirchhoff stress tensors are:
\begin{gather}
S^{\rm lin}_{\alpha\beta} = \lambda \mathcal{E}_{\gamma\gamma}\delta_{\alpha\beta} + 2\mu \mathcal{E}_{\alpha\beta}, \label{eq:Slin} \\
S^{\rm nl}_{\alpha\beta} = l \bigl(\mathcal{E}_{\gamma\gamma}\bigr)^2\delta_{\alpha\beta} + (2m-n) \bigl(\mathcal{E}_{\gamma\gamma} \mathcal{E}_{\alpha\beta} - I_2(\mathcal{E})\delta_{\alpha\beta}\bigr) + n \mathcal{E}_{\alpha\gamma}\mathcal{E}_{\gamma\beta}. \label{eq:Snl}
\end{gather}
In the general case, stiffness is a retarded integral operator that expresses the influence of the material history on its current state, and the kernel of this operator determines the viscoelastic properties of the material \cite{HowellMechanics}. The main viscoelastic effects can be taken into account in the linear stress tensor $\smash{S^{\rm lin}_{\alpha\beta}}$ using retarded integral operators $\hat{\lambda}$ and $\hat{\mu}$:
\begin{equation}
S^{\rm lin}_{\alpha\beta} = \hat{\lambda} \mathcal{E}_{\gamma\gamma}\delta_{\alpha\beta} + 2\hat{\mu} \mathcal{E}_{\alpha\beta}. \label{eq:Slin_ve}
\end{equation}
We could also include the viscous effects in the nonlinear stress tensor $\smash{S^{\rm nl}_{\alpha\beta}}$, but it leads to unnecessary complication of the model and goes beyond the scope of the current work.
In our work we use the generalized Maxwell model with many characteristic relaxation times, which allows us to cover a wide range of frequencies where glassy polymers exhibit viscous properties. In this model operators $\hat{\lambda}$ and $\hat{\mu}$ act on an arbitrary function $f(t)$ at time $t$ in the following way:
\begin{subequations}
\label{eq:maxwell}
\begin{gather}
\hat{\lambda} f(t) = \lambda f(t) + \hat\xi \dot{f}(t) = \lambda f(t) + \sum_{s=1}^{N} \xi_s \int_{-\infty}^{t} e^{-\frac{t-t'}{\tau_s}} \dot{f}(t') dt', \label{eq:lambda_op}\\
\hat{\mu} f(t) = \mu f(t) + \hat\eta \dot{f}(t) = \mu f(t) + \sum_{s=1}^{N} \eta_s \int_{-\infty}^{t} e^{-\frac{t-t'}{\tau_s}} \dot{f}(t') dt', \label{eq:mu_op}
\end{gather}
\end{subequations}
where $N$ is the number of relaxation times $\tau_s$ with dilatational viscosity moduli $\xi_s$ and shear viscosity moduli $\eta_s$ (see~\cite{HowellMechanics} for details).
The system of equations (\ref{eq:dyn}) -- (\ref{eq:P}), (\ref{eq:Snl}), (\ref{eq:Slin_ve}) defines the dynamics of nonlinear viscoelastic body. For the generalized Maxwell model~\eqref{eq:maxwell}, the integro-differential equation (\ref{eq:Slin_ve}) can be written as a system of $N$ differential equations using the retarded strain rates $q^{(s)}_{\alpha\beta}$ as follows:
\begin{gather}
q_{\alpha\beta}^{(s)} + \tau_s \dot{q}_{\alpha\beta}^{(s)} = \mathcal{\dot E}_{\alpha\beta}, \quad s=1,\dots,N,\\
S^{\rm lin}_{\alpha\beta} = \lambda \mathcal{E}_{\gamma\gamma} \delta_{\alpha\beta} + 2\mu \mathcal{E}_{\alpha\beta} + \sum_{s=1}^N \tau_s \left[\xi_s q^{(s)}_{\gamma\gamma} \delta_{\alpha\beta} + 2\eta_s q_{\alpha\beta}^{(s)}\right]. \label{eq:damping}
\end{gather}
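The practical value of this auxiliary-variable form is that a time-stepping code needs only one local ODE per relaxation time instead of storing the full strain history. A scalar (uniaxial-strain) sketch with the material parameters used below in Sec.~\ref{sec:num} illustrates the resulting stress relaxation; the explicit Euler update and the ramp-and-hold strain history are choices made only for brevity.
\begin{verbatim}
import numpy as np

lam, mu, xi, eta = 4.88, 1.54, 0.09, 0.028   # GPa, cf. Sec. Numerical modelling
M, V = lam + 2*mu, xi + 2*eta                # P-wave modulus, viscous analogue
tau = np.logspace(-2, 2, 9)                  # relaxation times, us

dt = 1e-3                                    # us; dt << min(tau) for stability
t = np.arange(0.0, 50.0, dt)
eps = 1e-4*np.minimum(t/0.5, 1.0)            # strain ramped over 0.5 us, held
deps = np.gradient(eps, dt)

q = np.zeros_like(tau)                       # retarded strain rates q_s
stress = np.empty_like(t)
for i in range(t.size):
    q += dt*(deps[i] - q)/tau                # q_s + tau_s dq_s/dt = d(eps)/dt
    stress[i] = M*eps[i] + np.dot(V*tau, q)  # tau_s factor from Eq. (18)
print(f"peak stress {1e3*stress.max():.3f} MPa, "
      f"stress at 50 us {1e3*stress[-1]:.3f} MPa (relaxed: {1e3*M*1e-4:.3f})")
\end{verbatim}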
The presented theory is rather general and the equations shown here are probably impossible to solve exactly even for the bodies of simple geometry. However, these equations can either be simplified to account only for a particular wave processes, or they can be simulated numerically. We chose both paths which are described in the following sections.
\section{One-dimensional model for plane waves travelling along the bar}
We consider a symmetric bar of square cross-section of the thickness $h$ shown in Fig.~\ref{fig:dispersion}a. To derive a simplified model we apply several assumptions limiting the motion to plane waves propagating along the bar axis only. First, we assume that the bar cross-section remains flat and normal to the bar axis which makes the longitudinal displacement independent of the transverse coordinates $y$ and $z$:
\begin{equation}
U_x(x,y,z,t) = u(x, t). \label{plane_sec}
\end{equation}
Second, we assume that the bar cross-section is uniformly deformed in $yz$ plane:
\begin{gather}
U_y(x,y,z,t) = yv(x,t), \label{lin_transverse1}\\
U_z(x,y,z,t) = zv(x,t), \label{lin_transverse2}
\end{gather}
where coordinates $y$ and $z$ originate from the center of mass of the cross-section. It can be shown that these assumptions are asymptotically satisfied both for long and short waves travelling along the bar axis. Here, we apply these assumptions beyond their proven range of applicability to derive a one-dimensional model for any wavelength. However, as we will show later, it turns out that the resulting simple model describes well the process of formation of the main long wave from the short one.
For assumptions~\eqref{plane_sec} --~\eqref{lin_transverse2} and non-viscous elastic moduli $\lambda$ and $\mu$, one can obtain Lagrange equations using the Lagrangian density $\cal{L}$ integrated over $y$ and $z$, i.e. over the bar cross-section. Substituting the viscoelastic moduli $\hat\lambda$ and $\hat\mu$ into the resulting equations gives
\begin{subequations}
\label{eq:1d}
\begin{gather}
\rho\ddot{u} = \left(\hat\lambda + 2\hat\mu\right) \pd{xx}u + 2 \hat \lambda \pd{x}v + \pd{x}f_{\rm nl}, \\
\rho\ddot{v} = \hat \mu \pd{xx}v - \frac{2}{R_*^2}\left[ \hat\lambda \pd{x}u + 2\left(\hat\lambda+\hat\mu\right)v + g_{\rm nl}\right],
\end{gather}
\end{subequations}
where $R_*^2 = \langle y^2 + z^2\rangle$ is the mean radius squared taken over the cross-section. For the square bar under consideration $R_*^2=h^2/6$. The nonlinear terms $f_{\rm nl}$ and $g_{\rm nl}$ are given in Appendix~\ref{app}. For a finite bar these equations have to be complemented by the boundary conditions. If the bar has a flat edge at $x=0$ subjected to uniform pressure $F$ along $x$ direction, the corresponding boundary conditions have the following form
\begin{subequations}
\label{eq:1d_bc}
\begin{align}
&(\hat\lambda+2\hat\mu)\pd{x}u + 2\hat\lambda v + f_{\rm nl} = -F,\\
&\pd{x}v = 0.
\end{align}
\end{subequations}
Equations~\eqref{eq:1d} constitute the coupled one-dimensional system of integro-differential equations ($\hat{\lambda}$ and $\hat{\mu}$ are the retarded integral operators) describing plane strain waves in a viscoelastic bar. This system is still hard to solve exactly, however the numerical simulation of it is way faster than for the full three-dimensional model. For the analytical study, different asymptotical regimes will be considered.
We start the analysis of the derived model~\eqref{eq:1d} in the linear regime, when all nonlinear terms can be neglected. The important special case of this regime is when the viscosity is also neglected and $\hat\lambda = \lambda$, $\hat\mu=\mu$. In the short-wave limit (wavelength is much smaller than $R_*$) one obtains two solutions of Eqs.~\eqref{eq:1d}: short longitudinal waves (P-waves) with the velocity $c_p = \sqrt{(\lambda+2\mu)/\rho}$ and short shear waves (S-waves) with the velocity $c_s = \sqrt{\mu/\rho}$. In the opposite limit of long waves one obtains two other solutions: long longitudinal waves with the velocity $c = \sqrt{E/\rho}$ defined by the Young modulus $E = \mu(3\lambda+2\mu)/(\lambda+\mu)$ and breathing mode with the frequency squared $\omega^2 = 4 (c_p^2 - c_s^2)/R_*^2$ and zero group velocity. General linear waves of an arbitrary wavelength are described by the equation
\begin{equation}
\pd{tt}u - c^2\pd{xx} u + \frac{R_*^2}{4\bigl(c_p^2 - c_s^2\bigr)} \Bigl(\pd{tttt}u - \bigl(c_p^2 + c_s^2\bigr)\pd{xxtt}u + c_p^2c_s^2\pd{xxxx}u\Bigr) = 0. \label{eq:elastic1d}
\end{equation}
This equation has two dispersion curves, which are presented in Fig.~\ref{fig:dispersion}b by dashed lines and have the above-mentioned asymptotics.
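For the plane-wave ansatz $u\sim e^{ikx-i\omega t}$, Eq.~\eqref{eq:elastic1d} reduces to a quadratic equation in $\omega^2$, so both branches can be evaluated in closed form. The short check below uses the material parameters of Sec.~\ref{sec:num} and verifies the limiting velocities quoted above.
\begin{verbatim}
import numpy as np

rho, lam, mu = 1060.0, 4.88e9, 1.54e9          # SI units
cp = np.sqrt((lam + 2*mu)/rho)                 # short P-wave speed
cs = np.sqrt(mu/rho)                           # short S-wave speed
c = np.sqrt(mu*(3*lam + 2*mu)/(lam + mu)/rho)  # long-wave (bar) speed
Rs = 0.01/np.sqrt(6.0)                         # R*^2 = h^2/6, h = 10 mm
a = Rs**2/(4*(cp**2 - cs**2))

# a w^4 - (1 + a (cp^2 + cs^2) k^2) w^2 + c^2 k^2 + a cp^2 cs^2 k^4 = 0
k = np.logspace(1, 5, 400)                     # 1/m
b = 1 + a*(cp**2 + cs**2)*k**2
disc = np.sqrt(b**2 - 4*a*(c**2*k**2 + a*cp**2*cs**2*k**4))
w_ac = np.sqrt((b - disc)/(2*a))               # acoustic branch
w_br = np.sqrt((b + disc)/(2*a))               # breathing branch

print(f"acoustic branch: {w_ac[0]/k[0]:.0f} -> {w_ac[-1]/k[-1]:.0f} m/s "
      f"(c = {c:.0f}, cs = {cs:.0f} m/s)")
print(f"breathing mode:  {w_br[0]:.3e} rad/s "
      f"(expected {2*np.sqrt(cp**2 - cs**2)/Rs:.3e})")
\end{verbatim}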
If viscosity is included but nonlinearity is still neglected the model describes linear elastic waves subjected to viscoelastic relaxation (damping). In this case one can introduce dynamic (complex) Lam\'e elastic moduli $\tilde{\lambda}(\omega) = \lambda'(\omega) + i \lambda''(\omega)$ and $\tilde{\mu}(\omega) = \mu'(\omega) + i \mu''(\omega)$ with real (storage) and imaginary (loss) components:
\begin{subequations}
\label{lam_mu_complex}
\begin{gather}
\lambda'(\omega) = \lambda + \sum_s \frac{\xi_s\omega^2\tau_s^2}{1+\omega^2\tau_s^2}, \quad \lambda''(\omega) = \sum_s \frac{\xi_s\omega\tau_s}{1+\omega^2\tau_s^2}, \\
\mu'(\omega) = \mu + \sum_s \frac{\eta_s\omega^2\tau_s^2}{1+\omega^2\tau_s^2}, \quad \mu''(\omega) = \sum_s \frac{\eta_s\omega\tau_s}{1+\omega^2\tau_s^2}.
\end{gather}
\end{subequations}
Using complex elastic moduli $\tilde{\lambda}(\omega)$ and $\tilde{\mu}(\omega)$ one can find linear waves of the form $u,v\sim e^{ikx-i\omega t}$. For a given real wavevector $k$, one obtains complex frequency $\omega=\omega'+i\omega''$, where $\omega''$ describes the damping. Figure~\ref{fig:dispersion}c shows the damping of waves in polystyrene bar with realistic parameters, which will be used in the numerical modelling (see the next section for details). One can see that short waves with large wavenumbers are attenuated much faster than long longitudinal waves (lower curve for small wavenumbers). The breathing mode (upper curve for small wavenumbers) has relatively large damping and almost zero group velocity (Fig.~\ref{fig:dispersion}b,c).
\begin{figure}
\centering
\includegraphics[width=15cm]{bar_dispersion_new.pdf}
\caption{a) Polystyrene bar with the thickness $h$. b) Dispersion of linear waves in the one-dimensional model of polystyrene bar given as a dependence of the scaled frequency $\omega' R_*/c$ as a function of scaled wavenumber $kR_*$. Solid lines show the dispersion for viscoelastic model (\ref{eq:1d}) with parameters described in Section~\ref{sec:num}. Dashed lines show the dispersion of non-viscous model (\ref{eq:elastic1d}), which depends only on the Poisson ratio $\nu=0.38$ for the given scale.
c) The corresponding damping of linear waves in the one-dimensional viscoelastic model.}
\label{fig:dispersion}
\end{figure}
The nonlinear case requires much more complicated analysis. However, the most important regime is the long-wave regime, when the solitary waves can be stabilized by the balance between the dispersion and the nonlinearity.
In this case, when viscosity is neglected, we obtain the nonlinear equation of the Boussinesq type:
\begin{equation}
\pd{tt}u - c^2\pd{xx} u + \frac{R_*^2}{4\bigl(c_p^2 - c_s^2\bigr)} \Bigl(\pd{tttt}u - \bigl(c_p^2 + c_s^2\bigr)\pd{xxtt}u + c_p^2c_s^2\pd{xxxx}u\Bigr) - \frac{\beta_s}{\rho}(\pd{x}u)(\pd{xx}u) = 0. \label{eq:nonlinear}
\end{equation}
Here $\beta_s=3E+2l(1-2\nu)^3 + 4m(1+\nu)^2(1-2\nu) + 6n\nu^2$ is the nonlinearity modulus~\cite{Samsonov2001}, where $\nu=\lambda/(2\lambda + 2\mu)$ is the Poisson's ratio. Equation (\ref{eq:nonlinear}) has one-parameter family of exact solitary wave (soliton) solutions:
\begin{equation}
\pd{x} u = A \operatorname{sech}^2 \frac{x - st}{L},
\end{equation}
where $s$, $A$, and $L$ are velocity, amplitude, and length of the soliton, respectively, with the following relation between them
\begin{equation}
A = \frac{3E}{\beta_s}\left(\frac{s^2}{c^2}-1\right), \quad L^2 = \frac{\bigl(c_p^2-s^2\bigr)\bigl(s^2 - c_s^2\bigr)}{\bigl(c_p^2-c_s^2\bigr)\bigl(s^2 - c^2\bigr)}R_*^2.
\end{equation}
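A numerical illustration of these relations is given below. The Murnaghan moduli of the studied polystyrene were not measured in this work, so the values of $l$, $m$, $n$ are literature-scale numbers for polystyrene, inserted only to show the expected orders of magnitude. Note that the resulting negative $\beta_s$ makes slightly supersonic solitons compressive ($A<0$), in line with the trough-shaped waves observed in the experiments.
\begin{verbatim}
import numpy as np

rho, lam, mu = 1060.0, 4.88e9, 1.54e9
l, m, n = -18.9e9, -13.3e9, -10.0e9        # assumed Murnaghan moduli, Pa

nu = lam/(2*(lam + mu))
E = mu*(3*lam + 2*mu)/(lam + mu)
c, cp, cs = np.sqrt(E/rho), np.sqrt((lam + 2*mu)/rho), np.sqrt(mu/rho)
beta = 3*E + 2*l*(1 - 2*nu)**3 + 4*m*(1 + nu)**2*(1 - 2*nu) + 6*n*nu**2
Rs = 0.01/np.sqrt(6.0)                     # R* for the 10 mm bar

for s in c*np.array([1.0005, 1.002, 1.005]):   # slightly supersonic solitons
    A = 3*E/beta*(s**2/c**2 - 1)
    L = Rs*np.sqrt((cp**2 - s**2)*(s**2 - cs**2)
                   /((cp**2 - cs**2)*(s**2 - c**2)))
    print(f"s/c = {s/c:.4f}: A = {A:.2e}, L = {1e3*L:.1f} mm")
\end{verbatim}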
A more rigorous asymptotic analysis of the model can be carried out, assuming that the viscous terms are small. In this case, a soliton-like solution with slowly varying amplitude, velocity, and length could be obtained~\cite{Kivshar1989}. However, this analysis is beyond the scope of our work, since we are focused on the initial viscoelastic evolution of strain waves.
The derived equation~\eqref{eq:nonlinear} can be viewed as the standard equation for the longitudinal waves travelling at the speed $c$ with small corrections in the form of nonlinear and dispersive terms. This equation has exactly the same nonlinear term and asymptotically equivalent dispersive terms as in the previously obtained Boussinesq-type models for the long waves in rods and bars~\cite{KhusnSamsPhysRev2008, GKS2019}.
\section{Numerical modelling}
\label{sec:num}
To perform numerical modelling of the polystyrene bar, the material parameters should represent realistic viscoelastic properties of polystyrene. The density was taken as ${\rho = 1.06\text{ g/cm}^3}$. To simulate the propagation of elastic waves excited by a short impact, one should take into account various relaxation processes in the most important range of relaxation times, which lie between 0.01 \textmu s and 100 \textmu s in our case. For larger time scales the quasi-static elastic moduli $\lambda = 4.88$~GPa and $\mu = 1.54$~GPa were used. To define the viscoelastic properties, the following relaxation times, equally spaced on a logarithmic scale, were considered: $\tau_s = 0.01$, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100 \textmu s.
The ratio between $\xi_s$ and $\eta_s$ was chosen to be equal to $\lambda/\mu$, since the Poisson's ratio $\tilde{\nu}(\omega) = \tilde{\lambda}(\omega)/(2\tilde{\lambda}(\omega) + 2\tilde{\mu}(\omega))$ is almost independent of frequency at high frequencies in glassy polymers~\cite{Tschoegl2002}. In this case the loss tangent is the same for all elastic moduli: $\tan \delta (\omega) = \lambda''(\omega)/\lambda'(\omega) = \mu''(\omega)/\mu'(\omega)$. For polystyrene the loss tangent is weakly dependent on frequency in a very wide range of frequencies and approximately equals 0.02 -- 0.03 \cite{Hurley2013, Benbow1958, Yadav2020}. To simulate this property, the viscosity moduli were chosen as ${\xi_s = \xi}$ and ${\eta_s = \eta}$ for all relaxation times $\tau_s$. For relaxation times $\tau_s$ equally spaced on a logarithmic scale, one can estimate the loss tangent as
\begin{equation}
\tan \delta \approx \frac{\pi N_{10}}{2 \ln10}\frac{\xi}{\lambda},
\end{equation}
where $N_{10}=2$ is the number of relaxation times per decade. To obtain the experimental value 0.03 of the loss tangent, given the values of $\lambda$ and $\mu$, the viscosity moduli were chosen as ${\xi = 0.09}$~GPa and ${\eta = 0.028}$~GPa. After all these assumptions, the quasi-static elastic moduli $\lambda$ and $\mu$ remained the only tunable parameters. Their values, given above, were chosen to obtain good agreement between the numerical simulations and the experimental results.
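A quick check of this estimate (a sketch using only the values quoted above) shows that the resulting loss tangent falls within the experimental range for polystyrene.
\begin{verbatim}
# Sketch: evaluating the loss-tangent estimate with the quoted values
# lambda = 4.88 GPa, xi = 0.09 GPa, N_10 = 2 relaxation times per decade.
import math

lam, xi, N10 = 4.88, 0.09, 2
tan_delta = math.pi * N10 / (2 * math.log(10)) * xi / lam
print(f"tan(delta) ~ {tan_delta:.3f}")   # ~0.025, within 0.02-0.03
\end{verbatim}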
The three-dimensional simulation was performed for a rectangular bar with a length of 200~mm and a thickness of 10~mm with the material properties given above. The three-dimensional numerical analysis was carried out using the multidomain pseudospectral method~\cite{Canuto2007-1, Canuto2007-2}. To isolate the main effects in the fast formation of the long wave, the numerical calculation was done without nonlinear effects: reaching a balance between dispersion and nonlinearity for the formed long wave would require a much longer computation time, while the analytical properties of strain solitary waves have already been studied in sufficient detail.
At the initial moment the bar is unperturbed, and a short normal stress pulse is applied at its edge at $x = 0$:
\begin{equation}
F_x(y,z,t) = A_f e^{-t^2/\tau^2_{e}}, \quad F_y = F_z = 0,
\end{equation}
where $A_f = 1.4$~MPa and $\tau_e = 0.5\ \mu\text{s}$, while the other sides are free of external forces. In the experiment, the excitation pulse has a much shorter duration; however, shortening the excitation pulse increases the simulation time significantly while leading to only a minor change in the result. The value of $A_f$ was chosen to fit the amplitude of the formed long wave observed in the experiment.
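For reference, the applied pulse can be sampled as follows (a sketch in SI units with the values given above).
\begin{verbatim}
# Sketch: the Gaussian excitation pulse applied at the bar edge x = 0.
import numpy as np

A_f, tau_e = 1.4e6, 0.5e-6               # 1.4 MPa and 0.5 microseconds
t = np.linspace(0.0, 3 * tau_e, 301)
F_x = A_f * np.exp(-t**2 / tau_e**2)     # normal stress at the edge, Pa
\end{verbatim}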
\begin{figure}
\centering
\includegraphics[width=.68\linewidth]{1d_3d_visc_nonvisc_newest.pdf}
\caption{Longitudinal strain averaged over the cross-section as a function of the coordinate $x$ along the bar. Comparison of the full three-dimensional model and the simplified one-dimensional model with and without viscosity.}
\label{fig:4curves}
\end{figure}
The results of the numerical simulation obtained using the full three-dimensional model and the simplified one-dimensional model are shown by thick lines in Fig.~\ref{fig:4curves}. In both cases one can see the formation of the leading long wave from the initial short wave. The position, the amplitude, and the width of the leading long wave are approximately the same in both models. However, the one-dimensional model shows some small oscillations in front of the main wave and a slightly different shape of the tail behind it.
To understand the effect of viscoelastic damping, we performed the same calculations for a purely elastic case with $\xi_s=\eta_s=0$. According to Eq.~\eqref{lam_mu_complex}, the storage moduli $\lambda'(\omega)$ and $\mu'(\omega)$ have a frequency-dependent contribution introduced by $\xi_s$ and $\eta_s$. To compensate for this effect, the storage moduli $\lambda'(\omega)=5.07$ GPa and $\mu'(\omega)=1.6$ GPa at a frequency of 0.1 MHz (i.e. for a typical duration of 10 \textmu s) were taken as the elastic moduli $\lambda$ and $\mu$ in the non-viscous case. The obtained results are shown by thin lines in Fig.~\ref{fig:4curves}. One can see approximately the same main long wave as was obtained in the viscoelastic simulation. However, there are a number of short-wave oscillations, which were damped in the viscoelastic case. The viscoelastic effects are especially important for the full three-dimensional distribution of the strain presented in Fig.~\ref{fig:sim3d}. In the non-viscous case, the formation of the long wave is completely hidden by a number of short waves.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{comparison_ret_and_no_visc_newest.pdf}
\caption{Results of the full three-dimensional simulation of the bar with generalized Maxwell viscosity (a, c) and without viscosity (b,~d). The panels show the longitudinal strain $\pd{x}U_x$ at the early and the late times as a function of $x$ and $y$ at $z=0$.}
\label{fig:sim3d}
\end{figure}
\section{Discussion and conclusions}
\begin{figure}
\centering
\includegraphics[width=14cm]{comparison_with_experiment_nu_0.38.pdf}
\caption{Comparison of the experimentally obtained phase shifts of the strain wave in the polystyrene bar with the results of three-dimensional numerical modelling at three delay times corresponding to the second, third, and fourth areas of the waveguide.}
\label{fig:comp}
\end{figure}
In this paper we have detected the formation of the long solitary wave using a holographic technique. To explain the physical processes involved in the solitary wave formation, we have performed numerical modelling of a polystyrene bar with realistic viscoelastic parameters. The comparison between the experimentally recorded phase shift and the phase shift obtained by the three-dimensional numerical simulation is shown in Fig.~\ref{fig:comp}. One can see approximately the same structure of the phase shift at the different time moments after the initial pulse.
Therefore, the three-dimensional numerical model takes into account all the properties of the elastic waveguide that are responsible for the formation of the leading long strain wave. Since the three-dimensional model can be reduced to a simple one-dimensional model without a significant loss of precision, one can conclude that wave dispersion in the waveguide and the viscoelastic properties of the material lead to the formation of the long wave. The effects of dispersion can be understood as a reflection of the propagating elastic wave from the sidewalls of the waveguide and a concentration of the elastic energy. At the same time, viscoelastic effects suppress short-wave oscillations and reveal the formation of the leading long wave.
The further propagation of the long wave can be understood in terms of solitary dynamics, which was discussed in \cite{Samsonov2001,TP2008,jap2010,jap2012,APL2014} and other papers referenced therein. Recent results show that the nonlinear moduli $l$, $m$, $n$ may rapidly increase in absolute value with decreasing frequency~\cite{Belashov2021}. Therefore, the influence of nonlinear properties may be much greater for the formed long solitary wave than for the initial short waves.
However, short waves decay rapidly and nonlinear effects do not play a significant role in their evolution, which allows one to use frequency-independent nonlinear moduli with values taken at the characteristic frequency of a solitary wave.
Nonlinear viscoelastic dynamics with frequency-dependent moduli $l$, $m$, $n$ is the subject of further study.
\section{Declaration of competing interests}
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
\section{Acknowledgements}
The financial support from Russian Science Foundation under the grant \#17-72-20201 is gratefully acknowledged.
\section{Introduction}
Let $\mathcal H_1$ and $\mathcal H_2$ be nonempty families of finite graphs. Consider the following game $\RR(\mathcal H_1, \mathcal H_2)$
between Builder and Painter, played on the infinite board $K_\mathbb N$. In every round, Builder chooses a not previously selected
edge of $K_\mathbb N$ and Painter colours it red or blue. The game ends if after a move of Painter there is a red copy of a graph from $\mathcal H_1$
or a blue copy of a graph from $\mathcal H_2$. Builder tries to finish the game as soon as possible, while the goal of Painter is the opposite.
Let $\tilde{r}(\mathcal H_1, \mathcal H_2)$ be the minimum number of rounds in the game $\RR(\mathcal H_1, \mathcal H_2)$, provided both players play optimally.
If $\mathcal H_i$ consists of one graph $H_i$, we simply write $\tilde{r}(H_1, H_2)$ and $\RR(H_1, H_2)$, and call them an online size Ramsey
number and an online size Ramsey game, respectively.
In the literature, online size Ramsey numbers are also called online Ramsey numbers,
which can be a bit confusing. The online size Ramsey number $\tilde{r}(H_1, H_2)$ is a game-theoretic counterpart of the classic size Ramsey number
$\hat{r}(H_1, H_2)$, i.e.~the minimum number of edges in a graph with the property that every two-colouring of its edges results in
a red copy of $H_1$ or a blue copy of $H_2$. One of the most interesting questions in online size Ramsey theory is related to the
Ramsey clique game $\RR(K_n, K_n)$, introduced by Beck \cite{beck}. The following problem, attributed by Kurek and Ruci\'nski
\cite{ar} to V.~R\H odl, is still open:
\begin{conj}
$\frac{\tilde{r}(K_n,K_n)}{\hat{r}(K_n,K_n)}\to 0\text{ for } n\to\infty.$
\end{conj}
Conlon \cite{con} proved that the above holds for an infinite increasing sequence of cliques. It is natural to ask for other natural graph
sequences $H_n$ whose online size Ramsey numbers are much smaller than their
size Ramsey numbers, i.e.~$\tilde{r}(H_n,H_n)=o(\hat{r}(H_n,H_n))$.
Bla\v zej, Dvo\v r\' ak and Valla \cite{ind} constructed an infinite sequence of increasing trees $T_n$ such that $\tilde{r}(T_n,T_n)=o(\hat{r}(T_n,T_n))$.
On the other hand, Grytczuk, Kierstead and Pra\l at \cite{gryt} constructed a family of trees $B_n$ on $n$ vertices such that both
$\tilde{r}(B_n,B_n)$ and $\hat{r}(B_n,B_n)$ are of order $n^2$.
Suppose that for a sequence of graphs $H_n$ on $n$ vertices we know that $\hat{r}(H_n,H_n)$ (or $\hat{r}(H_n,G)$ with a fixed graph $G$) is linear.
Then clearly also $\tilde{r}(H_n,H_n)$ (or $\tilde{r}(H_n,G)$) is linear.
In such cases, it is usually not easy to find multiplicative constants in the lower and the upper bound respectively, which do not differ much.
For example, for paths $P_n$ on $n$ vertices we have
$2n-3\le \tilde{r}(P_n,P_n)\le 4n-7$, while for cycles $C_n$ we have $2n-1\le \tilde{r}(C_n,C_n)\le 72n-3$
(in the case of even cycles the multiplicative constant in the upper bound can be improved to $71/2$).
The lower bound in both examples comes from the observation that in general $\tilde{r}(H_1,H_2)\ge |E(H_1)|+|E(H_2)|-1$, which results
from the following strategy of Painter: he colours every selected edge $e$ red unless $e$, added to the red subgraph
present at the board, would create a red copy of $H_1$. The upper bound for paths was proved in \cite{gryt}, while the upper bound for
cycles -- in \cite{ind}.
The exact results on $\tilde{r}(H_1,H_2)$ are rare, except for very small graphs $H_1$, $H_2$.
It is known that $\tilde{r}(P_3,P_n)=\lceil 5(n-1)/4\rceil$ and $\tilde{r}(P_3,C_n)=\lceil 5n/4\rceil$ \cite{lo}.
Games $\RR(\mathcal H_1, \mathcal H_2)$ for graph classes $\mathcal H_1$, $\mathcal H_2$ such that at least one class is infinite have not been studied as extensively as their one-graph versions.
Nonetheless, they appear implicitly in the analysis of one-graph games $\RR(H_1,H_2)$, since a standard method of getting a lower bound on
$\tilde{r}(H_1,H_2)$ is to assume that Painter colours every selected edge $e$ red unless $e$, added to the red subgraph
present at the board, would create a red copy of a graph from $\mathcal H$, for some graph class $\mathcal H$.
For example Cyman, Dzido, Lapinskas and Lo \cite{lo} analysed such a strategy while studying the game $\RR(C_k,H)$ for a connected graph $H$
and implicitly obtained that
\begin{equation}\label{cyccon}
\tilde{r}(\mathcal C,\mathcal Con_{n,m})\ge n+m-1,
\end{equation}
where $\mathcal C$ denotes the class of all cycles
and $\mathcal Con_{n,m}$ is the class of all connected graphs with exactly $n$ vertices and at least $m$ edges.
Infinite graph classes are also studied explicitly in the size Ramsey theory and in its online counterpart.
Dudek, Khoeini and Pra\l at \cite{dud} initiated the study on the size Ramsey number $\hat{r}(\mathcal C,P_n)$.
Bal and Schudrich \cite{bal} proved that $2.06n-O(1)\le \hat{r}(\mathcal C,P_n)\le 21/4 n+27$.
The online version of this problem was considered by Schudrich \cite{ms}, who showed that $\tilde{r}(\mathcal C,P_n)\le 2.5 n+5$.
Here the multiplicative constant is quite close to the constant 2 in the lower bound, given by \eqref{cyccon}.
Let ${\mathcal C}_{\text odd}$ denote the class of all odd cycles and $S_n$ be a star with $n$ edges (i.e.~$S_n=K_{1,n}$).
Pikhurko \cite{oleg}, while disproving Erd\H os' conjecture that $\hat{r}(C_3, S_n)=(3/2+o(1))n^2$, proved that
$\hat{r}({\mathcal C}_{\text odd}, S_n)=(1+o(1))n^2$. As for the online version of this problem, it is not hard to see that
Builder can force either a red $C_3$ or a blue $S_n$ within $3n$ rounds (he starts by selecting $2n$ edges of a star; then he forces
a blue $S_n$ on the set of vertices incident to red edges). Thus $\tilde{r}(C_3, S_n)\le 3n$, so $\tilde{r}({\mathcal C}_{\text odd}, S_n)\le 3n$, and
we see another example of online size Ramsey numbers which are much smaller
than their classic counterparts. Let us mention that the constant 3 in the upper bound on $\tilde{r}(C_3, S_n)$ is not far
from the constant $2.6$ in the lower bound, implied by our main theorem \ref{oddcon} below.
In this paper we focus on games $\RR({\mathcal C}_{\text odd},\mathcal Con_{n,m})$ and $\RR(C_3,P_n)$.
Since $\tilde{r}(\mathcal C,\mathcal Con_{n,m})\le\tilde{r}({\mathcal C}_{\text odd},\mathcal Con_{n,m})$, in view of \eqref{cyccon} we have
$\tilde{r}({\mathcal C}_{\text odd},\mathcal Con_{n,m})\ge n+m-1$. We improve this lower bound. Our strategy for Painter is based on a potential function method, in which the golden ratio $\varphi=(\sqrt5+1)/2$ plays a role. Here is the first main result of our paper.
\begin{theorem}\label{oddcon}
For every $m,n\in\mathbb N$ such that $n-1\le m\le \binom n2$
$$\tilde{r}({\mathcal C}_{\text odd},\mathcal Con_{n,m})\ge \varphi n + m-2\varphi+1.$$
\end{theorem}
We will prove this theorem in Section \ref{lower}.
As a by-product, we obtain the lower bound $\tilde{r}(C_{2k+1},T_n)\ge 2.6n-3$ for $k\ge 1$ and every tree $T_n$ on $n\ge 3$ vertices.
In order to find an upper bound for $\tilde{r}({\mathcal C}_{\text odd},\mathcal Con_{n,m})$, we begin by estimating $\tilde{r}(C_3,P_n)$. Here is our second main result,
which will be proved in Section \ref{upper}.
\begin{theorem}\label{c3pn}
For $n\ge 3$
$$\tilde{r}(C_3,P_n)\le 3n-4.$$
\end{theorem}
Clearly $\tilde{r}({\mathcal C}_{\text odd},\mathcal Con_{n,n-1})\le \tilde{r}(C_3,P_n)$ so Theorems \ref{oddcon} and \ref{c3pn} give the following bounds on $\tilde{r}(C_3,P_n)$.
\begin{cor}\label{cor1}
If $n\ge 3$, then
$$\big\lceil (\varphi+1)n -2\varphi\big\rceil\le \tilde{r}(C_3,P_n)\le 3n-4.$$
\end{cor}
It improves the best known multiplicative constants in the lower and upper bounds $2n-1\le \tilde{r}(C_3,P_n)\le 4n-5$, proved by
Dybizba\'nski, Dzido and Zakrzewska \cite{dzido}. The upper and the lower bounds in Corollary \ref{cor1} are optimal for $n=3, 4$
since $\tilde{r}(C_3,P_3)=5$ and $\tilde{r}(C_3,P_4)=8$, as it was verified (by computer) by Gordinowicz and Pra\l at \cite{pral}.
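For illustration, a short sketch evaluating both bounds of Corollary \ref{cor1} for small $n$; the bounds coincide with the exact values above for $n=3,4$.
\begin{verbatim}
# Sketch: evaluating the bounds of the corollary above for small n.
import math

phi = (math.sqrt(5) + 1) / 2
for n in range(3, 7):
    lower = math.ceil((phi + 1) * n - 2 * phi)
    print(n, lower, 3 * n - 4)  # n=3: 5 5; n=4: 8 8; n=5: 10 11; n=6: 13 14
\end{verbatim}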
We believe that the upper bound is sharp for every $n\ge 3$.
\begin{conj}
$\tilde{r}(C_3,P_n)=3n-4$ for every $n\ge 3$.
\end{conj}
In view of Theorem \ref{c3pn}, we have $\tilde{r}({\mathcal C}_{\text odd},\mathcal Con_{n,n-1})\le 3n-4$.
In the last section we will prove the following upper bound on $\tilde{r}({\mathcal C}_{\text odd},\mathcal Con_{n,m})$.
\begin{theorem}\label{conupper}
For $n\ge 3$ and $n-1\le m\le (n-1)^2/4$
$$\tilde{r}({\mathcal C}_{\text odd},\mathcal Con_{n,m})\le n+2m+O(\sqrt{m-n+1}).$$
\end{theorem}
\section{Preliminaries}
For a subgraph of $G$ induced on $V'\subseteq V(G)$, we denote the set of its edges by $E[V']$. If $V'\subseteq V(G)$ is empty, then
we define $E[V']=\emptyset$.
We say that a graph $H$ is coloured if every its edge is blue or red. A graph is red (or blue) if all its edges are red (blue).
Let $G$ be a coloured graph. We say $G$ is red-bipartite if there exists a partition $V(G)=V_1\cup V_2$ (one of the two sets may be empty) such that there are no blue edges of $G$ between $V_1$ and $V_2$ and there are no red edges in $E[V_1]\cup E[V_2]$.
A pair of such sets $(V_1,V_2)$ is called a red-bipartition of $G$.
It is not hard to observe that a coloured graph $G$ is red-bipartite if and only if $G$ has no cycle with an odd number of red edges.
Furthermore, every component of a coloured graph has at most one red-bipartition up to the order in the pair $(V_1,V_2)$.
\section{Proof of Theorem \ref{oddcon}}\label{lower}
Let $n,m\in\mathbb N$. Consider the following auxiliary game $\mathcal G(n,m)$. In every round Builder chooses a previously not selected
edge from $K_\mathbb N$ and Painter colours it red or blue.
The game ends if after a move of Painter there is a coloured cycle with an odd number of red edges or
there exists a coloured connected graph $H$ with a red-bipartition $(V_1,V_2)$ such that for some $i\in \{1,2\}$ we have $|V_i|\ge n$
and $|E[V_i]|\ge m$. Builder tries to finish the game as soon as possible, while the goal of Painter is the opposite.
Let $\tilde{r}_{\mathcal G(n,m)}$ be the minimum number of rounds in the game $\mathcal G(n,m)$ provided both players play optimally.
Clearly, if there is a red odd cycle or a connected blue graph with $n$ vertices and $m$ edges at the board, then the game $\mathcal G(n,m)$ ends, so
$$\tilde{r}_{\mathcal G(n,m)}\le \tilde{r}({\mathcal C}_{\text odd},\mathcal Con_{n,m}).$$
Therefore, in order to prove Theorem \ref{oddcon} it is enough to prove that $\tilde{r}_{\mathcal G(n,m)}\ge \varphi n + m-2\varphi+1$.
We will define a strategy of Painter in $\mathcal G(n,m)$ based on a potential function and prove that the potential function does not grow too much
during the game. Let us define a function $f$ on the family of all coloured red-bipartite subgraphs of $K_{\mathbb N}$.
Suppose $G=(V,E)$ is a coloured red-bipartite graph.
If $G$ is an isolated vertex, then let $f(G)=0$. If $G$ is connected, with the red-bipartition $(V_1,V_2)$ and $|V|>1$, let
\begin{eqnarray*}
p_G(V_i)&=&|V_i|\varphi+|E[V_i]|,\text{ for } i=1,2;\\
a(G)&=&\max(p_G(V_1),p_G(V_2)),\\
b(G)&=&\min(p_G(V_1),p_G(V_2)),\\
f(G)&=&\varphi a(G)-\varphi+ \max(a(G)-\varphi^3,b(G)).
\end{eqnarray*}
Finally, if $G$ consists of components $G_1, G_2,\ldots, G_t$, then we put
$$f(G)=\sum_{i=1}^t f(G_i).$$
We will also use the following function $g:\mathbb R^2\to\mathbb R$:
$$g(x,y)=\varphi \max(x,y)-\varphi+ \max(\max(x,y)-\varphi^3,\min(x,y)).$$
Note that $g$ is symmetric, nondecreasing with respect to $x$ (and $y$) and if $(V_1,V_2)$ is a red-bipartition of a connected, coloured graph $H$, then
$g(p_H(V_1),p_H(V_2))=f(H)$. We can also rewrite $g$ as
$$g(x,y)=x+y-
\varphi+(\varphi-1)\max(x,y)+\max(x-y-\varphi^3,y-x-\varphi^3,0),$$
so $g$ is convex.
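These properties, together with the shift identity $g(x+z,y+z)=g(x,y)+(\varphi+1)z$ used later in the proof, admit a quick numerical sanity check; the following sketch verifies them on a grid.
\begin{verbatim}
# Sketch: numerical check of the properties of g -- agreement of the two
# formulas, symmetry, monotonicity, and the shift identity
# g(x+z, y+z) = g(x, y) + (phi+1) z used in the proof below.
import itertools, math

phi = (math.sqrt(5) + 1) / 2

def g(x, y):
    M, m = max(x, y), min(x, y)
    return phi * M - phi + max(M - phi**3, m)

def g2(x, y):  # the rewritten (convex) form
    return (x + y - phi + (phi - 1) * max(x, y)
            + max(x - y - phi**3, y - x - phi**3, 0))

grid = [i / 4 for i in range(41)]
for x, y in itertools.product(grid, grid):
    assert abs(g(x, y) - g2(x, y)) < 1e-9       # the two forms agree
    assert abs(g(x, y) - g(y, x)) < 1e-9        # symmetry
    assert g(x + 0.25, y) >= g(x, y) - 1e-9     # nondecreasing in x
    z = 1.3
    assert abs(g(x + z, y + z) - g(x, y) - (phi + 1) * z) < 1e-9
print("all checks passed")
\end{verbatim}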
We are ready to present the strategy of Painter in $\mathcal G(n,m)$. He will play so that after every round the coloured graph $G=(V(K_{\mathbb N}),E)$
containing all coloured edges is red-bipartite. Furthermore, we will show that it is possible to colour the edges selected by Builder so that
after every round the potential $f(G)$ increases by no more than $\varphi+1$. The inductive argument is stated in the following lemma.
\begin{lemma}\label{gd3p}
Let $G$ be a coloured subgraph of $K_\mathbb N$ with $V(G)=V(K_\mathbb N)$ and the edge set consisting of all edges coloured within $t\ge 0$ rounds of the game $\mathcal G(n,m)$. Suppose that $G$ is red-bipartite and $e$ is an edge selected by Builder in round $t+1$.
Then Painter can colour $e$ red or blue so that the obtained coloured graph $G'$ with $|E(G)|+1$ edges is red-bipartite and
$f(G')\le f(G)+\varphi+1$.
\end{lemma}
\begin{proof}
Let $G$ satisfy the assumptions of the lemma, let $e=uu'$ be an edge selected in round $t+1$, and suppose that $H,H'$ are the components of $G$ such that $u\in H$, $u'\in H'$, and $(V_1, V_2)$, $(V_1', V_2')$ are red-bipartitions of $H$ and $H'$, respectively.
We consider several cases. Below, by $F+e_{red}$ and $F+e_{blue}$ we denote the coloured graphs obtained by adding $e$ to a coloured graph $F$ with $e$ coloured red or blue, respectively.
\begin{enumerate}
\item
$u,u'$ are in the same connected component of $G$, i.e. $H=H'$.
If $u\in V_1,u'\in V_2$ then Painter colours $e$ red. Then
$f(H+e_{red})=f(H)$ and $f(G+e_{red})=f(G)$.
If $u,u'\in V_1$, then Painter colours $e$ blue. Then we obtain the component $H''=H+e_{blue}$ with
$p_{H''}(V_1)=p_H(V_1)+1$ and $p_{H''}(V_2)=p_H(V_2)$. Therefore
$f(H'')\le f(H)+\varphi+1$ and $f(G+e_{blue})\le f(G)+\varphi+1$.
\item
$u,u'$ are isolated vertices in $G$.
Then Painter colours $e$ red. The obtained graph $G+e_{red}$ is red-bipartite and
$$f(G+e_{red})=f(G)+\varphi\cdot\varphi-\varphi+\varphi=f(G)+\varphi^2=f(G)+\varphi+1.$$
\item
$u'$ is isolated in $G$, but $u$ is not. We may assume that $u\in V_1$.
\begin{enumerate}
\item[a.]
Suppose that $p_H(V_2)>p_H(V_1)+\varphi+1$.
Then Painter colours $e$ blue. The obtained graph $G+e_{blue}$ is red-bipartite and for $F=H+e_{blue}$ we have
$$a(F)=\max(p_{F}(V_1\cup\{u'\}),p_{F}(V_2))=\max(p_H(V_1)+\varphi+1,p_H(V_2))=p_H(V_2)=a(H)$$
and
$$b(F)=p_{F}(V_1\cup \{u'\})=b(H)+\varphi+1.$$
Therefore $f(G+e_{blue})\le f(G)+\varphi+1$.
\item[b.]
Suppose that $p_H(V_2)\le p_H(V_1)+\varphi+1$.
Then Painter colours $e$ red. Thus $u'$ is added to $V_2$ in the red-bipartition of $F=H+e_{red}$ and
$$p_{F}(V_2\cup\{u'\})=p_H(V_2)+\varphi.$$
In order to estimate $f(F)$, we calculate $dg/dy(x_1,\cdot)$ for a fixed $x_1\in\mathbb R$. By definition of $g$ we have
$$
\frac{dg}{dy}(x_1,y)=\begin{cases}
0& \text{ for } y<x_1-\varphi^3,\\
1& \text{ for } x_1-\varphi^3< y<x_1,\\
\varphi& \text{ for } x_1< y<x_1+\varphi^3,\\
\varphi^2& \text{ for } y> x_1+\varphi^3.\\
\end{cases}
$$
Thus, for every $k\in(0,\varphi)$, in view of the fact that $p_H(V_2)+k\le p_H(V_1)+\varphi+1+k<p_H(V_1)+\varphi^3$, we have
$$\frac{dg}{dy}(p_H(V_1),p_H(V_2)+k)\le \varphi.$$
Therefore
\begin{eqnarray*}
f(F)-f(H)&=& g(p_{F}(V_1),p_{F}(V_2\cup \{u'\}))-g(p_H(V_1),p_H(V_2))\\
&=& g(p_{H}(V_1),p_{H}(V_2)+\varphi)-g(p_H(V_1),p_H(V_2))\le \varphi^2=\varphi+1.
\end{eqnarray*}
\end{enumerate}
\item \label{nieizol}
$u$ and $u'$ are not isolated in $G$ and $H\neq H'$.
We may assume that $u\in V_1$ and $u'\in V_1'$.
Let $a=\max(p_H(V_1),p_H(V_2))$, $b=\min(p_H(V_1),p_H(V_2))$, $c=a-b$,
$a'=\max(p_{H'}(V_1'),p_{H'}(V_2'))$, $b'=\min(p_{H'}(V_1'),p_{H'}(V_2'))$ and $c'=a'-b'$.
\begin{enumerate}
\item[a.]
Suppose that $p_H(V_1)>p_H(V_2)+1$ and $p_{H'}(V_1')+1<p_{H'}(V_2')$.
Then Painter colours $e$ blue. Thus the components $H,H'$ of $G$ are joined into a component $F$ of $G+e_{blue}$, with the
red-bipartition $(V_1\cup V_1', V_2\cup V_2')$ of $F$.
Notice that $p_H(V_1)=a$, $p_H(V_2)=b$, $p_{H'}(V_1')=b'$, $p_{H'}(V_2')=a'$ and $c,c'>1$. Furthermore
\begin{eqnarray*}
p_F(V_1\cup V_1')&=& p_H(V_1)+p_{H'}(V_1')+1=a+b'+1,\\
p_F(V_2\cup V_2')&=& p_H(V_2)+p_{H'}(V_2')=a'+b.
\end{eqnarray*}
Assume to the contrary that $f(G+e_{blue})>f(G)+\varphi+1$. This means that
$$f(H)+f(H')+\varphi+1< f(F),$$
or equivalently
$$g(a,b)+g(a',b')+\varphi+1< g(a+b'+1,a'+b).$$
The above inequality and a general property of $g$ that $g(x+z,y+z)=g(x,y)+(\varphi+1)z$, implies that
$$g(c,0)+g(c',0)+\varphi^2< g(c+1,c').$$
If $c<c'$, then we can swap $c$ and $c'$ and the left-hand side will not change while the right-hand side will get bigger (by the fact that $g$ is convex and symmetric). It means that we may assume that $c\ge c'$. Thus in view of the definition of $g$ we get
\begin{eqnarray*}
c\varphi+\max(c-\varphi^3,0)+c'\varphi+\max(c'-\varphi^3,0)-2\varphi+\varphi^2\\
<(c+1)\varphi-\varphi+\max(c+1-\varphi^3,c'),
\end{eqnarray*}
and hence
$$\max(c-\varphi^3,0)+(c'-1)\varphi+\max(c'-\varphi^3,0)<\max(c-\varphi^3,c'-1).$$
This inequality implies that $\max(c-\varphi^3,(c'-1)\varphi)<\max(c-\varphi^3,c'-1)$, which for $c'>1$ leads to a contradiction.
Thereby we proved that $f(G+e_{blue})\le f(G)+\varphi+1$.
\item[b.]
Suppose that $p_H(V_1)>p_H(V_2)$ and $p_{H'}(V_1')<p_{H'}(V_2')$ but case \ref{nieizol}a does not hold, i.e.
$p_H(V_1)\le p_H(V_2)+1$ or $p_{H'}(V_1')+1\ge p_{H'}(V_2')$.
Then Painter colours $e$ red. Similarly to the previous case, the components $H,H'$ of $G$ are joined into a component $F$ of $G+e_{red}$ but the red-bipartition of $F$ is $(V_1\cup V_2', V_2\cup V_1')$ and
$$
p_F(V_1\cup V_2')=a+a',\
p_F(V_2\cup V_1')= b+b'.
$$
Again, assume the contrary that $f(G+e_{red})>f(G)+\varphi+1$, which is equivalent to
$f(H)+f(H')+\varphi+1< f(F)$ and hence
$$g(a,b)+g(a',b')+\varphi^2< g(a+a',b+b').$$
Then we get
$$g(c,0)+g(c',0)+\varphi+1<g(c+c',0)$$
and in view of the definition of $g$
$$\max(c-\varphi^3,0)+\max(c'-\varphi^3,0)+1<\max(c+c'-\varphi^3,0).$$
By the assumptions of this case we have $c\le 1$ or $c'\le 1$ so the above inequality cannot hold.
\end{enumerate}
Because of the symmetric role of $H$ and $H'$, cases \ref{nieizol}a and \ref{nieizol}b cover all situations with
$(p_H(V_1)-p_H(V_2))(p_{H'}(V_1')-p_{H'}(V_2'))<0$. It remains to analyse the opposite case.
\begin{enumerate}
\item[c.] Assume that $(p_H(V_1)-p_H(V_2))(p_{H'}(V_1')-p_{H'}(V_2'))\ge 0$.
Then Painter colours $e$ red. As in the previous case, the components $H,H'$ of $G$ are joined into a component $F$ of $G+e_{red}$ with its red-bipartition $(V_1\cup V_2', V_2\cup V_1')$ but we have either
$$
p_F(V_1\cup V_2')=a+b'\text{ and }
p_F(V_2\cup V_1')= b+a',
$$
or
$$
p_F(V_1\cup V_2')=b+a'\text{ and }
p_F(V_2\cup V_1')=a+b'.
$$
In both cases, by the symmetry of the function $g$, the assumption that $f(H)+f(H')+\varphi+1< f(F)$ leads to
$$g(a,b)+g(a',b')+\varphi+1<g(a+b',b+a').$$
Then
$$g(c,0)+g(c',0)+\varphi+1<g(c,c').$$
The inequality is symmetric with respect to $c$ and $c'$ so we may assume $c\ge c'$. Then the above inequality is equivalent to
$$\max(c-\varphi^3,0)+\max(c'-\varphi^3,0)+c'\varphi+1<\max(c-\varphi^3,c').$$
It implies that $\max(c-\varphi^3,c'\varphi)<\max(c-\varphi^3,c')$ and again we get a contradiction.
\end{enumerate}
\end{enumerate}
\end{proof}
We have proved that Painter has a strategy in $\mathcal G(n,m)$ such that at every moment of the game $\mathcal G(n,m)$
the graph induced by the set of all coloured edges is red-bipartite, and in every round its potential $f$ increases by no more than $\varphi+1$
(at the start of the game the potential is 0). We infer that a coloured cycle with an odd number of red edges never appears in the game.
Suppose that (given the described strategy of Painter and any strategy of Builder) after $t$ rounds of the game
the graph $G$ induced by the set of all coloured edges contains a component $H$ with a red-bipartition $(V_1,V_2)$ such that
$|V_1|\ge n$ and $|E[V_1]|\ge m$. On the one hand, we have $f(G)\le t(\varphi+1)$. On the other hand, we have
$$f(G)\ge f(H)=\varphi a(H)-\varphi+ \max(a(H)-\varphi^3,b(H))\ge (\varphi+1)a(H)-\varphi-\varphi^3
\ge (\varphi+1)(\varphi n+m)-\varphi-\varphi^3.$$
Therefore
$$t\ge \frac{(\varphi+1)(\varphi n+m)-\varphi-\varphi^3}{\varphi+1}=\varphi n+m-2\varphi+1.$$
We conclude that Painter can survive at least $\varphi n+m-2\varphi+1$ rounds in the game $\mathcal G(n,m)$. In view of the previous remarks,
this also proves that $\tilde{r}({\mathcal C}_{\text odd},\mathcal Con_{n,m})\ge \varphi n+m-2\varphi+1$.
\section{Proof of Theorem \ref{c3pn}}\label{upper}
If $n=3$, then we know that $\tilde{r}(C_3,P_n)=5=3n-4$. One can also check that Builder can force a red $C_3$ or a blue $P_3$
playing on $K_5$ only -- we will use this fact later.
Throughout this section, we assume that $n\ge 4$. While considering a moment of the game $\RR(C_3,P_n)$, we say
that Builder connects two paths $P$ and $P'$ if in the considered round he selects an edge joining an end of $P$ and an end of $P'$.
Let $P(s,t)$ be a coloured path on $s+t$ vertices obtained from two vertex-disjoint blue paths $P_s$ and $P_t$ by connecting their
ends with a red edge. Moreover, let $P(s,0)$ denote a blue path on $s$ vertices.
A maximal (in the sense of inclusion) coloured path is called a $brb$-path if it is isomorphic to $P(s,t)$ for some $s,t>0$.
We say that a blue path is pure if it is maximal (in the sense of inclusion) and none of its vertices is incident to a red edge.
Let $u(P(s,t))=|s-t|$ for every $s,t\ge 0$.
Thus $u$ is a measure of the path ``imbalance''. We say that $P(s,t)$ is balanced if $u(P(s,t))=0$ and imbalanced otherwise.
We start with the following observation.
\begin{lemma} \label{phase2}
Suppose that $a_i\ge b_i$ for $i=0,1,\ldots,k$ and a coloured graph $G$ contains vertex-disjoint paths $P(a_0,b_0),P(a_1,b_1),\ldots,P(a_k,b_k)$.
Then Builder can force either a red $C_3$ or a blue path on $a_0+\sum_{i=1}^k b_i$ vertices within at most $2k$ rounds.
\end{lemma}
\begin{proof}
We prove it by induction on $k$. The case $k=0$ is trivial. If $k>0$, then Builder can connect the end of the blue path $P_{a_0}$
with both ends of the red edge in $P(a_k,b_k)$; if both connecting edges were coloured red, a red $C_3$ would arise, so at least one of them is blue, which yields a blue path on at least $a_0+b_k$ vertices.
Now we have vertex-disjoint paths $P(a_0+b_k,0),P(a_1,b_1),\ldots,P(a_{k-1},b_{k-1})$, so by the induction hypothesis
Builder can force a blue path on $a_0+b_k+\sum_{i=1}^{k-1} b_i$ vertices within the next $2k-2$ moves.
\end{proof}
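The bookkeeping of this merging strategy is easy to mechanize; the following sketch computes the blue path length and the number of rounds guaranteed by Lemma \ref{phase2} for a given list of paths (assuming Painter avoids a red $C_3$).
\begin{verbatim}
# Sketch: the guarantee of the lemma above; paths[0] = (a_0, b_0), and
# each later merge costs at most 2 rounds while adding b_i blue vertices.
def merge_paths(paths):
    a0 = paths[0][0]
    length = a0 + sum(b for _, b in paths[1:])
    rounds = 2 * (len(paths) - 1)
    return length, rounds

print(merge_paths([(5, 3), (4, 4), (3, 1)]))  # -> (10, 4)
\end{verbatim}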
Consider the following strategy for Builder in $\RR(C_3,P_n)$.
Builder will assume that the board of the game is $K_{2n-1}$, other edges of $K_\mathbb N$ are ignored.
We divide the game into two stages. Roughly speaking, in Stage 1 Builder forces as many $brb$-paths as possible so that
one of the coloured paths is more imbalanced than all the other put together. In Stage 2 Builder applies his strategy from
Lemma \ref{phase2} and forces a long blue path.
More precisely, at the beginning of the game, we have $2n-1$ pure blue paths (they are trivial).
In every round of Stage 1, Builder connects two shortest pure blue paths.
Painter colours the selected edge red or blue.
Builder continues doing so as long as there are at least two pure blue paths at the board, with one exception:
if Painter colours the first $n-3$ edges red, then Stage 1 ends.
If this exception happens, the coloured graph consists of $n-3$ isolated red edges and five isolated vertices $u_1,\ldots,u_5$.
The game proceeds to Stage 2.
Builder forces a blue $P_3$ within the next five rounds, using only edges of the graph $K_5$ on the vertex set $\{u_1,\ldots,u_5\}$.
After that the coloured graph contains a path $P(3,0)$ and $n-3$ copies of $P(1,1)$,
so in view of Lemma \ref{phase2} Builder can force a blue path on $3+n-3=n$
vertices or a red $C_3$ within the next $2(n-3)$ rounds. Thus in this case the game ends after at most $n-3+5+2(n-3)=3n-4$ rounds.
From now on, we assume that Painter colours blue at least one of the edges selected in the first $n-3$ rounds.
Stage 1 ends when there is at most one pure blue path at the board. Observe that at the end of Stage 1
the graph induced by all coloured edges consists of vertex-disjoint $brb$-paths and at most one pure blue path.
Since Builder always connects two shortest pure blue paths, we infer that the following holds.
\begin{fact} \label{times2}
After every round of Stage 1, if $P_k$, $P_l$ are pure blue paths, then $k\le 2l$.
\end{fact}
Let us verify that also the following is true.
\begin{fact} \label{most}
At every moment of Stage 1, there is at most one pure blue path whose number of vertices is not a power of $2$.
\end{fact}
Indeed, suppose that this property holds after some round $t$, that in the next round Builder connects pure blue paths $P_{a}$ and $P_{b}$,
and that after round $t+1$ the number of pure blue paths whose number of vertices is not a power of $2$ increases.
This is possible only if the edge in round $t+1$ was coloured blue, $a=2^k$ and $b=2^l$, with some integers $k\neq l$.
From Fact \ref{times2} we know that this is possible only when $k+1=l$, $P_{2^k}$ was the only shortest pure blue path, and $P_{2^l}$ was
both the second shortest and the longest pure blue path after round $t$. It means that after round $t+1$ there is a pure blue
path $P_{a+b}$ and every other pure blue path has $2^l$ vertices.
The next lemma gives some insight into the imbalance properties of the collection of the coloured paths in Stage 1.
\begin{lemma}\label{imb}
Let $H_1,H_2,\ldots,H_q$ be the sequence of all imbalanced $brb$-paths at a given moment of the game, and suppose
they appeared in that order (i.e.~$H_1$ was created first in the game, $H_2$ was second and so on).
Then $u(H_{k+1})\ge 2u(H_k)$ for $k=1,2, \ldots, q-1$. Moreover for any pure blue path $H$ we have $u(H)\ge 2u(H_q)$.
\end{lemma}
\begin{proof}
Let us present the inductive argument. Suppose the assertion is true for the sequence $H_1,H_2,\ldots,H_{q-1}$
and consider the round $r$ after which $H_{q-1}=P(s,t)$ appeared. From Facts \ref{times2} and \ref{most} we know that $s,t\in [2^k,2^{k+1}]$ for some integer $k$, so $u(H_{q-1})\le 2^k$. By these two facts and by the minimality of the blue paths generating $H_{q-1}$, we infer that at the end of round $r$ all pure blue paths have $2^{k+1}$ vertices. Therefore $2^{k+1}\mid u(H)$ for any pure blue path $H$ created after round $r$, until the end of Stage 1. Thus $2u(H_{q-1})\le u(H)$ for any such path $H$. Furthermore, since $H_q$ is created after round $r$ by connecting two pure blue paths on, say, $s'$ and $t'$ vertices, we have $u(H_q)=|s'-t'|>0$ and this
number is divisible by $2^{k+1}$. Therefore $2u(H_{q-1})\le u(H_q)$. The argument that $2u(H_{q})\le u(H)$ for every pure blue path $H$
is analogous to the one for $H_{q-1}$.
\end{proof}
Let us consider the position at the end of Stage 1. Let us recall that then we have at least one blue edge at the board $K_{2n-1}$,
at most one pure blue path and a collection of $brb$-paths (if any).
If there is a pure blue path, then let $H_0$ be that path. Otherwise let $H_0$ be the last imbalanced $brb$-path
that appeared in Stage 1. Let $H_1,H_2,\ldots,H_m$ be the sequence of all imbalanced $brb$-paths which appeared (in that order) in Stage 1, except for the path $H_0$. It follows from Lemma \ref{imb} that
\begin{equation}\label{uu}
\sum\limits_{j=1}^m u(H_j)\le \sum\limits_{j=0}^{m-1} \frac{u(H_m)}{2^j}\le 2 u(H_m)\le u(H_0).
\end{equation}
Let $H_1',H_2',\ldots, H_l'$ be the family of all $brb$-paths which are balanced (at the end of Stage 1).
In order to calculate the number of rounds in Stage 1, notice that the subgraph $G$ of $K_{2n-1}$ with $V(G)=V(K_{2n-1})$ whose edge set
consists of all edges coloured in Stage 1 is a union of $m+l+1$ vertex-disjoint paths, so Stage 1 lasts $|E(G)|= |V(G)|-m-l-1=2n-2-m-l$ rounds.
Observe also that $G$ has no isolated vertices. Indeed, if such a trivial blue path existed at the end of Stage 1, then
in every previous round Builder would have connected two trivial blue paths and Painter would have coloured the selected edges red.
This would contradict the assumption that there is a blue edge in Stage 1. Thus $G$ has no isolated vertices, and consequently it has at most
$n-1$ components. We conclude that
\begin{equation}\label{comp}
m+l+1\le n-1.
\end{equation}
After Stage 1 the game proceeds to Stage 2.
Note that for any path $P=P(s,t)$ (balanced or imbalanced) with $s\ge t$ we have $s=(|V(P)|+u(P))/2$ and $t=(|V(P)|-u(P))/2$.
Let us also recall that for Builder the board of the game is $K_{2n-1}$ so
$$\sum\limits_{j=0}^m |V(H_j)|+\sum\limits_{j=1}^l |V(H_j')|=2n-1.$$
In Stage 2 Builder applies his strategy from Lemma \ref{phase2} to the paths
$H_0, H_1,\ldots,H_m, H_1',H_2',\ldots,H_l'$. Thereby he forces a blue path on $t$ vertices, where
\begin{eqnarray*}
t&=&\frac12\Big(|V(H_0)|+u(H_0)+\sum\limits_{j=1}^m (|V(H_j)|-u(H_j))+\sum\limits_{j=1}^l (|V(H_j')|-u(H_j'))\Big)\\
&=&\frac12\Big(2n-1+u(H_0)-\sum\limits_{j=1}^m u(H_j)-\sum\limits_{j=1}^l u(H_j')\Big)\\
&=&n-\frac12+\frac 12\Big(u(H_0)-\sum\limits_{j=1}^m u(H_j)\Big)\ge n-\frac12.
\end{eqnarray*}
The last inequality follows from \eqref{uu}.
Thus in Stage 2 Builder forces either a red $C_3$ or a blue $P_n$, and Stage 2 lasts, in view of Lemma \ref{phase2}, at most $2(m+l)$ rounds.
We conclude that the number of rounds in Stage 1 and Stage 2 is not greater than $2n-2-m-l+2(m+l)=2n-2+m+l$.
Because of \eqref{comp}, the game $\RR(C_3,P_n)$ lasts at most $2n-2+n-2=3n-4$ rounds.
\section{Proof of Theorem \ref{conupper}}
Let $n\ge 3$ and $n-1\le m\le(n-1)^2/4\le \binom{\lfloor n/2\rfloor }{2}+\binom{\lceil n/2\rceil }{2}$.
In view of Theorem \ref{c3pn} it is enough to assume that $m\ge n$.
Let $k$ be the smallest integer such that $1\le k\le n$ and $m\le n-k+\binom{\lfloor k/2\rfloor }{2}+\binom{\lceil k/2\rceil }{2}$.
Builder begins the game $\RR({\mathcal C}_{\text odd},\mathcal Con_{n,m})$ by forcing either a red triangle or a connected blue graph on $n$ vertices.
By Theorem \ref{c3pn} it takes him at most $3n-4$ rounds.
Suppose that after these rounds there is no red odd cycle at the board, and denote the blue graph on $n$ vertices by $B$.
Let $B'$ be a connected blue subgraph of $B$ with $|V(B')|=k$. Clearly $|E(B')|\ge k-1$ and $|E(B)\setminus E(B')|\ge n-k$.
Further in the game, Builder selects all edges of $E[V(B')]$ which are not coloured yet. It takes him at most
$\binom{k}{2}-|E(B')|\le \binom{k}{2}-(k-1)$ rounds. If there is no red triangle in the resulting coloured complete graph
on $k$ vertices, then by the Tur\'an theorem at least $\binom{\lfloor k/2\rfloor }{2}+\binom{\lceil k/2\rceil }{2}$ of its edges are blue.
There are also at least $n-k$ blue edges in $E(B)\setminus E(B')$ so after at most
$3n-4+\binom{k}{2}-(k-1)$ rounds
we have a blue graph on $n$ vertices with at least
$n-k+\binom{\lfloor k/2\rfloor }{2}+\binom{\lceil k/2\rceil }{2}\ge m$ edges.
Thus
$$\tilde{r}({\mathcal C}_{\text odd},\mathcal Con_{n,m})\le 3n+\binom{k}{2}-k-3.$$
It follows from the definition of $k$ that $m=n+\frac{k^2}{4}+O(k)$, hence $k=O(\sqrt{m-n+1})$, and therefore
$$\tilde{r}({\mathcal C}_{\text odd},\mathcal Con_{n,m})\le n+2m+O(\sqrt{m-n+1}).$$
\bibliographystyle{amsplain}
\section{Introduction}
Cross-lingual transfer learning provides a way to train a model using a dataset in one or more languages and use this model to make inferences in other languages. This type of transfer learning can benefit applications such as question answering \cite{lee2019cross}, dialogue systems \cite{schuster2018cross}, machine translation \cite{ji2020cross}, named entity recognition \cite{johnson2019cross}, as in all of these applications it is essential to have good representations of words and texts. These representations should be independent of the language and capture high-level semantic relations.
Contextual word embeddings (such as ELMo \cite{peters2018deep}, GPT \cite{radford2018improving}, or BERT \cite{devlin2018bert}) have shown state-of-the-art performance on many NLP tasks. Their performance depends on the availability of a large amount of labeled text data. Recent work with Multilingual BERT (M-BERT) has demonstrated that the model performs well in zero-shot settings \cite{conneau2018xnli}. In this case, only labeled English data are necessary to train the model and use it to make inferences in another language.
Large-scale Multi-label Text Classification (LMTC) is the task of assigning a subset from a collection of thousands of labels to a given document. There are many challenges connected with this task. First, the distribution of labels is usually sparse and follows the power-law distribution. Another challenge is the availability of a large dataset to train a good model that generalizes well to unseen data. Collecting and annotating such datasets is an expensive and cumbersome process; annotators need to read the entire document and check against all available labels to decide which labels to assign to the document. Furthermore, it is very likely that annotators are missing some potentially correct tags.
Cross-lingual transfer learning (CLTL) can mitigate the issue of dataset availability for LMTC tasks by jointly training an LTMC model for several languages. It is also possible to train an LTMC for low-resources languages in zero-shot settings using available data in other languages and then making inferences in the unseen target language.
French and German alongside with English are the main focus of this paper. Ethnologue's method of calculating lexical similarity between languages \cite{rensch1992calculating} shows that English has a lexical similarity of 60\% with German and 27\% with French. Ethnologue's method compares a regionally standardized wordlist and counts those forms that show similarity in both form and meaning.
In this work, we focus on cross-lingual transfer learning for LMTC task, based on JRC-Aquis dataset \cite{steinberger2006jrc} and an extended version of EURLEX57K \cite{chalkidis2019large} dataset. Both datasets contain documents from EurLex, the legal database of the European Enion (EU), and they are annotated using descriptors from the the European Union’s multilingual and multidisciplinary thesaurus EuroVoc. JRC-Aquis is a large parallel corpus of documents available in 25 languages including English, French and German. EURLEX57K is available in English, we extended this dataset to include parallel documents in French and German.
The goal of this work is to start a baseline for LMTC based on these two multilingual datasets which contain parallel documents in English, French and German. We compare between two CLTL settings for this task: (i) a zero-shot setting in which we train a multi-lingual model using the English training set and then we test using the French and German test sets; (ii) a joint training setting in which we train the model using all training data including English, French and German training sets.
The main findings and contributions of this work are: (i) the experiments with multilingual-BERT and multilingual-DistilBERT with gradual unfreezing and language model finetuning (ii) providing a new standardized multilingual dataset for further investigation, (iii) ablation studies to measure the impact and benefits of various training strategies.
The remainder of the paper is organized as follows: After a discussion of related work in Section \ref{sec-relatedworks}, we discuss CLTL (section \ref{sec-cross-lingual}) and multi-lingual datasets (sections \ref{sec-datasets}). Then we present the main methods (BERT, DistilBERT) and strategies for training multi-lingual model in Section \ref{section-methods}. Section \ref{sec_results} contains extensive evaluations of the methods on both datasets as well as ablation studies, and after a discussion of results (Section \ref{sec-discussion}) we conclude the paper in Section \ref{sec-conclusion}.
\section{Related Works}
\label{sec-relatedworks}
In the realm of cross-lingual transfer learning, Eriguchi et al.~\cite{eriguchi2018zero} performed zero-shot binary sentiment classification by reusing an encoder from multilingual neural machine translation; they extended this encoder with a task-specific classifier component to perform text classification in a new language whose training data was not used. On Amazon reviews, their model achieves 73.88\% accuracy on the French test set in zero-shot settings when training using English training data only; including French training data in the training process increases the accuracy on the French test set to 83.10\%. As a result, the zero-shot model obtains 92.8\% of the accuracy achieved after including French training data. \\
Pelicon et al.~\cite{pelicon2020zero} used multilingual BERT to perform zero-shot sentiment classification by training a classifier in Slovene and making inference using texts in other languages. The model trained using the Slovene training set obtains $52.41 \pm2.58$ F1-score on the Croatian test set, however on the Slovene test set its performance reaches $63.39 \pm 2.42$ F1-score.\\
Keung et al.~\cite{keung2019adversarial} improved zero-shot cross-lingual transfer learning for text classification and named entity recognition by incorporating language-adversarial training to extract language-independent representations of the texts and align the embeddings of English documents and their translations. Regarding the classification task, they trained a classifier using English training data of the MLDoc dataset, they report 85.7\% and 88.1\% accuracy on French and German test sets correspondingly after using language-adversarial training.\\
Chalkidis et al.~\cite{chalkidis2019large} published a new EURLEX57K dataset, a dataset of European legal documents in English. Steinberger et al.~\cite{steinberger2006jrc} presented JRC-Acquis, a freely available parallel corpus containing European Union documents. This dataset is available in 20 official EU languages, including English, French, and German.\\
In our previous work~\cite{shaheen2020large}, we used transformer-based pre-trained models (BERT, DistilBERT, RoBERTa, XLNet) to extract high-level vector representations from legal documents. First, we applied Language Model Finetuning (LMFT) to such a model using documents from the training set; the goal here is to improve the quality of the document representations extracted from the model. Then, we extended the previously finetuned model with a classifier. Later, the transformer model and the classifier were jointly trained while gradually unfreezing the layers of the transformer model during training. This approach led to a significant improvement in the quality of the model.
In this work, we experiment with multilingual BERT and multilingual DistilBERT under cross-lingual zero-shot and joint-training transfer settings. We provide ablation studies to measure the impact of various training strategies and heuristics. Moreover, we provide a new standardized multilingual dataset for further investigation by the research community.
\section{Cross-Lingual Transfer Learning}
\label{sec-cross-lingual}
The idea behind Cross-Lingual Transfer Learning (CLTL) in text classification tasks is to use a representation of words or documents extracted using a multilingual model; this representation should be independent of the language and capture high-level semantic and syntactic relations. Through transfer learning, it is possible to train a classifier using a dataset in one or more languages (source languages) and then transfer knowledge to different languages (target languages). This transfer learning approach is well-suited for low-resourced languages and for tasks requiring a lot of data. The performance obtained with CLTL aims to be as close as possible to training the entire system on language-specific resources.
There are different schemes for cross-lingual and multilingual document classification, which can be distinguished by the source and target languages, as well as by the approach to selecting the best model. In the Zero-Shot Learning (ZSL) scheme, the source languages are different from the target languages, and the selection of the best model is performed using a development set from the source languages. In the Target Learning (TL) scheme, the source and target languages do not overlap, but the model selection is performed using the development set of the target languages. In the Joint Learning (JL) scheme, the source and target languages are the same, and the selection is performed using the development set of these languages.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{figs/cross.png}
\caption{Zero-shot Cross-lingual transfer learning.}
\end{figure}
\section{Datasets}
\label{sec-datasets}
In this section, we introduce the multilingual EuroVoc thesaurus used to classify legal documents in both the JRC-Acquis and EURLEX57K datasets. Then we explore the multilingual version of JRC-Acquis V3. We also describe how we extended the EURLEX57K dataset by adding parallel documents available in French and German.
\subsection{EuroVoc Thesaurus}
The EuroVoc thesaurus is a multilingual thesaurus thematically covering many of the activities of the EU. It contains 20 domains, and each domain contains a number of micro-thesauri. Descriptors in EuroVoc are classified under these micro-thesauri, and each descriptor belongs to one or more micro-thesauri. Relations between descriptors are represented using the SKOS ontology\footnote{https://www.w3.org/2004/02/skos}. Hierarchical relations between descriptors are specified with the SKOS \emph{broader} relation.
The \emph{used instead} relation identifies the relation between a descriptor and its replacement.
The SKOS \emph{related} link maps a descriptor to its related descriptors, and the \emph{used for} relation maps each descriptor to its related labels. In total, there are 127 micro-thesauri and 7221 descriptors.
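As a sketch, these SKOS links can be read programmatically, e.g.~with the Python \texttt{rdflib} library; here \texttt{eurovoc.rdf} is a hypothetical local path to a SKOS export of the thesaurus.
\begin{verbatim}
# Sketch: reading EuroVoc SKOS relations with rdflib; eurovoc.rdf is a
# hypothetical local path to a SKOS export of the thesaurus.
from rdflib import Graph, Namespace

SKOS = Namespace("http://www.w3.org/2004/02/skos/core#")
g = Graph()
g.parse("eurovoc.rdf")

broader = list(g.subject_objects(SKOS.broader))  # hierarchical links
related = list(g.subject_objects(SKOS.related))  # associative links
print(len(broader), "broader links,", len(related), "related links")
\end{verbatim}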
\subsection{JRC-Acquis multilingual}
\label{secjrc}
The JRC-Acquis dataset is a smaller dataset with parallel documents in 20 languages; this dataset overlaps with the EURLEX57K dataset and contains additional documents. It is labeled using descriptors from EuroVoc. We selected documents in English, French, and German for our experiments; statistics about this dataset are shown in Table~\ref{tab_stats_jrc}. We do not use unlabeled documents for classifier finetuning; therefore, we do not assign them to any training split and use them only for language model finetuning.
\begin{table}[htbp]
\small
\centering
\caption{JRC-Acquis dataset in English (EN), French (FR) and German (DE). Number of documents in train, development and test sets, in addition to the number of documents with no split and the total number of documents.}
\label{tab_stats_jrc}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Language & train & development & test & no split & total \\\hline
EN&16454&1960&1968&3163&23545\\\hline
FR&16434&1959&1967&3267&23627\\\hline
DE&16363&1957&1965&3256&23541\\\hline
\end{tabular}
\end{table}
\subsection{EURLEX57K multilingual}
\label{seceurlex}
EUR-Lex documents are legal documents from the European Union labeled using the set of EuroVoc thesaurus descriptors.
We collected German and French documents parallel to the documents in the EURLEX57K dataset. We use the CELEX ID from the original EURLEX57K dataset to divide the data into train, development, and test sets. The documents from the parallel corpora are assigned the same splits as in the original monolingual EURLEX57K dataset. Therefore, our final dataset contains parallel texts in 3 languages. Statistics about this dataset are given in Table~\ref{tab_stats_eur}.
\begin{table}[htbp]
\small
\centering
\caption{Multilingual EURLEX57K dataset in English (EN), French (FR) and German (DE). Number of documents in train, development and test sets, in addition to the number of documents with no split and the total number of documents.}
\label{tab_stats_eur}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Language & train & development & test & no split & total \\\hline
EN&44428&5929&5921&24004&80282\\\hline
FR&44427&5929&5921&24452&80729\\\hline
DE&43749&5842&5820&23942&79353\\\hline
\end{tabular}
\end{table}
We extended our dataset by including EUR-Lex documents that are not available in EURLEX57K.
We use these additional documents only for the language model finetuning stage (see Section \ref{sub_sec_training_strategies}), so they do not have a training split and we do not use them in classifier finetuning.
\section{Methods}
\label{section-methods}
In this section we describe the methods used in the ZSL and JT experiments presented in the results section, as well as the multilingual training process.
We also discuss important related points such as language model finetuning and gradual unfreezing.
\subsection{Multilingual Transformer Based Models}
\label{secmultimodels}
\textbf{BERT} is a transformer-based architecture trained using masked language model (MLM) and next sentence prediction (NSP) objectives. In MLM, 15\% of the tokens are randomly masked, and the model tries to predict them from the context. BERT learns rich contextual representations of words and the relations between them.
BERT uses a special [CLS] token for classification, which is added at the beginning of the text by the tokenizer.
The hidden representation of this token in the last BERT layer aggregates the representation of the whole sequence.
BERT appeared in 2019 and has since been successfully applied to many natural language processing and understanding tasks.
In this work, we utilize the multilingual version of BERT, called M-BERT. \\
\textbf{DistilBERT} is a distilled version of BERT; it achieves over 95\% of BERT's performance while having 40\% fewer parameters. In our experiments, we used DistilBERT to select the best training strategy for computationally expensive experiments, and then applied this strategy to M-BERT. We refer to the multilingual version of DistilBERT as M-DistilBERT.
\subsection{Multilingual Training}
To train our multilingual cross-lingual model, we finetune transformer-based models (see Section \ref{secmultimodels}) using multilingual documents from the legal domain (see Sections \ref{seceurlex} and \ref{secjrc}).
The classifier is built upon the document representation produced by the M-BERT and M-DistilBERT models. We pass the representation of the [CLS] token through a fully connected layer and then project the output to a vector whose size equals the number of target classes.\\
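A minimal sketch of this classification head, assuming the Hugging Face \texttt{transformers} library (the model name and the label count of 7221 EuroVoc descriptors are illustrative choices), is given below; multi-label training then uses a per-label sigmoid loss.
\begin{verbatim}
# Sketch: multi-label classifier head on top of the [CLS] representation.
import torch.nn as nn
from transformers import AutoModel

class MultiLabelClassifier(nn.Module):
    def __init__(self, name="bert-base-multilingual-cased", n_labels=7221):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        hidden = self.encoder.config.hidden_size
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, n_labels))

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]   # [CLS] token representation
        return self.head(cls)               # one logit per label

loss_fn = nn.BCEWithLogitsLoss()            # sigmoid per label
\end{verbatim}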
For finetuning the language model, we experimented with different numbers of epochs and different combinations of datasets; an ablation study is found in Section \ref{ablation}.\\
In the ZSL scheme, the classifier is trained using the English part of the dataset; we pick the model configuration with the best F1-score on the English test set and evaluate it on the French and German datasets independently.\\
In the JT scheme, the model is trained by including all the languages in the training and model selection process.
We evaluate the selected model using the test sets in English, French, and German independently.
To evaluate the effect of having parallel languages in the training process, we compare the model trained in the ZSL scheme and the model trained in the JL scheme on the English test set; the results of this ablation study are given in Section \ref{ablation}.
\subsection{Training Strategies}
\label{sub_sec_training_strategies}
In line with Shaheen et al.~\cite{shaheen2020large}, we train multilingual classifiers using the training strategies described below.
The first strategy is \emph{language model finetuning} of the transformer model before using it in classification. Finetuning is done on all training documents, and additionally on unlabeled documents available
in the EurLex database. This step aims at improving the model's representation of legal documents.
Secondly, in \emph{gradual unfreezing}, we freeze all the model's layers except the last few and start by training only those layers.
Later, the number of unfrozen layers is gradually increased during training; a sketch of such a schedule is given below. An ablation study on the effect of these training strategies on multilingual models trained in the ZSL and JT schemes can be found in Section \ref{ablation}.
Both training strategies were proposed by Howard and Ruder \cite{howard2018universal}.
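The following sketch illustrates one such unfreezing schedule; attribute names follow the Hugging Face BERT implementation (\texttt{encoder.layer}), and the classifier structure is the one sketched in the previous subsection.
\begin{verbatim}
# Sketch: gradual unfreezing -- only the last n_open encoder blocks (and
# the classifier head) are trainable; n_open grows during training.
def set_trainable_layers(model, n_open):
    for p in model.encoder.parameters():
        p.requires_grad = False
    blocks = model.encoder.encoder.layer         # transformer blocks
    for block in blocks[len(blocks) - n_open:]:
        for p in block.parameters():
            p.requires_grad = True
    for p in model.head.parameters():
        p.requires_grad = True

# e.g. unfreeze one extra block per epoch:
# for epoch in range(n_epochs):
#     set_trainable_layers(model, n_open=min(epoch + 1, 12))
\end{verbatim}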
\subsection{Baseline}
Shaheen et al.~\cite{shaheen2020large} investigated the performance of various transformer-based models (including BERT, RoBERTa, and DistilBERT), in combination with training strategies such as language model finetuning and gradual unfreezing. The authors report their results on the English parts of the JRC-Acquis and EURLEX57K datasets.
Here, we use these results as a baseline to compare our results on the English part of the datasets.\\
However, to the best of our knowledge, no baseline exists for the text classification using EurLex and JRC-Acquis for French and German,
for which we provide a reference evaluation for both the JT and ZSL schemes.
\subsection{Evaluation}
Following Shaheen et al.~\cite{shaheen2020large}, we use the F1-score as a decision support metric. This metric measures how well the system helps to recommend correct labels; it aims at selecting relevant labels and avoiding irrelevant ones. Precision is the percentage of selected labels that are relevant to the document; its focus is on recommending mostly relevant labels. Recall is the percentage of relevant labels that the system selected; its focus is on not missing relevant labels. The F1-score is the harmonic mean of precision and recall. These metrics have a major drawback: they are targeted at predicting relevant labels regardless of their position in the list of predicted labels, and as a result they are not well suited for applications like recommendation systems.
Shaheen et al.~\cite{shaheen2020large} use additional retrieval measures for evaluation: R-Precision@K (RP@K) and Normalized Discounted Cumulative Gain (nDCG@K). These rank-aware metrics emphasize finding and ranking labels well; they reward placing relevant labels high up in the list of recommendations and penalize late recommendation of relevant labels.
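For concreteness, one common formulation of these metrics for a single document is sketched below; \texttt{ranked} is the list of labels sorted by predicted score and \texttt{gold} is the set of true labels, both illustrative.
\begin{verbatim}
# Sketch: RP@K and nDCG@K for a single document.
import math

def rp_at_k(ranked, gold, k):
    hits = sum(1 for lab in ranked[:k] if lab in gold)
    return hits / min(k, len(gold))

def ndcg_at_k(ranked, gold, k):
    dcg = sum(1 / math.log2(i + 2)
              for i, lab in enumerate(ranked[:k]) if lab in gold)
    ideal = sum(1 / math.log2(i + 2) for i in range(min(k, len(gold))))
    return dcg / ideal if ideal > 0 else 0.0

ranked = ["trade", "fishery", "tax", "energy", "health"]
gold = {"trade", "energy"}
print(rp_at_k(ranked, gold, 5), ndcg_at_k(ranked, gold, 5))  # 1.0 0.877...
\end{verbatim}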
\section{Results}
\label{sec_results}
This section reports the results of multilingual transformer-based models trained in the ZSL scheme (Section~\ref{sec_results_zsl}), the JT scheme (Section~\ref{sec_results_jt}), and the ablation studies (Section~\ref{ablation}).
\subsection{Zero-Shot Results}
\label{sec_results_zsl}
First, we evaluate multilingual transformer-based models (M-BERT and M-DistilBERT) trained in the ZSL scheme to classify French and German texts -- using only English texts as training data.
Table~\ref{tab_zero_shot_jrc} shows the results on the JRC-Acquis dataset, followed by Table~\ref{tab_zero_shot_eurlex} with the results for M-EURLEX57K.
The French and German test sets are evaluated separately.\\
In our experiments M-BERT consistently outperforms M-DistilBERT in the ZSL setting by a large margin across both datasets.
Further, we observe better classification performance for the French datasets than for the respective German datasets, both for M-BERT and M-DistilBERT models.
\begin{table*}[htbp]
\caption{The results of multilingual models (M-BERT, M-DistilBERT) trained in the ZSL scheme using the English part of JRC-Acquis, evaluated on the French (FR) and German (DE) parallel test sets.}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Language & Model & F1-score & RP@3 & RP@5 & nDCG@3 & nDCG@5\\\hline
FR & M-DistilBERT & 0.504 & 0.628 & 0.56 & 0.66 & 0.604\\
FR & M-BERT & \textbf{0.55} & \textbf{0.674} & \textbf{0.604} & \textbf{0.704} & \textbf{0.648}\\\hline\hline
DE & M-DistilBERT & 0.473 & 0.583 & 0.527 & 0.613 & 0.566\\
DE & M-BERT & \textbf{0.519} & \textbf{0.637} & \textbf{0.571} & \textbf{0.667} & \textbf{0.613}\\\hline
\end{tabular}
\label{tab_zero_shot_jrc}
\end{center}
\end{table*}
\begin{table*}[htbp]
\centering
\caption{The results of multilingual models (M-BERT, M-DistilBERT) trained in the ZSL scheme using the English part of the multilingual EURLEX57K dataset, evaluated on the French (FR) and German (DE) test sets.}
\label{tab_zero_shot_eurlex}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Language & Model & F1-score & RP@3 & RP@5 & nDCG@3 & nDCG@5\\\hline
FR & M-DistilBERT & 0.614 & 0.718 & 0.677 & 0.741 & 0.706\\
FR & M-BERT & \textbf{0.67} & \textbf{0.771} & \textbf{0.726} & \textbf{0.795} & \textbf{0.757}\\\hline\hline
DE & M-DistilBERT & 0.594 & 0.7 & 0.652 & 0.723 & 0.683\\
DE & M-BERT & \textbf{0.648} & \textbf{0.751} & \textbf{0.7} & \textbf{0.776} & \textbf{0.733}\\\hline
\end{tabular}
\end{table*}
\subsection{Joint Training Results}
\label{sec_results_jt}
We continue with the evaluation of multilingual transformer-based models (M-BERT and M-DistilBERT) trained in the JT scheme for the English, French, and German languages.
The results of monolingual models (BERT, RoBERTa, and DistilBERT), as reported in Shaheen et al.~\cite{shaheen2020large}, serve as a baseline on the English test set.
\textbf{JRC Acquis:} Table~\ref{tab_jrc_results} presents an overview of the results on JRC-Acquis. We observe that transformer-based models trained on JRC-Acquis in the JT scheme fail to reach the performance of monolingual models on the English test set: the multilingual models achieve about 96.83-98.39\% of the performance of the monolingual baseline models. Interestingly, M-DistilBERT and M-BERT perform similarly according to all metrics, with slightly better performance for M-BERT on F1-score and slightly better performance for M-DistilBERT on the remaining metrics (RP@3, RP@5, nDCG@3, nDCG@5).
\begin{table*}[t]
\centering
\caption{M-BERT and M-DistilBERT results trained in the JT scheme for the JRC Acquis dataset in English (EN), French (FR), and German (DE), plus baseline results of monolingual models (BERT, DistilBERT, RoBERTa) on the English test set.}
\label{tab_jrc_results}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Language & Model & F1-score & RP@3 & RP@5 & nDCG@3 & nDCG@5\\\hline
FR & M-DistilBERT & 0.637 & \textbf{0.766} & 0.692 & \textbf{0.79} & 0.732\\
FR & M-BERT & \textbf{0.642} & 0.763 & \textbf{0.696} & 0.785 & \textbf{0.733}\\\hline\hline
DE & M-DistilBERT & 0.634 & \textbf{0.762} & 0.691 & \textbf{0.787} & \textbf{0.731}\\
DE & M-BERT & \textbf{0.641} & 0.759 & \textbf{0.693} & 0.781 & 0.729\\\hline\hline
EN & M-DistilBERT & 0.638 & 0.768 & 0.697 & 0.794 & 0.737\\
EN & M-BERT & 0.644 & 0.763 & 0.695 & 0.785 & 0.733\\\hline
EN & DistilBERT & 0.652 & 0.78 & 0.711 & 0.805 & 0.75\\
EN & BERT & \textbf{0.661} & 0.784 & 0.715 & 0.803 & 0.750\\
EN & RoBERTa & 0.659 & \textbf{0.788} & \textbf{0.716} & \textbf{0.807} & \textbf{0.753}\\\hline
\end{tabular}
\end{table*}
\textbf{EURLEX57K:} In contrast to JRC-Acquis, for M-EURLEX57K (see Table~\ref{tab_eur_results})
M-BERT achieves similar or slightly better results than RoBERTa (the best baseline model) on all metrics
when comparing the multilingual models to the monolingual baseline.
Also, M-BERT provides an improvement of 1\% over monolingual (English) BERT on all metrics. Although monolingual DistilBERT achieves slightly better results than M-DistilBERT, the two are nearly identical.
\begin{table*}[htbp]
\small
\centering
\caption{M-BERT and M-DistilBERT results trained in the JT scheme for the EURLEX57K dataset in English (EN), French (FR), and German (DE), plus baseline results of monolingual models (BERT, DistilBERT, RoBERTa) on the English test set.}
\label{tab_eur_results}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Language & Model & F1-score & RP@3 & RP@5 & nDCG@3 & nDCG@5\\\hline
FR & M-DistilBERT & 0.754 & 0.846 & 0.803 & 0.864 & 0.829\\
FR & M-BERT & \textbf{0.761} & \textbf{0.851} & \textbf{0.811} & \textbf{0.867} & \textbf{0.833}\\\hline\hline
DE & M-DistilBERT & 0.751 & 0.843 & 0.801 & 0.862 & 0.827\\
DE & M-BERT & \textbf{0.759} & \textbf{0.847} & \textbf{0.807} & \textbf{0.864} & \textbf{0.831}\\\hline\hline
EN & M-DistilBERT & 0.753 & 0.847 & 0.803 & 0.865 & 0.829\\
EN & M-BERT & \textbf{0.761} & \textbf{0.85} & \textbf{0.812} & \textbf{0.867} & \textbf{0.836}\\\hline
EN & DistilBERT & 0.754 & 0.848 & 0.807 & 0.866 & 0.833\\
EN & BERT & 0.751 & 0.843 & 0.805 & 0.859 & 0.828\\
EN & RoBERTa & 0.758 & \textbf{0.85} & \textbf{0.812} & 0.866 & 0.835\\\hline
\end{tabular}
\end{table*}
\subsection{Ablation Studies}
\label{ablation}
In this set of experiments, we study the contributions of different training components and training strategies to the ZSL model -- by excluding some of those components individually or reducing the number of training epochs. We focus on three components:
(i) the use of gradual unfreezing, (ii) the number of unfrozen layers, and (iii) the number of language model finetuning epochs.
In all those experiments, we train the models using the English training data of JRC Acquis, and we test using the French and German test sets.\\
Table~\ref{tab_abl_nogduf} provides a comparison of the evaluation metrics with and without gradual unfreezing. For both French and German, we see a consistent improvement of results when using gradual unfreezing. The relative improvement for French is in the range 38-45\%, and for German in the range 58-70\%. In conclusion, gradual unfreezing is a crucial component for good classification performance of a model trained in the ZSL scheme.\\
Next, we examine the effect of freezing the network layers at the start of training and gradually unfreezing some of the layers during training (Table~\ref{tab_abl_gduf}).
\begin{table*}[htbp]
\small
\centering
\caption{Ablation Study: ZSL M-DistilBERT performance on JRC-Acquis depending on the number of unfrozen layers. Again, we train on the English training set, and test on French and German.}
\label{tab_abl_gduf}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Language & Unfrozen Layers & F1-score & RP@3 & RP@5 & nDCG@3 & nDCG@5\\\hline
FR & Last 2 layers & 0.434 & 0.543 & 0.486 & 0.574 & 0.527\\
FR & Last 3 layers & 0.442 & 0.547 & 0.493 & 0.58 & 0.533\\
FR & Last 4 layers & 0.439 & 0.549 & 0.491 & 0.579 & 0.532\\
FR & Last 5 layers & \textbf{0.455} & \textbf{0.567} & \textbf{0.505} & \textbf{0.597} & \textbf{0.547}\\
FR & All 6 layers & 0.451 & 0.563 & 0.5 & 0.593 & 0.542\\
FR & All 6 layers + EMB & \textbf{0.455} & 0.566 & 0.504 & 0.596 & 0.546\\\hline
DE & Last 2 layers & 0.388 & 0.471 & 0.429 & 0.501 & 0.463\\
DE & Last 3 layers & 0.393 & 0.484 & 0.434 & 0.509 & 0.468\\
DE & Last 4 layers & 0.381 & 0.466 & 0.418 & 0.495 & 0.454\\
DE & Last 5 layers & \textbf{0.395} & \textbf{0.488} & \textbf{0.442} & \textbf{0.516} & \textbf{0.477}\\
DE & All 6 layers & 0.384 & 0.468 & 0.42 & 0.497 & 0.456\\
DE & All 6 layers + EMB & 0.391 & 0.474 & 0.428 & 0.504 & 0.464\\\hline
\end{tabular}
\end{table*}
Gradually unfreezing the last five layers while keeping the first and embedding (EMB) layers frozen achieves the best performance on the French and German test sets.
Unfreezing all layers (including the embedding layer) yields results very close to the best on the French test set, while the gap on the German test set is somewhat larger.\\
In Table~\ref{tab_abl_lmft}, we test the effect of the number of language model finetuning epochs.
On the French test set, one cycle of language model finetuning leads to a relative gain of 18.6-20.48\% compared to no LM finetuning at all. Increasing the number of epochs to 5 and 10 increases the relative gain to 29.6-32.53\% and 32.0-34.94\%, respectively. The difference is much bigger on the German test set: compared to no LM finetuning, the relative gains are 42.82-49.47\%, 70.69-81.49\%, and 76.15-87.54\% for 1, 5, and 10 epochs of LM finetuning, respectively.
\begin{table*}[htbp]
\small
\centering
\caption{Ablation Study: ZSL M-DistilBERT performance on JRC-Acquis depending on the number of language model finetuning cycles (LMFT-cycles) -- with all 6 layers unfrozen and training on the English training set.}
\label{tab_abl_lmft}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Language & \#LMFT-cycles & F1-score & RP@3 & RP@5 & nDCG@3 & nDCG@5\\\hline
FR & 0 & 0.379 & 0.47 & 0.415 & 0.5 & 0.454\\
FR & 1 & 0.451 & 0.563 & 0.5 & 0.593 & 0.542\\
FR & 5 & 0.498 & 0.615 & 0.55 & 0.648 & 0.595\\
FR & 10 & \textbf{0.504} & \textbf{0.628} & \textbf{0.56} & \textbf{0.66} & \textbf{0.604}\\\hline
DE & 0 & 0.267 & 0.32 & 0.281 & 0.348 & 0.313\\
DE & 1 & 0.384 & 0.468 & 0.42 & 0.497 & 0.456\\
DE & 5 & 0.459 & 0.563 & 0.51 & 0.594 & 0.549\\
DE & 10 & \textbf{0.473} & \textbf{0.583} & \textbf{0.527} & \textbf{0.613} & \textbf{0.566}\\\hline
\end{tabular}
\end{table*}
\begin{table*}[htbp]
\small
\centering
\caption{Ablation Study: ZSL M-DistilBERT performance on JRC-Acquis regarding the use of gradual unfreezing (GDUF). We unfreeze 6 layers and train on the English training set.}
\label{tab_abl_nogduf}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Language & GDUF & F1-score & RP@3 & RP@5 & nDCG@3 & nDCG@5\\\hline
FR & False & 0.327 & 0.385 & 0.351 & 0.406 & 0.377\\
FR & True & \textbf{0.451} & \textbf{0.563} & \textbf{0.5} & \textbf{0.593} & \textbf{0.542}\\\hline
DE & False & 0.243 & 0.274 & 0.248 & 0.291 & 0.267\\
DE & True & \textbf{0.384} & \textbf{0.468} & \textbf{0.42} & \textbf{0.497} & \textbf{0.456}\\\hline
\end{tabular}
\end{table*}
\section{Discussion}
\label{sec-discussion}
We included much of the detailed discussion in
the results section (Section \ref{sec_results}), so here we summarize and extend some of the key findings.\\
Comparing the results of the ZSL scheme (Tables~\ref{tab_zero_shot_jrc} and \ref{tab_zero_shot_eurlex}) to the JT scheme (Tables~\ref{tab_jrc_results} and \ref{tab_eur_results}) on the French and German test sets, the experiments show that M-BERT trained in the ZSL scheme reaches about 86\% of the performance of a model trained in the JT scheme. In the same way, M-DistilBERT in the ZSL setting achieves about 79\% of the performance of the JT scheme.
Additionally, the multilingual models (M-BERT, M-DistilBERT) trained in the JT scheme on English, French, and German provide similar performance on their respective test sets (see Tables~\ref{tab_jrc_results} and \ref{tab_eur_results}).
However, when using the ZSL scheme, there is a discrepancy between the French and German results, indicating that the multilingual models can more easily transfer from the English to the French representations (Tables~\ref{tab_zero_shot_jrc} and \ref{tab_zero_shot_eurlex}).
\section{Conclusion}
\label{sec-conclusion}
In this work, we evaluated cross-lingual transfer learning for the LMTC task on the JRC-Acquis dataset and an extended version of the EURLEX57K dataset. We established a baseline for LMTC on these two multilingual datasets, which contain parallel documents in English, French, and German. We also compared two CLTL settings for this task: the zero-shot setting and the joint-training setting.
The main contributions of this work are: (i) the experiments with multilingual BERT and multilingual DistilBERT with gradual unfreezing and language model finetuning, (ii) providing a new standardized multilingual dataset for further investigation, and (iii) ablation studies to measure the impact and benefits of various training strategies on zero-shot and joint-training transfer learning.
There are multiple angles for future work, including potentially deriving higher performance by using hand-picked learning rates and other hyperparameters for each model individually. Moreover, experiments with language adversarial training and various data augmentation techniques are candidates to improve classification performance.
\bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro}
Order structures are ubiquitous in a broad range of physical theories.
Examples of them are
the adiabatic accessibility relation in thermodynamics~\cite{giles1964mathematical,LIEB19991},
the convertibility relation by local operations and classical communication (LOCC) of bipartite quantum states~\cite{RevModPhys.81.865,nielsen_chuang_2010}, the post-processing relation of quantum measurements, or positive operator-valued measures (POVMs)~\cite{Martens1990,Dorofeev1997349,buscemi2005clean,jencova2008,10.1063/1.4934235,10.1063/1.4961516,Guff_2021}, and so on.
These relations are related to (quantum) resource theories~\cite{RevModPhys.91.025001}, which is recently being intensively studied in the area of quantum information.
In thermodynamics, the adiabatic accessibility relation is characterized by a single function, the entropy.
This means that a thermodynamic state is adiabatically convertible to another one if and only if the entropy does not decrease.
From this, we can see that the adiabatic accessibility relation is a total order.
On the other hand, the LOCC convertibility relation of bipartite pure states is not a total order, while the well-known characterization of this relation by the majorization order (\cite{nielsen_chuang_2010}, Theorem~12.15) suggests that we still have a finite number of order monotones (i.e.\ order-preserving functions) that characterize the order.
What about the case of the post-processing relation of POVMs?
Recently it has been shown~\cite{PhysRevLett.122.140403,Guff_2021,kuramochi2020compact} that there are uncountably many order monotones, the state discrimination probabilities, that characterize the post-processing order (see Proposition~\ref{prop:bss}).
With these in mind, the authors of \cite{Guff_2021} asked whether we can characterize the post-processing order of POVMs by a \textit{finite} number of order monotones, as in the case of the LOCC convertibility relation of bipartite pure states.
In this paper we give a negative answer to this open question.
More strongly, we show that no choice of finitely many order monotones or total orders can characterize the post-processing relation of measurements on any non-trivial generalized probabilistic theory (GPT)~\cite{1751-8121-47-32-323001,ddd.uab.cat:187745,kuramochi2020compact}, which is a general framework of physical theories and contains the quantum and classical theories as special examples.
Moreover, we demonstrate that a \textit{countable} number of order monotones characterizing the post-processing order exists if the state space is separable in the norm topology.
We also prove a similar statement for the post-processing relation of quantum channels with a fixed input Hilbert space.
The main theorems (Theorems~\ref{thm:main1} and \ref{thm:main2}) and their proofs are based on the notion of the \textit{order dimension} (Definition~\ref{def:dim}) known in the area of order theory~\cite{Dushnik1941,Hiraguti1955,BA18230795}.
The order dimension of an order is defined as the minimum number of total orders that characterize the order and roughly quantifies the complexity of the order or its deviation from a simple total order.
In the main part, we also introduce another related quantity called the \textit{order monotone dimension} of an order (Definition~\ref{def:dim}).
This is defined as the minimum number of order monotones that characterize the order and directly connected to the open question in \cite{Guff_2021}.
The order monotone dimension is shown to be always greater than or equal to the order dimension (see Lemma~\ref{lemm:leq}).
The present work is the first attempt to evaluate these dimensions of orders appearing in quantum information.
The rest of this paper is organized as follows.
In Section~\ref{sec:prel} we introduce some preliminaries and definitions.
In Section~\ref{sec:main} we state and prove the main theorems (Theorems~\ref{thm:main1} and \ref{thm:main2}).
We then conclude the paper in Section~\ref{sec:conclusion}.
\section{Preliminaries} \label{sec:prel}
In this section, we give preliminaries on the order theory and post-processing relations of measurements and fix the notation.
In this paper, we denote by $\mathbb{N} = \set{1,2, \dots}$, $\mathbb{R}$, and $\mathbb{C}$ the sets of natural, real, and complex numbers, respectively.
For each $m \in \mathbb{N} $, we write as $\mathbb{N}_m := \{ 0,1,\dots , m-1 \}$.
The vector spaces in this paper are over $\mathbb{R}$ unless otherwise stated.
\subsection{Order theory} \label{subsec:ord}
In this paper we identify each binary relation $R$ on a set $S$ with its graph, that is, $R$ is the subset of $S \times S$ consisting of pairs satisfying the relation $R .$
For a binary relation $R \subseteq S \times S ,$ the relation $(x,y) \in R $ $(x,y \in S)$ is occasionally written as $x R y ,$ which is consistent with the common notation.
If $A$ is a subset of $S$ and $R$ is a binary relation on $S ,$ we define the \textit{restriction} of $R$ to $A$ by $ R\rvert_A := R \cap (A \times A) . $
Let $R$ be a binary relation on a set $S .$
We consider following conditions for $R.$
\begin{itemize}
\item
$R$ is \textit{reflexive} $:\stackrel{\mathrm{def.}}{\Leftrightarrow}$ $x R x$ $(\forall x \in S).$
\item
$R$ is \textit{symmetric} $:\stackrel{\mathrm{def.}}{\Leftrightarrow}$ $x R y$ implies $yRx$ $(\forall x ,y \in S).$
\item
$R$ is \textit{antisymmetric} $:\stackrel{\mathrm{def.}}{\Leftrightarrow}$ $x R y$ and $yRx$ imply $x=y$ $(\forall x ,y \in S).$
\item
$R$ is \textit{transitive} $: \stackrel{\mathrm{def.}}{\Leftrightarrow}$ $xRy$ and $yRz$ imply $xRz$ $(\forall x ,y, z \in S).$
\item
$R$ is \textit{total} $:\stackrel{\mathrm{def.}}{\Leftrightarrow}$ either $xRy$ or $yRx$ holds $(\forall x, y \in S) .$
\end{itemize}
A reflexive, transitive binary relation is called a \textit{preorder}.
An antisymmetric preorder is called a \textit{partial order}.
A total partial order is just called a \textit{total order} (or a \textit{linear order}).
If $R$ is a partial order on a set $S ,$ $(S,R)$ is called a \textit{partially ordered set}, or a \textit{poset}.
A poset $(S, R)$ is called a \textit{chain} if $R$ is a total order.
If $P$ is a preorder on $S ,$ then the relation $\sim$ defined via
\[
x \sim y : \stackrel{\mathrm{def.}}{\Leftrightarrow} \text{$xPy$ and $yPx$} \quad (x,y \in S)
\]
is an equivalence relation (i.e.\ a reflexive, symmetric, and transitive relation).
If we denote by $[x]$ the equivalence class to which $x \in S$ belongs, then we may define a binary relation $R$ on the quotient space $S/ \mathord\sim$ via
\[
[x] R [y] :\stackrel{\mathrm{def.}}{\Leftrightarrow} x Py
\]
and $R$ is a partial order on $S/ \mathord\sim$.
A binary relation $P$ on $S$ is said to be an \textit{extension} of a binary relation $R$ on $S$ if $R \subseteq P ,$ i.e.\ $xRy$ implies $xPy$ for every $x,y \in S .$
If $L$ is an extension of a binary relation $R $ and is a total order, then $L$ is called a \textit{linear extension} of $R$.
A family $\mathcal{L}$ of binary relations on a set $S$ is said to \textit{realize} a binary relation $R$ on $S$ (or \textit{realize} $(S,R)$) if $R = \bigcap \mathcal{L} ,$
i.e.\ for every $x,y \in S ,$
\[
x R y \iff [ x L y \quad (\forall L \in \mathcal{L})].
\]
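For example, the trivial partial order on a two-element antichain $\{x , y\}$, consisting only of the pairs $(x,x)$ and $(y,y)$, is realized by the pair of linear orders $L_1 , L_2$ with $x L_1 y$ and $y L_2 x$.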
Let $(S , \mathord\preceq)$ be a poset.
A real-valued function $f \colon S \to \mathbb{R}$ is called an \textit{order monotone}, or a \textit{resource monotone}, (with respect to $\mathord\preceq$) if for every $x,y \in S$
\begin{equation}
x \preceq y \implies f(x) \leq f(y) .
\notag
\end{equation}
A family $\mathcal{F}$ of real-valued functions on $S$ is said to \textit{characterize}
(or to be a \textit{complete family} of) $(S, \mathord\preceq) $ if for every $x,y \in S$
\begin{equation}
x \preceq y \iff [f(x) \leq f(y) \quad (\forall f \in \mathcal{F} )] .
\label{eq:condiF}
\end{equation}
If \eqref{eq:condiF} holds, then each $f\in \mathcal{F}$ is necessarily an order monotone.
The following notions of the order and order monotone dimensions play the central role in this paper.
\begin{defi} \label{def:dim}
Let $(S , \mathord\preceq)$ be a poset.
\begin{enumerate}
\item
By the \textit{order dimension} \cite{Dushnik1941,Hiraguti1955} of $(S , \mathord\preceq) ,$ written as $\dim_{\mathrm{ord}} (S , \mathord\preceq ) ,$ we mean the minimum cardinality $|\mathcal{L} |$ of a family $\mathcal{L}$ of linear extensions of $\mathord\preceq$ that realizes $(S ,\mathord\preceq ).$
Here $|A|$ denotes the cardinality of a set $A .$
\item
By the \textit{order monotone dimension} of $(S , \mathord\preceq) ,$ written as $\dim_{\mathrm{ord}, \, \realn} (S , \mathord\preceq ) ,$ we mean the minimum cardinality $|\mathcal{F} |$ of a family $\mathcal{F}$ of order monotones on $S$ that characterizes $(S, \mathord\preceq ) .$
\end{enumerate}
\end{defi}
The order monotone dimension of a poset $(S , \mathord\preceq)$ is well-defined.
Indeed, according to \cite{COECKE201659} (Proposition~5.2), if we define the order monotone
\begin{equation}
M_a (x) :=
\begin{cases}
1 & (\text{when $a \preceq x$}); \\
0 &(\text{otherwise})
\end{cases}
\quad
(x \in S)
\notag
\end{equation}
for each $a\in S ,$
then the family $\{ M_a \}_{a \in S}$ characterizes $(S ,\preceq ) .$
Hence $\dim_{\mathrm{ord}, \, \realn} (S , \mathord\preceq)$ is well-defined and at most $|S| .$
The well-definedness of the order dimension is proved in \cite{Dushnik1941} (Theorem~2.32) by using the Szpilrajn extension theorem \cite{SzpilrajnSurLD}.
We also note that, since the cardinals are well-ordered, we can always take a family $\mathcal{F}$ of monotones that characterizes $\preceq$ and the cardinality $|\mathcal{F}|$ is minimum, i.e.\ $|\mathcal{F} | = \dim_{\mathrm{ord}, \, \realn} (S , \mathord\preceq) .$
A similar statement for the order dimension is also true.
We will prove in Lemma~\ref{lemm:leq} that the order monotone dimension is always greater than or equal to the order dimension.
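For a finite poset, the characterization by the family $\{ M_a \}_{a \in S}$ can be checked directly; the following small sketch does so for an illustrative three-element poset.
\begin{verbatim}
from itertools import product

# An illustrative three-element poset with a <= b and a <= c.
S = {"a", "b", "c"}
leq = {("a","a"), ("b","b"), ("c","c"), ("a","b"), ("a","c")}

def M(a, x):   # the order monotone M_a discussed above
    return 1 if (a, x) in leq else 0

# {M_a}_{a in S} characterizes the order:
for x, y in product(S, S):
    assert ((x, y) in leq) == all(M(a, x) <= M(a, y) for a in S)
\end{verbatim}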
\subsection{General probabilistic theory} \label{subsec:gpt}
A general probabilistic theory (GPT) with the no-restriction hypothesis~\cite{PhysRevA.87.052131} is mathematically described by the following notion of the base-norm Banach space.
\begin{defi} \label{def:gpt}
A triple $(V, V_+ , \Omega)$ is called a \textit{base-norm Banach space} if the following conditions hold.
\begin{enumerate}
\item
$V$ is a real vector space.
\item
$V_+$ is a positive cone of $V$, i.e.\ $\lambda V_+ \subseteq V_+$ $(\forall \lambda \in [0,\infty))$, $V_+ + V_+ \subseteq V_+$, and $V_+ \cap (-V_+) = \{ 0\}$ hold.
We define the linear order on $V$ induced from $V_+$ by
\begin{equation*}
x \leq y : \stackrel{\mathrm{def.}}{\Leftrightarrow} y-x \in V_+ \quad (x,y \in V) .
\end{equation*}
\item
$V_+$ is generating, i.e.\ $V = V_+ + (-V_+)$.
\item
$\Omega$ is a base of $V_+$, i.e.\ $\Omega$ is a convex subset of $V_+$ and for every $x \in V_+$ there exists a unique $\lambda \in [0,\infty )$ such that $x \in \lambda \Omega $.
\item
We define the base-norm on $V$ by
\begin{equation*}
\| x \| := \inf \set{\alpha + \beta | x = \alpha \omega_1 + \beta \omega_2 ; \, \alpha , \beta \in [0,\infty ); \, \omega_1 , \omega_2 \in \Omega}
\quad (x \in V).
\end{equation*}
We require that the base-norm $\| \cdot \|$ is a complete norm on $V$.
\end{enumerate}
If these conditions are satisfied, $\Omega$ is called a \textit{state space} and each element $\omega \in \Omega$ is called a \textit{state}.
\end{defi}
The reader can find in \cite{ddd.uab.cat:187745} (Chapter~1) how the notion of the base-norm Banach space is derived from operationally natural requirements on the GPT.
Let $(V , V_+ , \Omega)$ be a base-norm Banach space.
We denote by $V^\ast$ the continuous dual of $V$ (i.e.\ the set of norm-continuous real linear functionals on $V$) equipped with the dual norm
\begin{equation*}
\| f\| := \sup_{x \in V , \, \| x\| \leq 1} |f(x)| \quad (f\in V^\ast) .
\end{equation*}
For each $f \in V^\ast$ and each $x \in V$ we occasionally write as $\braket{f,x} := f(x)$.
The dual positive cone $V^\ast_+$ of $V^\ast$ is defined by
\begin{equation*}
V^\ast_+ := \set{f \in V^\ast | \braket{f,x} \geq 0 \, (\forall x \in V_+)}
\end{equation*}
and the dual linear order by
\begin{equation*}
f \leq g : \stackrel{\mathrm{def.}}{\Leftrightarrow} g - f \in V^\ast_+ \Leftrightarrow [f(x) \leq g(x) \quad (\forall x \in V_+)] \qquad (f,g \in V^\ast).
\end{equation*}
It can be shown that there exists a unique positive element, called the unit element, $u_\Omega \in V^\ast_+$ such that $\braket{u_\Omega,\Omega} = 1.$
Then the dual norm on $V^\ast$ coincides with the order unit norm of $u_\Omega$:
\begin{equation*}
\| f \| =
\inf \set{\lambda \in [0,\infty) | - \lambda u_\Omega \leq f \leq \lambda u_\Omega}
\quad (f \in V^\ast).
\end{equation*}
An element $e \in V^\ast$ satisfying $0 \leq e \leq u_\Omega$ is called an \textit{effect} (on $\Omega$).
The set of effects on $\Omega$ is denoted by $\mathcal{E}(\Omega)$.
In the main part of the paper, we will consider the following examples of quantum and classical theories.
\begin{exam}[Quantum theory] \label{ex:quantum}
Let $\mathcal{H}$ be a complex Hilbert space.
We write the inner product of $\mathcal{H}$ as $\braket{ \cdot | \cdot }$ which is antilinear and linear in the first and second components, respectively, and the complete norm as $\| \psi \| := \braket{\psi | \psi}^{1/2} .$
The sets of bounded and trace-class linear operators on $\mathcal{H}$ are denoted by $\mathbf{B}(\mathcal{H})$ and $\mathbf{T}(\mathcal{H}) ,$ respectively.
The self-adjoint and positive parts of these sets are defined by
\begin{gather*}
\mathbf{B}_{\mathrm{sa}}(\mathcal{H}) := \set{a \in \mathbf{B}(\mathcal{H}) | a = a^\ast} , \\
\mathbf{B}_+(\mathcal{H}) := \set{a \in \mathbf{B}(\mathcal{H}) | \braket{\psi | a \psi} \geq 0 \, (\forall \psi \in \mathcal{H})}, \\
\mathbf{T}_{\mathrm{sa}}(\mathcal{H}) := \set{a \in \mathbf{T}(\mathcal{H}) | a = a^\ast}, \\
\mathbf{T}_+(\mathcal{H}) := \mathbf{T}(\mathcal{H}) \cap \mathbf{B}_+(\mathcal{H}) ,
\end{gather*}
where $a^\ast$ denotes the adjoint operator of $a \in \mathbf{B}(\mathcal{H})$.
The uniform and the trace norms are respectively defined by
\begin{gather*}
\| a \| := \sup_{\psi \in \mathcal{H} , \, \| \psi \| \leq 1} \| a \psi \|
\quad (a \in \mathbf{B}(\mathcal{H})) ,
\\
\| b \|_1 : = \mathrm{tr} (\sqrt{b^\ast b}) \quad (b \in \mathbf{T}(\mathcal{H})),
\end{gather*}
where $\mathrm{tr}(\cdot)$ denotes the trace.
A non-negative trace-class operator $\rho$ satisfying the normalization condition $\mathrm{tr} (\rho) =1$ is called a density operator.
The set of density operators on $\mathcal{H}$ is denoted by $\mathbf{D}(\mathcal{H})$.
In the GPT framework in Definition~\ref{def:gpt}, the quantum theory corresponds to the case
$(V,V_+ , \Omega) = (\mathbf{T}_{\mathrm{sa}}(\mathcal{H}) , \mathbf{T}_+(\mathcal{H}) , \mathbf{D}(\mathcal{H}))$.
The base-norm on $\mathbf{T}_{\mathrm{sa}}(\mathcal{H})$ then coincides with the trace norm.
The continuous dual space $\mathbf{T}_{\mathrm{sa}}(\mathcal{H})^\ast$, the dual norm on it, the dual positive cone $\mathbf{T}_{\mathrm{sa}}(\mathcal{H})^\ast_+$, and the unit element $u_{\mathbf{D}(\mathcal{H})}$ are respectively identified with $\mathbf{B}_{\mathrm{sa}}(\mathcal{H})$, the uniform norm, $\mathbf{B}_+(\mathcal{H})$, and the identity operator $\mathbbm{1}_{\mathcal{H}}$ on $\mathcal{H}$ by the duality
\begin{equation*}
\braket {a , b} = \mathrm{tr} (ab) \quad (a \in \mathbf{B}_{\mathrm{sa}}(\mathcal{H}) ; \, b \in \mathbf{T}_{\mathrm{sa}}(\mathcal{H})).
\end{equation*}
By this duality, we identify $\mathbf{T}_{\mathrm{sa}}(\mathcal{H})^\ast$ with $\mathbf{B}_{\mathrm{sa}}(\mathcal{H})$.
\end{exam}
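As a small numerical illustration of these norms and the duality pairing for $\mathcal{H} = \mathbb{C}^2$ (an illustrative computation, not taken from the references):
\begin{verbatim}
import numpy as np

# For self-adjoint matrices the trace norm is the sum of the absolute
# eigenvalues and the uniform norm is their maximum.
b = np.array([[0.5, 0.2], [0.2, -0.5]])  # element of T_sa(C^2)
a = np.array([[1.0, 0.0], [0.0, -1.0]])  # element of B_sa(C^2)

trace_norm = np.abs(np.linalg.eigvalsh(b)).sum()
uniform_norm = np.abs(np.linalg.eigvalsh(a)).max()
pairing = np.trace(a @ b)                # the duality <a, b> = tr(ab)
print(trace_norm, uniform_norm, pairing)
\end{verbatim}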
\begin{exam}[Discrete classical theory] \label{ex:classical}
Let $X$ be a non-empty set.
We define $\ell^1(X)$ and $\ell^\infty(X)$ and their positive parts by
\begin{gather*}
\ell^1(X) := \set{f = (f(x))_{x\in X} \in \mathbb{R}^X | \| f\|_1 < \infty}, \\
\ell^\infty(X) := \set{f = (f(x))_{x\in X} \in \mathbb{R}^X | \| f\|_\infty < \infty}, \\
\ell^1_+(X) := \set{(f(x))_{x\in X} \in \ell^1(X) | f(x) \geq 0 \, (\forall x \in X)}, \\
\ell^\infty_+(X) := \set{(f(x))_{x\in X} \in \ell^\infty (X) | f(x) \geq 0 \, (\forall x \in X)},
\end{gather*}
where for $f \in \mathbb{R}^X$
\begin{gather*}
\|f \|_1 := \sum_{x\in X} | f(x) |, \\
\| f \|_\infty := \sup_{x\in X} |f(x)|
\end{gather*}
are respectively the $\ell^1$- and the $\ell^\infty$-norms.
We also define the simplex of the probability distributions on $X$ by
\begin{equation*}
\mathcal{P} (X) := \Set{(p (x))_{x\in X} \in \ell^1_+(X) | \sum_{x\in X} p(x) =1 } .
\end{equation*}
In the framework of the GPT in Definition~\ref{def:gpt}, a discrete classical theory corresponds to the case $(V,V_+, \Omega) = (\ell^1 (X) , \ell^1_+(X), \mathcal{P}(X))$.
The base-norm on $\ell^1(X)$ coincides with the $\ell^1$-norm $\| \cdot \|_1$.
The continuous dual $\ell^1(X)^\ast$, the dual norm on $\ell^1(X)^\ast$, the dual positive cone $\ell^1(X)^\ast_+$, and the unit element $u_{\mathcal{P}(X)}$ are respectively identified with $\ell^\infty(X)$, the $\ell^\infty$-norm $\| \cdot \|_\infty$, $\ell^\infty_+(X)$, and the constant function
$
1_X := (1)_{x\in X}
$
by the duality
\begin{equation*}
\braket{f,g} = \sum_{x\in X} f(x) g(x) \quad (f\in \ell^\infty (X) ; \, g \in \ell^1(X)) .
\end{equation*}
\end{exam}
\subsection{Post-processing relation of measurements on a GPT} \label{subsec:gptpost}
Now we fix a base-norm Banach space $(V, V_+ , \Omega)$ corresponding to a GPT.
For a natural number $m \in \mathbb{N}$, a finite sequence $(\mathsf{M}(k))_{k =0}^{m-1} \in (V^\ast)^m$ is called an \textit{($m$-outcome) effect-valued measure} (EVM) (on $\Omega$) if $\mathsf{M}(k) \geq 0$ $(k\in \mathbb{N}_m)$ and $\sum_{k =0}^{m-1}\mathsf{M}(k) = u_\Omega $ hold.
We denote by $\mathrm{EVM}_m (\Omega) $ the set of $m$-outcome EVMs.
We also define $\mathrm{EVM}_{\mathrm{fin}}(\Omega) := \bigcup_{m \in \mathbb{N}} \mathrm{EVM}_m (\Omega)$, which is the set of finite-outcome EVMs on $\Omega$.
For each EVM $\mathsf{M} = (\mathsf{M}(k))_{k =0}^{m-1}$ and each state $\omega \in \Omega$, the sequence $(p^\mathsf{M}_\omega (k))_{k =0}^{m-1}$ defined by
\begin{equation*}
p^\mathsf{M}_\omega (k) := \braket{\mathsf{M}(k) , \omega} \quad (k \in \mathbb{N}_m)
\end{equation*}
is a probability distribution.
In the physical context, $p^\mathsf{M}_\omega$ is the outcome probability distribution of the measurement $\mathsf{M}$ when the state of the system is prepared to be $\omega$.
Let $\mathsf{M} = (\oM(j))_{j=0}^{m-1}$ and $\mathsf{N} = (\oN(k))_{k=0}^{n-1}$ be EVMs on $\Omega$.
$\mathsf{M}$ is said to be a \textit{post-processing} of (or, \textit{less or equally informative} than) $\mathsf{N}$ \cite{Martens1990,Dorofeev1997349,buscemi2005clean,jencova2008,10.1063/1.4934235,10.1063/1.4961516}, written as $\mathsf{M} \preceq_{\mathrm{post}} \mathsf{N} $, if there exists a matrix $(p(j|k))_{j \in \mathbb{N}_m , \, k \in \mathbb{N}_n}$ such that
\begin{gather}
p(j|k) \geq 0 \quad (j \in \mathbb{N}_m , \, k \in \mathbb{N}_n) ,
\label{eq:MK1}
\\
\sum_{j=0}^{m-1} p(j|k) =1 \quad ( k \in \mathbb{N}_n) ,
\label{eq:MK2}
\\
\mathsf{M}(j) = \sum_{k=0}^{n-1} p(j|k) \mathsf{N}(k) \quad (j \in \mathbb{N}_m) .
\notag
\end{gather}
A matrix $(p(j|k))_{j \in \mathbb{N}_m , \, k \in \mathbb{N}_n}$ satisfying \eqref{eq:MK1} and \eqref{eq:MK2} is called a \textit{Markov matrix}.
The relation $\mathsf{M} \preceq_{\mathrm{post}} \mathsf{N} $ means that the measurement $\mathsf{M}$ is realized if we first perform $\mathsf{N} ,$ which gives a measurement outcome $k \in \mathbb{N}_n$, then randomly generate $j \in \mathbb{N}_m$ according to the probability distribution $(p(j|k))_{ j \in \mathbb{N}_m }$, forget the original measurement outcome $k$, and finally record $j$ as the measurement outcome.
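For concrete finite-outcome EVMs, deciding whether $\mathsf{M} \preceq_{\mathrm{post}} \mathsf{N}$ holds is a linear feasibility problem in the entries of the Markov matrix. The following minimal sketch checks the relation for two illustrative EVMs on the classical bit of Example~\ref{ex:classical}, assuming \texttt{scipy} is available.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

# Two illustrative EVMs on the classical bit; rows are effects in R^2.
N = np.array([[0.8, 0.1], [0.2, 0.9]])
M = np.array([[0.9, 0.55], [0.1, 0.45]])
m, n, d = M.shape[0], N.shape[0], N.shape[1]

# Feasibility LP for p(j|k) >= 0 with
#   M(j) = sum_k p(j|k) N(k)  and  sum_j p(j|k) = 1.
A_eq, b_eq = [], []
for j in range(m):
    for i in range(d):
        row = np.zeros(m * n)
        row[j * n:(j + 1) * n] = N[:, i]
        A_eq.append(row); b_eq.append(M[j, i])
for k in range(n):
    row = np.zeros(m * n)
    row[k::n] = 1.0
    A_eq.append(row); b_eq.append(1.0)

res = linprog(np.zeros(m * n), A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
print("M <=_post N:", res.status == 0)  # status 0: a Markov matrix exists
\end{verbatim}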
We also say that $\mathsf{M}$ and $\mathsf{N}$ are \textit{post-processing equivalent} (or \textit{equally informative}), written as $\mathsf{M} \sim_{\mathrm{post}} \mathsf{N} $, if both $\mathsf{M} \preceq_{\mathrm{post}} \mathsf{N}$ and $\mathsf{N} \preceq_{\mathrm{post}} \mathsf{M}$ hold.
The binary relations $\preceq_{\mathrm{post}}$ and $\sim_{\mathrm{post}}$ are respectively preorder and equivalence relations on $\mathrm{EVM}_{\mathrm{fin}}(\Omega) .$
We write as $\mathfrak{M}_{\mathrm{fin}}(\Omega) := \mathrm{EVM}_{\mathrm{fin}}(\Omega) / \mathord\sim_{\mathrm{post}} $
and, for each $\mathsf{M} \in \mathrm{EVM}_{\mathrm{fin}}(\Omega) ,$ denote by $[\mathsf{M}]$ the equivalence class to which $\mathsf{M}$ belongs.
We define the binary relation $\preceq_{\mathrm{post}}$ on $\mathfrak{M}_{\mathrm{fin}}(\Omega)$ by
\begin{equation*}
[\mathsf{M}] \preceq_{\mathrm{post}} [\mathsf{N}] :\stackrel{\mathrm{def.}}{\Leftrightarrow} \mathsf{M} \preceq_{\mathrm{post}} \mathsf{N} \quad ([\mathsf{M}] , [\mathsf{N}] \in \mathfrak{M}_{\mathrm{fin}}(\Omega)) .
\end{equation*}
Then $(\mathfrak{M}_{\mathrm{fin}}(\Omega) , \mathord\pp )$ is a poset.
An EVM $\mathsf{M} = (\mathsf{M}(k))_{k =0}^{m-1}$ is called \textit{trivial} if each component $\mathsf{M}(k)$ is proportional to the unit $u_\Omega .$
The equivalence class $[\mathsf{M}]$ of a trivial EVM $\mathsf{M}$ is the minimum element of the poset $(\mathfrak{M}_{\mathrm{fin}}(\Omega) , \preceq_{\mathrm{post}})$, i.e.\ $[\mathsf{M}] \preceq_{\mathrm{post}} [\mathsf{N}]$ for all $[\mathsf{N}] \in \mathfrak{M}_{\mathrm{fin}}(\Omega)$.
The post-processing relation $\preceq_{\mathrm{post}}$ on $\mathfrak{M}_{\mathrm{fin}}(\Omega)$ is characterized by the state discrimination probabilities defined as follows.
A finite sequence $\mathcal{E} = (\rho_k)_{k=0}^{N-1} \in V_+^N$ of non-negative elements in $V$ is called an \textit{ensemble} (on $\Omega$) if the normalization condition $\sum_{k=0}^{N-1} \braket{u_\Omega , \rho_k}=1$ holds.
For an ensemble $\mathcal{E} = (\rho_k)_{k=0}^{N-1}$ and an EVM $\mathsf{M} = (\oM(j))_{j=0}^{m-1}$ on $\Omega$,
we define the \textit{state discrimination probability} by
\begin{equation}
P_\mathrm{g} (\mathcal{E} ; \mathsf{M})
:= \sup_{\text{$(p(k|j))_{k \in \mathbb{N}_N, \, j \in \mathbb{N}_m}$: Markov matrix}}
\sum_{k \in \mathbb{N}_N , \, j \in \mathbb{N}_m} p(k|j) \braket{ \mathsf{M} (j) ,\rho_k}
\label{eq:Pgdef}
\end{equation}
The ensemble $\mathcal{E} = (\rho_k)_{k=0}^{N-1}$ corresponds to the situation in which the system is prepared in the state $\braket{u_\Omega , \rho_k}^{-1}\rho_k$ with the probability $\braket{u_\Omega , \rho_k}.$
The state discrimination probability \eqref{eq:Pgdef} is the optimum probability of the correct guessing of the index $k$ of the ensemble when we are given the measurement outcome $j$ of $\mathsf{M} .$
The maximum of the optimization problem in the RHS of \eqref{eq:Pgdef} is attained when $p(k|j) = \delta_{k , k(j)} ,$ where $k(j)$ is chosen so that $k(j) \in \arg \max_{k \in \mathbb{N}_N} \braket{\mathsf{M}(j), \rho_k}$, i.e.\ when the maximum likelihood estimation is adopted.
The optimal value is then given by
\begin{equation}
P_\mathrm{g} (\mathcal{E} ; \mathsf{M}) = \sum_{j \in \mathbb{N}_m} \max_{k \in \mathbb{N}_N} \braket{\mathsf{M}(j) , \rho_k}
\label{eq:mle}
\end{equation}
(\cite{kuramochi2020compact}, Lemma~5).
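As an illustration of \eqref{eq:mle}, the following sketch evaluates the state discrimination probability for an illustrative two-state ensemble and a two-outcome EVM on the classical bit.
\begin{verbatim}
import numpy as np

# Illustrative ensemble (rho_k) with sum_k <u, rho_k> = 1 and a
# two-outcome EVM on the classical bit, both written as vectors in R^2.
rho = np.array([[0.3, 0.2], [0.2, 0.3]])  # rows: rho_0, rho_1
M = np.array([[0.8, 0.1], [0.2, 0.9]])    # rows: M(0), M(1)

probs = M @ rho.T              # probs[j, k] = <M(j), rho_k>
P_g = probs.max(axis=1).sum()  # sum_j max_k <M(j), rho_k>
print(P_g)                     # 0.57 for this example
\end{verbatim}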
The following Blackwell-Sherman-Stein (BSS) theorem for EVMs~\cite{PhysRevLett.122.140403,Guff_2021,kuramochi2020compact} can be regarded as the generalization of the corresponding BSS theorem for statistical experiments known in mathematical statistics~\cite{lecam1986asymptotic,torgersen1991comparison}.
\begin{prop}[BSS theorem for EVMs] \label{prop:bss}
Let $(V, V_+ , \Omega)$ be a base-norm Banach space and let $\mathsf{M}$ and $\mathsf{N}$ be finite-outcome EVMs on $\Omega$.
Then $\mathsf{M} \preceq_{\mathrm{post}} \mathsf{N}$ if and only if $P_\mathrm{g} (\mathcal{E} ; \mathsf{M}) \leq P_\mathrm{g} (\mathcal{E}; \mathsf{N})$ for all ensembles $\mathcal{E}$ on $\Omega$.
\end{prop}
The proof of the BSS theorem for EVMs on a (possibly infinite-dimensional) GPT can be found in \cite{kuramochi2020compact} (Theorem~1).
Proposition~\ref{prop:bss} implies that the functions
\begin{equation}
\mathfrak{M}_{\mathrm{fin}}(\Omega) \ni [\mathsf{M}] \mapsto P_\mathrm{g} (\mathcal{E} ; \mathsf{M}) =: P_\mathrm{g} (\mathcal{E} ; [\mathsf{M}]) \in \mathbb{R}
\quad
(\text{$\mathcal{E}$: ensemble})
\notag
\end{equation}
are well-defined and the family
\[
\set{P_\mathrm{g} (\mathcal{E} ; \cdot) \mid \text{$\mathcal{E}$ is an ensemble on $\Omega$}}
\]
characterizes the poset $(\mathfrak{M}_{\mathrm{fin}}(\Omega) , \mathord\pp) .$
\subsection{Post-processing relation of quantum channels} \label{subsec:cp}
Let $\mathcal{H}$ and $\mathcal{K}$ be complex Hilbert spaces and let $\Gamma \colon \mathbf{B}(\mathcal{K}) \to \mathbf{B}(\mathcal{H})$ be a complex linear map.
We have the following definitions.
\begin{itemize}
\item
$\Gamma$ is \textit{unital} $:\stackrel{\mathrm{def.}}{\Leftrightarrow}$ $\Gamma (\mathbbm{1}_{\mathcal{K}}) = \mathbbm{1}_{\mathcal{H}}$.
\item
$\Gamma$ is \textit{positive} $:\stackrel{\mathrm{def.}}{\Leftrightarrow}$ $a \geq 0$ implies $\Gamma (a) \geq 0$ for any $a \in \mathbf{B}(\mathcal{K}) .$
\item
$\Gamma$ is \textit{completely positive} (\textit{CP}) \cite{1955stinespring,paulsen_2003} $: \stackrel{\mathrm{def.}}{\Leftrightarrow}$ the product linear map $\Gamma \otimes \mathrm{id}_n \colon \mathbf{B}(\mathcal{K} \otimes \mathbb{C}^n) \to \mathbf{B}(\mathcal{H} \otimes \mathbb{C}^n)$ is positive for all $n \in \mathbb{N} ,$ where $\mathrm{id}_n \colon \mathbf{B}(\mathbb{C}^n) \to \mathbf{B}(\mathbb{C}^n)$ is the identity map on $\mathbf{B}(\mathbb{C}^n) .$
\item
$\Gamma$ is a \textit{channel} (in the Heisenberg picture) $:\stackrel{\mathrm{def.}}{\Leftrightarrow}$ $\Gamma$ is unital and CP.
\item
For positive $\Gamma ,$ $\Gamma$ is \textit{normal} $:\stackrel{\mathrm{def.}}{\Leftrightarrow}$ $\sup_i \Gamma (a_i) = \Gamma (\sup_i a_i)$ for every upper bounded increasing net $(a_i)$ in $\mathbf{B}(\mathcal{K}) .$
This condition is equivalent to the ultraweak continuity of $\Gamma $, which means that $b_j \xrightarrow{\mathrm{uw}} b$ implies $\Gamma (b_j) \xrightarrow{\mathrm{uw}} \Gamma(b)$ for any net $(b_j)$ and any element $b$ in $\mathbf{B}(\mathcal{K}) $, where $\xrightarrow{\mathrm{uw}}$ denotes the ultraweak convergence.
\end{itemize}
For a channel $\Gamma \colon \mathbf{B}(\mathcal{K}) \to \mathbf{B}(\mathcal{H}) ,$ the Hilbert spaces $\mathcal{H}$ and $\mathcal{K}$ are called respectively the input and output Hilbert spaces of $\Gamma$.
If $\Gamma \colon \mathbf{B}(\mathcal{K}) \to \mathbf{B}(\mathcal{H})$ is positive and normal, then there exists a unique positive linear map $\Gamma_\ast \colon \mathbf{T}(\mathcal{H}) \to \mathbf{T}(\mathcal{K}) ,$ called the predual map of $\Gamma$, such that
\begin{equation}
\mathrm{tr} (\rho \Gamma (a)) = \mathrm{tr} (\Gamma_\ast (\rho) a)
\quad (\rho \in \mathbf{T}(\mathcal{H}) ; a \in \mathbf{B}(\mathcal{K})) .
\label{eq:predual}
\end{equation}
Conversely, if $\Gamma_\ast \colon \mathbf{T}(\mathcal{H}) \to \mathbf{T}(\mathcal{K})$ is a positive linear map, then there exists a unique normal positive linear map $\Gamma$ satisfying \eqref{eq:predual}.
If $\Gamma$ is a normal channel, then its predual describes the state change (channel in the Schr\"odinger picture).
Let $\Gamma \colon \mathbf{B}(\mathcal{K}) \to \mathbf{B}(\mathcal{H})$ and $\Lambda \colon \mathbf{B}(\mathcal{J}) \to \mathbf{B}(\mathcal{H})$ be normal channels with the same input Hilbert space $\mathcal{H} .$
We define the post-processing relations for channels as follows.
\begin{itemize}
\item
$\Gamma \preceq_{\mathrm{CP}} \Lambda$ ($\Gamma$ is less or equally informative than $\Lambda$) $:\stackrel{\mathrm{def.}}{\Leftrightarrow}$ there exists a normal channel $\Psi \colon \mathbf{B}(\mathcal{K}) \to \mathbf{B}(\mathcal{J})$ such that $\Gamma = \Lambda \circ \Psi .$
It is known that $\Gamma \preceq_{\mathrm{CP}} \Lambda$ holds if and only if there exists a (not necessarily normal) channel $\Phi \colon \mathbf{B}(\mathcal{K}) \to \mathbf{B}(\mathcal{J})$ such that $\Gamma = \Lambda \circ \Phi $
(\cite{gutajencova2007}, Lemma~3.12; \cite{kuramochi2018incomp}, Theorem~2).
\item
$\Gamma \sim_{\mathrm{CP}} \Lambda$ ($\Gamma$ and $\Lambda$ are equally informative)
$:\stackrel{\mathrm{def.}}{\Leftrightarrow}$ $\Gamma \preceq_{\mathrm{CP}} \Lambda$ and $\Lambda \preceq_{\mathrm{CP}} \Gamma .$
\end{itemize}
The binary relations $\mathord\preceq_{\mathrm{CP}}$ and $\mathord\sim_{\mathrm{CP}}$ are respectively a preorder and an equivalence relation on the class $\mathbf{Ch} (\to \mathbf{B}(\mathcal{H}))$ of normal channels with a fixed input Hilbert space $\mathcal{H}$.
It can be shown that there exists a set $\mathfrak{C} (\mathcal{H})$ and a class-to-set surjection
\begin{equation}
\mathbf{Ch} (\to \mathbf{B}(\mathcal{H})) \ni \Gamma \mapsto [\Gamma] \in \mathfrak{C} (\mathcal{H})
\label{eq:Chmap}
\end{equation}
such that
\[ \Gamma \sim_{\mathrm{CP}} \Lambda \iff [\Gamma] = [ \Lambda ] \]
for every $\Gamma , \Lambda \in \mathbf{Ch} (\to \mathbf{B}(\mathcal{H})) . $
This follows from a more general result for normal channels with arbitrary input and output von Neumann algebras (\cite{kuramochi2020directed}, Section~3.3).
We fix such a set $\mathfrak{C} (\mathcal{H})$ and a map \eqref{eq:Chmap}.
We also define the post-processing relation on $\mathfrak{C} (\mathcal{H})$ by
\begin{equation*}
[\Gamma] \preceq_{\mathrm{CP}} [\Lambda] :\stackrel{\mathrm{def.}}{\Leftrightarrow} \Gamma \preceq_{\mathrm{CP}} \Lambda
\quad ([\Gamma ] , [\Lambda ] \in \mathfrak{C} (\mathcal{H}) ) .
\end{equation*}
Then $(\mathfrak{C} (\mathcal{H}) , \mathord\preceq_{\mathrm{CP}})$ is a poset.
Let $\mathcal{H}$ be a complex Hilbert space, let $\mathcal{E} = (\rho_k)_{k=0}^{n-1}$ be an ensemble on $\mathbf{D}(\mathcal{H})$, and let $\Gamma \colon \mathbf{B}(\mathcal{K}) \to \mathbf{B}(\mathcal{H})$ be a channel (or more generally, a unital positive map).
We define the state discrimination probability by
\begin{equation}
P_\mathrm{g} (\mathcal{E} ; \Gamma)
:= \sup_{(\mathsf{M}(k))_{k=0}^{n-1} \in \mathrm{EVM}_n (\mathbf{D}(\mathcal{K}))}
\sum_{k=0}^{n-1} \braket{\mathsf{M}(k) , \rho_k} .
\label{eq:Pgch}
\end{equation}
This quantity is the maximal state discrimination probability of the index $k$ of the ensemble $\mathcal{E}$ when the operation $\Gamma$ is performed on the system whose state is prepared according to $\mathcal{E}$, and then an optimal measurement on the output space $\mathcal{K} $ is performed.
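For a finite-dimensional channel, \eqref{eq:Pgch} is a semidefinite program over the output POVM. The following sketch evaluates it with \texttt{cvxpy} for an illustrative qubit ensemble and the depolarizing channel (given in the Schr\"odinger picture); the channel and the ensemble are chosen only for illustration.
\begin{verbatim}
import numpy as np
import cvxpy as cp

def depolarize(rho, p=0.5):  # predual of the depolarizing channel
    return (1 - p) * rho + p * np.trace(rho) * np.eye(2) / 2

ensemble = [0.5 * np.diag([1.0, 0.0]), 0.5 * np.diag([0.0, 1.0])]
out = [depolarize(r) for r in ensemble]  # Gamma_*(rho_k)

# Maximise sum_k tr(M(k) Gamma_*(rho_k)) over POVMs (M(k)) on the output.
M = [cp.Variable((2, 2), hermitian=True) for _ in out]
constraints = [m >> 0 for m in M] + [sum(M) == np.eye(2)]
objective = cp.Maximize(cp.real(sum(cp.trace(m @ r)
                                    for m, r in zip(M, out))))
print(cp.Problem(objective, constraints).solve())  # approx. 0.75 here
\end{verbatim}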
The post-processing relation for channels is characterized by the state discrimination probabilities \textit{with quantum side information}, as shown in the following BSS-type theorem.
\begin{prop}[BSS theorem for channels] \label{prop:qbss}
Let $\Gamma \colon \mathbf{B}(\mathcal{K}) \to \mathbf{B}(\mathcal{H})$ and $\Lambda \colon \mathbf{B}(\mathcal{J}) \to \mathbf{B}(\mathcal{H})$ be normal channels.
Then the following conditions are equivalent.
\begin{enumerate}[(i)]
\item \label{it:qbss1}
$\Gamma \preceq_{\mathrm{CP}} \Lambda .$
\item \label{it:qbss2}
For every $n \in \mathbb{N}$ and every ensemble $\mathcal{E}$ on $\mathbf{D} (\mathcal{H} \otimes \mathbb{C}^n)$
\begin{equation}
P_\mathrm{g} ( \mathcal{E} ; \Gamma \otimes \mathrm{id}_n) \leq P_\mathrm{g} (\mathcal{E} ;\Lambda \otimes \mathrm{id}_n)
\label{eq:Pgleq}
\end{equation}
holds.
\end{enumerate}
\end{prop}
Proposition~\ref{prop:qbss} for finite-dimensional channels is proved in \cite{chefles2009quantum} by using Shmaya\rq{}s theorem \cite{Shmaya_2005}.
In \ref{app:bss}, we give a proof of Proposition~\ref{prop:qbss} based on another (infinite-dimensional) BSS theorem for normal positive maps obtained in \cite{luczak2019}.
Proposition~\ref{prop:qbss} implies that the function
\begin{equation*}
\mathfrak{C} (\mathcal{H}) \ni [\Gamma] \mapsto P_\mathrm{g} (\mathcal{E} ; \Gamma \otimes \mathrm{id}_n)
=: P_\mathrm{g}^{(n)} (\mathcal{E} ; [\Gamma]) \in \mathbb{R}
\end{equation*}
is a well-defined order monotone for every $n\in \mathbb{N} $ and every ensemble $\mathcal{E}$ on $\mathbf{D}(\mathcal{H}\otimes \mathbb{C}^n)$ and that the family
\[
\{ P_\mathrm{g}^{(n)} (\mathcal{E} ; \cdot) \mid \text{$n\in \mathbb{N}$ and $\mathcal{E}$ is an ensemble on $\mathbf{D}(\mathcal{H}\otimes \mathbb{C}^n)$}\}
\]
characterizes the poset $(\mathfrak{C} (\mathcal{H}) , \mathord\preceq_{\mathrm{CP}})$.
\section{Main theorems and their proofs} \label{sec:main}
In this section we prove the following main theorems of this paper:
\begin{thm} \label{thm:main1}
Let $(V, V_+, \Omega)$ be a base-norm Banach space with $\dim V \geq 2$.
Then the following assertions hold.
\begin{enumerate}[1.]
\item \label{it:thm1.1}
Both $\dim_{\mathrm{ord}} (\mathfrak{M}_{\mathrm{fin}}(\Omega) , \mathord\preceq_{\mathrm{post}})$ and $\dim_{\mathrm{ord}, \, \realn} (\mathfrak{M}_{\mathrm{fin}}(\Omega) , \mathord\preceq_{\mathrm{post}})$ are infinite.
\item \label{it:thm1.2}
If $V$ is separable in the norm topology, i.e.\ $V$ has a countable norm dense subset, then
\begin{equation}
\dim_{\mathrm{ord}} (\mathfrak{M}_{\mathrm{fin}}(\Omega) , \mathord\preceq_{\mathrm{post}}) = \dim_{\mathrm{ord}, \, \realn} (\mathfrak{M}_{\mathrm{fin}}(\Omega) , \mathord\preceq_{\mathrm{post}}) = \aleph_0 ,
\label{eq:main1}
\end{equation}
where $\aleph_0 = |\mathbb{N} |$ denotes the cardinality of a countably infinite set.
\end{enumerate}
\end{thm}
\begin{thm} \label{thm:main2}
Let $\mathcal{H}$ be a separable complex Hilbert space with $\dim \mathcal{H} \geq 2 .$
Then
\begin{gather}
\dim_{\mathrm{ord}} ( \mathfrak{M}_{\mathrm{fin}} (\mathbf{D}(\mathcal{H})) ,\mathord\preceq_{\mathrm{post}} ) = \dim_{\mathrm{ord}, \, \realn} ( \mathfrak{M}_{\mathrm{fin}} (\mathbf{D}(\mathcal{H})) ,\mathord\preceq_{\mathrm{post}} ) = \aleph_0 ,
\label{eq:main2-1} \\
\dim_{\mathrm{ord}} ( \mathfrak{C} ( \mathcal{H}) ,\mathord\preceq_{\mathrm{CP}} ) = \dim_{\mathrm{ord}, \, \realn} ( \mathfrak{C} (\mathcal{H}) ,\mathord\preceq_{\mathrm{CP}} ) = \aleph_0 .
\label{eq:main2-2}
\end{gather}
\end{thm}
\begin{rem} \label{rem:1}
The assumption $\dim V \geq 2$ in Theorem~\ref{thm:main1} holds if and only if the state space $\Omega$ contains at least two distinct points because $V$ is the linear span of $\Omega$ and $\Omega$ is contained in the hyperplane $\set{x \in V | \braket{u_\Omega , x} =1}$, which does not contain the origin.
Similarly the separability of $V$ in Theorem~\ref{thm:main1} is equivalent to that of $\Omega$.
\end{rem}
\begin{rem} \label{rem:2}
Theorem~\ref{thm:main1}.\ref{it:thm1.1} is proved by explicitly constructing a sequence of finite subsets of $\mathfrak{M}_{\mathrm{fin}}(\Omega)$ with arbitrarily large dimensions (see Lemma~\ref{lemm:main1}).
Indeed, it is proved in \cite{harzheim1970} that for every poset with an infinite order dimension we can always find such a sequence of finite subsets.
We give another simple proof of this fact in \ref{app:compact} based on Tychonoff\rq{}s theorem.
\end{rem}
\begin{rem} \label{rem:3}
As shown in \cite{kuramochi2020compact}, we can define the set $\mathfrak{M} (\Omega)$ of post-processing equivalence classes of general (i.e.\ possibly continuous outcome) EVMs on $\Omega$.
Then we can easily show that Theorem~\ref{thm:main1}.\ref{it:thm1.1} is also valid for $\mathfrak{M} (\Omega)$ since $\mathfrak{M}_{\mathrm{fin}}(\Omega)$ is a subset of $\mathfrak{M} (\Omega)$, i.e.\ a finite-outcome EVM is a special case of a general EVM.
We can also show that the proof of Theorem~\ref{thm:main1}.\ref{it:thm1.2} (Lemma~\ref{lemm:countable}) can be straightforwardly generalized to $\mathfrak{M} (\Omega)$ since the BSS theorem is also valid for general EVMs (\cite{kuramochi2020compact}, Theorem~1).
\end{rem}
\begin{rem} \label{rem:4}
A parametrized family $(P_\theta)_{\theta \in \Theta}$ of classical probabilities is called a statistical experiment, or a statistical model, and is one of the basic concepts in mathematical statistics~\cite{lecam1986asymptotic,torgersen1991comparison}.
As shown in \cite{kuramochi2020compact} (Appendix~D), we can identify the class of statistical experiments with a fixed parameter set $\Theta$ with the class of measurements (EVMs) on the input classical GPT $(\ell^1(\Theta), \ell^1_+(\Theta),\mathcal{P}(\Theta))$.
Based on this correspondence Theorem~\ref{thm:main1} straightforwardly applies to the post-processing order of statistical experiments.
\end{rem}
In the rest of this section, we prove Theorems~\ref{thm:main1}.\ref{it:thm1.1}, \ref{thm:main1}.\ref{it:thm1.2}, and \ref{thm:main2} in Sections~\ref{subsec:m1}, \ref{subsec:m1.2}, and \ref{subsec:m2}, respectively.
\subsection{Proof of Theorem~\ref{thm:main1}.\ref{it:thm1.1}} \label{subsec:m1}
The proof is split into several lemmas.
We first establish some general properties of the order dimensions necessary for the proof.
\begin{defi} \label{def:embedding}
Let $(S, \mathord\preceq_1)$ and $(T, \mathord\preceq_2)$ be posets.
A map $f \colon S \to T$ is called an \textit{order embedding} from $(S, \mathord\preceq_1)$ into $(T, \mathord\preceq_2)$ if
$
x \preceq_1 y
$
if and only if $f (x) \preceq_2 f(y)$ for any $x,y \in S .$
An order embedding is necessarily an injection.
If such an order embedding exists, $(S, \mathord\preceq_1)$ is said to be \textit{embeddable} into $(T, \mathord\preceq_2) .$
If $(S, \mathord\preceq_1)$ is embeddable into $(T, \mathord\preceq_2) ,$ $(S, \mathord\preceq_1)$ is order isomorphic to a subset of $T$ equipped with the restriction order of $\preceq_2 .$
\end{defi}
The following lemma is implicit in the literature~\cite{Dushnik1941,Hiraguti1955}; here we give a proof for completeness.
\begin{lemm} \label{lemm:embedding}
Let $(S, \mathord\preceq_1)$ and $(T, \mathord\preceq_2)$ be posets.
Suppose that $(S, \mathord\preceq_1)$ is embeddable into $(T, \mathord\preceq_2) .$
Then $\dim_{\mathrm{ord}} (S, \mathord\preceq_1) \leq \dim_{\mathrm{ord}} (T, \mathord\preceq_2) $ and $\dim_{\mathrm{ord}, \, \realn} (S, \mathord\preceq_1) \leq \dim_{\mathrm{ord}, \, \realn} (T, \mathord\preceq_2) $ hold.
\end{lemm}
\begin{proof}
Let $g \colon S \to T$ be an order embedding and let $\mathcal{L}$ be a family of total orders on $T$ such that $\mathord\preceq_2 = \bigcap \mathcal{L}$ and $| \mathcal{L} | = \dim_{\mathrm{ord}} (T, \mathord\preceq_2) .$
For each $L \in \mathcal{L} ,$ we define a binary relation $g^{-1}(L)$ on $S$ by
\[
x g^{-1}(L) y : \stackrel{\mathrm{def.}}{\Leftrightarrow} g(x) L g(y) \quad (x, y \in S).
\]
Then, since $g$ is an injection, each $g^{-1}(L)$ $(L \in \mathcal{L})$ is a total order on $S .$
Moreover for every $x,y \in S$ we have
\begin{align*}
x \preceq_1 y
& \iff g(x) \preceq_2 g(y)
\\
& \iff g(x) L g(y) \quad (\forall L \in \mathcal{L})
\\
& \iff x g^{-1} (L) y \quad (\forall L \in \mathcal{L}) ,
\end{align*}
which implies that the family $\set{g^{-1} (L) | L \in \mathcal{L}}$ realizes $\preceq_1 .$
Then the first claim $\dim_{\mathrm{ord}} (S, \mathord\preceq_1) \leq \dim_{\mathrm{ord}} (T, \mathord\preceq_2) $ immediately follows from the definition of the order dimension.
Let $\mathcal{F}$ be a family of order monotones characterizing $(T , \mathord\preceq_2)$
such that $| \mathcal{F}| = \dim_{\mathrm{ord}, \, \realn} (T , \mathord\preceq_2)$.
Then for every $x,y \in S$ we have
\begin{align*}
x \preceq_1 y
& \iff g(x) \preceq_2 g(y)
\\
& \iff f \circ g(x) \leq f \circ g(y) \quad (\forall f \in \mathcal{F}).
\end{align*}
This implies that $\set{f\circ g}_{f \in \mathcal{F}}$ characterizes $(S , \mathord\preceq_1 )$.
From this the second claim $\dim_{\mathrm{ord}, \, \realn} (S, \mathord\preceq_1) \leq \dim_{\mathrm{ord}, \, \realn} (T, \mathord\preceq_2) $ immediately follows.
\end{proof}
Let $((S_i , \mathord\preceq_i))_{i\in I}$ be an indexed family of posets.
We define a poset, called the \textit{direct product}, by
\begin{gather*}
\bigotimes_{i\in I} (S_i , \mathord\preceq_i) := \left(\prod_{i\in I} S_i , \mathord\preceq \right) ,
\\
(x_i)_{i \in I} \preceq (y_i)_{i \in I}
: \stackrel{\mathrm{def.}}{\Leftrightarrow}
[x_i \preceq_i y_i \quad (\forall i \in I)] .
\end{gather*}
For a poset $(S, \mathord\preceq)$ we denote by $\mathrm{dpc} (S , \mathord\preceq)$ the minimum cardinality $|I|$ of a family $((C_i, \mathord\preceq_i))_{i\in I}$ of \textit{chains} such that $(S, \mathord\preceq)$ is embeddable into the direct product $\bigotimes_{i\in I} (C_i , \mathord\preceq_i) . $
Then it is known~\cite{milner1990note} that
\begin{equation}
\dim_{\mathrm{ord}} (S, \mathord\preceq) = \mathrm{dpc} (S, \mathord\preceq)
\label{eq:dpc}
\end{equation}
holds for every poset $(S, \mathord\preceq)$.
From this we can show
\begin{lemm} \label{lemm:leq}
Let $(S , \mathord\preceq)$ be a poset.
Then $\dim_{\mathrm{ord}} (S , \mathord\preceq) \leq \dim_{\mathrm{ord}, \, \realn} (S , \mathord\preceq) .$
\end{lemm}
\begin{proof}
Let $\mathcal{F}$ be a set of order monotones on $S$ such that $\mathcal{F}$ characterizes $\mathord\preceq$ and $|\mathcal{F} | = \dim_{\mathrm{ord}, \, \realn} (S , \mathord\preceq ) .$
Then the map
\begin{equation*}
S \ni x \mapsto (f(x))_{f \in \mathcal{F}} \in \mathbb{R}^{\mathcal{F}}
\end{equation*}
is an order embedding from $(S , \mathord\preceq)$ into the direct product $\bigotimes_{f \in \mathcal{F}}(\mathbb{R} , \mathord\leq ) ,$ where the order $\leq$ on the reals $\mathbb{R}$ is the usual order.
Since $(\mathbb{R} , \mathord\leq)$ is a chain, the claim follows from~\eqref{eq:dpc}.
\end{proof}
We now consider the specific base-norm Banach space $(\ell^1(\mathbb{N}_2) , \ell^1_+(\mathbb{N}_2) , \mathcal{P}(\mathbb{N}_2))$ and the poset $(\mathfrak{M}_{\mathrm{fin}} (\mathcal{P}(\mathbb{N}_2)), \mathord\pp) $.
Here the base $\mathcal{P}(\mathbb{N}_2)$ corresponds to the state space of a classical bit.
We can and do identify $(\ell^1(\mathbb{N}_2) , \ell^1_+(\mathbb{N}_2) , \mathcal{P}(\mathbb{N}_2))$ and $(\ell^\infty(\mathbb{N}_2) , \ell^\infty_+ (\mathbb{N}_2), u_{\mathcal{P}(\mathbb{N}_2)} )$ with $(\mathbb{R}^2 , \mathbb{R}_+^2 , \mathcal{S}_2)$ and $(\mathbb{R}^2 , \mathbb{R}_+^2 , (1,1))$, respectively, where
\begin{gather*}
\mathbb{R}_+ := [0,\infty) ,\\
\mathcal{S}_2 := \set{(p_0 , p_1) \in \mathbb{R}^2 | p_0 , p_1 \geq 0 ,\, p_0 + p_1 =1 },
\end{gather*}
and the duality of $\mathbb{R}^2$ and $\mathbb{R}^{2 \ast} = \mathbb{R}^2$ is given by
\begin{equation*}
\braket{(a_0,a_1) , (b_0,b_1)} := a_0 b_0 + a_1 b_1
\quad ((a_0,a_1) , (b_0 ,b_1) \in \mathbb{R}^2) .
\end{equation*}
\begin{lemm} \label{lemm:cbit}
$(\mathfrak{M}_{\mathrm{fin}} (\mathcal{P}(\mathbb{N}_2)), \mathord\pp) $ is embeddable into $(\mathfrak{M}_{\mathrm{fin}}(\Omega), \mathord\pp) .$
\end{lemm}
\begin{proof}
From the assumption $\dim V \geq 2$, we have $\dim V^\ast \geq 2$.
Thus there exists an element $a_0 \in V^\ast$ such that $(a_0, u_\Omega)$ is linearly independent.
We put
\begin{gather*}
a := \left\| a_0 + \| a_0 \| u_\Omega \right\|^{-1} (a_0 + \| a_0 \| u_\Omega),
\\
a^\prime := u_\Omega - a.
\end{gather*}
Then $a$ and $a^\prime$ are effects and $(a, a^\prime)$ is linearly independent.
We define a linear map $\Psi \colon \mathbb{R}^2 \to V^\ast$ by
\begin{equation*}
\Psi ((\alpha_0 , \alpha_1)) := \alpha_0 a + \alpha_1 a^\prime
\quad ((\alpha_0,\alpha_1) \in \mathbb{R}^2) .
\end{equation*}
Then, by the linear independence of $(a, a^\prime)$, $\Psi$ is injective.
Moreover, $\Psi$ is unital and positive, i.e.\ $\Psi ((1,1)) = u_\Omega$ and $\Psi (\mathbb{R}_+^2) \subseteq V_+^\ast$ hold.
Now for each EVM $\mathsf{M} = (\oM(j))_{j=0}^{m-1} \in( \mathbb{R}^2)^m (= \ell^\infty(\mathcal{P}(\mathbb{N}_2)))$ on $\mathcal{P}(\mathbb{N}_2)$, we define
\begin{equation*}
\Psi (\mathsf{M}) := (\Psi(\mathsf{M}(j)))_{j=0}^{m-1} \in V^{\ast m}.
\end{equation*}
Then since $\Psi$ is unital and positive, $\Psi (\mathsf{M})$ is an EVM on $\Omega$.
Moreover, for all EVMs $\mathsf{M} = (\oM(k))_{k=0}^{m-1}$ and $\mathsf{N} = (\oN(j))_{j=0}^{n-1}$ on $\mathcal{P}(\mathbb{N}_2)$ the equivalence
\begin{equation}
\mathsf{M} \preceq_{\mathrm{post}} \mathsf{N} \iff \Psi (\mathsf{M}) \preceq_{\mathrm{post}} \Psi (\mathsf{N})
\label{eq:cbitiff}
\end{equation}
holds.
This can be shown as follows:
\begin{align*}
&\mathsf{M} \preceq_{\mathrm{post}} \mathsf{N} \\
&\iff \exists \text{$(p(k|j))_{k \in \mathbb{N}_m , \, j \in \mathbb{N}_n}$: Markov matrix s.t.\ }
\mathsf{M}(k) = \sum_{j=0}^{n-1}p(k|j) \mathsf{N}(j)
\\
&\iff \exists \text{$(p(k|j))_{k \in \mathbb{N}_m , \, j \in \mathbb{N}_n}$: Markov matrix s.t.\ }
\Psi(\mathsf{M}(k)) = \sum_{j=0}^{n-1}p(k|j) \Psi(\mathsf{N}(j))
\\
&\iff \Psi (\mathsf{M}) \preceq_{\mathrm{post}} \Psi (\mathsf{N}) ,
\end{align*}
where the second equivalence follows from the injectivity of $\Psi$.
From \eqref{eq:cbitiff}, the map
\begin{equation*}
\mathfrak{M}_{\mathrm{fin}} (\mathcal{P}(\mathbb{N}_2) ) \ni [\mathsf{M}] \mapsto [\Psi (\mathsf{M})] \in \mathfrak{M}_{\mathrm{fin}}(\Omega)
\end{equation*}
is a well-defined order embedding, which completes the proof.
\end{proof}
From Lemmas~\ref{lemm:embedding}, \ref{lemm:leq}, and \ref{lemm:cbit}, the proof of the infinite dimensionality of $(\mathfrak{M}_{\mathrm{fin}}(\Omega) , \mathord\pp)$ reduces to the case of the classical bit space $\mathcal{P}(\mathbb{N}_2)$.
This is done by proving that the following \textit{standard example of an $n$-dimensional poset} (\cite{Dushnik1941,Hiraguti1955}; \cite{BA18230795}, Chapter~1, \S~5) is embeddable into $(\mathfrak{M}_{\mathrm{fin}} (\mathcal{P}(\mathbb{N}_2)) , \mathord\pp)$.
\begin{defi}[Standard example of an $n$-dimensional poset] \label{def:standard}
For each natural number $n \geq 2$, we define a poset $(S_n , \mathord\preceq_n)$, called the standard example of an $n$-dimensional poset, as follows.
$S_n $ is a $2n$-element set given by $S_n := \{a_j\}_{j=0}^{n-1} \cup \{b_j \}_{j=0}^{n-1} $ and the order $\preceq_n$ is given by
\begin{equation*}
\mathord\preceq_n := \{(a_j , a_j) \}_{j=0}^{n-1} \cup \{(b_j , b_j) \}_{j=0}^{n-1} \cup
\bigcup_{j=0}^{n-1} \bigcup_{k\in \mathbb{N}_n \setminus \{ j\}} \set{(a_j ,b_k)}
\end{equation*}
(see Fig.~\ref{fig:hasse} for the Hasse diagram).
It is known~\cite{Dushnik1941,BA18230795} that $\dim_{\mathrm{ord}} (S_n , \mathord\preceq_n) = n$ for each $n \geq 2$.
\begin{figure}
\centering
\begin{tikzpicture}
\node[circle,fill=white,draw=black,inner sep=0pt,minimum size=5pt,label=below:{$a_0$}] (a0) at (0,0) {};
\node[circle,fill=white,draw=black,inner sep=0pt,minimum size=5pt,label=below:{$a_1$}] (a1) at (1,0) {};
\node[circle,fill=white,draw=black,inner sep=0pt,minimum size=5pt,label=below:{$a_2$}] (a2) at (2,0) {};
\node[circle,fill=white,draw=black,inner sep=0pt,minimum size=5pt,label=below:{$a_{n-1}$}] (an) at (4,0) {};
\node[circle,fill=white,draw=black,inner sep=0pt,minimum size=5pt,label=above:{$b_0$}] (b0) at (0,2) {};
\node[circle,fill=white,draw=black,inner sep=0pt,minimum size=5pt,label=above:{$b_1$}] (b1) at (1,2) {};
\node[circle,fill=white,draw=black,inner sep=0pt,minimum size=5pt,label=above:{$b_2$}] (b2) at (2,2) {};
\node[circle,fill=white,draw=black,inner sep=0pt,minimum size=5pt,label=above:{$b_{n-1}$}] (bn) at (4,2) {};
\node (adots) at (3,0) {$\cdots$};
\node (bdots) at (3,2) {$\cdots$};
\draw (a0) -- (b1) -- (a2) -- (b0) -- (a1) -- (b2) -- (a0) -- (bn) -- (a1);
\draw (a2) --(bn);
\draw (an) -- (b0);
\draw (an) -- (b1);
\draw (an) -- (b2);
\end{tikzpicture}
\caption{The Hasse diagram of the standard example $(S_n , \mathord\preceq_n)$ of an $n$-dimensional poset.}
\label{fig:hasse}
\end{figure}
\end{defi}
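For example, for $n=2$ the order $\preceq_2$ on $S_2 = \{a_0 , a_1 , b_0 , b_1\}$ consists, besides the reflexive pairs, of the two relations $a_0 \preceq_2 b_1$ and $a_1 \preceq_2 b_0$, and it is realized as the intersection of the two linear orders
\begin{equation*}
a_1 < b_0 < a_0 < b_1 , \qquad a_0 < b_1 < a_1 < b_0 ,
\end{equation*}
so that $\dim_{\mathrm{ord}} (S_2 , \mathord\preceq_2) \leq 2$; since $\preceq_2$ is not a total order, the dimension is exactly $2$.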
In order to show that $(S_n , \mathord\preceq_n)$ is embeddable into $(\mathfrak{M}_{\mathrm{fin}} (\mathcal{P}(\mathbb{N}_2)) , \mathord\pp)$, we need the following notion of the direct mixture of EVMs and some lemmas.
\begin{defi} \label{def:mix}
Let $\mathsf{M}_j = (\mathsf{M}_j(k))_{k=0}^{m_j -1}$ $(j = 0 ,1 , \dots, N-1)$ be EVMs on $\Omega$ and let $(p_j)_{j=0}^{N-1}$ be a probability distribution.
Then we define a $\sum_{j=0}^{N-1} m_j =: m$-outcome EVM $\mathsf{M} = (\mathsf{M} (k))_{k=0}^{m-1}$ by
\begin{equation*}
\mathsf{M} (k) := p_j \mathsf{M}_j \left(k - \sum_{l=0}^{j-1} m_l \right)
\quad \left(\text{when $\sum_{l=0}^{j-1} m_l \leq k < \sum_{l=0}^{j} m_l$}\right),
\end{equation*}
i.e.\
\begin{align*}
\mathsf{M} := &(p_0 \mathsf{M}_0 (0), \dots, p_0 \mathsf{M}_0 (m_0-1), p_1 \mathsf{M}_1 (0) , \dots , p_1 \mathsf{M}_1(m_1 -1) , \dots , \\ & p_{N-1} \mathsf{M}_{N-1} (0) , \dots , p_{N-1} \mathsf{M}_{N-1}(m_{N-1} -1) ).
\end{align*}
This $\mathsf{M}$ is called the \textit{direct mixture} and written as
\begin{equation*}
\bigoplus_{j=0}^{N-1} p_j \mathsf{M}_j
\end{equation*}
or
\begin{equation*}
p_0 \mathsf{M}_0 \oplus p_1 \mathsf{M}_1 \oplus \dots \oplus p_{N-1} \mathsf{M}_{N-1} .
\end{equation*}
In the operational language, the measurement corresponding to the direct mixture $\mathsf{M}$ is realized as follows: we first generate a random number $j$ according to the probability distribution $(p_j)_{j=0}^{N-1}$, then perform $\mathsf{M}_j$ which gives a measurement outcome $k \in \mathbb{N}_{m_j},$ and finally record both $j$ and $k$.
The reader should not confuse the direct mixture with the ordinary mixture $( \sum_{j=0}^{N-1} p_j \mathsf{M}_j (k) )_{k=0}^{m^\prime -1}$ (here we assume $m_j = m^\prime$ for all $j$), which is realized when we forget $j$ and only record $k$.
\end{defi}
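For example, for two $2$-outcome EVMs $\mathsf{M}_0 = (\mathsf{M}_0(0) , \mathsf{M}_0(1))$ and $\mathsf{M}_1 = (\mathsf{M}_1(0) , \mathsf{M}_1(1))$ and the uniform distribution $(1/2 , 1/2)$, the direct mixture is the $4$-outcome EVM
\begin{equation*}
\frac{1}{2} \mathsf{M}_0 \oplus \frac{1}{2} \mathsf{M}_1 = \left( \tfrac{1}{2}\mathsf{M}_0(0) , \tfrac{1}{2}\mathsf{M}_0(1) , \tfrac{1}{2}\mathsf{M}_1(0) , \tfrac{1}{2}\mathsf{M}_1(1) \right) .
\end{equation*}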
The state discrimination probability $P_\mathrm{g} (\mathcal{E} ; \cdot)$ is affine with respect to the direct mixture as shown in the following lemma.
\begin{lemm}[cf.\ \cite{kuramochi2020compact}, Proposition~14.1] \label{lemm:affinity}
Let $\mathsf{M}_j = (\mathsf{M}_j(k))_{k=0}^{m_j -1}$ $(j = 0 ,1 , \dots, N-1)$ be EVMs on $\Omega$,
let $(p_j)_{j=0}^{N-1}$ be a probability distribution, and let $\mathcal{E} = (\rho_l)_{l=0}^{n-1}$ be an ensemble on $\Omega$.
Then
\begin{equation}
P_\mathrm{g} \left( \mathcal{E} ; \bigoplus_{j=0}^{N-1} p_j \mathsf{M}_j \right)
= \sum_{j=0}^{N-1} p_j P_\mathrm{g} (\mathcal{E} ; \mathsf{M}_j)
\label{eq:affinity}
\end{equation}
holds.
\end{lemm}
\begin{proof}
By using \eqref{eq:mle}, we have
\begin{align*}
P_\mathrm{g} \left( \mathcal{E} ; \bigoplus_{j=0}^{N-1} p_j \mathsf{M}_j \right)
&= \sum_{j=0}^{N-1} \sum_{k=0}^{m_j-1} \max_{l \in \mathbb{N}_n} \braket{p_j \mathsf{M}_j(k) , \rho_l}
\\
&= \sum_{j=0}^{N-1} p_j \sum_{k=0}^{m_j-1} \max_{l \in \mathbb{N}_n} \braket{ \mathsf{M}_j(k) , \rho_l}
\\
&= \sum_{j=0}^{N-1} p_j P_\mathrm{g} (\mathcal{E} ; \mathsf{M}_j),
\end{align*}
which proves \eqref{eq:affinity}.
\end{proof}
For each $ (s_0 , s_1) \in [0,1]^2(= \mathcal{E} (\ell^\infty(\mathbb{N}_2)) ) $ we write
\begin{equation*}
\mathsf{A}_{s_0 , s_1} := ((s_0,s_1) , u_{\mathcal{P}(\mathbb{N}_2)} - (s_0,s_1)) = ((s_0,s_1) , (1-s_0,1-s_1)) \in \mathrm{EVM}_2 (\mathcal{P}(\mathbb{N}_2)),
\end{equation*}
which is the general form of the $2$-outcome EVM on $\mathcal{P}(\mathbb{N}_2)$.
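For example, $\mathsf{A}_{1,0} = ((1,0) , (0,1))$ is the sharp measurement that perfectly distinguishes the two pure states $(1,0) , (0,1) \in \mathcal{S}_2$, while for every $s \in [0,1]$ the EVM $\mathsf{A}_{s,s} = (s(1,1) , (1-s)(1,1))$ has components proportional to $u_{\mathcal{P}(\mathbb{N}_2)}$ and is therefore post-processing equivalent to the trivial $1$-outcome EVM $(u_{\mathcal{P}(\mathbb{N}_2)})$.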
\begin{lemm} \label{lemm:para}
Let $s_0 , s_1 , t_0 , t_1 \in [0,1]$.
Then $\mathsf{A}_{s_0 , s_1} \preceq_{\mathrm{post}} \mathsf{A}_{t_0 , t_1}$ if and only if there exist scalars $p, q \in [0,1] $ such that $(s_0 , s_1) = p (t_0 , t_1) + q (1-t_0 , 1-t_1)$, i.e.\ $(s_0 ,s_1)$ is inside the parallelogram $(0,0)$-$(t_0,t_1)$-$(1,1)$-$(1-t_0 ,1-t_1)$.
\end{lemm}
\begin{proof}
\begin{align*}
&\mathsf{A}_{s_0 , s_1} \preceq_{\mathrm{post}} \mathsf{A}_{t_0 , t_1} \\
&\iff \exists p,p^\prime , q, q^\prime \in [0, \infty) \quad \mathrm{s.t.} \quad
\begin{cases}
p+p^\prime = q+ q^\prime =1, \\
(s_0, s_1) = p(t_0 , t_1) + q (1-t_0 ,1-t_1), \\
(1-s_0,1- s_1) = p^\prime (t_0 , t_1) + q^\prime (1-t_0 ,1-t_1)
\end{cases}
\\
&\iff \exists p,q \in [0, 1] \quad \mathrm{s.t.} \quad
\begin{cases}
(s_0, s_1) = p(t_0 , t_1) + q (1-t_0 ,1-t_1), \\
(1-s_0,1- s_1) = (1-p) (t_0 , t_1) + (1-q) (1-t_0 ,1-t_1)
\end{cases}
\\
&\iff \exists p,q \in [0, 1] \quad \mathrm{s.t.} \quad
(s_0, s_1) = p(t_0 , t_1) + q (1-t_0 ,1-t_1),
\end{align*}
where the final equivalence holds since the second condition
\[
(1-s_0,1- s_1) = (1-p) (t_0 , t_1) + (1-q) (1-t_0 ,1-t_1)
\]
follows from the first condition
\[
(s_0, s_1) = p(t_0 , t_1) + q (1-t_0 ,1-t_1). \qedhere
\]
\end{proof}
\begin{lemm} \label{lemm:parabola}
Let $0 < s < t < 1$. Then $\mathsf{A}_{s, s^2}$ and $\mathsf{A}_{t,t^2}$ are incomparable, i.e.\ neither $\mathsf{A}_{s, s^2} \preceq_{\mathrm{post}} \mathsf{A}_{t,t^2}$ nor $\mathsf{A}_{t, t^2} \preceq_{\mathrm{post}} \mathsf{A}_{s,s^2}$ holds.
\end{lemm}
\begin{proof}
By the strict convexity of the function $f(x) = x^2 ,$ the point $(s,s^2) \in \mathbb{R}^2$ is below the line segment $(0,0)$-$(t,t^2)$.
Therefore $(s,s^2) $ is outside the parallelogram $(0,0)$-$(t, t^2)$-$(1,1)$-$(1-t ,1-t^2)$ (Fig.~\ref{fig:para}~(a)).
Hence Lemma~\ref{lemm:para} implies that $\mathsf{A}_{s, s^2} \preceq_{\mathrm{post}} \mathsf{A}_{t,t^2}$ does not hold.
Similarly $(t, t^2)$ is below the line segment $(s,s^2)$-$(1,1)$ (Fig.~\ref{fig:para}~(b))
and hence Lemma~\ref{lemm:para} implies that $\mathsf{A}_{t, t^2} \preceq_{\mathrm{post}} \mathsf{A}_{s,s^2}$ does not hold.
\begin{figure}
\centering
\begin{tikzpicture}[domain=0:3.5,samples=200,>=stealth]
\node at (2 , -1) {(a)};
\draw[->] (0,0) -- (3.5,0) node[right] {$x$};
\draw[->] (0,0) -- (0,3.5) node[above] {$y$};
\draw plot (\x, {\x * \x / 3}) node[below right] {{\footnotesize$y=x^2$}};
\draw plot (\x , {3 - (3-\x) * (3-\x) /3} ) node[below right] {{\footnotesize$y= 1 - (1-x)^2$}};
\node[circle,fill=black,draw=black,inner sep=0pt,minimum size=3pt,label=above left:{\footnotesize$(1,1)$}] (u) at (3,3) {};
\node[circle,fill=black,draw=black,inner sep=0pt,minimum size=3pt,label=below left:{\footnotesize$(0,0)$}] (o) at (0,0) {};
\node[circle,fill=black,draw=black,inner sep=0pt,minimum size=3pt,label=below right:{\footnotesize$(s,s^2)$}] (s) at (1.3,1.69/3) {};
\node[circle,fill=black,draw=black,inner sep=0pt,minimum size=3pt,label=below right:{\footnotesize$(t,t^2)$}] (t) at (2,4/3) {};
\node[circle,fill=black,draw=black,inner sep=0pt,minimum size=3pt] (tp) at (3-2,3-4/3) {};
\node at (1.1, 3) {\footnotesize$(1-t,1-t^2)$};
\draw[->] (1,2.8) to [in=135,out=225] (0.9,3-4/3);
\draw (o)--(t)--(u)--(tp)--(o);
\end{tikzpicture}
\begin{tikzpicture}[domain=0:3.5,samples=200,>=stealth]
\node at (2 , -1) {(b)};
\draw[->] (0,0) -- (3.5,0) node[right] {$x$};
\draw[->] (0,0) -- (0,3.5) node[above] {$y$};
\draw plot (\x, {\x * \x / 3}) node[below right] {{\footnotesize$y=x^2$}};
\draw plot (\x , {3 - (3-\x) * (3-\x) /3} ) node[below right] {{\footnotesize$y= 1 - (1-x)^2$}};
\node[circle,fill=black,draw=black,inner sep=0pt,minimum size=3pt,label=above left:{\footnotesize$(1,1)$}] (u) at (3,3) {};
\node[circle,fill=black,draw=black,inner sep=0pt,minimum size=3pt,label=below left:{\footnotesize$(0,0)$}] (o) at (0,0) {};
\node[circle,fill=black,draw=black,inner sep=0pt,minimum size=3pt,label=below right:{\footnotesize$(s,s^2)$}] (s) at (1.3,1.69/3) {};
\node[circle,fill=black,draw=black,inner sep=0pt,minimum size=3pt,label=below right:{\footnotesize$(t,t^2)$}] (t) at (2,4/3) {};
\node[circle,fill=black,draw=black,inner sep=0pt,minimum size=3pt] (sp) at (3-1.3, 3-1.69/3) {};
\node at (1.1, 3) {\footnotesize$(1-s,1-s^2)$};
\draw (o)--(s)--(u)--(sp)--(o);
\end{tikzpicture}
\caption{The points $(s,s^2)$ and $(t,t^2)$ are respectively outside the parallelograms $(0,0)$-$(t, t^2)$-$(1,1)$-$(1-t ,1-t^2)$ (a) and $(0,0)$-$(s, s^2)$-$(1,1)$-$(1-s ,1-s^2)$ (b).}
\label{fig:para}
\end{figure}
\end{proof}
We are now in a position to prove the following crucial lemma.
\begin{lemm} \label{lemm:main1}
$(S_n , \mathord\preceq_n)$ is embeddable into $(\mathfrak{M}_{\mathrm{fin}} (\mathcal{P}(\mathbb{N}_2)), \mathord\pp)$ for each natural number $n \geq 3$.
\end{lemm}
\begin{proof}
We write the trivial measurement on $\mathcal{P} (\mathbb{N}_2)$ as $\mathsf{U} := (u_{\mathcal{P} (\mathbb{N}_2)}) = ((1,1)) \in \mathrm{EVM}_{\mathrm{fin}} (\mathcal{P}(\mathbb{N}_2))$.
We define a map $f\colon S_n \to \mathfrak{M}_{\mathrm{fin}} (\mathcal{P}(\mathbb{N}_2))$ by
\begin{gather*}
s_j := 3^{j-n} , \\
f(a_j) := [\mathsf{A}^{(j)}],
\quad \mathsf{A}^{(j)} := \frac{1}{n} \mathsf{A}_{s_j , s_j^2} \oplus \frac{n-1}{n} \mathsf{U} , \\
f(b_j) :=[\mathsf{B}^{(j)}] , \quad
\mathsf{B}^{(j)} := \frac{1}{n} \mathsf{U} \oplus \bigoplus_{k \in \mathbb{N}_n \setminus \{ j \}} \frac{1}{n} \mathsf{A}_{s_k, s_k^2}
\end{gather*}
$(0 \leq j \leq n-1)$.
We establish the lemma by demonstrating that $f$ is an order embedding.
For this we have only to prove the following assertions:
for every $j, k \in \mathbb{N}_n$ with $j \neq k$
\begin{enumerate}
\item \label{it:lm1}
$\mathsf{A}^{(j)}$ and $\mathsf{A}^{(k)}$ are incomparable;
\item \label{it:lm2}
$\mathsf{B}^{(j)}$ and $\mathsf{B}^{(k)}$ are incomparable;
\item \label{it:lm3}
$\mathsf{A}^{(j)} \preceq_{\mathrm{post}} \mathsf{B}^{(k)}$;
\item \label{it:lm4}
$\mathsf{B}^{(k)} \preceq_{\mathrm{post}} \mathsf{A}^{(j)}$ does not hold;
\item \label{it:lm5}
$\mathsf{A}^{(j)}$ and $\mathsf{B}^{(j)}$ are incomparable.
\end{enumerate}
\noindent
\textit{Proof of \eqref{it:lm1}.}
From Proposition~\ref{prop:bss} and Lemma~\ref{lemm:affinity} we have the following equivalences:
\begin{align*}
&\mathsf{A}^{(j)} \preceq_{\mathrm{post}} \mathsf{A}^{(k)} \\
&\iff P_\mathrm{g} (\mathcal{E} ; \mathsf{A}^{(j)}) \leq P_\mathrm{g} (\mathcal{E} ; \mathsf{A}^{(k)}) \quad (\text{$\forall \mathcal{E}$: ensemble})
\\
&\iff
\frac{1}{n}P_\mathrm{g} (\mathcal{E} ; \mathsf{A}_{s_j , s_j^2}) + \frac{n-1}{n} P_\mathrm{g} (\mathcal{E}; \mathsf{U})
\leq
\frac{1}{n}P_\mathrm{g} (\mathcal{E} ; \mathsf{A}_{s_k , s_k^2}) + \frac{n-1}{n} P_\mathrm{g} (\mathcal{E}; \mathsf{U})
\quad (\text{$\forall \mathcal{E}$: ensemble})
\\
&\iff
P_\mathrm{g} (\mathcal{E} ; \mathsf{A}_{s_j , s_j^2}) \leq P_\mathrm{g} (\mathcal{E} ; \mathsf{A}_{s_k , s_k^2}) \quad (\text{$\forall \mathcal{E}$: ensemble})
\\
&\iff \mathsf{A}_{s_j , s_j^2} \preceq_{\mathrm{post}} \mathsf{A}_{s_k , s_k^2},
\end{align*}
but the last condition does not hold by Lemma~\ref{lemm:parabola}.
Hence $\mathsf{A}^{(j)} \preceq_{\mathrm{post}} \mathsf{A}^{(k)}$ does not hold.
Similarly $\mathsf{A}^{(k)} \preceq_{\mathrm{post}} \mathsf{A}^{(j)}$ does not hold.
\noindent
\textit{Proof of \eqref{it:lm2}.}
By using Proposition~\ref{prop:bss} and Lemma~\ref{lemm:affinity} we have
\begin{align*}
&\mathsf{B}^{(j)} \preceq_{\mathrm{post}} \mathsf{B}^{(k)} \\
&\iff P_\mathrm{g} (\mathcal{E} ; \mathsf{B}^{(j)}) \leq P_\mathrm{g} (\mathcal{E} ; \mathsf{B}^{(k)}) \quad (\text{$\forall \mathcal{E}$: ensemble})
\\
&\iff
\frac{1}{n} P_\mathrm{g} (\mathcal{E}; \mathsf{U})+ \sum_{l \in \mathbb{N}_n \setminus\{ j\}}\frac{1}{n}P_\mathrm{g} (\mathcal{E} ; \mathsf{A}_{s_l , s_l^2})
\leq
\frac{1}{n} P_\mathrm{g} (\mathcal{E}; \mathsf{U})+ \sum_{l \in \mathbb{N}_n \setminus\{ k \}}\frac{1}{n}P_\mathrm{g} (\mathcal{E} ; \mathsf{A}_{s_l , s_l^2})
\quad (\text{$\forall \mathcal{E}$: ensemble})
\\
&\iff
P_\mathrm{g} (\mathcal{E} ; \mathsf{A}_{s_k , s_k^2}) \leq P_\mathrm{g} (\mathcal{E} ; \mathsf{A}_{s_j , s_j^2}) \quad (\text{$\forall \mathcal{E}$: ensemble})
\\
&\iff \mathsf{A}_{s_k , s_k^2} \preceq_{\mathrm{post}} \mathsf{A}_{s_j , s_j^2},
\end{align*}
but the last condition does not hold by Lemma~\ref{lemm:parabola}.
Hence $\mathsf{B}^{(j)} \preceq_{\mathrm{post}} \mathsf{B}^{(k)}$ does not hold.
Similarly $\mathsf{B}^{(k)} \preceq_{\mathrm{post}} \mathsf{B}^{(j)}$ does not hold.
\noindent
\textit{Proof of \eqref{it:lm3}.}
By using Proposition~\ref{prop:bss} and Lemma~\ref{lemm:affinity} we have
\begin{align*}
&\mathsf{A}^{(j)} \preceq_{\mathrm{post}} \mathsf{B}^{(k)} \\
&\iff P_\mathrm{g} (\mathcal{E} ; \mathsf{A}^{(j)}) \leq P_\mathrm{g} (\mathcal{E} ; \mathsf{B}^{(k)}) \quad (\text{$\forall \mathcal{E}$: ensemble})
\\
&\iff
\frac{1}{n}P_\mathrm{g} (\mathcal{E} ; \mathsf{A}_{s_j , s_j^2}) + \frac{n-1}{n} P_\mathrm{g} (\mathcal{E}; \mathsf{U})
\leq \frac{1}{n} P_\mathrm{g} (\mathcal{E}; \mathsf{U})+ \sum_{l \in \mathbb{N}_n \setminus\{ k \}}\frac{1}{n}P_\mathrm{g} (\mathcal{E} ; \mathsf{A}_{s_l , s_l^2})
\quad (\text{$\forall \mathcal{E}$: ensemble})
\\
&\iff
\sum_{l \in \mathbb{N}_n \setminus \{ j,k \}} (P_\mathrm{g} (\mathcal{E} ; \mathsf{A}_{s_l , s_l^2}) - P_\mathrm{g} (\mathcal{E}; \mathsf{U})) \geq 0
\quad (\text{$\forall \mathcal{E}$: ensemble}).
\end{align*}
The last condition holds because $\mathsf{U} \preceq_{\mathrm{post}} \mathsf{M}$ for every EVM $\mathsf{M}$.
Thus $\mathsf{A}^{(j)} \preceq_{\mathrm{post}} \mathsf{B}^{(k)}$ holds.
\noindent
\textit{Proof of \eqref{it:lm4}.}
By using Proposition~\ref{prop:bss} and Lemma~\ref{lemm:affinity} we have
\begin{align*}
&\mathsf{B}^{(k)} \preceq_{\mathrm{post}} \mathsf{A}^{(j)} \\
&\iff P_\mathrm{g} (\mathcal{E} ; \mathsf{B}^{(k)} ) \leq P_\mathrm{g} (\mathcal{E} ; \mathsf{A}^{(j)}) \quad (\text{$\forall \mathcal{E}$: ensemble})
\\
&\iff
\frac{1}{n} P_\mathrm{g} (\mathcal{E}; \mathsf{U})+ \sum_{l \in \mathbb{N}_n \setminus\{ k \}}\frac{1}{n}P_\mathrm{g} (\mathcal{E} ; \mathsf{A}_{s_l , s_l^2}) \leq \frac{1}{n}P_\mathrm{g} (\mathcal{E} ; \mathsf{A}_{s_j , s_j^2}) + \frac{n-1}{n} P_\mathrm{g} (\mathcal{E}; \mathsf{U})
\quad (\text{$\forall \mathcal{E}$: ensemble})
\\
&\iff
\sum_{l \in \mathbb{N}_n \setminus \{ j,k \}} (P_\mathrm{g} (\mathcal{E} ; \mathsf{A}_{s_l , s_l^2}) - P_\mathrm{g} (\mathcal{E}; \mathsf{U})) \leq 0
\quad (\text{$\forall \mathcal{E}$: ensemble}).
\end{align*}
Since $P_\mathrm{g} (\mathcal{E} ; \mathsf{A}_{s_l , s_l^2}) - P_\mathrm{g} (\mathcal{E}; \mathsf{U}) \geq 0$, the last condition is equivalent to
\begin{equation*}
P_\mathrm{g} (\mathcal{E} ; \mathsf{A}_{s_l , s_l^2}) = P_\mathrm{g} (\mathcal{E} ; \mathsf{U}) \quad (\text{$\forall l \in \mathbb{N}_n \setminus \{ j,k \}$; $\forall \mathcal{E}$: ensemble}),
\end{equation*}
and therefore equivalent to $\mathsf{A}_{s_l , s_l^2} \sim_{\mathrm{post}} \mathsf{U} $ $(\forall l \in \mathbb{N}_n \setminus \{ j, k \})$.
The EVM $\mathsf{A}_{1,1}$ is trivial and hence post-processing equivalent to $\mathsf{U}$.
Since $(s_l , s_l^2)$ is outside the line segment $(0,0)$-$(1,1)$, Lemma~\ref{lemm:para} implies that $\mathsf{A}_{s_l , s_l^2} \preceq_{\mathrm{post}} \mathsf{A}_{1,1}$, or equivalently $\mathsf{A}_{s_l , s_l^2} \preceq_{\mathrm{post}} \mathsf{U}$, does not hold.
Therefore $\mathsf{B}^{(k)} \preceq_{\mathrm{post}} \mathsf{A}^{(j)}$ does not hold.
\noindent
\textit{Proof of \eqref{it:lm5}.}
We first assume $\mathsf{A}^{(j)} \preceq_{\mathrm{post}} \mathsf{B}^{(j)}$ and derive a contradiction.
From the definition of the post-processing relation, the component $ (s_j/n , s_j^2/n)$ of the EVM $\mathsf{A}^{(j)}$ can be written as the following conic combination of the components of $\mathsf{B}^{(j)}$:
\begin{equation}
\frac{1}{n}(s_j , s_j^2)
= \frac{1}{n}\sum_{l \in \mathbb{N}_n \setminus \{ j \} } \left[ q_l (s_l , s_l^2 ) + r_l (1-s_l , 1-s_l^2) \right]
+\frac{r}{n} (1,1),
\notag
\end{equation}
where $q_l, r_l , r \in [0,1]$.
This implies
\begin{equation}
(s_j, s_j^2)
= \vec{u} + \vec{v},
\label{eq:conic1}
\end{equation}
where
\begin{gather}
\vec{u} := \sum_{l=0}^{j-1} q_l (s_l , 0), \notag
\\
\vec{v} := \sum_{l=0}^{j-1} q_l (0,s_l^2) + \sum_{l=j+1}^{n-1} q_l (s_l , s_l^2) + \sum_{l \in \mathbb{N}_n \setminus \{ j \}} r_l (1-s_l , 1-s_l^2) + r (1,1)
\label{eq:vdef}
\end{gather}
and the sum $\sum_{l=0}^{j-1}\cdots$ is understood to be $0$ when $j=0$.
Since we have
\begin{gather*}
\sum_{l=0}^{j-1} q_l s_l
\leq \sum_{l=0}^{j-1} s_l
= \frac{3^{-n} (3^j -1)}{2}
< \frac{3^{-n + j} }{2} = \frac{s_j}{2} \quad (j>0), \\
0 < \frac{s_0}{2} ,
\end{gather*}
$\vec{u}$ is in the line segment $L_j := \set{(t,0) | 0 \leq t \leq s_j /2} .$
On the other hand, $\vec{v}$ is in the convex cone
\begin{align*}
C_j &:= \set{ (\alpha s_{j+1} , \alpha s_{j+1}^2 + \beta) | \alpha , \beta \in [0,\infty)} \\
&= \set{(x,y) \in \mathbb{R}^2 | x \geq 0 , \, y \geq s_{j+1} x}
\end{align*}
generated by $(s_{j+1} , s_{j+1}^2)$ and $(0,1)$, where we put $s_n := 3^{n-n} = 1$.
This can be seen from the fact that all the terms on the RHS of \eqref{eq:vdef} are in $C_j$
and that $C_j$ is closed under conic combinations.
Therefore \eqref{eq:conic1} implies that
\begin{equation}
(s_j , s_j^2) \in L_j + C_j = \set{ (x + t , y) | x \geq 0 , \, y \geq s_{j+1}x , \, 0 \leq t \leq s_j /2},
\notag
\end{equation}
and hence there exists $ 0 \leq t \leq s_j/2$ such that
\begin{equation*}
s_j^2 \geq s_{j+1} (s_j -t) .
\end{equation*}
This implies
\begin{equation*}
\frac{s_j}{2} \geq t \geq (1 -s_j s_{j+1}^{-1})s_j = (1 - 3^{j-n - (j+1) + n}) s_j = \frac{2}{3}s_j >0,
\end{equation*}
which is a contradiction.
Thus $\mathsf{A}^{(j)} \preceq_{\mathrm{post}} \mathsf{B}^{(j)}$ does not hold.
We next assume $\mathsf{B}^{(j)} \preceq_{\mathrm{post}} \mathsf{A}^{(j)}$ and derive a contradiction.
We first assume $j > 0 $.
In this case, the component $(s_{j-1} /n , s_{j-1}^2 /n)$ of $\mathsf{B}^{(j)}$ can be written as a conic combination of the components of $\mathsf{A}^{(j)}$.
Thus we can take scalars $\alpha_0, \alpha_1 , \alpha_2 \geq 0 $ such that
\begin{equation*}
(s_{j-1} , s_{j-1}^2 ) = \alpha_0 (s_j , s_j^2 ) + \alpha_1 (1-s_j , 1-s_j^2) + \alpha_2 (1,1)
\in C_{j-1},
\end{equation*}
which is impossible because $(s_{j-1} , s_{j-1}^2 ) \not\in C_{j-1}$.
Therefore $\mathsf{B}^{(j)} \preceq_{\mathrm{post}} \mathsf{A}^{(j)}$ does not hold when $j >0$.
Now we assume $\mathsf{B}^{(0)} \preceq_{\mathrm{post}} \mathsf{A}^{(0)}$.
Then the component $(s_1 /n , s_1^2 /n)$ of $\mathsf{B}^{(0)}$ is written as
\begin{equation*}
\frac{1}{n}(s_1 , s_1^2 ) = \beta_0\frac{1}{n} (s_0 , s_0^2) + \beta_1 \frac{1}{n} (1-s_0,1-s_0^2)+ \beta \frac{n-1}{n}(1,1)
\end{equation*}
for some scalars $\beta_0 , \beta_1, \beta \in [0,1]$.
This implies
\begin{equation}
(s_1 , s_1^2) = \vec{\xi} + \vec{\eta},
\label{eq:ls1}
\end{equation}
where
\begin{gather*}
\vec{\xi} := \beta_0 (s_0 , 0) = \frac{\beta_0}{3}(s_1, 0) \in L_1 ,\\
\vec{\eta} := \beta_0 (0,s_0^2) +\beta_1 (1-s_0 , 1 -s_0^2) + \beta (n-1)(1,1) \in C_1 .
\end{gather*}
Thus \eqref{eq:ls1} implies $(s_1 , s_1^2) \in L_1 + C_1$, which gives a contradiction as we have already shown in the last paragraph.
Therefore $\mathsf{B}^{(0)} \preceq_{\mathrm{post}} \mathsf{A}^{(0)}$ does not hold.
\end{proof}
\noindent
\textit{Proof of Theorem~\ref{thm:main1}.\ref{it:thm1.1}.}
From Lemma~\ref{lemm:leq} we have
\begin{equation}
\dim_{\mathrm{ord}} (\mathfrak{M}_{\mathrm{fin}}(\Omega) , \mathord\pp) \leq \dim_{\mathrm{ord}, \, \realn} (\mathfrak{M}_{\mathrm{fin}}(\Omega) , \mathord\pp).
\label{eq:mp1}
\end{equation}
From Lemma~\ref{lemm:embedding},
the order embedding results in Lemmas~\ref{lemm:cbit} and \ref{lemm:main1} imply
\begin{equation}
n = \dim_{\mathrm{ord}} (S_n , \mathord\preceq_n) \leq \dim_{\mathrm{ord}} (\mathfrak{M}_{\mathrm{fin}} (\mathcal{P}(\mathbb{N}_2)), \mathord\pp) \leq \dim_{\mathrm{ord}} (\mathfrak{M}_{\mathrm{fin}}(\Omega), \mathord\pp)
\label{eq:mp2}
\end{equation}
for any natural number $n \geq 3$.
Then the claim follows from \eqref{eq:mp1} and \eqref{eq:mp2}.
\qed
\subsection{Proof of Theorem~\ref{thm:main1}.\ref{it:thm1.2}} \label{subsec:m1.2}
Now we assume that $V$ is separable in the norm topology.
\begin{lemm} \label{lemm:countable}
There exists a countable family of order monotones that characterizes $(\mathfrak{M}_{\mathrm{fin}}(\Omega) , \mathord\pp) .$
\end{lemm}
\begin{proof}
For the proof we explicitly construct a countable family of order monotones that characterizes the post-processing order.
By the separability of $V$, the set of ensembles on $\Omega$ is also separable in the norm topology, which means that
we can take a sequence $\mathcal{E}^{(i)} = (\rho^{(i)}_j)_{j=0}^{N_i-1}$ $(i\in \mathbb{N})$ of ensembles on $\Omega$ such that for every ensemble $\mathcal{E} = (\rho_j)_{j=0}^{N-1}$ and $\epsilon > 0$ there exists some $i \in \mathbb{N}$ satisfying $N_i = N$ and
\begin{equation}
\| \mathcal{E} - \mathcal{E}^{(i)} \| := \sum_{j=0}^{N-1} \| \rho_j - \rho_j^{(i)} \| < \epsilon .
\notag
\end{equation}
We now prove that the countable family $\{P_\mathrm{g} (\mathcal{E}^{(i)} ; \cdot)\}_{i \in \mathbb{N}}$ of order monotones characterizes $(\mathfrak{M}_{\mathrm{fin}}(\Omega) , \mathord\pp) .$
For this, from Proposition~\ref{prop:bss}, we have only to prove that for all EVMs $\mathsf{M} = (\oM(j))_{j=0}^{m-1}$ and $\mathsf{N} = (\oN(k))_{k=0}^{n-1}$ on $\Omega$,
\begin{equation}
P_\mathrm{g} (\mathcal{E}^{(i)} ; \mathsf{M}) \leq P_\mathrm{g} (\mathcal{E}^{(i)} ; \mathsf{N} )
\quad (\forall i \in \mathbb{N})
\label{eq:as1}
\end{equation}
implies
\begin{equation}
P_\mathrm{g} (\mathcal{E}; \mathsf{M}) \leq P_\mathrm{g} (\mathcal{E} ; \mathsf{N} )
\quad (\text{$\forall \mathcal{E}$: ensemble}) .
\label{eq:con1}
\end{equation}
Assume \eqref{eq:as1}.
We take an arbitrary ensemble $\mathcal{E} = (\rho_j)_{j=0}^{N-1}$ on $\Omega$ and $\epsilon >0 .$
By the density of $\{ \mathcal{E}^{(i)} \}_{i \in \mathbb{N}}$ there exists some $i \in \mathbb{N} $ such that $N_i = N$ and $\|\mathcal{E} - \mathcal{E}^{(i)} \| < \epsilon .$
We define
\begin{equation}
\mathrm{EVM} (N ; \mathsf{M})
:= \set{\mathsf{A} = \left( \mathsf{A} (j) \right)_{j=0}^{N-1} \in \mathrm{EVM}_N (\Omega) | \mathsf{A} \preceq_{\mathrm{post}} \mathsf{M} } ,
\notag
\end{equation}
which is the set of $N$-outcome EVMs obtained by post-processing $\mathsf{M} .$
Then from the definitions of $P_\mathrm{g} (\mathcal{E};\mathsf{M})$ and $\mathord\pp$ we have
\begin{align}
P_\mathrm{g} (\mathcal{E} ; \mathsf{M} )
&=\sup_{(\mathsf{A}(j))_{j=0}^{N-1} \in \mathrm{EVM} (N; \mathsf{M} )}
\sum_{j=0}^{N-1} \braket{\mathsf{A}(j) , \rho_j}
\notag \\
&=\sup_{(\mathsf{A}(j))_{j=0}^{N-1} \in \mathrm{EVM} (N; \mathsf{M} )}
\sum_{j=0}^{N-1} \left( \braket{\mathsf{A}(j) , \rho_j^{(i)}} + \braket{\mathsf{A}(j), \rho_j - \rho_j^{(i)}} \right)
\notag \\
& \leq \sup_{(\mathsf{A}(j))_{j=0}^{N-1} \in \mathrm{EVM} (N; \mathsf{M} )}
\sum_{j=0}^{N-1} \left(\braket{\mathsf{A}(j), \rho_j^{(i)}} + \|\rho_j - \rho_j^{(i)} \| \right)
\label{eq:der1}
\\
&\leq \sup_{(\mathsf{A}(j))_{j=0}^{N-1} \in \mathrm{EVM} (N; \mathsf{M} )}
\left(
\epsilon +
\sum_{j=0}^{N-1} \braket{\mathsf{A}(j), \rho_j^{(i)}}
\right)
\notag \\
&= P_\mathrm{g} (\mathcal{E}^{(i)}; \mathsf{M}) + \epsilon , \label{eq:ineq1}
\end{align}
where in deriving \eqref{eq:der1} we used the inequality
\[
|\braket{f,x}| \leq \| f \| \|x\|
\quad (\text{$f \in V^\ast ,$ $x \in V$})
\]
and $\| \mathsf{A}(j) \| \leq 1 .$
By replacing $\mathsf{M},$ $\mathcal{E}, $ and $\mathcal{E}^{(i)}$ in the above argument with $\mathsf{N},$ $\mathcal{E}^{(i)} ,$ and $\mathcal{E},$ respectively, we also obtain
\begin{equation}
P_\mathrm{g} (\mathcal{E}^{(i)} ; \mathsf{N}) \leq P_\mathrm{g} (\mathcal{E} ; \mathsf{N}) + \epsilon.
\label{eq:ineq2}
\end{equation}
From \eqref{eq:as1}, \eqref{eq:ineq1}, and \eqref{eq:ineq2}, we have
\begin{equation}
P_\mathrm{g} (\mathcal{E} ; \mathsf{M}) \leq P_\mathrm{g}(\mathcal{E} ; \mathsf{N}) + 2 \epsilon .
\label{eq:Pgineq}
\end{equation}
Since $\epsilon >0$ is arbitrary, \eqref{eq:Pgineq} implies $P_\mathrm{g} (\mathcal{E} ; \mathsf{M}) \leq P_\mathrm{g} (\mathcal{E} ; \mathsf{N}) ,$ which completes the proof of the implication \eqref{eq:as1}$\implies$\eqref{eq:con1}.
\end{proof}
\noindent
\textit{Proof of Theorem~\ref{thm:main1}.\ref{it:thm1.2}.}
From \eqref{eq:mp1} and Theorem~\ref{thm:main1}.\ref{it:thm1.1}, we have
\begin{equation}
\aleph_0 \leq \dim_{\mathrm{ord}} (\mathfrak{M}_{\mathrm{fin}}(\Omega), \mathord\pp) \leq \dim_{\mathrm{ord}, \, \realn} (\mathfrak{M}_{\mathrm{fin}}(\Omega), \mathord\pp).
\label{eq:mp3}
\end{equation}
On the other hand, Lemma~\ref{lemm:countable} implies
\begin{equation}
\dim_{\mathrm{ord}, \, \realn} (\mathfrak{M}_{\mathrm{fin}}(\Omega), \mathord\pp) \leq \aleph_0 .
\label{eq:mp4}
\end{equation}
Then \eqref{eq:main1} follows from \eqref{eq:mp3} and \eqref{eq:mp4}. \qed
\subsection{Proof of Theorem~\ref{thm:main2}} \label{subsec:m2}
Now we assume that $\mathcal{H}$ is a complex separable Hilbert space.
\begin{lemm} \label{lemm:embedding2}
$(\mathfrak{M}_{\mathrm{fin}} (\mathbf{D}(\mathcal{H}){}) , \mathord\pp )$ is embeddable into $(\mathfrak{C} (\mathcal{H}) , \mathord\preceq_{\mathrm{CP}})$.
\end{lemm}
\begin{proof}
For each EVM $\mathsf{M} = (\oM(j))_{j=0}^{m-1} \in \mathrm{EVM}_{\mathrm{fin}}(\mathbf{D}(\mathcal{H}){}) $ we define a normal channel (called the quantum-classical channel) $\Gamma^\mathsf{M} \colon \mathbf{B}(\mathbb{C}^m) \to \mathbf{B}(\mathcal{H})$ by
\begin{equation*}
\Gamma^\mathsf{M} (a) := \sum_{j=0}^{m-1} \braket{\xi_j^{(m)} | a \xi_j^{(m)}} \mathsf{M}(j)
\quad (a \in \mathbf{B}(\mathbb{C}^m)) ,
\end{equation*}
where $(\xi_j^{(m)})_{j=0}^{m-1} $ is an orthonormal basis of $\mathbb{C}^m .$
It is known that
\begin{equation}
\mathsf{M} \preceq_{\mathrm{post}} \mathsf{N} \iff \Gamma^\mathsf{M} \preceq_{\mathrm{CP}} \Gamma^\mathsf{N}
\label{eq:iff3}
\end{equation}
for all finite-outcome EVMs $\mathsf{M} $ and $\mathsf{N} $ on $\mathbf{D}(\mathcal{H}) $ (\cite{1751-8121-50-13-135302}, Proposition~1).
From \eqref{eq:iff3} it readily follows that the map
\begin{equation*}
\mathfrak{M}_{\mathrm{fin}} (\mathbf{D}(\mathcal{H}){}) \ni [\mathsf{M}] \mapsto [\Gamma^\mathsf{M}] \in \mathfrak{C} (\mathcal{H})
\end{equation*}
is a well-defined order embedding, which proves the claim.
\end{proof}
The proof of the following lemma is almost parallel to that of Lemma~\ref{lemm:countable}.
\begin{lemm} \label{lemm:countable2}
There exists a countable family of order monotones that characterizes $(\mathfrak{C} (\mathcal{H}) , \mathord\preceq_{\mathrm{CP}}) .$
\end{lemm}
\begin{proof}
Since $\mathcal{H} \otimes \mathbb{C}^n$ is separable for each $n \in \mathbb{N}$, we can take a dense countable family $\mathcal{E}^{(n,i)} = (\rho_k^{(n, i)})_{k=0}^{N_{n,i}-1}$ $(i\in \mathbb{N})$ of ensembles on $\mathbf{D}(\mathcal{H} \otimes \mathbb{C}^n)$ in the sense that for every ensemble $\mathcal{E} = (\rho_k)_{k=0}^{m-1}$ on $\mathbf{D}(\mathcal{H} \otimes \mathbb{C}^n)$ and every $\epsilon >0$ there exists $i \in \mathbb{N}$ such that $N_{n,i} = m$ and
\begin{equation}
\| \mathcal{E} - \mathcal{E}^{(n,i)} \|_1
:=\sum_{k=0}^{m-1} \| \rho_k - \rho_k^{(n,i)} \|_1 < \epsilon .
\notag
\end{equation}
We establish the lemma by demonstrating that the countable family $\{P_\mathrm{g}^{(n)} (\mathcal{E}^{(n,i)} ; \cdot ) \}_{n,i\in \mathbb{N}}$ characterizes $(\mathfrak{C} (\mathcal{H}) , \mathord\preceq_{\mathrm{CP}}) .$
From Proposition~\ref{prop:qbss}, it suffices to show that
\begin{equation}
P_\mathrm{g} (\mathcal{E}^{(n,i)} ; \Gamma \otimes \mathrm{id}_n) \leq P_\mathrm{g} (\mathcal{E}^{(n,i)} ; \Lambda \otimes \mathrm{id}_n )
\quad (\forall i \in \mathbb{N})
\label{eq:c1}
\end{equation}
implies
\begin{equation}
P_\mathrm{g} (\mathcal{E} ; \Gamma \otimes \mathrm{id}_n ) \leq P_\mathrm{g} (\mathcal{E} ; \Lambda \otimes \mathrm{id}_n )
\quad (\text{$\forall \mathcal{E}$: ensemble on $\mathbf{D}(\mathcal{H} \otimes \mathbb{C}^n)$})
\label{eq:c2}
\end{equation}
for all channels $\Gamma \colon \mathbf{B}(\mathcal{K}) \to \mathbf{B}(\mathcal{H})$ and $\Lambda \colon \mathbf{B}(\mathcal{J}) \to \mathbf{B}(\mathcal{H}) $,
and every $n \in \mathbb{N} .$
Assume \eqref{eq:c1}.
Let $\mathcal{E} = (\rho_k)_{k=0}^{m-1}$ be an arbitrary ensemble on $\mathbf{D}(\mathcal{H} \otimes \mathbb{C}^n)$.
For any $\epsilon >0$ we can take $i\in \mathbb{N}$ such that $N_{n,i} = m$ and $\| \mathcal{E} - \mathcal{E}^{(n,i)} \|_1 < \epsilon $.
Then we have
\begin{align}
&P_\mathrm{g} (\mathcal{E} ; \Gamma \otimes \mathrm{id}_n) \notag \\
&= \sup_{(\mathsf{M} (k))_{k=0}^{m-1} \in \mathrm{EVM}_m (\mathbf{D}(\mathcal{K} \otimes \mathbb{C}^n))}
\sum_{k=0}^{m-1} \braket{ (\Gamma \otimes \mathrm{id}_n) (\mathsf{M} (k)) , \rho_k }
\notag \\
&=\sup_{(\mathsf{M} (k))_{k=0}^{m-1} \in \mathrm{EVM}_m (\mathbf{D}(\mathcal{K} \otimes \mathbb{C}^n))}
\sum_{k=0}^{m-1} \left( \braket{ (\Gamma \otimes \mathrm{id}_n) (\mathsf{M} (k)) , \rho_k^{(n,i)}}
+ \braket{ (\Gamma \otimes \mathrm{id}_n) (\mathsf{M}(k)), \rho_k - \rho_k^{(n,i)}}
\right)
\notag \\
&\leq
\sup_{(\mathsf{M} (k))_{k=0}^{m-1} \in \mathrm{EVM}_m (\mathbf{D}(\mathcal{K} \otimes \mathbb{C}^n))}
\sum_{k=0}^{m-1} \left( \braket{ (\Gamma \otimes \mathrm{id}_n) (\mathsf{M} (k)), \rho_k^{(n,i)}} + \| \rho_k - \rho_k^{(n,i)} \|_1 \| (\Gamma \otimes \mathrm{id}_n) (\mathsf{M}(k)) \|
\right)
\notag \\
&\leq
\sup_{(\mathsf{M} (k))_{k=0}^{m-1} \in \mathrm{EVM}_m (\mathbf{D}(\mathcal{K} \otimes \mathbb{C}^n))}
\left( \sum_{k=0}^{m-1} \braket{ (\Gamma \otimes \mathrm{id}_n) (\mathsf{M} (k)), \rho_k^{(n,i)}}
\right) + \epsilon
\label{eq:c2.5} \\
&=
P_\mathrm{g} (\mathcal{E}^{(n,i)} ; \Gamma \otimes \mathrm{id}_n) + \epsilon ,
\label{eq:c3}
\end{align}
where we used $ \| (\Gamma \otimes \mathrm{id}_n) (\mathsf{M}(k)) \| \leq 1$ in deriving \eqref{eq:c2.5}.
By replacing $\mathcal{E} $, $\mathcal{E}^{(n,i)}$, and $\Gamma$ in the above argument with $\mathcal{E}^{(n,i)}$, $\mathcal{E}$, and $\Lambda$, respectively, we also obtain
\begin{equation}
P_\mathrm{g} (\mathcal{E}^{(n,i)} ; \Lambda \otimes \mathrm{id}_n) \leq P_\mathrm{g} (\mathcal{E} ; \Lambda \otimes \mathrm{id}_n) + \epsilon .
\label{eq:c4}
\end{equation}
From \eqref{eq:c1}, \eqref{eq:c3}, and \eqref{eq:c4}, we have
\begin{equation*}
P_\mathrm{g} (\mathcal{E} ; \Gamma \otimes \mathrm{id}_n) \leq P_\mathrm{g} (\mathcal{E} ; \Lambda \otimes \mathrm{id}_n) + 2\epsilon .
\end{equation*}
Since $\epsilon >0$ is arbitrary, this implies $P_\mathrm{g} (\mathcal{E} ; \Gamma \otimes \mathrm{id}_n) \leq P_\mathrm{g} (\mathcal{E} ; \Lambda \otimes \mathrm{id}_n),$ which completes the proof of \eqref{eq:c1}$\implies$\eqref{eq:c2}.
\end{proof}
\noindent
\textit{Proof of Theorem~\ref{thm:main2}.}
Since $\mathbf{T}_{\mathrm{sa}}(\mathcal{H})$ is separable in the trace norm topology and $\dim \mathbf{T}_{\mathrm{sa}}(\mathcal{H}) \geq 4$, the first claim~\eqref{eq:main2-1} follows from Theorem~\ref{thm:main1}.\ref{it:thm1.2}.
Now we prove the second claim~\eqref{eq:main2-2}.
Lemma~\ref{lemm:leq} implies
\begin{equation}
\dim_{\mathrm{ord}} (\mathfrak{C} (\mathcal{H}) , \mathord\preceq_{\mathrm{CP}}) \leq \dim_{\mathrm{ord}, \, \realn} (\mathfrak{C} (\mathcal{H}) , \mathord\preceq_{\mathrm{CP}})
\label{eq:prf2} .
\end{equation}
From Lemmas~\ref{lemm:embedding} and \ref{lemm:embedding2} and \eqref{eq:main2-1} we have
\begin{equation}
\aleph_0 = \dim_{\mathrm{ord}} (\mathfrak{M}_{\mathrm{fin}} (\mathbf{D}(\mathcal{H})) , \mathord\pp) \leq \dim_{\mathrm{ord}} (\mathfrak{C} (\mathcal{H}), \mathord\preceq_{\mathrm{CP}} ).
\label{eq:prf3}
\end{equation}
From Lemma~\ref{lemm:countable2} we also have
\begin{equation}
\dim_{\mathrm{ord}, \, \realn} (\mathfrak{C} (\mathcal{H}) , \mathord\preceq_{\mathrm{CP}}) \leq \aleph_0 .
\label{eq:prf2-2}
\end{equation}
Then \eqref{eq:main2-2} follows from \eqref{eq:prf2}, \eqref{eq:prf3}, and \eqref{eq:prf2-2}.
\qed
\section{Conclusion} \label{sec:conclusion}
In this paper we have evaluated the order and order monotone dimensions of the post-processing orders of measurements on an arbitrary non-trivial GPT $\Omega$ (Theorem~\ref{thm:main1}) and of quantum channels with a fixed input Hilbert space (Theorem~\ref{thm:main2}).
We found that all of these order dimensions are infinite.
Our results reveal that the post-processing order of measurements or quantum channels is qualitatively more complex than any order with a finite dimension, such as the adiabatic accessibility relation in thermodynamics or the LOCC convertibility relation of bipartite pure states.
In the crucial step of the proof, we have explicitly constructed an order embedding from the standard example $(S_n , \mathord\preceq_n)$ of an $n$-dimensional poset into the poset $(\mathfrak{M}_{\mathrm{fin}} (\mathcal{P}(\mathbb{N}_2)) , \mathord\pp)$ of the equivalence classes of finite-outcome EVMs on the classical bit space $\mathcal{P}(\mathbb{N}_2)$ for every $n \geq 3$ (Lemma~\ref{lemm:main1}).
We also note that the BSS-type theorems (Propositions~\ref{prop:bss} and \ref{prop:qbss}) played important roles in the proofs of Lemmas~\ref{lemm:main1}, \ref{lemm:countable}, and \ref{lemm:countable2}.
As mentioned in the introduction, we can find many other important non-total orders in physics or quantum information, especially in quantum resource theories.
The present work is only a first step in evaluating these kinds of order dimensions, and it would be interesting future work to investigate other orders in resource theories from the standpoint of the order dimension.
\section*{Introduction}
We study the critical behaviour of the $O(n)$-symmetric $\phi^{4}$ model with an antisymmetric tensor order parameter $\phi_{ik}=-\phi_{ki}$; $i$, $k=1$, \dots, $n$. The action of the model,
\begin{equation}
\label{action}
S(\phi)=\frac{1}{2}\,\mathrm{tr}\left(\phi\left(-\partial^{2}+m^{2}_{0}\right)\phi\right) - \frac{g_{10}}{4!} \left(\mathrm{tr}\left(\phi^{2}\right)\right)^{2} - \frac{g_{20}}{4!}\, \mathrm{tr} \left(\phi^{4}\right),
\end{equation}
includes two independent $O(n)$-invariant quartic structures and consequently two independent coupling constants. Previously, this model was studied in the framework of the minimal subtraction (MS) scheme with the $\varepsilon$-expansion and of the renormalization group approach in a space of fixed dimension with the pseudo-$\varepsilon$-expansion, up to four-loop order \cite{AKL1,KL1,KL2}. It was shown that the requirement of convergence of the functional integral with the action (\ref{action}) imposes restrictions on the values the couplings can take:
\begin{eqnarray}
\label{ineq}
&\text{even $n$:}& \qquad 2 g_{10} + g_{20} >0, \quad n g_{10} + g_{20} >0, \\
&\text{odd $n$:} &\qquad 2 g_{10} + g_{20} >0, \quad (n - 1) g_{10} + g_{20} >0, \nonumber
\end{eqnarray}
and also the high-order asymptotics (HOA) of the coefficients of the perturbation series and of the corresponding $\varepsilon$-expansion were found:
\begin{equation}
\label{hoag}
\beta^{(N)}_{i}(g_{1},g_{2})= \text{Const} \cdot N!N^{b}(-a(g_{1},g_{2}))^{N}\left(1+O\Big(\frac{1}{N}\Big)\right)
\end{equation}
\begin{equation}
\label{hoae}
g^{(N)}_{1,2*} = \text{Const} \cdot N!N^{b+1}(-a(g^{(1)}_{1*},g^{(1)}_{2*}))^{N}\left(1+O\Big(\frac{1}{N}\Big)\right).
\end{equation}
Here $a(g_{1},g_{2})=\max_{k}\ [a_{k}(g_{1},g_{2})]=\max_{k}\,((2kg_{1} + g_{2})/4k)$; $k=1$, \dots, $n/2$; $g^{(1)}_{1*}$ and $g^{(1)}_{2*}$ are the one-loop contributions to the coordinates of the fixed points, and the different $a_{k}$ correspond to different instanton solutions. For $n=4$, for instance, $a(g_{1},g_{2})=\max\{(2g_{1} + g_{2})/4,\,(4g_{1} + g_{2})/8\}$.
It is also known that in the cases $n=2,3$ the model reduces to the scalar and $O(3)$-vector models, respectively, and that for $n>4$ there are no IR-stable fixed points within perturbation theory. In the case $n=4$ there are three fixed points: the point A, which is a saddle point at all orders, and the points B and C, which at the four-loop level are a saddle point and an IR-stable point, respectively. The coordinates of the latter point are known to obey the relation $g^{*}_{1} = -0.75 g^{*}_{2}$ at all orders.
Recently the $\varepsilon$-expansions were extended up to sixth order \cite{PB}, which allows us to refine our understanding of the critical properties of the model and provides a convenient testing ground for studying the stability of certain resummation techniques based on the Borel-Leroy transformation in the case of a multi-coupling model.
\label{sec:borel-leroy}
\section*{Borel-Leroy transformation}
For some quantity $f(z)$ the Borel-Leroy transform is defined as follows:
\begin{equation}
f(z) = \sum_{N \ge 0} f_{N} z^{N}; \quad \Rightarrow \quad B(t) = \sum_{N \ge 0} \frac{f_{N}}{\Gamma(N+b_{0}+1)}\ t^{N} = \sum_{N \ge 0} B_{N} t^{N}.
\end{equation}
If the original series is asymptotic with factorially growing coefficients, then the series for the Borel image converges in a circle of radius $1/a$.
\begin{equation}
f_{N} \simeq \text{Const} \cdot N!N^{b}(-a)^{N} \quad \Rightarrow \quad B_{N} \simeq \text{Const} \cdot N^{b - b_{0}}(-a)^{N}.
\end{equation}
Thus, in order to perform the inverse transform and obtain the resummed quantity
\begin{equation}
f^{\mathrm{res}}(z) = \int^{\infty}_{0} dt\ t^{b_{0}} e^{-t} B(tz),
\end{equation}
one has to construct an analytic continuation of $B(t)$ outside of the circle of convergence (for a more detailed discussion of Borel-Leroy-based techniques see \cite{K} and references therein).
\label{sec:Conformal-borel}
\section*{Conformal-Borel}
One possible way to perform the inverse transform is to map the integration contour into the circle of convergence of $B(t)$:
\begin{equation}
u(t) = \frac{\sqrt{1+at}-1}{\sqrt{1+at}+1} \quad \Leftrightarrow \quad t(u) = \frac{4u}{a(u-1)^{2}},
\label{mapping}
\end{equation}
where $a$ is the parameter of the HOA (\ref{hoag}), (\ref{hoae}); one then re-expands the Borel image in terms of the new variable and performs the inverse transform. This choice of the mapping function, together with setting the free parameter $b_{0} = 3/2$, guarantees that the resummed series has the correct HOA. The values of the critical exponents obtained in this way are presented in Tables \ref{tabconfpa}--\ref{confpc}.
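For illustration, a minimal Python sketch of this procedure might look as follows. It is only an illustrative implementation written for this text (not the code used to produce the tables below); the series coefficients \texttt{f}, the instanton parameter \texttt{a} (assumed positive) and the evaluation point $z$ are inputs.
\begin{verbatim}
import numpy as np
from math import gamma, comb
from scipy.integrate import quad

def resum_conformal(f, a, b0=1.5):
    """Conformal-Borel resummation of a truncated series sum_N f[N] z^N."""
    L = len(f)
    # Borel-Leroy image coefficients B_N = f_N / Gamma(N + b0 + 1).
    B = [f[N] / gamma(N + b0 + 1) for N in range(L)]
    # Re-expand B(t(u)) in powers of u, using
    #   t(u)^N = (4/a)^N u^N (1 - u)^(-2N),
    #   (1 - u)^(-2N) = sum_k binom(2N + k - 1, k) u^k   (N >= 1).
    U = np.zeros(L)
    U[0] = B[0]
    for N in range(1, L):
        for k in range(L - N):
            U[N + k] += B[N] * (4.0 / a) ** N * comb(2 * N + k - 1, k)

    def fres(z):
        # f_res(z) = int_0^inf e^(-t) t^b0 B(zt) dt, with B(zt)
        # evaluated through the conformal variable u(zt) in [0, 1).
        def integrand(t):
            s = np.sqrt(1.0 + a * z * t)
            u = (s - 1.0) / (s + 1.0)
            return np.exp(-t) * t ** b0 * np.polyval(U[::-1], u)
        val, _ = quad(integrand, 0.0, np.inf, limit=200)
        return val

    return fres
\end{verbatim}
The re-expansion is truncated at the same order as the input series, and the integrand stays bounded because $|u|<1$ on the whole integration contour.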
\begin{table}[H]
\caption{\label{tabconfpa}Values of critical exponents at different number of loops taken into account for the fixed point A.}
\centering
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
\multirow{2}{*}{quantity} & \multicolumn{3}{c|}{$d=2$} & \multicolumn{3}{c|}{$d=3$} \\
\cline{2-7}
& 4 loops & 5 loops & 6 loops & 4 loops & 5 loops & 6 loops\\
\hline \hline
$g_{1}^{*}$ & 1.10 & 1.24 & 1.34 & 0.542 & 0.571 & 0.586 \\
\hline
$g_{2}^{*}$ & 0 & 0 & 0 & 0 & 0 & 0 \\
\hline
$\omega_1$ & $-0.487$ & $-0.583$ & $-0.673$ & $-0.226$ & $-0.246$ & $-0.260$\\
\hline
$\omega_2$ & 1.35 & 1.39 & 1.36 & 0.781 & 0.791 & 0.786\\
\hline
$\eta$ & 0.0557 & 0.0820 & 0.106 & 0.0192 & 0.0245 & 0.0279\\
\hline
\end{tabular}
\bigskip
\caption{\label{confpb}Values of critical exponents at different number of loops taken into account for the fixed point B.}
\centering
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
\multirow{2}{*}{quantity} & \multicolumn{3}{c|}{$d=2$} & \multicolumn{3}{c|}{$d=3$} \\
\cline{2-7}
& 4 loops & 5 loops & 6 loops & 4 loops & 5 loops & 6 loops\\
\hline \hline
$g_{1}^{*}$ & 2.40 & 2.89 & 3.21 & 1.14 & 1.26 & 1.32 \\
\hline
$g_{2}^{*}$ & $-3.95$ & $-5.21$ & $-6.32$ & $-1.75$ & $-2.04$ & $-2.24$ \\
\hline
$\omega_1$ & $-0.198$ & $-0.413$ & $-0.683$ & $-0.0547$ & $-0.105$ & $-0.153$\\
\hline
$\omega_2$ & 1.28 & 1.36 & 1.43 & 0.755 & 0.774 & 0.787\\
\hline
$\eta$ & 0.0126 & 0.0267 & 0.0440 & 0.00407 & 0.00721 & 0.0101\\
\hline
\end{tabular}
\bigskip
\caption{\label{confpc}Values of critical exponents at different number of loops taken into account for the fixed point C.}
\centering
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
\multirow{2}{*}{quantity} & \multicolumn{3}{c|}{$d=2$} & \multicolumn{3}{c|}{$d=3$} \\
\cline{2-7}
& 4 loops & 5 loops & 6 loops & 4 loops & 5 loops & 6 loops\\
\hline \hline
$g_{1}^{*}$ & 2.02 & 2.32 & 2.54 & 1.03 & 1.10 & 1.14 \\
\hline
$g_{2}^{*}$ & $-2.69$ & $-3.10$ & $-3.38$ & $-1.37$ & $-1.47$ & $-1.52$ \\
\hline
$\omega_1$ & 0.245 & 0.377 & 0.500 & 0.0774 & 0.109 & 0.131\\
\hline
$\omega_2$ & 1.30 & 1.38 & 1.41 & 0.762 & 0.781 & 0.787\\
\hline
$\eta$ & 0.0477 & 0.0744 & 0.101 & 0.0177 & 0.0238 & 0.0284\\
\hline
\end{tabular}
\end{table}
\label{sec:pade-borel}
\section*{Pade-Borel}
A simpler way to construct an analytic continuation of the Borel image is to build Pad\'e approximants of it in such a way that the initial terms of the series expansion of $B(t)$ are reproduced:
\begin{equation}
f^{\mathrm{res}}_{[N, M]}(z) = \int^{\infty}_{0}\!\! dt\ e^{-t} t^{b_{0}} B_{[N,M]}(zt); \quad B_{[N, M]}(t) = \frac{P_{N}(t)}{Q_{M}(t)} = \frac{\sum_{i =0}^{i = N} \alpha_{i} t^{i}}{\sum_{j =0}^{j = M} \beta_{j}t^{j}}\,.
\end{equation}
The values of the critical exponents corresponding to the IR-stable fixed point at $d=3$ obtained in this way are presented in Tables~\ref{tabeps}--\ref{tabeps4}.
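For illustration, a minimal sketch of the $[N/M]$ Pad\'e-Borel procedure might read as follows. Again this is only an illustrative implementation with names of our choosing; it implicitly assumes that the denominator $Q_{M}$ has no zeros on the positive real axis (approximants violating this have to be discarded).
\begin{verbatim}
import numpy as np
from math import gamma
from scipy.integrate import quad

def pade_borel(f, N, M, b0=0.0):
    """[N/M] Pade-Borel resummation of sum_k f[k] z^k (len(f) >= N+M+1)."""
    c = [f[k] / gamma(k + b0 + 1) for k in range(N + M + 1)]
    # Denominator coefficients from the linear system (q[0] = 1):
    #   sum_{j=0}^{M} q[j] c[N+i-j] = 0   for i = 1..M.
    if M > 0:
        A = np.array([[c[N + i - j] if N + i - j >= 0 else 0.0
                       for j in range(1, M + 1)] for i in range(1, M + 1)])
        rhs = -np.array([c[N + i] for i in range(1, M + 1)])
        q = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    else:
        q = np.array([1.0])
    # Numerator coefficients.
    p = np.array([sum(q[j] * c[k - j] for j in range(min(k, M) + 1))
                  for k in range(N + 1)])
    B = lambda t: np.polyval(p[::-1], t) / np.polyval(q[::-1], t)

    def fres(z):
        val, _ = quad(lambda t: np.exp(-t) * t ** b0 * B(z * t),
                      0.0, np.inf, limit=200)
        return val

    return fres
\end{verbatim}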
\begin{table}[H]
\caption{\label{tabeps}Coordinate $g^{*}_{1}$.}
\centering
\begin{tabular}{|c||c c c c c c }
\hline
\backslashbox{N}{M} & 0 & 1 & 2 & 3 & 4 & 5 \\
\hline
1 & 0.818 & 2.93 & 0.772 & 1.44 & 0.471 & 0.0260 \\
\cline{1-1}
2 & 1.28 & 1.12 & 1.23 & 1.12 & 1.32& \\
\cline{1-1}
3 & 1.03 & 1.19 & 1.17 & 1.18 & & \\
\cline{1-1}
4 & 1.51 & 1.16 & 1.18 & & & \\
\cline{1-1}
5 & 0.222 & 1.20 & & & &\\
\cline{1-1}
6 & 4.48 & & & & & \\
\cline{1-1}
\end{tabular}
\end{table}
\begin{table}[H]
\caption{\label{tabeps2}Eigenvalue $\omega_{1}$.}
\centering
\begin{tabular}{|c||c c c c c c }
\hline
\backslashbox{N}{M} & 0 & 1 & 2 & 3 & 4 & 5 \\
\hline
1 & $-0.0909$ & $-0.0217$ & $-0.00783$ & $-0.00313$ & $-0.00143$ & $-0.00072$ \\
\cline{1-1}
2 & 0.221 & 0.0912 & 0.0337 & 0.0530 & $-0.0377$ & \\
\cline{1-1}
3 & $-0.00926$ & 0.153 & 0.156 & 0.210 & & \\
\cline{1-1}
4 & 0.578 & 0.156 & 0.153 & & & \\
\cline{1-1}
5 & $-1.00$ & 0.199 & & & &\\
\cline{1-1}
6 & 4.28 & & & & & \\
\cline{1-1}
\end{tabular}
\end{table}
\begin{table}[H]
\caption{\label{tabeps3}Eigenvalue $\omega_{2}$.}
\centering
\begin{tabular}{|c||c c c c c c }
\hline
\backslashbox{N}{M} & 0 & 1 & 2 & 3 & 4 & 5 \\
\hline
1 & 1.00 & 0.645 & 1.18 & 0.434 & $-0.129$ & 0.201 \\
\cline{1-1}
2 & 0.430 & 0.817 & 0.776 & 0.818 & 0.755 & \\
\cline{1-1}
3 & 1.71 & 0.769 & 0.797 & 0.791 & & \\
\cline{1-1}
4 & $-2.07$ & 0.831 & 0.790 & & & \\
\cline{1-1}
5 & 11.1 & 0.708 & & & &\\
\cline{1-1}
6 & $-41.1$ & & & & & \\
\cline{1-1}
\end{tabular}
\end{table}
\begin{table}[H]
\caption{\label{tabeps4}Exponent $\eta$.}
\centering
\begin{tabular}{|c||c c c c c }
\hline
\backslashbox{N}{M} & 0 & 1 & 2 & 3 & 4 \\
\hline
2 & 0.0206 & 0.0802 & 0.0170 & $-0.00749$ & 0.00832 \\
\cline{1-1}
3 & 0.0391 & 0.0339 & 0.0429 & 0.0339 & \\
\cline{1-1}
4 & 0.0316 & 0.0370 & 0.0372 & & \\
\cline{1-1}
5 & 0.0520 & 0.0372 & & &\\
\cline{1-1}
6 & $-0.00503$ & & & & \\
\cline{1-1}
\end{tabular}
\end{table}
\label{sec:Proximity}
\section*{Proximity of the ressumed series to the exact results}
If we calculate analytical continuation of $B(t)$ from first $l$ known coefficients we can expand it back in powers of $t$ to find that the expansion not just reproduce first $l$ coefficients but also add some additional sub-series that we are actually summing up.
\begin{equation}
B_{\mathrm{continued}}(t) = \sum_{N \leq l} B_{N} t^{N} + \sum_{N > l} B^{r}_{N} t^{N}.
\end{equation}
In order to estimate how close this reconstructed sub-series is to the unknown exact coefficients, one can try to reconstruct the last known coefficient $B_{l}$ taking into account fewer of the known contributions, and then estimate the proximity to the exact answer and the convergence rate by calculating the relative discrepancy from the exact value:
\begin{equation}
\xi_{l} = \frac{f_{l} - f^{r}_{l}}{f_{l}}\,.
\end{equation}
The estimates of the value of $\xi_{6}$ for the $\varepsilon$-expansions at the fixed point C obtained from the conformal mapping are presented in Table~\ref{predictA}, and those for the Pad\'e approximation in Tables~\ref{tabeps5}--\ref{tabeps6}.
\begin{table}[H]
\caption{\label{predictA}$\xi_{6}$ value for $\varepsilon$-expansions at the IR attractive fixed point.}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
quantity & 3 loops & 4 loops & 5 loops \\
\hline
$g^{*}_{1}$ & $-41.9$ & 18.2 & $-3.15$ \\
\hline
$\omega_{1}$ & $-9.47$ & 5.93 & $-1.47$ \\
\hline
$\omega_{2}$ & 0.996 & 0.424 & 0.0389 \\
\hline
$\eta$ & 100 & $-80.7$ & 22.7 \\
\hline
\end{tabular}
\end{table}
\begin{table}[H]
\caption{\label{tabeps5}$\xi_{6}$ value for $\varepsilon$-expansions of $g^{*}_{1}$ (left) and $\eta$ (right) at the IR attractive fixed point.}
\begin{tabular}{|c||c c c c c }
\hline
\backslashbox{N}{M} & 1 & 2 & 3 & 4 \\
\hline
1 & 0.973 & 0.889 & 0.998 & 1.97 \\
\cline{1-1}
2 & 0.983 & 0.700 & 0.270 & \\
\cline{1-1}
3 & 0.500 & 0.113 & & \\
\cline{1-1}
4 & 0.136 & & & \\
\cline{1-1}
\end{tabular}
\quad
\begin{tabular}{|c||c c c c }
\hline
\backslashbox{N}{M} & 1 & 2 & 3 \\
\hline
2 & 1.38 & 0.525 & 2.74 \\
\cline{1-1}
3 & 0.973 & 0.469 & \\
\cline{1-1}
4 & $-0.0563$ & & \\
\cline{1-1}
\multicolumn{1}{c}{} & & &
\end{tabular}
\end{table}
\begin{table}[H]
\caption{\label{tabeps6}$\xi_{6}$ value for $\varepsilon$-expansions of $\omega_{2}$ at the IR attractive fixed point.}
\centering
\begin{tabular}{|c||c c c c c }
\hline
\backslashbox{N}{M} & 1 & 2 & 3 & 4 \\
\hline
1 & 0.997 & 0.915 & 0.708 & 0.495 \\
\cline{1-1}
2 & 0.550 & 0.194 & 0.0432 & \\
\cline{1-1}
3 & 0.212 & 0.0197 & & \\
\cline{1-1}
4 & 0.0550 & & & \\
\cline{1-1}
\end{tabular}
\end{table}
\label{sec:large eps}
\section*{Large $\varepsilon$ behaviour}
Since the mapping function (\ref{mapping}) tends to unity at large values of $t$,
it is possible to modify the conformal analytic continuation in a way that allows one to control not only the HOA but also the large-$z$ behaviour of the resummed series:
\begin{equation}
\widetilde{B}(u(t)) = \bigg[\frac{t}{u(t)}\bigg]^{\nu} \sum_{N \leq l} B_{N} u(t)^{N}.
\end{equation}
It is shown in \cite{K} that if the exact series has a power-like asymptotic behaviour, then setting the parameter $\nu$ close to its actual value speeds up the order-by-order convergence. Moreover, in the case of the scalar $\phi^{4}$ model the relative discrepancy $\xi_{l}$ tends to attain its minimal absolute value at the actual value of $\nu$. However, from Figure~\ref{fig1} it can be seen that in our case there seems to be no universal value of $\nu$, even for the different $\varepsilon$-series taken separately, in whose vicinity $\xi_{6}$ minimizes its absolute value.
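In the conformal-Borel sketch given earlier, this modification amounts to multiplying the re-expanded Borel image by the extra factor $[t/u(t)]^{\nu}$; a hypothetical one-line variant is shown below (note that $t/u(t)\to 4/a$ as $t\to 0$, so the factor is regular at the origin).
\begin{verbatim}
import numpy as np

def borel_image_nu(U, a, nu, t):
    # U: coefficients of the re-expanded Borel image in powers of u.
    s = np.sqrt(1.0 + a * t)
    u = (s - 1.0) / (s + 1.0)
    ratio = 4.0 / a if t == 0.0 else t / u
    return ratio ** nu * np.polyval(U[::-1], u)
\end{verbatim}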
\begin{figure}[H]
\centering
\begin{subfigure}[t]{65mm}
\includegraphics[width=65mm]{omega1.pdf}
\caption{}
\end{subfigure}
\hfill
\begin{subfigure}[t]{65mm}
\includegraphics[width=65mm]{omega2.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[t]{65mm}
\includegraphics[width=65mm]{g1.pdf}
\caption{}
\end{subfigure}
\hfill
\begin{subfigure}[t]{65mm}
\includegraphics[width=65mm]{eta.pdf}
\caption{}
\end{subfigure}
\caption{\label{fig1} Dependence of the relative discrepancy $\xi_{6}$ on the value of $\nu$ for $\omega_{1}$ (a), $\omega_{2}$ (b), $g^{*}_{1}$ (c), $\eta$ (d) for IR stable fixed point with the different number of loops taken into account.}
\end{figure}
\label{sec:beta res}
\section*{Resummation of $\beta$-functions}
Besides the resummation of the $\varepsilon$-expansions, we can also resum directly the series for the $\beta$-functions in powers of the couplings. For that purpose we rescale the couplings:
\begin{equation}
\beta(g_{1}, g_{2}) = \sum_{i,j} \beta_{i,j} \ g_{1}^{i} g_{2}^{j} \quad \Rightarrow \quad \beta(z) = \beta(z g_{1}, z g_{2}) = \sum_{i} \beta_{i}(g_{1}, g_{2})z^{i}
\end{equation}
so that $\beta(z)|_{z=1}=\beta(g_{1}, g_{2})$, and then resum the series in powers of $z$ with coefficients depending on the couplings:
\begin{equation}
B(t) = \sum \frac{\beta_{N}(g1, g2)}{\Gamma(N+b_{0}+1)}\ t^{N} = \sum B_{N}(g_{1}, g_{2}) t^{N}.
\end{equation}
At each point of the coupling plane the most relevant instanton has to be used, so that the value of the parameter $a$ in (\ref{mapping}) now depends on the particular point in the plane of invariant couplings.
The resummed $\beta$-functions are given by:
\begin{equation}
\beta^{\mathrm{res}}(g_{1}, g_{2}) = \beta^{\mathrm{res}}(z=1) = \int^{\infty}_{0} dt\ e^{-t} t^{b_{0}} B(t).
\end{equation}
Finally, we solve numerically the system of equations for the invariant couplings:
\begin{equation}
s\partial_{s}\bar g(s,g) = \beta^{\mathrm{res}}_{g} (\bar g), \quad
\bar g(1,g) = g, \quad s = p/\mu.
\end{equation}
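Schematically, this last step can be organized as follows; the sketch below is purely illustrative, with \texttt{beta1} and \texttt{beta2} standing for the resummed $\beta$-functions, e.g.\ evaluated pointwise as in the conformal-Borel sketch above with the point-dependent parameter $a(g_{1},g_{2})$.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def rg_flow(beta1, beta2, g_init, s_final, s_init=1.0):
    # Integrate s d/ds gbar(s) = beta_res(gbar), gbar(s_init) = g_init,
    # in the logarithmic variable tau = log s.
    rhs = lambda tau, g: [beta1(g[0], g[1]), beta2(g[0], g[1])]
    return solve_ivp(rhs, [np.log(s_init), np.log(s_final)], g_init,
                     dense_output=True, rtol=1e-8, atol=1e-10)
\end{verbatim}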
\begin{figure}[H]
\begin{subfigure}[t]{65mm}
\includegraphics[width=70mm]{n4d3.png}
\caption{$d=3$}
\end{subfigure}
\hfill
\begin{subfigure}[t]{65mm}
\includegraphics[width=70mm]{n4d2.png}
\caption{$d=3$}
\end{subfigure}
\caption{Resummed RG flows in the plane of invariant couplings. The gray area is the unphysical region according to (\ref{ineq}); the lower dashed line separates the region where the instanton solution governing the asymptotics of the perturbative series formally breaks down, making the resummation procedure meaningless. The black dots mark the fixed-point positions according to Table~\ref{confpc}.}\label{fig2}
\end{figure}
As can be seen from Fig.~\ref{fig2}(a), at $d=3$ we have only qualitative agreement between the results of the resummation of the $\varepsilon$-expansions for the fixed points and the positions of the roots of the directly resummed $\beta$-functions. At the same time, Fig.~\ref{fig2}(b) shows that at $d=2$ the situation seems to be much worse, because the resummed $\beta$-functions lack the two nontrivial fixed points altogether. The latter indicates that at $d=2$ six orders of perturbation theory may not be sufficient to draw even consistent qualitative conclusions about the asymptotic regimes of the model under consideration on the basis of resummation techniques.
\label{sec:Conclusions}
\section*{Conclusions}
We have obtained the coordinates of the fixed points and established their IR stability properties at the 6-loop level by resumming the corresponding $\varepsilon$-expansions using two different procedures based on the Borel-Leroy transform. For one of the points, which turned out to be IR attractive, we also calculated the anomalous dimension of the pair correlation function. These results are in qualitative agreement with each other but show numerical discrepancies already in the second significant digit. We obtained estimates of how close the first unknown coefficients of the resummed series are to their exact values. The best predictions are obtained for the $\omega_{2}$ eigenvalue, whose original $\varepsilon$-series is the most divergent, while the largest discrepancy is obtained for the $\eta$ series, which still shows apparent convergence even at the 6-loop level. We have shown that accounting for a possible power-like asymptotic behaviour of the $\varepsilon$-expansions does not help to optimize the resummation procedure. Finally, we have resummed the expansions for the $\beta$-functions directly and shown that at $d=3$ they qualitatively agree with the results of the resummation of the $\varepsilon$-expansions, while at $d=2$ there is not even qualitative agreement. The precise reason for such a discrepancy could be the subject of a separate study.
\label{sec:Acknowledgments}
\section*{Acknowledgments}
The author is grateful to A.F.~Pikelner and G.A.~Kalagov for useful discussions and comments, and to the FFK2021 Organizing Committee for support and hospitality. The reported study was funded by the Russian Foundation for Basic Research, project number 20-32-70139.
\section{Introduction}
L\'evy processes and stochastic differential equations (SDEs) driven by L\'evy processes have been widely employed to model uncertain phenomena in various areas \cite{AkgirayBooth1988,BardhanChao1993,ContTankov2003,ContVoltchkova2004,Mandelbrot1997}. Since different classes of L\'evy processes may have distinct properties \cite{D.Applebaum2009,Sato2013}, much more attention should be paid when analyzing numerical methods for SDEs driven by L\'evy processes.
\par
Higham and Kloeden \cite{HighamKloeden2007} studied the semi-implicit Euler-Maruyama (EM) method for solving SDEs with Poisson-driven jumps and obtained its optimal strong convergence rate. Oliver and Dai \cite{OliverDai2017} investigated the strong convergence rate of the EM method for SDEs with H\"older continuous coefficients in both time and spatial variables and a truncated symmetric $\a$-stable process.
Mikulevicius and Xu \cite{MikuleviciusXu2018} considered the convergence rate of the EM approximation for SDEs driven by an $\a$-stable process with a Lipschitz continuous coefficient and a H\"older drift. Huang and Liao \cite{HuangLiao2018} and Huang and Yang \cite{HuangYang2021} extended the EM method to stochastic functional differential equations and distribution-dependent SDEs with H\"older drift and $\a$-stable process, respectively. K\"uhn and Schilling \cite{KuhnSchilling2019} established strong convergence of the EM method for a class of L\'evy process-driven SDEs. In recent years, the tamed Euler approach \cite{DareiotisKumarSabanis2016}, the tamed Milstein method \cite{KumarSabanis2017}, general one-step methods \cite{ChenGanWang2019}, and the truncated EM method \cite{Zhang2021} were proposed for solving SDEs driven by L\'evy processes with super-linearly growing coefficients.
\par
In this paper, we are interested in SDEs with a super-linearly growing drift driven by the class of L\'evy processes introduced by K\"uhn and Schilling \cite{KuhnSchilling2019}. This class of L\'evy processes covers many interesting stable processes, such as the Lamperti stable process \cite{CPP2010} and the tempered stable process \cite{KT2013}. Since the EM method used in \cite{KuhnSchilling2019} no longer converges for SDEs with a super-linearly growing drift, we propose the semi-implicit EM method and investigate its finite-time strong convergence.
\par
Numerical approximations to the invariant measures of SDEs are also an important topic and have attracted a lot of attention in recent years. For SDEs driven by Brownian motion, research results on the invariant measures of numerical methods can be found in \cite{BaoShaoYuan2016,JiangWengLiu2020,LiuMao2015,LiMaoYin2019,Talay2002}. We mention just some of them here and refer the readers to the references therein. When the SDEs are driven by some stable process, the invariant measure of the analytic solution has been well studied in \cite{BYY2016,TJZ2018,Wang2013}. To the best of our knowledge, there are few works on the invariant measures of numerical methods for SDEs driven by such a class of L\'evy processes.
\par
The main contributions of this paper are as follows.
\begin{itemize}
\item The strong convergence in the finite time of the semi-implicit EM method for SDEs driven by a class of L\'evy process is established and the dependence of the convergence rate on the H\"older continuity and the L\'evy process is revealed.
\item The existence and uniqueness of the numerical invariant measure of the semi-implicit EM method is proved. Moreover, the convergence of the numerical invariant measure to the underlying one is obtained.
\end{itemize}
\par
This paper is organized as follows. In Section 2, some preliminaries are briefly introduced. In Section 3, we establish the finite-time strong convergence of the semi-implicit EM method. The convergence of the numerical invariant measure is studied in Section 4. Numerical examples are given in Section 5 to illustrate our theoretical results and some concluding remarks are included in Section 6.
\section{Preliminaries}
Let $(\Omega, \mathcal{F},\mathbb{P})$ be a complete probability space with a filtration $\{\mathcal{F}_t\}_{t\geq 0}$ satisfying the usual conditions (i.e., it is right continuous and increasing in $t$ while $\mathcal{F}_0$ contains all $\mathbb{P}$-null sets). Let $|\cdot |$ denote the Euclidean norm in $\RR^d$. Let $B(t)$ be the $m$-dimensional Brownian motion defined on the probability space.
\par
A stochastic process $L=\{L(t), t\geq 0\}$ is called a $d$-dimensional L\'evy process if
\begin{itemize}
\item $L(0)=0$ (a.s.);
\item For any integer $n\geq 1$ and $0\leq t_0<t_1<\cdots <t_n\leq T$, the random variables $L(t_0), L(t_1)-L(t_0), \cdots , L(t_n)-L(t_{n-1})$ are independent, and the increments are stationary;
\item $L$ is stochastically continuous, i.e. for all $\epsilon>0$ and all $s\geq 0$,
\begin{equation*}
\lim_{t\rightarrow s}\mathbb{P}(|L(t)-L(s)|>\epsilon)=0.
\end{equation*}
\end{itemize}
\par
The L\'evy-Khintchine formula \cite{D.Applebaum2009} for the L\'evy process $L$ is
\begin{align*}
\varphi_L(\o)(t):=& \EE[e^{i\o \cdot L(t)}]\\
=& \exp\left(t(ib\cdot\o-\frac{1}{2}\o\cdot A\o+\int_{\RR^d \setminus\{0\}}(e^{i\o\cdot z}-1-i\o \cdot z\,\mathrm{1}_{|z|<1})\nu(dz))\right),
\end{align*}
where $b\in\RR^d$, $A\in \mathbb{R}^{d\times d}$ is a positive definite symmetric matrix and the L\'evy measure $\nu$ is a $\s$-finite measure such that $\int_{\RR^d\setminus \{0\}}\min(1,|z|^2)\nu(dz)<\infty$. The L\'evy triplet $(b, A, \nu)$ is the characteristics of the infinitely divisible random variable $L(1)$, which has a one-to-one correspondence with the L\'evy--Khintchine formula.
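For concreteness, the following minimal Python sketch simulates a one-dimensional L\'evy path for an illustrative triplet $(b,A,\nu)$ in which $\nu$ is a finite measure, so the jump part is compound Poisson and no small-jump compensation is needed; all numerical values are our own illustrative choices and are not used in the analysis below.
\begin{verbatim}
import numpy as np

# Illustrative triplet: drift b, Gaussian variance A, and a finite jump
# measure nu = lam * N(0,1); Poisson(lam*dt) jump counts are approximated
# to first order by Bernoulli(lam*dt) flags on a uniform grid.
rng = np.random.default_rng(0)
T, n = 1.0, 1000
dt = T / n
b, A, lam = 0.1, 0.09, 5.0

gaussian_part = b * dt + np.sqrt(A * dt) * rng.standard_normal(n)
jump_flags = rng.random(n) < lam * dt
jump_part = np.where(jump_flags, rng.standard_normal(n), 0.0)

L = np.concatenate([[0.0], np.cumsum(gaussian_part + jump_part)])  # L(0)=0
print("L(T) =", L[-1])
\end{verbatim}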
In this paper, we consider a SDE driven by the multiplicative Brownian motion and the additive L\'evy process of the form
\begin{equation}\label{sde}
y(t)=y(0)+\int_{0}^{t}f(s,y(s))ds+\int_{0}^{t}g(s,y(s))dB(s)+\int_{0}^{t}dL(s),\ \ t\in (0, T],
\end{equation}
where $f: \RR_{+}\times \RR^d\rightarrow \RR^d$, $g:\RR_{+}\times \RR^d\rightarrow\RR^{d\times m}$ and $\mathbb{E}|y(0)|^p<+\infty$ for $2\leq p<+\infty$. For simplicity, the L\'evy triplet is assumed to be $(0,0,\nu)$ and satisfies
$\int_{|z|< 1}|z|^{\gamma_0} \nu(dz)<\infty$ with $\gamma_0\in[1,2]$ and $\int_{|z|\geq 1}|z|^{\gamma_{\infty}} \nu(dz)<\infty$ with $\gamma_{\infty}>1$ (see \cite{KuhnSchilling2019} for more details). In addition, we need some hypothesises on the drift and diffusion coefficients.
\begin{assp}\label{superlinear}
There exist constants $H>0$ and $\s>0$ such that
\begin{equation*}
| f(t,x)-f(t,y) |^2 \leq H(1+| x |^{\s}+| y |^{\s})| x-y |^2
\end{equation*}
for all $x,y\in\RR^d$ and $t\in[0,T]$.
\end{assp}
It can be observed from Assumption \ref{superlinear} that for all $t\in[0,T]$ and $x\in\RR^d$
\begin{equation}\label{superx}
|f(t,x)|^2\leq \widetilde{H}(1+|x|^{\s+2}),
\end{equation}
where $\widetilde{H}$ depends on $H$ and $\sup_{0\leq t\leq T}|f(t,0)|^2$.
\begin{assp}\label{Khasminskii}
There are constants $q\geq 2\s+2$ and $M>0$ such that
\begin{equation*}
x^{\mathrm{T}}f(t,x)+\frac{q-1}{2}| g(t,x) |^2 \leq M(1+| x |^2)
\end{equation*}
for all $x\in \RR^d$ and $t\in [0,T]$, where $\s$ is given in Assumption \ref{superlinear}.
\end{assp}
\begin{assp}\label{timeHolder}
There exist constants $K_1>0$, $K_2>0$, $\gamma_1\in(0,1)$ and $\gamma_2\in(0,1)$ such that
\begin{equation*}
| f(s,x)-f(t,x) | \leq K_1(1+| x |^{\s+1})| t-s |^{\gamma_1}
\end{equation*}
and
\begin{equation*}
| g(s,x)-g(t,x) | \leq K_2(1+| x |^{\s+1})| t-s |^{\gamma_2}
\end{equation*}
for all $x\in \RR^d$ and $s,t\in [0,T]$, where $\s$ is given in Assumption \ref{superlinear}.
\end{assp}
\begin{assp}\label{sideLip}
There exists a constant $K_3<-\frac{1}{2}$ such that
\begin{equation*}
(x-y)^{\mathrm{T}}(f(t,x)-f(t,y))\leq K_3| x-y |^2
\end{equation*}
for all $x,y\in \RR^d$ and $t\in[0,T]$.
\end{assp}
Setting $y=0$ in Assumption \ref{sideLip} and applying the Young inequality, we have the following inequality
\begin{equation}\label{sideLipx}
x^{\mathrm{T}}f(t,x)\leq M_1|x|^2+m_1,
\end{equation}
where $M_1=\frac{1}{2}+K_3<0$ and $m_1=\frac{1}{2}|f(t,0)|^2$.
\begin{assp}\label{lineargro}
There exists a constant $K_4>0$ with $K_4+2K_3<-1$ such that
\begin{equation*}
| g(t,x)-g(t,y)|^2 \leq K_4| x-y |^2
\end{equation*}
for all $x,y\in \RR^d$ and $t\in[0,T]$.
\end{assp}
It follows from Assumption \ref{lineargro} that
\begin{equation}\label{lineg}
|g(t,x)|^2\leq M_2|x|^2+m_2,
\end{equation}
where $M_2=K_4$, $m_2=|g(t,0)|^2$.
\par
The existence and uniqueness of the solution $y(t)$ of SDE \eqref{sde} are guaranteed \cite[Section 6.2]{D.Applebaum2009} under Assumptions \ref{sideLip} and \ref{lineargro}.
\section{Main results on the finite time strong convergence}
Given $\D t\in(0,1)$ and time $T$, let $\mathbf{N}=\lfloor T/\D t\rfloor$ and $t_i=i\D t, i=0,1,\ldots, \mathbf{N}$. The semi-implicit EM method of SDE \eqref{sde} is defined by
\begin{equation}\label{Numsde}
Y_{i+1}=Y_i + f(t_{i+1},Y_{i+1})\D t + g(t_i,Y_i)\D B_{i+1}+\D L_{i+1}, \quad i=0,1,2,\ldots, \mathbf{N}-1,
\end{equation}
where $ Y_i$ approximates $y(t_i)$, $\D B_{i+1}=B(t_{i+1})-B(t_i)$, and $\D L_{i+1}=L(t_{i+1})-L(t_i)$.
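As a concrete illustration, the following minimal Python sketch performs one step of the scheme in the scalar case, solving the implicit equation by Newton's method; the coefficients used in the demonstration are our own illustrative choices satisfying Assumptions \ref{sideLip} and \ref{lineargro} (they are not those of Section 5).
\begin{verbatim}
import numpy as np

def semi_implicit_em_step(y_i, t_i, t_next, dt, f, df, g, dB, dL,
                          iters=50, tol=1e-12):
    """Solve Y = y_i + f(t_next, Y)*dt + g(t_i, y_i)*dB + dL by Newton.

    With a one-sided Lipschitz drift (df <= K3 < -1/2), the residual
    F(Y) = Y - dt*f(t_next, Y) - rhs satisfies F'(Y) >= 1, so the root
    is unique and Newton's method is well behaved.
    """
    rhs = y_i + g(t_i, y_i) * dB + dL   # explicit part of the step
    y = y_i                             # initial guess
    for _ in range(iters):
        step = (y - dt * f(t_next, y) - rhs) / (1.0 - dt * df(t_next, y))
        y -= step
        if abs(step) < tol:
            break
    return y

# illustrative coefficients: f(y) = -y - 2y^3 (K3 = -1), g(y) = 0.5y (K4 = 0.25)
f  = lambda t, y: -y - 2.0 * y**3
df = lambda t, y: -1.0 - 6.0 * y**2
g  = lambda t, y: 0.5 * y

rng = np.random.default_rng(0)
dt = 1e-2
print(semi_implicit_em_step(1.0, 0.0, dt, dt, f, df, g,
                            np.sqrt(dt) * rng.standard_normal(), 0.0))
\end{verbatim}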
\par
The continuous-time semi-implicit EM solution of SDE \eqref{sde} is constructed in the following manner
\begin{equation}\label{consol}
Y(t)=Y_i, \quad t\in[t_i,t_{i+1}),\quad i=0,1,2,\ldots, \mathbf{N}-1.
\end{equation}
Our first main result is on the finite time strong convergence.
\begin{theorem}\label{Conver}
Suppose that Assumptions \ref{superlinear}--\ref{lineargro} hold. Then the semi-implicit EM method \eqref{Numsde}-\eqref{consol} for solving \eqref{sde} is convergent and satisfies
\begin{equation}\label{conver}
\EE|y(t)-Y(t)|^2\leq C\left(\D t^{2\gamma_1}+\D t^{2\gamma_2}+\D t^{\frac{1}{2}}\right),\ \ \forall t\in[0,T],
\end{equation}
where the positive constant $C$ is independent of $\D t$.
\end{theorem}
The proof of Theorem \ref{Conver} will be given in Section 3.2.
\par
Now we look at a special case of interest where $g(\cdot,\cdot) \equiv 0$, i.e., the Brownian motion term is removed:
\begin{equation}\label{SDE2}
u(t)=u(0)+\int_{0}^{t}f(s,u(s))ds+\int_{0}^{t}dL(s).
\end{equation}
Then the semi-implicit EM method becomes
\begin{equation}
\label{1}
U_{i+1}=U_i + f(t_{i+1},U_{i+1})\D t +\D L_{i+1}
\end{equation}
and its continuous-time version is
\begin{equation}
\label{2}
U(t)=U_i, \quad t\in[t_i,t_{i+1}),\quad i=0,1,2,\ldots, \mathbf{N}-1.
\end{equation}
The second main result of this paper, which we think is more interesting, states that the convergence rate depends on the parameter $\gamma_0$ of the L\'evy process and the H\"older index $\gamma_1$.
\begin{coro}\label{Conver2}
Suppose that Assumptions \ref{superlinear}--\ref{sideLip} hold. Then the semi-implicit EM method \eqref{1}-\eqref{2} for solving \eqref{SDE2} is convergent and satisfies
\begin{equation}\label{conver2}
\EE|u(t)-U(t)|^2\leq C\left(\D t^{2\gamma_1}+\D t^{\frac{2}{\gamma_0}}\right),\ \ \forall t\in[0,T],
\end{equation}
where the positive constant $C$ is independent of $\D t$.
\end{coro}
When the Brownian motion term is removed (i.e., $g(\cdot,\cdot)\equiv 0$), the main modification of the proof is that the exponent $\eta$ in Lemma \ref{continue} is taken as $\eta= \frac{2}{\gamma_0}$ in the proof of Theorem \ref{Conver}.
\subsection{Some necessary lemmas}
The existence and uniqueness of the global solution have been established in \cite{ChenGanWang2019,GyongyKrylov1980}. In this section, we show the boundedness of the $p$th moment and the continuity of the solution.
\begin{lemma}\label{exactbound}
Suppose that Assumptions \ref{superlinear} and \ref{Khasminskii} hold. Then, for fixed $p\in[2,q)$,
\begin{equation*}
\EE | y(t) |^p \leq C_p, \ \ \forall t\in[0,T],
\end{equation*}
where $C_p:=C\left(p,T,M,\EE|y(0)|^p,\int_{|z|<1}|z|^2 \nu(dz),\int_{|z|\geq 1}|z| \nu(dz)\right)$.
\end{lemma}
\begin{proof}
Let $N$ be the Poisson measure on $[0,\infty)\times(\RR \backslash \{0\})$ with $\EE N(dt,dz)=dt\nu (dz)$. Define the compensated Poisson random measure $\widetilde{N}(dt,dz)=N(dt,dz)-\nu(dz)dt.$ From the It\^o formula \cite{D.Applebaum2009}, it follows
\allowdisplaybreaks[4]
\begin{align*}
| y(t)|^p &= | y(0)|^p + \int_{0}^{t}p| y(s)|^{p-2} y^{\mathrm{T}}(s)f(s,y(s))ds+\int_{0}^{t}\frac{1}{2}p(p-1)| y(s)|^{p-2}| g(s,y(s))|^2 ds\\
&\quad + \int_{0}^{t}p| y(s)|^{p-2}y^{\mathrm{T}}(s)g(s,y(s))dB(s)+\int_{0}^{t}\int_{|z|<1}(|y(s)+z|^p-|y(s)|^p)\widetilde{N}(dz,ds)\\
&\quad + \int_{0}^{t}\int_{|z|\geq 1}(|y(s)+z|^p-|y(s)|^p)N(dz,ds)\\
&\quad + \int_{0}^{t}\int_{|z|<1}(|y(s)+z|^p-|y(s)|^p-p|y(s)|^{p-2}y^{\mathrm{T}}(s)z)\nu(dz)ds\\
&= | y(0)|^p + \int_{0}^{t}p| y(s)|^{p-2}y^{\mathrm{T}}(s)f(s,y(s))+\frac{1}{2}p(p-1)| y(s)|^{p-2}| g(s,y(s))|^2 ds\\
&\quad + \int_{0}^{t}p| y(s)|^{p-2}y^{\mathrm{T}}(s)g(s,y(s))dB(s)+\int_{0}^{t}\int_{|z|<1}(|y(s)+z|^p-|y(s)|^p)\widetilde{N}(dz,ds)\\
&\quad + \int_{0}^{t}\int_{|z|\geq 1}(|y(s)+z|^p-|y(s)|^p)\widetilde{N}(dz,ds)\\
&\quad + \int_{0}^{t}\int_{|z|\geq 1}(|y(s)+z|^p-|y(s)|^p)\nu(dz)ds\\
&\quad + \int_{0}^{t}\int_{|z|<1}(|y(s)+z|^p-|y(s)|^p-p|y(s)|^{p-2}y^{\mathrm{T}}(s)z)\nu(dz)ds.
\end{align*}
Taking expectations on both sides, we obtain
\begin{align*}
\EE|y(t)|^p&\leq \EE|y(0)|^p+p\EE\int_{0}^{t}|y(s)|^{p-2}[y^{\mathrm{T}}(s)f(s,y(s))+\frac{p-1}{2}|g(s,y(s))|^2]ds\\
&\quad + \EE\int_{0}^{t}\int_{|z|\geq 1}(|y(s)+z|^p-|y(s)|^p)\nu(dz)ds\\
&\quad + \EE\int_{0}^{t}\int_{|z|<1}(|y(s)+z|^p-|y(s)|^p-p|y(s)|^{p-2}y^{\mathrm{T}}(s)z)\nu(dz)ds\\
&=: \EE|y(0)|^p+I_1+I_2+I_3.
\end{align*}
It follows from Assumption \ref{Khasminskii} that
\begin{align*}
I_1&= p\EE\int_{0}^{t}|y(s)|^{p-2}[y^{\mathrm{T}}(s)f(s,y(s))+\frac{p-1}{2}|g(s,y(s))|^2]ds\\
&\leq pM\EE\int_{0}^{t}|y(s)|^{p-2}(1+|y(s)|^2)ds\\
&\leq \EE [pMT] + 2pM\EE\int_{0}^{t}|y(s)|^pds.
\end{align*}
It is obvious that
\begin{align*}
I_2&= \EE\int_{0}^{t}\int_{|z|\geq 1}(|y(s)+z|^p-|y(s)|^p) \nu(dz)ds\\
&\leq p\EE\int_{0}^{t}\int_{|z|\geq 1}|y(s)|^{p-2}|y^{\mathrm{T}}(s)z| \nu(dz)ds\\
&\leq pT\EE\int_{|z|\geq 1}|z| \nu(dz) + p\EE\int_{0}^{t}|y(s)|^p\int_{|z|\geq 1}|z| \nu(dz)ds\\
&\leq C_p + C_p\EE\int_{0}^{t}|y(s)|^p ds.
\end{align*}
Note that for any $y_1, y_2\in\RR^d$
\begin{align*}
|y_1+y_2|^p-|y_1|^p-p|y_1|^{p-2}y_1^{\mathrm{T}}y_2&\leq C_p\int_{0}^{1}|y_1+\o y_2|^{p-2}|y_2|^2d\o \\
&\leq C_p\left(|y_1|^{p-2}|y_2|^2+|y_2|^p\right).
\end{align*}
So,
\begin{align*}
I_3&= \EE\int_{0}^{t}\int_{|z|<1}\left[|y(s)+z|^p-|y(s)|^p-p|y(s)|^{p-2}y^{\mathrm{T}}(s)z\right]\nu(dz)ds\\
&\leq C_p\EE\int_{0}^{t}\int_{|z|<1}\left(|y(s)|^{p-2}|z|^2+|y(s)|^p\right)\nu(dz)ds\\
&\leq C_p\EE\int_{0}^{t}\int_{|z|<1}\left[|z|^2+(1+|z|^2)|y(s)|^p\right]\nu(dz)ds\\
&\leq C_p+C_p\EE\int_{0}^{t}|y(s)|^p ds.
\end{align*}
Combining the estimates of $I_1$, $I_2$ and $I_3$, we arrive at
\begin{equation*}
\EE|y(t)|^p\leq C_p+C_p\EE\int_{0}^{t}|y(s)|^p ds.
\end{equation*}
The desired result follows from the Gronwall inequality.
\end{proof}
\begin{lemma}\label{continue}
Suppose that Assumptions \ref{superlinear} and \ref{lineargro} hold. Then, for any $1\leq \k <p$, $0\leq s < t\leq T$ and $|t-s|<1$, we have
\begin{equation}\label{yts}
\EE|y(t)-y(s)|^{\k}\leq C|t-s|^{\eta},
\end{equation}
where the constant $C$ depends only on $\widetilde{H}$, $C_p$ and $\k$, and $\eta=
\begin{cases}
\frac{\k}{2}, & g(\cdot,\cdot)\neq 0,\\
\frac{\k}{\gamma_0}, & g(\cdot,\cdot)\equiv 0.
\end{cases}$
\end{lemma}
\begin{proof}
For any $0\leq s<t\leq T$, the solution $y$ satisfies
\begin{equation*}
y(t)-y(s) = \int_{s}^{t}f(r,y(r))dr+\int_{s}^{t}g(r,y(r))dB(r)+(L(t)-L(s)).
\end{equation*}
By the H\"older inequality, the Burkholder--Davis--Gundy (BDG) inequality \cite{Mao2008}, the fractional moment estimates for the L\'evy process \cite{Kuhn2017}, and the fact that
\begin{equation*}
L_t-L_s \overset{d}{=} L_{t-s},
\end{equation*}
we conclude that
\begin{align*}
&\EE|y(t)-y(s)|^{\k} =\EE\left|\int_{s}^{t}f(r,y(r))dr + \int_{s}^{t}g(r,y(r))dB(r) + (L(t)-L(s))\right|^{\k}\\
&\leq 3^{\k-1}\left(\EE\left|\int_{s}^{t}f(r,y(r))dr\right|^{\k}+\EE\left|\int_{s}^{t}g(r,y(r))dB(r)\right|^{\k}+\EE|L(t)-L(s)|^{\k}\right)\\
&\leq 3^{\k-1}\left(|t-s|^{\k-1}\int_{s}^{t}\EE|f(r,y(r))|^{\k}dr + c_{\k}\left|\int_{s}^{t}\EE|g(r,y(r))|^2dr\right|^{\frac{\k}{2}}+\EE|L(t)-L(s)|^{\k}\right)\\
&\leq C|t-s|^{\eta},
\end{align*}
where
the constant $C>0$ depends on $\widetilde{H}$, $C_p$ and $\k$,
and
$$\eta=
\begin{cases}
\frac{\k}{2}, & g(\cdot,\cdot)\neq 0,\\
\frac{\k}{\gamma_0}, & g(\cdot,\cdot)\equiv 0.
\end{cases}$$
This completes the proof.
\end{proof}
\subsection{Proof of Theorem \ref{Conver}}
\begin{proof}
It follows from \eqref{sde} and \eqref{Numsde} that
\begin{equation*}
y(t_{i+1})=y(t_i)+\int_{t_i}^{t_{i+1}}f(s,y(s))ds + \int_{t_i}^{t_{i+1}}g(s,y(s))dB(s)+\int_{t_i}^{t_{i+1}}dL(s)
\end{equation*}
and
\begin{equation*}
Y_{i+1}=Y_i+\int_{t_i}^{t_{i+1}}f(t_{i+1},Y_{i+1})ds + \int_{t_i}^{t_{i+1}}g(t_i,Y_i)dB(s)+\int_{t_i}^{t_{i+1}}dL(s),\quad i=0,1,\ldots, \mathbf{N}-1.
\end{equation*}
So,
\begin{align*}
y(t_{i+1})-Y_{i+1} &= y(t_i)-Y_i+\int_{t_i}^{t_{i+1}}f(s,y(s))-f(t_{i+1},Y_{i+1})ds
\\
&\quad +\int_{t_i}^{t_{i+1}}g(s,y(s))-g(t_{i},Y_{i})dB(s).
\end{align*}
Multiplying both sides by the transpose of $y(t_{i+1})-Y_{i+1}$, we get
\begin{align*}
|y(t_{i+1})-Y_{i+1}|^2 &= \int_{t_i}^{t_{i+1}}(y(t_{i+1})-Y_{i+1})^{\mathrm{T}}(f(s,y(s))-f(t_{i+1},Y_{i+1}))ds\\
&\quad + (y(t_{i+1})-Y_{i+1})^{\mathrm{T}}\left(( y(t_i)-Y_i)+\int_{t_i}^{t_{i+1}}g(s,y(s))-g(t_{i},Y_i)dB(s)\right)\\
&=: J_1+J_2.
\end{align*}
Note that
\begin{align*}
J_1&= \int_{t_i}^{t_{i+1}}(y(t_{i+1})-Y_{i+1})^{\mathrm{T}}(f(s,y(s))-f(t_{i+1},Y_{i+1}))ds\\
&= \int_{t_i}^{t_{i+1}}(y(t_{i+1})-Y_{i+1})^{\mathrm{T}}(f(s,y(t_{i+1}))-f(t_{i+1},y(t_{i+1})))ds\\
&\quad + \int_{t_i}^{t_{i+1}}(y(t_{i+1})-Y_{i+1})^{\mathrm{T}}(f(t_{i+1},y(t_{i+1}))-f(t_{i+1},Y_{i+1}))ds\\
&\quad + \int_{t_i}^{t_{i+1}}(y(t_{i+1})-Y_{i+1})^{\mathrm{T}}(f(s,y(s))-f(s,y(t_{i+1})))ds\\
&=: J_{11}+J_{12}+J_{13}.
\end{align*}
We estimate the terms $ J_{11}, J_{12}$ and $J_{13}$ separately. By Assumption \ref{timeHolder}, we have
\begin{align*}
J_{11}&= \int_{t_i}^{t_{i+1}}(y(t_{i+1})-Y_{i+1})^{\mathrm{T}}(f(s,y(t_{i+1}))-f(t_{i+1},y(t_{i+1})))ds\\
&\leq \frac{1}{2}\left(\int_{t_i}^{t_{i+1}}|y(t_{i+1})-Y_{i+1}|^2ds+ \int_{t_i}^{t_{i+1}}|f(s,y(t_{i+1}))-f(t_{i+1},y(t_{i+1}))|^2ds\right)\\
&\leq \frac{1}{2}\int_{t_i}^{t_{i+1}}|y(t_{i+1})-Y_{i+1}|^2ds+\frac{K_1^2}{2}\int_{t_i}^{t_{i+1}}(1+|y(t_{i+1})|^{2\s+2})|t_{i+1}-s|^{2\gamma_1}ds\\
&\leq \frac{1}{2}\int_{t_i}^{t_{i+1}}|y(t_{i+1})-Y_{i+1}|^2ds+\frac{K_1^2}{2}\D t^{1+2\gamma_1}+\frac{K_1^2}{2}\D t^{1+2\gamma_1}|y(t_{i+1})|^{2\s+2}.
\end{align*}
Due to Assumption \ref{sideLip},
\begin{align*}
J_{12}&= \int_{t_i}^{t_{i+1}}(y(t_{i+1})-Y_{i+1})^{\mathrm{T}}(f(t_{i+1},y(t_{i+1}))-f(t_{i+1},Y_{i+1}))ds\\
&\leq K_3\int_{t_i}^{t_{i+1}}|y(t_{i+1})-Y_{i+1}|^2ds.
\end{align*}
It follows from Assumption \ref{superlinear} that
\begin{align}
\label{J13}
\nonumber
J_{13}&=\int_{t_i}^{t_{i+1}}(y(t_{i+1})-Y_{i+1})^{\mathrm{T}}(f(s,y(s))-f(s,y(t_{i+1})))ds\\ \nonumber
&\leq \frac{1}{2}\left(\int_{t_i}^{t_{i+1}}|y(t_{i+1})-Y_{i+1}|^2ds+\int_{t_i}^{t_{i+1}}|f(s,y(s))-f(s,y(t_{i+1}))|^2ds\right)\\\nonumber
&\leq \frac{1}{2}H\int_{t_i}^{t_{i+1}}(1+|y(s)|^{\s}+|y(t_{i+1})|^{\s})|y(s)-y(t_{i+1})|^2ds\\
&\quad+ \frac{1}{2}\int_{t_i}^{t_{i+1}}|y(t_{i+1})-Y_{i+1}|^2ds.
\end{align}
Applying the H\"older inequality and Lemmas \ref{exactbound} and \ref{continue} (the case of $g(\cdot,\cdot)\neq 0$), we have
\begin{align}\label{ts}
\EE&\left((1+|y(s)|^{\s} +|y(t_{i+1})|^{\s})|y(s)-y(t_{i+1})|^2\right)\notag\\
&\leq 3^{\frac{2}{\s}}\left(\EE(1+|y(s)|^{\s+2}+|y(t_{i+1})|^{\s+2})\right)^{\frac{\s}{\s+2}}\left(\EE|y(s)-y(t_{i+1})|^{\s+2}\right)^{\frac{2}{\s+2}}\notag\\
&\leq 3^{\frac{2}{\s}}\left(1+\EE|y(s)|^{\s+2}+\EE|y(t_{i+1})|^{\s+2}\right)^{\frac{\s}{\s+2}}\left(\EE|y(s)-y(t_{i+1})|^{\s+2}\right)^{\frac{2}{\s+2}}\notag\\
&\leq 2\times3^{\frac{2}{\s}}(1+C_p)^{\frac{\s}{\s+2}}\left(C|t_{i+1}-s|^{\frac{\s+2}{2}}\right)^{\frac{2}{\s+2}}\notag\\
&\leq C|t_{i+1}-s|.
\end{align}
From the above estimates, we obtain
\begin{align}\label{J1eatimate}
\nonumber
\EE[J_1]&\leq (1+K_3)\int_{t_i}^{t_{i+1}}\EE|y(t_{i+1})-Y_{i+1}|^2ds+\frac{K_1^2}{2}\D t^{1+2\gamma_1}\\ \nonumber
&\quad+ \frac{K_1^2}{2}\D t^{1+2\gamma_1}\EE|y(t_{i+1})|^{2\s+2}+\frac{HC}{2}\D t^2\\ \nonumber
&\leq (1+K_3)\D t\EE|y(t_{i+1})-Y_{i+1}|^2 + \frac{K_1^2}{2}\D t^{1+2\gamma_1} + \frac{K_1^2C_p}{2}\D t^{1+2\gamma_1}+\frac{HC}{2}\D t^2\\
&\leq (1+K_3)\D t\EE|y(t_{i+1})-Y_{i+1}|^2 + C(\D t^{1+2\gamma_1}+\D t^2).
\end{align}
The term $J_2$ can be estimated as
\begin{align*}
J_2&= (y(t_{i+1})-Y_{i+1})^{\mathrm{T}}\left(( y(t_i)-Y_i)+\int_{t_i}^{t_{i+1}}g(s,y(s))-g(t_{i},Y_i)dB(s)\right)\\
&\leq \frac{1}{2}|y(t_{i+1})-Y_{i+1}|^2+\frac{1}{2}\left|( y(t_i)-Y_i)+\int_{t_i}^{t_{i+1}}g(s,y(s))-g(t_{i},Y_i)dB(s)\right|^2\\
&=: \frac{1}{2}|y(t_{i+1})-Y_{i+1}|^2+J_{21}.
\end{align*}
It follows from the It\^o isometry that
\begin{equation*}
\EE[J_{21}]\leq \EE|y(t_i)-Y_i|^2 + \int_{t_i}^{t_{i+1}}\EE|g(s,y(s))-g(t_{i},Y_i)|^2ds.
\end{equation*}
By Assumptions \ref{timeHolder} and \ref{lineargro}, we can easily get
\begin{align*}
|g&(s, y(s))-g(t_{i},Y_i)|^2\\
&= |g(s,y(t_i))-g(t_i,y(t_i))+g(t_i,y(t_i))-g(t_i,Y_i)+g(s,y(s))-g(s,y(t_i))|^2\\
&\leq 3|g(s,y(t_i))-g(t_i,y(t_i))|^2+3|g(t_i,y(t_i))-g(t_i,Y_i)|^2+3|g(s,y(s))-g(s,y(t_i))|^2\\
&\leq 6K_2^2(1+|y(t_i)|^{2\s+2})|s-t_i|^{2\gamma_2}+3K_4|y(t_i)-Y_i|^2+3K_4|y(s)-y(t_i)|^2\\
&\leq 6K_2^2\D t^{2\gamma_2}(1+|y(t_i)|^{2\s+2})+3K_4|y(t_i)-Y_i|^2+3K_4|y(s)-y(t_i)|^2.
\end{align*}
So,
\begin{align*}
\int_{t_i}^{t_{i+1}}&\EE|g(s,y(s))-g(t_{i},Y_i)|^2ds\\& \leq 6K_2^2\D t^{2\gamma_2}\int_{t_i}^{t_{i+1}}(1+\EE|y(t_i)|^{2\s+2})ds+3K_4\int_{t_i}^{t_{i+1}}\EE|y(t_i)-Y_i|^2ds\\&\quad
+3K_4\int_{t_i}^{t_{i+1}}\EE|y(s)-y(t_i)|^2ds\\
&\leq 6K_2^2(1+C_p)\D t^{1+2\gamma_2}+3K_4\D t\EE|y(t_i)-Y_i|^2+3K_4C\D t^{\frac{3}{2}}\\
&\leq 3K_4\D t\EE|y(t_i)-Y_i|^2+C(\D t^{1+2\gamma_2}+\D t^{\frac{3}{2}}).
\end{align*}
Therefore,
\begin{equation*}
\EE[J_{21}]\leq (1+3K_4\D t)\EE|y(t_i)-Y_i|^2+C(\D t^{1+2\gamma_2}+\D t^{\frac{3}{2}}).
\end{equation*}
We get the estimate
\begin{equation}\label{J2eatimate}
\EE[J_2]\leq \frac{1}{2}\EE|y(t_{i+1})-Y_{i+1}|^2+(1+3K_4\D t)\EE|y(t_i)-Y_i|^2+C(\D t^{1+2\gamma_2}+\D t^{\frac{3}{2}}).
\end{equation}
Combination of \eqref{J1eatimate} and \eqref{J2eatimate} gives
\begin{align*}
\EE|y(t_{i+1})-Y_{i+1}|^2&\leq (K_3\D t+\D t+\frac{1}{2})\EE|y(t_{i+1})-Y_{i+1}|^2+C(\D t^{1+2\gamma_1}+\D t^2)\\
&\quad + (1+3K_4\D t)\EE|y(t_i)-Y_i|^2+C(\D t^{1+2\gamma_2}+\D t^{\frac{3}{2}})\\
&\leq \frac{1+3K_4\D t}{\frac{1}{2}-K_3\D t-\D t}\EE|y(t_i)-Y_i|^2+ C(\D t^{1+2\gamma_1}+\D t^{1+2\gamma_2}+\D t^{\frac{3}{2}}).
\end{align*}
Summing up the above, we get
\begin{align*}
\sum_{r=1}^{i}\EE|y(t_r)-Y_r|^2\leq \frac{2(1+3K_4\D t)}{1-2K_3\D t-2\D t}\sum_{r=1}^{i-1}\EE|y(t_r)-Y_r|^2+ iC\left(\D t^{1+2\gamma_1}+\D t^{1+2\gamma_2}+\D t^{\frac{3}{2}}\right).
\end{align*}
Since $i\D t=t_i\leq e^{t_i}$, we obtain
\begin{align*}
\EE|y(t_i)-Y_i|^2&\leq \frac{1+2\D t(3K_4+K_3+1)}{1-2K_3\D t-2\D t}\sum_{r=1}^{i-1}\EE|y(t_r)-Y_r|^2\\
&\quad+ Ce^{t_i}\left(\D t^{2\gamma_1}+\D t^{2\gamma_2}+\D t^{\frac{1}{2}}\right),
\end{align*}
by the discrete Gronwall inequality, we get
\begin{equation}\label{yXi}
\EE|y(t_i)-Y_i|^2\leq C\left(\D t^{2\gamma_1}+\D t^{2\gamma_2}+\D t^{\frac{1}{2}}\right)e^{Ct_i}.
\end{equation}
\par
For any $t\in[i\D t, (i+1)\D t)$, it follows from \eqref{consol}, \eqref{yts} and \eqref{yXi} that
\begin{align*}
\EE|y(t)-Y(t)|^2&\leq 2\EE|y(t)-y(t_i)|^2+2\EE|y(t_i)-Y_i|^2\\
&\leq 2C\D t +2C\left(\D t^{2\gamma_1}+\D t^{2\gamma_2}+\D t^{\frac{1}{2}}\right)e^{Ct_i}\\
&\leq C\left(\D t^{2\gamma_1}+\D t^{2\gamma_2}+\D t^{\frac{1}{2}}\right).
\end{align*}
This completes the proof.
\end{proof}
\section{Numerical invariant measure}
In this section, we discuss the stationary distribution of the numerical solution generated by the semi-implicit EM method. In order to simplify the analysis, we consider the following autonomous SDE
\begin{equation}
\label{SDE3}
x(t)=x(0)+\int_{0}^{t}f(x(s))ds+\int_{0}^{t}g(x(s))dB(s)+ L(t), \quad t>0,
\end{equation}
where the L\'evy process $L(t)$ is defined as that in Section 2 with an additional requirement that $\mathbb{E}\left(L(t) \right) = 0$. The semi-implicit EM method reduces to
\begin{equation}
\label{autonomonsNum}
X_{i+1}=X_i + f(X_{i+1})\D t + g(X_i)\D B_{i+1}+\D L_{i+1},\quad i=0,1,2,\ldots.
\end{equation}
\par
We need some more notation.
Let $\mathcal{P}(\RR^d)$ denote the family of all probability measures on $\RR^d$. For any $k\in(0,2]$, define a metric $d_k(u,v)$ on $\RR^d$ as
\begin{equation*}
d_k(u,v)=|u-v|^k, \quad u,v\in\RR^d.
\end{equation*}
Define the corresponding Wasserstein distance between $\omega\in\mathcal{P}(\RR^d)$ and $\omega'\in\mathcal{P}(\RR^d)$ by
\begin{equation*}
W_k(\omega,\omega')=\inf\EE(d_k(u,v)),
\end{equation*}
where the infimum is taken over all pairs of random variables $u$ and $v$ on $\RR^d$ with respect to the laws $\omega$ and $\omega'$.\par
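For scalar laws, the Wasserstein distance above can be estimated directly from samples: for $k\geq 1$ the cost $|u-v|^k$ is convex, so in one dimension the monotone (sorted) coupling is optimal and $W_k$ between two equal-size empirical samples reduces to an average over order statistics. The following Python sketch is our own diagnostic aid and not part of the proofs; for $k\in(0,1)$ the cost is concave and a different coupling would be needed.
\begin{verbatim}
import numpy as np

def empirical_Wk(u, v, k=2.0):
    # monotone coupling of the order statistics (optimal for k >= 1 in 1-D)
    return np.mean(np.abs(np.sort(u) - np.sort(v))**k)

rng = np.random.default_rng(0)
u = rng.standard_normal(10_000)        # samples from N(0,1)
v = rng.standard_normal(10_000) + 1.0  # samples from N(1,1)
print(empirical_Wk(u, v))              # close to 1.0 = inf E|u - v|^2
\end{verbatim}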
Let $\bar{\mathbb{P}}_t(\cdot,\cdot)$ be the transition probability kernel of the underlying solution $x(t)$, with the notation $\delta_x\bar{\mathbb{P}}_t$ emphasizing the initial value $x$. The probability measure $\pi(\cdot)\in\mathcal{P}(\RR^d)$ is called an invariant measure of $x(t)$ if
\begin{equation*}
\pi(B)=\int_{\RR^d}\bar{\mathbb{P}}_t(x,B)\pi(dx)
\end{equation*}
holds for any $t\geq 0$ and Borel set $B\in \mathcal{B}(\RR^d)$.
Let $\mathbb{P}_i(\cdot,\cdot)$ be the transition probability kernel of the numerical solution $\{X_i\}_{i\geq 0}$ with the notation $\delta_x\mathbb{P}_i$ emphasizing the initial value $x$. The probability measure $\Pi_{\D t}(\cdot)\in\mathcal{P}(\RR^d)$ is called an invariant measure of the numerical solution $\{X_i\}_{i\geq 0}$ if
\begin{equation*}
\Pi_{\D t}(B)=\int_{\RR^d}\mathbb{P}_i(x,B)\Pi_{\D t}(dx),~~\forall i=0,1,2,...,
\end{equation*}
for any $B\in \mathcal{B}(\RR^d)$.
\begin{lemma}\label{Xiyizhi}
Suppose that \eqref{sideLipx} and \eqref{lineg} hold. Then for any $\D t\in(0,1)$, the numerical solution is uniformly bounded by
\begin{equation*}
\EE|X_i|^2\leq Q_1^i\EE|X_0|^2+\frac{Q_2(1-Q_1^i)}{1-Q_1},\quad i=1,2,\ldots,
\end{equation*}
where
\begin{equation*}
Q_1=\frac{1+M_2\D t}{1-2M_1\D t}<1\ \ \textrm{and} \quad Q_2=\frac{(2m_1+m_2+1)\D t}{1-2M_1\D t}.
\end{equation*}
\end{lemma}
\begin{proof}
Multiplying both sides of \eqref{autonomonsNum} with the transpose of $X_{i+1}$ yields
\begin{equation*}
|X_{i+1}|^2=X^{\mathrm{T}}_{i+1}f(X_{i+1})\D t +X_{i+1}^{\mathrm{T}}\left(X_i+g(X_i)\D B_{i+1}+\D L_{i+1}\right).
\end{equation*}
It follows from \eqref{sideLipx}, \eqref{lineg}, the moment bound $\EE|\D L_{i+1}|^2\leq \D t^{2/\gamma_0}$ with $\D t^{2/\gamma_0}<\D t<1$, and the elementary inequality $2ab\leq a^2+b^2$ that
\begin{align*}
\EE|X_{i+1}|^2&\leq \frac{2m_1\D t}{1-2M_1\D t}+\frac{1}{1-2M_1\D t}\EE|X_i|^2+\frac{M_2\D t}{1-2M_1\D t}\EE|X_i|^2\\
&\quad +\frac{m_2\D t}{1-2M_1\D t}+\frac{1}{1-2M_1\D t}\D t^{2/\gamma_0}\\
&\leq \frac{1+M_2\D t}{1-2M_1\D t}\EE|X_i|^2+\frac{(2m_1+m_2+1)\D t}{1-2M_1\D t}\\
&= Q_1\EE|X_i|^2+Q_2.
\end{align*}
Since $M_2+2M_1<0$,
\begin{equation*}
Q_1=\frac{1+M_2\D t}{1-2M_1\D t}<1
\end{equation*}
for any $\D t\in(0,1)$.
Thus
\begin{equation*}
\EE|X_i|^2\leq Q_1^i\EE|X_0|^2+\frac{Q_2(1-Q_1^i)}{1-Q_1}.
\end{equation*}
This completes the proof.
\end{proof}
Let $\{X_i^x\}_{i\geq 0}$ and $\{X_i^y\}_{i\geq 0}$ be the numerical solutions with respect to two different initial values $x$ and $y$.
\begin{lemma}\label{Y2}
Suppose that Assumptions \ref{sideLip} and \ref{lineargro} hold. Then, for any $\D t\in(0,1)$, the numerical solutions satisfy
\begin{equation*}
\lim_{i\rightarrow \infty}\EE|X_i^x-X_i^y|^2=0.
\end{equation*}
\end{lemma}
\begin{proof}
Note that
\begin{equation*}
X_{i+1}^x-X_{i+1}^y=X_i^x-X_i^y + (f(X_{i+1}^x)-f(X_{i+1}^y))\D t + (g(X_i^x)-g(X_i^y))\D B_{i+1}.
\end{equation*}
Multiplying both sides with the transpose of $X_{i+1}^x-X_{i+1}^y$ yields
\begin{align*}
|X_{i+1}^x-X_{i+1}^y|^2&=(X_{i+1}^x-X_{i+1}^y)^{\mathrm{T}}(f(X_{i+1}^x)-f(X_{i+1}^y))\D t \\
&\quad +(X_{i+1}^x-X_{i+1}^y)^{\mathrm{T}}\left(X_i^x-X_i^y + (g(X_i^x)-g(X_i^y))\D B_{i+1}\right).
\end{align*}
By Assumption \ref{sideLip} and $ab\leq (a^2+b^2)/2$, we have
\begin{equation*}
|X_{i+1}^x-X_{i+1}^y|^2 \leq (\frac{1}{2}+K_3\D t)|X_i^x-X_i^y|^2+\frac{1}{2}|X_i^x-X_i^y + (g(X_i^x)-g(X_i^y))\D B_{i+1}|^2.
\end{equation*}
Taking expectations on both sides above and using Assumption \ref{lineargro} result in
\begin{equation*}
\EE|X_{i+1}^x-X_{i+1}^y|^2\leq \frac{1+K_4\D t}{1-2K_3\D t}\EE|X_i^x-X_i^y|^2.
\end{equation*}
Due to $K_4+2K_3<-1$,
\begin{equation*}
\frac{1+K_4\D t}{1-2K_3\D t}<1
\end{equation*}
holds for any $\D t\in(0, 1)$. Therefore,
\begin{equation*}
\EE|X_i^x-X_i^y|^2\leq Q_3^i\EE|X_0^x-X_0^y|^2,
\end{equation*}
where $Q_3=\frac{1+K_4\D t}{1-2K_3\D t}$. This completes the proof.
\end{proof}
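The geometric contraction in Lemma \ref{Y2} is easy to observe numerically. In the following Python sketch the two chains share the same Brownian increments (the L\'evy increments cancel in the difference, since the noise is additive, and are therefore omitted), and the coefficients are our own illustrative choices with $K_3=-2$ and $K_4=0.25$, so that $K_4+2K_3<-1$ holds:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
dt, n_paths, n_steps = 0.05, 5_000, 200

def implicit_step(x, dB):
    # Newton solve of X = rhs + dt*f(X) with f(x) = -2x - x^3, g(x) = 0.5x
    rhs = x + 0.5 * x * dB
    y = x.copy()
    for _ in range(30):
        y -= (y - dt * (-2*y - y**3) - rhs) / (1.0 + dt * (2 + 3*y**2))
    return y

X = np.full(n_paths, 2.0)    # initial value x
Y = np.full(n_paths, -1.0)   # initial value y
for i in range(1, n_steps + 1):
    dB = np.sqrt(dt) * rng.standard_normal(n_paths)   # shared noise
    X, Y = implicit_step(X, dB), implicit_step(Y, dB)
    if i % 50 == 0:
        print(f"i = {i:3d}:  E|X-Y|^2 = {np.mean((X - Y)**2):.3e}")
\end{verbatim}
The printed values decay roughly like $Q_3^i$ with $Q_3=(1+K_4\D t)/(1-2K_3\D t)\approx 0.84$.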
We now present the existence and uniqueness of the invariant measure of the numerical solution $\{X_i\}_{i\geq 0}$.
\begin{theorem}\label{Invariant}
Suppose that Assumptions \ref{sideLip} and \ref{lineargro} hold. Then for any fixed $\D t\in(0,1)$, the numerical solution $\{X_i\}_{i\geq 0}$ has a unique invariant measure $\Pi_{\D t}$.
\end{theorem}
\begin{proof}
For each integer $n\geq 1$ and any Borel set $B\subset \RR^d$, define the measure
\begin{equation*}
\omega_n(B)=\frac{1}{n}\sum_{i=1}^{n}\mathbb{P}(X_i\in B).
\end{equation*}
It follows from Lemma \ref{Xiyizhi} and the Chebyshev inequality that the measure sequence $\{\omega_n\}_{n\geq 1}$ is tight, so that there exists a subsequence which converges to an invariant measure \cite{YuanMao2003}. This proves the existence of the invariant measure of the numerical solution. In the following, we show the uniqueness of the invariant measure.
Let $\Pi_{\D t}^{x}$ and $\Pi_{\D t}^{y}$ be invariant measures of $\{X_i^x\}_{i\geq 0}$ and $\{X_i^y\}_{i\geq 0}$, respectively. Then
\begin{align*}
W_k(\Pi_{\D t}^x,\Pi_{\D t}^y)&= W_k(\Pi_{\D t}^x\mathbb{P}_i,\Pi_{\D t}^y\mathbb{P}_i)\\
&\leq \int_{\RR^d}\int_{\RR^d}\Pi_{\D t}^x(dx)\Pi_{\D t}^y(dy)W_k(\delta_x\mathbb{P}_i,\delta_y\mathbb{P}_i).
\end{align*}
From Lemma \ref{Y2}, we get
\begin{equation*}
W_k(\delta_x\mathbb{P}_i,\delta_y\mathbb{P}_i)\leq (Q_3^i\EE|x-y|^2)^{\frac{k}{2}} \rightarrow 0, \quad \text{as} \quad i\rightarrow \infty.
\end{equation*}
Then, we have
\begin{equation*}
\lim_{i\rightarrow \infty}W_k(\Pi_{\D t}^x,\Pi_{\D t}^y)=0,
\end{equation*}
which completes the proof.
\end{proof}
The following theorem states that the numerical invariant measure $\Pi_{\D t}$ converges to the underlying one $\pi$ in the Wasserstein distance.
\begin{theorem}\label{conv}
Suppose that Assumptions \ref{sideLip} and \ref{lineargro} hold. Then
\begin{equation*}
\lim_{\D t\rightarrow 0+}W_k(\pi,\Pi_{\D t})=0.
\end{equation*}
\end{theorem}
\begin{proof}
For any $k\in(0,2]$,
\begin{equation*}
W_k(\delta_x\bar{\mathbb{P}}_{i\D t},\pi)\leq \int_{\RR^d}\pi(dy)W_k(\delta_x\bar{\mathbb{P}}_{i\D t},\delta_y\bar{\mathbb{P}}_{i\D t})
\end{equation*}
and
\begin{equation*}
W_k(\delta_x\mathbb{P}_{i},\Pi_{\D t})\leq \int_{\RR^d}\Pi_{\D t}(dy)W_k(\delta_x\mathbb{P}_{i},\delta_y\mathbb{P}_{i}).
\end{equation*}
By the existence and uniqueness of the invariant measures of the underlying solution and of the numerical method \eqref{autonomonsNum} (Theorem \ref{Invariant}), for any $\D t\in(0,1)$ and $\e>0$, there exists a sufficiently large $i>0$ such that
\begin{equation*}
W_k(\delta_x\bar{\mathbb{P}}_{i\D t},\pi)\leq \frac{\e}{3} \quad \text{and} \quad W_k(\delta_x\mathbb{P}_{i},\Pi_{\D t})\leq \frac{\e}{3}.
\end{equation*}
Then, for the chosen $i$ and all sufficiently small $\D t$, Theorem \ref{Conver} yields
\begin{equation*}
W_k(\delta_x\bar{\mathbb{P}}_{i\D t},\delta_x\mathbb{P}_{i})\leq \frac{\e}{3}.
\end{equation*}
Therefore,
\begin{equation*}
W_k(\pi, \Pi_{\D t})\leq W_k(\d_x\bar{\mathbb{P}}_{i\D t},\pi)+W_k(\delta_x\mathbb{P}_{i},\Pi_{\D t})+W_k(\d_x\bar{\mathbb{P}}_{i\D t},\delta_x\mathbb{P}_{i})\leq \e,
\end{equation*}
which completes the proof.
\end{proof}
\section{Numerical examples}
In this section, we conduct some numerical experiments to verify our theoretical results. Examples \ref{ex_conver} and \ref{ex_conver2} are used to illustrate the results of Theorem \ref{Conver} and Corollary \ref{Conver2}, respectively. Example \ref{ex_stable} is given to demonstrate the results of Theorems \ref{Invariant} and \ref{conv}.
\par
It should be noted that the L\'evy process used in our numerical experiments is the tempered stable process, whose sampling algorithm is borrowed from \cite{ErnestJum2015}.
\begin{expl}\label{ex_conver}
Consider the following SDE
\begin{equation*}
d y(t)=\left([(t-1)(2-t)]^{\frac{1}{5}}y^2(t)-2y^5(t)\right)dt+\left([(t-1)(2-t)]^{\frac{2}{5}}y^2(t)\right)d B(t)+d L(t),
\end{equation*}
where $t\in (0,1]$, $y(0)=1$.
\end{expl}
For any $x,y\in\RR$,
\begin{align*}
|f(t,x)-f(t,y)|^2&=\left|[(t-1)(2-t)]^{\frac{1}{5}}(x^2-y^2)-2(x^5-y^5)\right|^2\\
&\leq 2\left(\left|[(t-1)(2-t)]^{\frac{1}{5}}(x+y)\right|^2+4\left|x^4+x^3y+x^2y^2+xy^3+y^4\right|^2\right)|x-y|^2\\
&\leq H(1+|x|^8+|y|^8)|x-y|^2,
\end{align*}
which shows that Assumption \ref{superlinear} is satisfied with $\s=8$.
\par
Using the same approach, for any $t\in(0,1)$, we can prove that Assumptions \ref{Khasminskii}, \ref{sideLip} and \ref{lineargro} are satisfied. Assumption \ref{timeHolder} also holds with $\gamma_1=1/5$ and $\gamma_2=2/5$. Thus, according to Theorem \ref{Conver}, we obtain
\begin{equation*}
\EE|y(t)-Y(t)|^2\leq C\left(\D t^{2\gamma_1}+\D t^{2\gamma_2}+\D t^{1/2}\right).
\end{equation*}
Note that the parameter $\gamma_0$ does not affect the convergence order.\par
We simulate $100$ independent trajectories with step sizes $10^{-2}$, $10^{-3}$ and $10^{-4}$. Since the true solution of the SDE is difficult to obtain, we take the numerical solution computed with the finest step size $10^{-6}$ as a reference solution. We choose different $\gamma_0$ and record the errors versus the step sizes. Figures \ref{fig:1-1} and \ref{fig:1-2} show that the convergence order is about $0.2$ when $\gamma_0=1.3$ and $\gamma_0=1.5$, respectively. For $\gamma_0=1.3$, we also take $f(t,y)=[(t-1)(2-t)]^{4/5}y^2(t)-2y^5(t)$, so that $\gamma_1=0.8$; Figure \ref{fig:1-3} then shows a convergence order of about $0.25$.
The numerical results confirm a close agreement between the theoretical and numerical convergence orders.
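The following Python sketch reproduces the spirit of this experiment in a self-contained way: we set $dL\equiv 0$ (the tempered-stable increments would enter additively in exactly the same way), use fewer paths, and estimate the order from the slope of a log--log fit of the root-mean-square error at $T=1$.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def root5(x):                        # real fifth root, valid for x < 0
    return np.sign(x) * np.abs(x)**0.2

c  = lambda t: root5((t - 1.0) * (2.0 - t))
f  = lambda t, y: c(t) * y**2 - 2.0 * y**5
df = lambda t, y: 2.0 * c(t) * y - 10.0 * y**4
g  = lambda t, y: c(t)**2 * y**2

def path(dW, dt):                    # semi-implicit EM along given increments
    y, t = 1.0, 0.0
    for dB in dW:
        rhs = y + g(t, y) * dB
        t += dt
        for _ in range(50):          # Newton solve of the implicit equation
            step = (y - dt * f(t, y) - rhs) / (1.0 - dt * df(t, y))
            y -= step
            if abs(step) < 1e-12:
                break
    return y

M, n_ref = 100, 2**12                # paths; reference resolution on [0, 1]
dt_ref = 1.0 / n_ref
fine = [np.sqrt(dt_ref) * rng.standard_normal(n_ref) for _ in range(M)]
ref = [path(w, dt_ref) for w in fine]
dts, errs = [2.0**(-k) for k in (4, 5, 6, 7)], []
for dt in dts:
    m = int(round(dt / dt_ref))      # aggregate fine Brownian increments
    e2 = [(path(w.reshape(-1, m).sum(axis=1), dt) - r)**2
          for w, r in zip(fine, ref)]
    errs.append(np.sqrt(np.mean(e2)))
print("estimated order:", np.polyfit(np.log(dts), np.log(errs), 1)[0])
# the strong convergence theorem guarantees at least ~0.2 here; the
# observed slope may be larger, since the bound need not be sharp
\end{verbatim}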
\begin{figure}[H]
\centering
\subfigure[{\label{fig:1-1}}$\gamma_0=1.3$, $\gamma_1=0.2$, $\gamma_2=0.4$]{
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=2.9in]{conver_ex1.eps}
\end{minipage}
}
\subfigure[{\label{fig:1-2}}$\gamma_0=1.5$, $\gamma_1=0.2$, $\gamma_2=0.4$]{
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=2.9in]{conver_1_5.eps}
\end{minipage}
}
\subfigure[{\label{fig:1-3}}$\gamma_0=1.3$, $\gamma_1=0.8$, $\gamma_2=0.4$]{
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=2.9in]{conver25.eps}
\end{minipage}
}
\centering
\caption{Errors versus step sizes}
\end{figure}
\begin{expl}\label{ex_conver2}
Consider the following SDE
\begin{equation*}
d u(t)=\left([(t-1)(2-t)]^{\gamma_1}u^2(t)-2u^5(t)\right)dt+d L(t),\quad t\in[0,1],
\end{equation*}
where $u(0)=1$.
\end{expl}
The Brownian noise does not appear in this example.
According to Corollary \ref{Conver2}, the convergence order of the semi-implicit EM method is $\min\{\gamma_1,1/\gamma_0\}$.
For $\gamma_1=0.9$ and $\gamma_0=1.2$, we observe from Figure \ref{fig:2-1} that the convergence order is close to $1/\gamma_0\approx 0.83$, which agrees with Corollary \ref{Conver2}.
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{conver_ex2.eps}
\caption{Errors versus step sizes}
\label{fig:2-1}
\end{figure}
\begin{expl}\label{ex_stable}
Consider the following Ornstein-Uhlenbeck (OU) process
\begin{equation*}
dx(t)=-2x(t)dt+2dL(t), ~~ x(0)=100.
\end{equation*}
\end{expl}
The Kolmogorov--Smirnov test \cite{Massey1951} helps us measure the difference in distribution between the numerical and true solutions more clearly. The distribution of the numerical solution at time $T=1000$ is regarded as the reference distribution. We simulate 10000 paths generated by the semi-implicit EM method. The empirical distributions at $t=0.1$, $t=0.3$, $t=0.7$ and $t=2$ are plotted in Figure \ref{fig:3-1}. It can be seen that the empirical distributions get closer to the true distribution as time $t$ increases. Figure \ref{fig:3-2} shows that the distributional difference between the numerical solution and the true solution decreases as time $t$ increases.
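The following Python sketch illustrates the diagnostic. To keep it self-contained we replace the L\'evy driver by a Brownian one, so that the stationary law of $dx=-2x\,dt+2\,dB$ is exactly $N(0,1)$ and serves as the benchmark; for the tempered stable driver one would instead compare against late-time reference samples, as in the figures. Note that the implicit step of the scheme is linear here and can be solved in closed form.
\begin{verbatim}
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
dt, n_paths = 1e-2, 10_000
x = np.full(n_paths, 100.0)                 # x(0) = 100, as in Example 5.3
checkpoints = [0.1, 0.3, 0.7, 2.0, 5.0]
stationary = rng.standard_normal(n_paths)   # samples from N(0, 1)

t, k = 0.0, 0
while k < len(checkpoints):
    # implicit step in closed form: X_{i+1} = (X_i + 2 dB) / (1 + 2 dt)
    x = (x + 2.0 * np.sqrt(dt) * rng.standard_normal(n_paths)) / (1.0 + 2.0 * dt)
    t += dt
    if t >= checkpoints[k] - 1e-9:
        print(f"t = {checkpoints[k]:4.1f}:  KS statistic = "
              f"{ks_2samp(x, stationary).statistic:.3f}")
        k += 1
\end{verbatim}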
\begin{figure}[H]
\centering
\subfigure[{\label{fig:3-1}} Empirical distributions]
{\includegraphics[width=0.49\linewidth]{invar61.eps} }
\subfigure[{\label{fig:3-2}} Differences between empirical distributions and the true stationary distribution]
{\includegraphics[width=0.49\linewidth]{invar62.eps}}
\caption{Empirical distributions and long time stability}
\end{figure}
\section{Conclusion and future research}
We investigated the finite-time strong convergence of the semi-implicit EM method for SDEs with a super-linearly growing drift coefficient driven by a class of L\'evy processes. One of the key findings is that the convergence rate is related to the parameter of the class of L\'evy processes, which has not been observed in the literature. In addition, the semi-implicit EM method is capable of providing a good approximation of the invariant measure of the underlying SDEs.
\par
There are still some technical difficulties to be overcome in order to handle the multiplicative case of this class of L\'evy processes. Furthermore, other stable processes covered by the setting of Section 2 are worth investigating in the future.
\section{\label{sec:Introduction}Introduction}
The cosmological magnetic field can serve as a probe into the early universe.
Observations indicate that the intergalactic magnetic field exists even in the voids \cite{NeronovVovk10, 2010MNRAS.406L..70T, 2010ApJ...722L..39A, 2011MNRAS.414.3566T, 2011ApJ...727L...4D, 2011APh....35..135E, 2011A&A...529A.144T, 2011ApJ...733L..21D, 2012ApJ...747L..14V, Takahashi:2013lba, 2015ApJ...814...20F, Ackermann+18, 2020ApJ...902L..11A}, and its origin may be attributed to the early universe.
Possible formation scenarios include magnetogenesis during or at the end of inflation \cite{1988PhRvD..37.2743T, 1992ApJ...391L...1R, 1992PhRvD..46.5346G, 1995PhRvD..52.6694M, PhysRevD.77.123002, 2008JCAP...04..024B, 1995PhRvD..52.1955L, 1995PhRvL..75.3796G, 2004PhRvD..69d3507B, 2007JCAP...02..030B, martin2008generation, 2009JCAP...08..025D, 2009JCAP...12..009K, 2014JCAP...05..040K, 1998PhRvD..57.7139C, 2000PhRvD..62j3512G, 2001PhLB..501..165D, 2002PhRvD..65f3505D, 1993PhRvD..48.2499D, 1999PhLB..455...96B, 2005PhRvD..71j3509A, 2008PhRvL.100x1301D, 2006JCAP...10..018A, 2011JCAP...03..037D, 2015JCAP...05..054F, 2016JCAP...10..039A} and at the phase transitions \cite{PhysRevLett.51.1488, 1989ApJ...344L..49Q, PhysRevD.50.2421, 1991PhLB..265..258V, 1993PhLB..319..178E, PhysRevD.58.103505, PhysRevD.53.662, PhysRevD.54.1291, PhysRevD.55.4582, PhysRevD.57.664, PhysRevD.56.6146, PhysRevLett.79.1193, PhysRevD.62.103008, PhysRevLett.87.251302}.
To understand the role of the primordial magnetic field
and to connect the models of the primordial universe and observation,
describing its magneto-hydrodynamic (MHD) evolution within the framework of the standard cosmology is the first and the most important step.
In this context, Banerjee \& Jedamzik~\cite{Banerjee+04} attempted a comprehensive description of the evolution of the cosmological magnetic field, which has been commonly accepted. However, some of their results appear to be inconsistent with those of recent numerical MHD simulations \cite{2014ApJ...794L..26Z, 2015PhRvL.114g5001B, Brandenburg:2016odr, Brandenburg+17, 2017PhRvE..96e3105R, 2017MNRAS.472.1628P}.
As a representative scenario, let us assume that the coupled system of the primordial magnetic field and the plasma filling the universe is {\it magnetically dominant} and {\it non-helical} initially, {\it e.g.}, at the electroweak symmetry breaking temperature.
While the conventional analysis assumes that the magnetic energy of the long-wavelength modes is intact as the short-wavelength modes decay, numerical simulations exhibit the so-called {\it inverse transfer} \cite{2014ApJ...794L..26Z, 2015PhRvL.114g5001B, Brandenburg:2016odr, Brandenburg+17, 2017PhRvE..96e3105R, 2017MNRAS.472.1628P}, in which the long-wavelength modes are enhanced in contrast to the decaying short-wavelength modes.
In this letter, we present an analytic description of the evolution of the cosmological magnetic field with the initial conditions stated above, consistent with the numerical results.
The system during its evolution can be classified into four different regimes, according to whether the dynamics is linear or nonlinear with respect to the magnetic field and whether the kinetic dissipation is mainly due to shear viscosity or drag force.
The evolution of the cosmological magnetic field should inevitably experience these different regimes, since the dominant source of dissipation in the early universe varies with time among the collisions between the particles of the fluid, the friction from the background free-streaming particles such as photons and neutrinos, and the Hubble friction in the matter dominated era.
As for the linear regimes with either shear viscosity or drag force, our analysis is partly based on Ref.~\cite{Banerjee+04}, and for the nonlinear regime with shear viscosity, mostly on Ref.~\cite{Hosking+21, Hosking+22}, which resolves the inconsistency between theory and numerical studies by introducing the {\it Hosking integral} (or the {\it Saffman helicity invariant}) \cite{Hosking+21} and the {\it reconnection time scale}.
The turbulent regime with drag force has never been discussed in the literature; we provide an analysis of it in this letter, consistent with the other three regimes.
For the first time, we integrate the analyses in these four regimes, which completes the description of the evolution of the magnetically-dominant non-helical cosmological magnetic field throughout the cosmic history from its generation to the present.
While the analysis in Refs.~\cite{Hosking+21, Hosking+22} may be sufficient to connect the properties of the magnetic field in the early universe and at present for a relatively strong field, our analysis is essential for its complete history.
To see the difference clearly, we focus on the case of weak magnetic field where the reconnection is driven by the Sweet--Parker reconnection~\cite{sweet58, Parker57}.
\section{\label{sec:setup}Equations of magneto-hydrodynamics}
We consider a coupled system of the magnetic field and the velocity field of the plasma fluid in the early universe.
In the following expressions, $\tau$ is the conformal time, $\bm{x}$ is the comoving coordinate, and $\rho$ and $p$ are the comoving energy density and the comoving pressure of the fluid, respectively.
The comoving quantities are related to the physical fields denoted by a subscript $_\text{p}$ as
\begin{eqnarray}
{\bm B} = a^2 {\bm B}_\text{p},\quad
{\bm v} = {\bm v}_\text{p},\quad
\rho = a^4\rho_\text{p},\quad
p = a^4 p_\text{p},\notag\\
\sigma = a \sigma_\text{p},\quad
\eta = a^{-1}\eta_\text{p},\quad
\alpha = a \alpha_\text{p},
\label{eq:Relations of comoving and physical quantities}
\end{eqnarray}
where $a$ is the scale factor of the universe.
The equations of motions are
\begin{eqnarray}
\partial_\tau {\bm B} -{\bm \nabla} \times ({\bm v} \times {\bm B})= \frac{1}{\sigma} {\bm \nabla}^2 {\bm B}
\label{eq:Faraday's induction equation}
\end{eqnarray}
for the magnetic field $\bm{B}(\tau,\bm{x})$ and
\begin{eqnarray}
\partial_\tau {\bm v} +({\bm v} \cdot {\bm \nabla}){\bm v}
-\frac{1}{\rho+p}({\nabla} \times {\bm B}) \times {\bm B}+\frac{1}{\rho+p}{\bm \nabla} p \notag\\
=\eta \left[{\bm \nabla}^2 {\bm v} + \frac{1}{3} {\bm \nabla}({\bm \nabla} \cdot {\bm v})\right]-\alpha\bm{v}
\label{eq:Navier--Stokes equation}
\end{eqnarray}
for the velocity field $\bm{v}(\tau,\bm{x})$.
The right hand sides of these equations \eqref{eq:Faraday's induction equation} and \eqref{eq:Navier--Stokes equation} represent the dissipation of the energy, in terms of the electric conductivity, $\sigma$, the shear viscosity, $\eta$, and the drag force coefficient, $\alpha$.
The former two quantities originate from the collision between constituent particles of the plasma, while the last one from the background of the system, {\it i.e.,} the Hubble friction and/or free-streaming particles.
As the temperature of the universe decreases, the dominant term for the dissipation changes.
These equations are closed together with the continuity equation
\begin{eqnarray}
\partial_\tau \rho +{\bm \nabla} \cdot[(\rho+p){\bm v} ]=\bm{E}\cdot(\bm{\nabla}\times\bm{B})\left(+aH\rho\right)&,
\label{eq:Continuity equation}\\
\text{where}\quad\bm{E}=\left(\frac{1}{\sigma}\bm{\nabla}-\bm{v}\right)\times\bm{B}&.
\label{eq:Electric field in terms of magnetic field}
\end{eqnarray}
The last term in the right-hand-side of Eq.~\eqref{eq:Continuity equation} is present in the matter dominated era.
We assume the homogeneity and isotropy on average and express the magnetic and velocity field configurations on each time slice by a few parameters, by treating them as stochastic fields.
If we look inside a sufficiently small region, the magnetic field will be almost coherent and it appears anisotropic.
However, the present Hubble patch is composed of many coherent subpatches that have random magnetic fields assigned independently by a single probability distribution.
The universality of the probability distribution throughout the space guarantees the homogeneity, and the isotropy is imposed on the probability distribution.
For later purpose, we further assume that the probability distribution is almost Gaussian.
On top of these assumptions,
we parametrize the magnetic field by its typical strength, $B$, and coherence length, $\xi_{\text{M}}$.
One may define these quantities in terms of the power spectrum, $P_B(k,\tau)$,
of the magnetic field,
\begin{equation}
\langle\boldsymbol{B}(\boldsymbol{k})\cdot\boldsymbol{B}(\boldsymbol{k}') \rangle=P_B(k,\tau)(2\pi)^3\delta^3(\boldsymbol{k}+\boldsymbol{k}'),
\end{equation}
where $\boldsymbol{B}(\boldsymbol{k})$ denotes a Fourier mode.
\begin{align}
B^2&:=\langle \bm{B}^2\rangle=\int \frac{d^3k}{(2\pi)^3}P_B(k,\tau),\\
\xi_{\text{M}}&:=\frac{1}{B^2}\int \frac{d^3k}{(2\pi)^3}\frac{2\pi}{k}P_B(k,\tau) ,
\label{eq:Definition of the parameters}
\end{align}
In the same way, we can define the typical velocity, $v$, and the coherence length of the velocity field, $\xi_{\text{K}}$, in terms of the velocity power spectrum, $P_v(k,\tau)$, as
\begin{align}
v^2&:=\langle \bm{v}^2\rangle=\int \frac{d^3k}{(2\pi)^3}P_v(k,\tau),\\
\xi_{\text{K}}&:=\frac{1}{v^2}\int \frac{d^3k}{(2\pi)^3}\frac{2\pi}{k}P_v(k,\tau) .
\label{eq:Definition of the parameters2}
\end{align}
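As a check of these definitions, the following Python sketch evaluates $B$ and $\xi_{\text{M}}$ numerically for an illustrative Batchelor-like spectrum $P_B(k)\propto k^4 e^{-(k/k_p)^2}$ peaked at $k\sim k_p$ (our own test spectrum, not one derived in this letter); as expected, $\xi_{\text{M}}$ comes out of order $1/k_p$.
\begin{verbatim}
import numpy as np

k_p = 10.0
k = np.linspace(1e-4, 20.0 * k_p, 400_000)
dk = k[1] - k[0]
P = k**4 * np.exp(-(k / k_p)**2)       # illustrative P_B(k), arbitrary units

# B^2 = int d^3k/(2 pi)^3 P_B = (1/2 pi^2) int k^2 P_B dk   (isotropy)
B2 = np.sum(k**2 * P) * dk / (2.0 * np.pi**2)
xi = np.sum(k * 2.0 * np.pi * P) * dk / (2.0 * np.pi**2) / B2
print("B =", np.sqrt(B2), " xi_M =", xi, " xi_M * k_p =", xi * k_p)
\end{verbatim}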
In the rest of the letter, we are going to determine the time dependence of these quantities,
which shows scaling behavior. \\
Conserved quantities play an important role in determining the scaling evolution of the magnetic and velocity fields.
In magnetically dominant regimes, where the energy density of the non-helical magnetic field is dominant over that of the velocity field,
an approximate conserved quantity called Hosking integral,
\begin{eqnarray}
I_{H_{\text{M}}}
:=\int_{V} d^3r \langle h_{\text{M}}(\bm{x})h_{\text{M}}(\bm{x}+\bm{r})\rangle,
\label{eq:Definition of Saffmnan helicity integral}\\
\text{where}\quad
h_{\text{M}}:=\bm{A}\cdot\bm{B},
\label{eq:Definition of helicity density}
\end{eqnarray}
has been recently proposed~\cite{Hosking+21}.
Here the integral is taken over a volume $V$ much larger than the correlation volume, $\xi_\text{M}^3$, of the magnetic field. Since the correlation length of $h_{\text{M}}$ is also expected to be of order $\xi_\text{M}$,
on dimensional grounds we find
\begin{eqnarray}
I_{H_{\text{M}}}\simeq B^4 \xi_\text{M}^5
\end{eqnarray}
up to a spectrum-dependent numerical factor.
In particular, we obtain the following constraint between the parameters at the temperature of the universe, $T$.
\begin{eqnarray}
B(T)^4 \xi_\text{M}(T)^5=B_\text{ini}^4 \xi_\text{M,ini}^5,
\label{eq:Condition from conservation of the Hosking integral}
\end{eqnarray}
where the subscript $_\text{ini}$ denotes the initial condition.
Indeed, a recent numerical study confirms that the Hosking integral is well conserved~\cite{zhou2022scaling}.
This conservation law restricts the evolution of the system as long as the system is magnetically dominant.
We will determine the scaling exponents using Eq.~\eqref{eq:Condition from conservation of the Hosking integral} for each regime in the succeeding section.
\section{\label{sec:Regime-dependent analysis}Regime-dependent analyses}
We now turn to the regime-dependent analysis, which is the main result of this letter.
As a preparation, we define a quantity that determines the boundary between non-linear and linear regimes by comparing the contributions from non-linear and linear terms in the induction equation, \eqref{eq:Faraday's induction equation}.
From rough estimates, $\vert{\bm \nabla} \times ({\bm v} \times {\bm B})\vert \sim v B/\min\{\xi_\text{M}, \xi_\text{K}\}$ and $ \vert{\bm \nabla}^2 {\bm B}/\sigma \vert \sim B/\sigma \xi_\mathrm{M}^2$, we define the magnetic Reynolds number,
\begin{eqnarray}
\text{Re}_{\text{M}}
:=\frac{\sigma v \xi_\text{M}^2}{\min\{\xi_\text{M}, \xi_\text{K}\}}.
\label{eq:Definition of Magnetic Reynolds number}
\end{eqnarray}
We identify $\text{Re}_\text{M}>(<)1$ with the non-linear (linear) regimes.
We also define another quantity that determines the boundary between viscous and dragged regimes by comparing the contributions in the right-hand-side of the Navier--Stokes equation, Eq.~\eqref{eq:Navier--Stokes equation}, at the scale of the kinetic coherence length, $\xi_\text{K}$.
By estimating that $\alpha |{\bm v}| \sim \alpha v$ and
$\left\vert\eta \left[{\bm \nabla}^2 {\bm v} + \frac{1}{3} {\bm \nabla}({\bm \nabla} \cdot {\bm v})\right]\right\vert \sim \eta v/\xi_\mathrm{K}^2$,
we define the quantity that characterises the ratio of dissipation terms as
\begin{eqnarray}
r_\text{diss}:=\frac{\alpha\xi_\text{K}^2}{\eta}.
\label{eq:Ratio of the dissipation terms}
\end{eqnarray}
and then $r_\text{diss}<(>)1$ corresponds to the viscous (drag) regimes.
In the following, we study the evolution of the system by classifying the regimes according to the criteria determined by these quantities.
One can find the results of our analysis for each regime in Table~\ref{tab:summary of the decay laws}.
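For bookkeeping, the following small Python helper evaluates the two diagnostics and assigns the regime labels used below (a sketch with illustrative input values; all quantities are comoving).
\begin{verbatim}
def classify_regime(sigma, eta, alpha, v, xi_M, xi_K):
    Re_M = sigma * v * xi_M**2 / min(xi_M, xi_K)  # magnetic Reynolds number
    r_diss = alpha * xi_K**2 / eta                # drag-to-viscosity ratio
    return (Re_M, r_diss,
            "nonlinear" if Re_M > 1 else "linear",
            "drag" if r_diss > 1 else "viscous")

print(classify_regime(sigma=1e3, eta=1e-2, alpha=0.0,
                      v=1e-3, xi_M=1.0, xi_K=0.1))
\end{verbatim}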
\subsection{\label{sec:Nonlinear with shear viscosity}Nonlinear regime with shear viscosity}
First, we consider the case where the magnetic Reynolds number is larger than unity and the shear viscosity is dominant over the drag force,
\begin{eqnarray}
\text{Re}_\text{M}\gg1,\quad r_\text{diss}\ll1.
\label{eq:Conditions of nonlinear regime with shear viscosity}
\end{eqnarray}
In this regime, the evolution of the system is determined by the quasi-stationary condition of the magnetic reconnection with dissipation due to the shear viscosity, known as the Sweet--Parker reconnection~\cite{sweet58, Parker57}, which drives the transfer between the magnetic and kinetic energy if the magnetic field is not so strong (or, more precisely, if the {\it Lundquist number} is not so large)\footnote{
For a stronger magnetic field, we expect that the reconnection is driven by fast reconnection \cite{ji2011phase}.
In this case, the analysis in Ref.~\cite{Hosking+21, Hosking+22} may be sufficient.}.
The application of this physical mechanism to this regime was originally discussed in Ref.~\cite{Hosking+21}. Based on their discussion, we derive the scaling laws for the MHD system.
The Sweet--Parker reconnection mechanism is described as follows.
A current sheet of size $\xi_\text{M}^2\times\xi_\text{K}$ is formed at each boundary between two regions of coherent magnetic field lines, on which incoming magnetic field lines are dissipated, reconnect, and feed energy into the velocity field~\cite{sweet58, Parker57}.
See Fig.~\ref{fig:SweetParker}.
Let $v_\text{in}$ and $v_\text{out}$ denote incoming and outgoing velocities of material carrying the magnetic field lines, respectively, as illustrated in Fig.~\ref{fig:SweetParker}.
The mass conservation on each sheet implies $v_\text{in}\xi_\text{M}=v_\text{out}\xi_\text{K}$.
Comparing the inside and outside of the current sheet, the stationarity condition in the induction equation, Eq.~\eqref{eq:Faraday's induction equation}, is approximately $\frac{Bv_\text{in}}{\xi_\text{K}}\simeq \frac{B}{\sigma\xi_\text{K}^2}$, where we have approximated that ${\bm \nabla} \sim \xi_\mathrm{K}^{-1}$ at the current sheet.
Therefore, we can express $v_\text{in}$ and $v_\text{out}$ in terms of $B, \xi_\text{M}$ and $\xi_\text{K}$ as
\begin{eqnarray}
v_\text{in} = \frac{1}{\sigma\xi_\text{K}},\quad
v_\text{out} = \frac{\xi_\text{M}}{\sigma\xi_\text{K}^2}. \label{vinxik}
\end{eqnarray}
The outgoing flow spreads over the whole volume keeping the coherence length $\xi_\mathrm{K}$ while decreasing the amplitude.
Taking into account the dilution factor, we obtain
\begin{eqnarray}
v^2
\simeq \frac{\xi_\text{K}}{\xi_\text{M}}v_\text{out}^2.
\label{eq:velocity_SP-reconnection}
\end{eqnarray}
Here we assume the high aspect ratio, $\xi_\text{M}/\xi_\text{K}\gg 1$, of the current sheet.
Since it takes the time,
\begin{eqnarray}
\tau_\text{SP} := \frac{\xi_\text{M}}{v_\text{in}},
\end{eqnarray}
to process all the magnetic field within the volume $\xi_\text{M}^3$, the decay of the magnetic field energy in this regime proceeds, keeping the condition that the time scale equals the conformal time at the cosmic temperature $T$,
\begin{eqnarray}
\tau(T) \left(= \tau_\text{SP}\right) = \sigma\xi_\text{M}\xi_\text{K}.
\label{eq:Condition from the timescale of the Sweet--Parker}
\end{eqnarray}
Here we have used Eq.~\eqref{vinxik}.
This condition is the origin of the explicit time dependence of the evolution of the characteristic properties of the magnetic and velocity field. \\
\begin{figure*}[thb]
\includegraphics[width=0.8\textwidth]{SweetParker.pdf}
\caption{\label{fig:SweetParker}An illustration of the current sheet (size of $\xi_{\rm M}^{2} \times \xi_{\rm K}$)
embedded in each patch (size of $\xi_{\rm M}^3$).
The magnetic reconnection occurs due to the finite electric conductivity on the current sheet.
The red solid and the black dashed arrows show the magnetic field and velocity flows, respectively.
}
\end{figure*}
Now let us take into account the effect of shear viscosity \cite{park+84} to determine the relation between the energy density of the magnetic and velocity fields.
Considering the energy budget along the fluid motion inside the current sheet, the energy of the magnetic field should be transferred to the energy of the outflow.
Taking into account the dissipation due to the shear viscosity, $\frac{\rho+p}{2}\frac{\eta v_\text{out}}{\xi_\text{K}^2}\xi_\text{M}$,
which is evaluated by supposing the balance between the injection term and dissipation term in the Navier--Stokes equation, together with Eq.~\eqref{vinxik}, the energy conservation leads to another condition,
\begin{eqnarray}
\frac{1}{2}B^2 = \frac{\rho+p}{2}\left(1+\text{Pr}_\text{M}\right)v_\text{out}^2
\label{eq:Energy balance for Sweet--Parker with shear viscosity}
\end{eqnarray}
is imposed.
Here we have defined the magnetic Prandtl number,
\begin{eqnarray}
\text{Pr}_\text{M}:=\sigma\eta.
\label{eq:Define the magnetic Prandtl number}
\end{eqnarray}
From Eqs.~\eqref{vinxik}, \eqref{eq:Condition from the timescale of the Sweet--Parker} and ~\eqref{eq:Energy balance for Sweet--Parker with shear viscosity}, we obtain a relation between $B$ and $\xi_{\rm M}$.
\begin{eqnarray}
B^2 \tau^4 \simeq (\rho+p) \sigma^3\eta \xi_\text{M}^6,
\label{eq:constraint_nonlinear-shear_viscosity}
\end{eqnarray}
where we have used the approximation, $\text{Pr}_\text{M} \gg1$, which is the case in the early universe~\cite{Durrer+13}.
Combining Eqs.~\eqref{eq:Condition from conservation of the Hosking integral} and \eqref{eq:constraint_nonlinear-shear_viscosity}, we may determine the scaling behaviors of the magnetic field in this regime.
\begin{eqnarray}
B &=& B_\text{ini}^{\frac{12}{17}} \,\xi_\text{M,ini}^{\frac{15}{17}} \left[(\rho+p) \sigma^3\eta\right]^{\frac{5}{34}} \tau^{-\frac{10}{17}},
\label{eq:Magnetic field strength evolution for Sweet--Parker with shear viscosity}\\
\xi_\text{M} &=& B_\text{ini}^{\frac{4}{17}} \,\xi_\text{M,ini}^{\frac{5}{17}} \left[(\rho+p) \sigma^3\eta\right]^{-\frac{2}{17}} \tau^{\frac{8}{17}}.
\label{eq:Magnetic field coherence length evolution for Sweet--Parker with shear viscosity}
\end{eqnarray}
Note that if the initial coherence length is large,
\begin{equation}
\tau < B_\mathrm{ini}^{-\frac{1}{2}} \xi_\mathrm{M,ini}^{\frac{3}{2}} \left[(\rho+p) \sigma^3\eta\right]^{\frac{1}{4}}, \label{taufrozenA}
\end{equation}
the magnetic field is frozen and eventually starts the scaling evolution when the equality for Eq.~\eqref{taufrozenA} is satisfied.
As for the evolution of the velocity field, we find
\begin{eqnarray}
v &=& \sigma^{\frac{1}{2}} \tau^{-\frac{3}{2}} \xi_\text{M}^2,\quad
\xi_\text{K} = \sigma^{-1} \tau \xi_\text{M}^{-1},
\label{eq:Velocity field coherence length evolution for Sweet--Parker with shear viscosity}
\end{eqnarray}
as the formulae describing the evolution in this regime.\\
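The exponents above can be checked mechanically: writing $B\propto\tau^{p}$ and $\xi_\text{M}\propto\tau^{q}$, the conservation of the Hosking integral, Eq.~\eqref{eq:Condition from conservation of the Hosking integral}, gives $4p+5q=0$, while Eq.~\eqref{eq:constraint_nonlinear-shear_viscosity} gives $2p+4=6q$. A short sympy sketch:
\begin{verbatim}
import sympy as sp

p, q = sp.symbols("p q")
sol = sp.solve([sp.Eq(4*p + 5*q, 0),   # Hosking integral: B^4 xi^5 = const
                sp.Eq(2*p + 4, 6*q)],  # B^2 tau^4 ~ xi^6 (shear viscosity)
               [p, q])
print(sol)                             # {p: -10/17, q: 8/17}
\end{verbatim}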
The conditions for the system to be in this regime of the scaling evolution, Eqs.~\eqref{eq:Conditions of nonlinear regime with shear viscosity}, can be rewritten as
\begin{align}
\text{Re}_\text{M} (\tau)
&= \sigma^{\frac{5}{2}} \tau^{-\frac{5}{2}} \xi_\text{M}^5 (:=\text{Re}_\text{M}^{\text{SP}})
\gg1,\;
\label{eq:The condition of low ReM for the Sweet--Parker with shear viscosity}
\\
r_\text{diss}(\tau)
&= \alpha \sigma^{-2} \eta^{-1} \tau^{2} \xi_\text{M}^{-2} (:=r_\text{diss}^{\text{SP}})
\ll1,\quad\;\;\;
\label{eq:The condition of low r_diss for the Sweet--Parker with shear viscosity}
\end{align}
which may eventually be violated so that the system enters another regime. Note that from these conditions, we can confirm the consistency of our solutions with the assumptions that the system is magnetically dominant, $B^2/[(\rho+p)v^2] = \text{Pr}_\text{M} \left(\text{Re}_\text{M}^{\text{SP}}\right)^{\frac{2}{5}} \gg 1$, and that the aspect ratio is large, $\xi_\text{M}/\xi_\text{K} = \left(\text{Re}_\text{M}^{\text{SP}}\right)^{\frac{2}{5}} \gg 1$.
\subsection{\label{sec:Nonlinear with drag force}Nonlinear regime with drag force}
For the case where the magnetic Reynolds number is larger than unity, while the drag force is dominant over the shear viscosity,
\begin{eqnarray}
\text{Re}_\text{M}\gg1,\quad r_\text{diss}\gg1,
\label{eq:Conditions of nonlinear regime with drag force}
\end{eqnarray}
the system is driven by the magnetic reconnection with dissipation due to the drag force.
To the best of our knowledge, this regime has never been discussed consistently with the other three regimes, so we work it out here.
Exactly the same discussion as in the previous section, Sec.~\ref{sec:Nonlinear with shear viscosity}, holds (Eqs.~\eqref{vinxik} and \eqref{eq:Condition from the timescale of the Sweet--Parker}) up to the consideration of the energy balance.
On the other hand, the relation between the magnetic and kinetic energy (Eq.~\eqref{eq:Energy balance for Sweet--Parker with shear viscosity}) changes as follows.
In this regime, the dissipation of the kinetic energy along the fluid motion inside the current sheet is replaced by the one due to the drag force, $\frac{\rho+p}{2}\alpha v_\text{out}\xi_\text{M}$.
Then, the energy balance leads to a condition,
\begin{eqnarray}
\frac{1}{2}B^2 \simeq \frac{\rho+p}{2}\alpha v_\text{out} \xi_\text{M}.
\label{eq:Energy balance for Sweet--Parker with drag force}
\end{eqnarray}
Here we have used the relation $\alpha \xi_\mathrm{M}/v_\mathrm{out} = r_\mathrm{diss} \mathrm{Pr_M} \gg 1$ so that the dissipation dominates over the remaining kinetic energy.
Using Eqs.~\eqref{vinxik} and \eqref{eq:Condition from the timescale of the Sweet--Parker}, Eq.~\eqref{eq:Energy balance for Sweet--Parker with drag force} can be rewritten as
\begin{eqnarray}
B^2 \tau^2 = (\rho+p) \sigma \alpha \xi_\text{M}^4.
\label{eq:constraint_nonlinear-drag_force}
\end{eqnarray}
Combining Eq.~\eqref{eq:Condition from conservation of the Hosking integral} and Eq.~\eqref{eq:constraint_nonlinear-drag_force}, we obtain
\begin{eqnarray}
B &=& B_\text{ini}^{\frac{8}{13}} \,\xi_\text{M,ini}^{\frac{10}{13}} \left[(\rho+p) \sigma\alpha\right]^{\frac{5}{26}} \tau^{-\frac{5}{13}},
\label{eq:Magnetic field strength evolution for Sweet--Parker with drag force}\\
\xi_\text{M} &=& B_\text{ini}^{\frac{4}{13}} \,\xi_\text{M,ini}^{\frac{5}{13}} \left[(\rho+p) \sigma\alpha\right]^{-\frac{2}{13}} \tau^{\frac{4}{13}}.
\label{eq:Magnetic field coherence length evolution for Sweet--Parker with drag force}
\end{eqnarray}
Note that, once more, if the initial coherence length is large with
\begin{equation}
\tau < B_\mathrm{ini}^{-1} \xi_\mathrm{M,ini}^2 \left[(\rho+p) \sigma\alpha \right]^{\frac{1}{2}},
\end{equation}
the magnetic field is frozen and starts the scaling evolution when the inequality is saturated.
The velocity field evolves in the same manner as described by Eqs.~\eqref{eq:Velocity field coherence length evolution for Sweet--Parker with shear viscosity} in this regime.\\
The Reynolds number and the ratio of dissipation terms are evaluated in the same way as Eqs.~\eqref{eq:The condition of low ReM for the Sweet--Parker with shear viscosity} and \eqref{eq:The condition of low r_diss for the Sweet--Parker with shear viscosity}.
The conditions for the system to be in this regime, Eqs.~\eqref{eq:Conditions of nonlinear regime with drag force}, can then be rewritten as
\begin{eqnarray}
\text{Re}_\text{M}^{\text{SP}}
\gg 1,\quad
r_\text{diss}^{\text{SP}}
\gg1.\quad\;\;
\label{eq:The condition of low r_diss for the Sweet--Parker with drag force}
\end{eqnarray}
In this regime, we can also confirm the dominance of the magnetic energy as $B^2/[(\rho+p)v^2] = \text{Pr}_\text{M} \left(\text{Re}_\text{M}^{\text{SP}}\right)^{\frac{2}{5}} r_\text{diss}^{\text{SP}} \gg 1$.
The large aspect ratio is confirmed in the same way as Sec.~\ref{sec:Nonlinear with shear viscosity}.
\subsection{\label{sec:Linear with shear viscosity}Linear regime with shear viscosity}
Next, we consider the case where the magnetic Reynolds number is smaller than unity and the shear viscosity is dominant over the drag force,
\begin{eqnarray}
\text{Re}_\text{M}\ll1,\quad r_\text{diss}\ll1.
\label{eq:Conditions of linear regime with shear viscosity}
\end{eqnarray}
In this case, magnetic reconnection does not act as the dominant channel of energy transfer between the magnetic and kinetic energies.
Instead, the velocity field is excited by the Lorentz force at the scale of the magnetic coherence length
\begin{eqnarray}
\xi_\text{K}\simeq\xi_\text{M}.
\label{eq:coherence lengths in the linear regimes}
\end{eqnarray}
The energy of the velocity field is brought to smaller scales by the Kolmogorov turbulence and finally dissipated by the shear viscosity.
The quasi-stationarity of the system implies the balance of the injection and the dissipation of the kinetic energy in the Navier--Stokes equation, Eq.~\eqref{eq:Navier--Stokes equation},
\begin{eqnarray}
\frac{1}{\rho+p}\frac{B^2}{\xi_\text{M}}
\simeq \eta \frac{v}{\xi_\text{K}^2},
\end{eqnarray}
from which one can express the typical velocity as
\begin{eqnarray}
v = \frac{1}{\rho+p}\frac{B^2\xi_\text{M}}{\eta}.
\label{eq:velocity in terms of magnetic field in the linear regime with shear viscosity}
\end{eqnarray}
For the Kolmogorov turbulence, it takes
\begin{eqnarray}
\tau_\text{eddy}^\eta
:= \frac{\xi_\text{K}}{v}
= \frac{(\rho+p) \eta}{B^2}
\end{eqnarray}
to break up an eddy at the coherence scale.
Therefore, the decay of the kinetic energy in this regime proceeds, keeping the condition
\begin{eqnarray}
\tau(T) = \tau_\text{eddy}^\eta.
\end{eqnarray}
Combining Eqs.~\eqref{eq:Condition from conservation of the Hosking integral} and \eqref{eq:velocity in terms of magnetic field in the linear regime with shear viscosity}, we obtain the scaling behaviors of the magnetic field:
\begin{eqnarray}
B &=& \left[(\rho+p)\eta\right]^{\frac{1}{2}}\tau^{-\frac{1}{2}},
\label{eq:Magnetic field strength evolution for linear with shear viscosity}\\
\xi_\text{M} &=& B_\text{ini}^{\frac{4}{5}} \,\xi_\text{M,ini}\left[(\rho+p)\eta\right]^{-\frac{2}{5}}\tau^{\frac{2}{5}}.
\label{eq:Magnetic coherence length evolution for linear with shear viscosity}
\end{eqnarray}
The evolution of the velocity field is derived from Eqs.~\eqref{eq:coherence lengths in the linear regimes} and \eqref{eq:velocity in terms of magnetic field in the linear regime with shear viscosity}.
Note that if the initial magnetic field is too weak,
\begin{equation}
\tau< \frac{(\rho+p)\eta}{B_\mathrm{ini}^2},
\end{equation}
the magnetic field is frozen and starts the scaling evolution when the inequality is saturated.
\\
The conditions for the system to be in this regime, Eqs.~\eqref{eq:Conditions of linear regime with shear viscosity}, can be rewritten as
\begin{align}
\text{Re}_\text{M}(\tau)
&= \frac{1}{\rho+p}\frac{\sigma B^2\xi_\text{M}^2}{\eta} (:= \text{Re}_\text{M}^\eta)\ll1,\\
r_\text{diss}(\tau)
&= \frac{\alpha\xi_\text{M}^2}{\eta} (:= r_\text{diss}^\eta) \ll1,\quad
\label{eq:The condition of linear and viscous}
\end{align}
from which the consistency of our solutions with the assumption that the system is magnetically dominant, $B^2/[(\rho+p)v^2] = \text{Pr}_\text{M} \left(\text{Re}_\text{M}^{\eta}\right)^{-1} \gg 1$, is confirmed.
\subsection{\label{sec:Linear with drag force}Linear regime with drag force}
Finally, we consider the case where the magnetic Reynolds number is smaller than unity and the drag force is dominant over the shear viscosity,
\begin{eqnarray}
\text{Re}_\text{M}\ll1,\quad r_\text{diss}\gg1.
\label{eq:Conditions of linear regime with drag force}
\end{eqnarray}
In this case, the velocity field is excited by the Lorentz force and dissipated at smaller scales, similarly to the case in the previous section, Sec.~\ref{sec:Linear with shear viscosity}, and Eq.~\eqref{eq:coherence lengths in the linear regimes} also holds.
The quasi-stationarity of the system implies
\begin{eqnarray}
\frac{1}{\rho+p}\frac{B^2}{\xi_\text{M}}\simeq \alpha v,
\end{eqnarray}
from which one can express the typical velocity, $v$, in terms of the typical strength, $B$, and the coherence length, $\xi_\text{M}$, of the magnetic field as
\begin{eqnarray}
v = \frac{1}{\rho+p}\frac{B^2}{\alpha \xi_\text{M}}.
\label{eq:velocity in terms of magnetic field in the linear regime with drag force}
\end{eqnarray}
It takes the time
\begin{eqnarray}
\tau_\text{eddy}^\alpha
:= \frac{\xi_\text{K}}{v}
= \frac{(\rho+p) \alpha \xi_\text{M}^2}{B^2}
\end{eqnarray}
for the Kolmogorov turbulence to break the eddy of the coherence scale.
The decay of the kinetic energy in this regime proceeds, keeping the condition \cite{Banerjee+04},
\begin{eqnarray}
\tau(T) = \tau_\text{eddy}^\alpha.
\end{eqnarray}
Combining Eqs.~\eqref{eq:Condition from conservation of the Hosking integral} and \eqref{eq:velocity in terms of magnetic field in the linear regime with drag force}, we obtain
\begin{eqnarray}
B &=& B_\text{ini}^{\frac{4}{9}} \,\xi_\text{M,ini}^{\frac{5}{9}} \left[(\rho+p)\alpha\right]^{\frac{5}{18}} \tau^{-\frac{5}{18}},
\label{eq:Magnetic field strength evolution for linear with drag force}\\
\xi_\text{M}
&=& B_\text{ini}^{\frac{4}{9}} \,\xi_\text{M,ini}^{\frac{5}{9}} \left[(\rho+p)\alpha\right]^{-\frac{2}{9}} \tau^{\frac{2}{9}}.
\label{eq:Magnetic field coherence length evolution for linear with drag force}
\end{eqnarray}
The evolution of the velocity field is derived from Eqs.~\eqref{eq:coherence lengths in the linear regimes} and \eqref{eq:velocity in terms of magnetic field in the linear regime with drag force}.
Note that if the initial coherence length is large,
\begin{equation}
\tau< \frac{(\rho+p) \alpha \xi_\mathrm{M,ini}^2}{B_\mathrm{ini}^2},
\end{equation}
the magnetic field is frozen and starts the scaling evolution when the equality is satisfied. \\
The conditions for the system to be in this regime, Eqs.~\eqref{eq:Conditions of linear regime with drag force}, can be rewritten as
\begin{align}
\text{Re}_\text{M} (\tau)
&= \frac{1}{\rho+p}\frac{\sigma B^2}{\alpha} (:= \text{Re}_\text{M}^\alpha)\ll 1,\\
r_\text{diss} (\tau)
&=\frac{\alpha\xi_\text{M}^2}{\eta} (:=r_\text{diss}^\alpha) \gg1,\quad
\label{eq:The condition of linear and dragged}
\end{align}
from which the consistency of our solutions with the assumption that the system is magnetically dominant, $B^2/[(\rho+p)v^2] = \text{Pr}_\text{M} (\text{Re}_\text{M}^{\alpha})^{ -1} r_\text{diss}^\alpha \gg 1$, is confirmed.\\
Note that the linear regimes with both shear viscosity (Sec.~\ref{sec:Linear with shear viscosity}) and drag force (Sec.~\ref{sec:Linear with drag force}) were studied in Ref.~\cite{Banerjee+04}, and it was claimed that the system in the linear regime with shear viscosity is frozen in realistic situations.
On the contrary, here we have performed a general study, not restricting ourselves to the situations discussed in the literature.
In Sec.~\ref{sec:Discussion}, we will argue that
the system can evolve according to the scaling laws we derived
also in realistic situations.
\section{\label{sec:Integartion of the analysis}Integration of the analyses}
In the previous section, we have conducted regime-dependent analyses.
Let us confirm their consistency by showing that the solutions coincide at the boundary of the linear and non-linear regimes.
First, we focus on the properties of the magnetic and velocity fields in the scaling evolution regimes. For the coherence length, we have
\begin{eqnarray}
\xi_\text{M}=\xi_\text{K},\quad \text{when}\quad \text{Re}_\text{M}=1
\label{eq:Relation between magnetic and kinetic coherence length}
~~~~
\end{eqnarray}
in all regimes.
We identify $v$ as the representative velocity in each regime.
For the non-linear regimes, we use the characteristic velocity of the Sweet--Parker reconnection. We may confirm that these expressions match the velocities of the linear regimes at the boundary between the linear and non-linear regimes,
\begin{eqnarray}
v =
\left\{
\begin{matrix}
\frac{1}{\rho + p} \frac{B^2 \xi_\text{M}}{\eta},
&
\text{($\text{Re}_\text{M}=1$, viscous regimes)}
~
\\
\frac{1}{\rho + p} \frac{B^2}{\alpha \xi_\text{M}},
~~
&
\text{($\text{Re}_\text{M}=1$, dragged regimes)}
\end{matrix}
\right.
\label{eq:Relations between velocity}
\end{eqnarray}
Using these expressions of $v$ for each regime, we may express
\begin{eqnarray}
\frac{B^2}{\rho+p}=
\left\{\begin{matrix}
\text{Pr}_\text{M} v^2,
~~~~~~~
\text{($\text{Re}_\text{M}=1$, viscous regimes)}
~\,
\\
\text{Pr}_\text{M}r_\text{diss} v^2.
~~
\text{($\text{Re}_\text{M}=1$, dragged regimes)}
\end{matrix}
\right.\quad
\label{eq:Relations between magnetic and kinetic energy density}
\end{eqnarray}
Also, we confirm that the time scales of the evolution of the system are connected smoothly at the boundary between the linear and non-linear regimes,
\begin{eqnarray}
\tau_\text{SP} =
\left\{\begin{matrix}
\tau_\text{eddy}^\eta,
&
\text{($\text{Re}_\text{M}=1$, viscous regimes)}
~
\\
\tau_\text{eddy}^\alpha,
&
\text{($\text{Re}_\text{M}=1$, dragged regimes)}
\end{matrix}\right.\quad
\label{eq:Relations between time scales}
\end{eqnarray}
which remains identical to the conformal time of the universe, $\tau(T)$, in the scaling regime.
This suggests that the condition for the system to be frozen is also connected smoothly.
Note that these expressions, Eqs.~\eqref{eq:Relations between velocity}-\eqref{eq:Relations between time scales}, are clearly self-consistent at the boundary of the viscous and dragged regimes ($r_\text{diss}=1$).
We conclude that the decay laws in linear and non-linear regimes
as well as viscous and dragged regimes are consistent at their boundary ($\text{Re}_\text{M} = 1$ and $r_\text{diss}=1$).\\
Now let us summarize the decay laws in Table \ref{tab:summary of the decay laws}.
Detailed discussions are found in the sections shown in the first column.
The state of the system is classified into four regimes, according to the criteria written in the second-to-fourth columns. In each regime, the formulae describing the decay laws are the equations specified in the fifth and sixth columns. The decay time scales are shown in the last column. When the conformal time of the universe $\tau(T)$ is smaller than these time scales, the decay processes are too slow to operate, and the system is frozen until $\tau(T)$ catches up with the decay time scales.
We have confirmed that each regime, including the condition to be frozen, is connected smoothly.
Therefore, we may continuously evolve the system, even involving multiple regimes, by following the decay laws in Table~\ref{tab:summary of the decay laws}.
\begin{table*}
\caption{\label{tab:summary of the decay laws}Summary of the decay laws of non-helical and magnetically dominant regimes.}
\begin{ruledtabular}
\begin{tabular}{c|ccc|ccc|c}
& Regimes & Dissipation & Condition & \multicolumn{2}{c}{Decay laws of the magnetic field} & velocity field & Decay time scale\\ \hline
&&&&&&&\vspace{-3mm}\\
Sec.~\ref{sec:Nonlinear with shear viscosity} &\multirow{2}{*}{Nonlinear} & Shear viscosity & Eqs.~\eqref{eq:The condition of low r_diss for the Sweet--Parker with shear viscosity} & Eqs.~\eqref{eq:Magnetic field strength evolution for Sweet--Parker with shear viscosity} and \eqref{eq:Magnetic field coherence length evolution for Sweet--Parker with shear viscosity} & or frozen & \multirow{2}{*}{Eqs.~\eqref{eq:Velocity field coherence length evolution for Sweet--Parker with shear viscosity}} & \multirow{2}{*}{$\tau_\text{SP} = \sigma\xi_\text{M}\xi_\text{K}$}\\
Sec.~\ref{sec:Nonlinear with drag force} && Drag force & Eqs.~\eqref{eq:The condition of low r_diss for the Sweet--Parker with drag force} & Eqs.~\eqref{eq:Magnetic field strength evolution for Sweet--Parker with drag force} and \eqref{eq:Magnetic field coherence length evolution for Sweet--Parker with drag force} & or frozen &&\\
Sec.~\ref{sec:Linear with shear viscosity} &\multirow{2}{*}{Linear} & Shear viscosity & Eqs.~\eqref{eq:The condition of linear and viscous} & Eqs.~\eqref{eq:Magnetic field strength evolution for linear with shear viscosity} and \eqref{eq:Magnetic coherence length evolution for linear with shear viscosity} & or frozen & Eqs.~\eqref{eq:coherence lengths in the linear regimes} and \eqref{eq:velocity in terms of magnetic field in the linear regime with shear viscosity} & \multirow{2}{*}{$\left.\begin{matrix}\tau_\text{eddy}^\eta \\ \tau_\text{eddy}^\alpha\end{matrix}\right\} =\dfrac{\xi_\text{K}}{v}$}\\
Sec.~\ref{sec:Linear with drag force} && Drag force & Eqs.~\eqref{eq:The condition of linear and dragged} & Eqs.~\eqref{eq:Magnetic field strength evolution for linear with drag force} and \eqref{eq:Magnetic field coherence length evolution for linear with drag force} & or frozen & Eqs.~\eqref{eq:coherence lengths in the linear regimes} and \eqref{eq:velocity in terms of magnetic field in the linear regime with drag force} &
\end{tabular}
\end{ruledtabular}
\end{table*}
\section{\label{sec:Discussion}Discussion}
In this letter, we have provided a comprehensive analysis that describes the evolution of the magnetically dominant and non-helical magneto-hydrodynamic system.
We limit ourselves to cases where the magnetic field is not too strong, so that the Lundquist number is not too large and the Sweet--Parker reconnection (but not the fast reconnection) is relevant.
The results are summarized in Table~\ref{tab:summary of the decay laws}.
Importantly, our analysis predicts a quite different evolution history of the primordial magnetic field compared with the one by Banerjee and Jedamzik \cite{Banerjee+04}.
The difference mainly comes from the fact that the approximately conserved Hosking integral was not taken into account there, which was first pointed out by Hosking and Schekochihin \cite{Hosking+21}.
Let us see how the analysis with the Hosking integral better describes the results of existing numerical MHD simulations compared with the one by Banerjee and Jedamzik \cite{Banerjee+04}.
See the plot in Fig.~\ref{fig:Residuals}.
By parametrizing the time dependence of each quantity as~\cite{Brandenburg:2016odr}
\begin{eqnarray}
\frac{B^2}{\rho+p}\propto \tau^{-p_\text{M}},\;
\xi_\text{M}\propto \tau^{q_\text{M}},\;
v^2\propto \tau^{-p_\text{K}},\;
\xi_\text{K}\propto \tau^{q_\text{K}},\quad
\end{eqnarray}
we compare the analytic formulae,
first studied in Ref.~\cite{Hosking+21} and summarized in Sec.~\ref{sec:Nonlinear with shear viscosity} of the present letter, with the numerical results in the literature \cite{Brandenburg+17, zhou2022scaling, Brandenburg:2016odr}.
We employ the five runs from Ref.~\cite{Brandenburg+17} that can be interpreted as magnetically dominant and non-helical, the non-helical one in Ref.~\cite{Brandenburg:2016odr}, and the three runs with the standard magnetic dissipation term in Ref.~\cite{zhou2022scaling}.
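As a reference point for the comparison, when the comoving dissipation coefficients are constant (as is usually the case in the simulations), the scaling laws of Sec.~\ref{sec:Nonlinear with shear viscosity}, Eqs.~\eqref{eq:Magnetic field strength evolution for Sweet--Parker with shear viscosity}, \eqref{eq:Magnetic field coherence length evolution for Sweet--Parker with shear viscosity} and \eqref{eq:Velocity field coherence length evolution for Sweet--Parker with shear viscosity}, predict
\begin{eqnarray}
p_\text{M} = \frac{20}{17},\quad
q_\text{M} = \frac{8}{17},\quad
p_\text{K} = \frac{19}{17},\quad
q_\text{K} = \frac{9}{17}.
\end{eqnarray}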
Points near the origin indicate that the theory describes the numerical calculations well.
The dots representing our analysis scatter around the origin\footnote{The apparent outlier (the red point above and to the right of the origin) is the run ``K60D1c'' in Zhou et al.~\cite{zhou2022scaling}. We do not have a clear explanation for this deviation, but it could originate from insufficient resolution.}, while the crosses representing the analysis in Ref.~\cite{Banerjee+04} are off the origin to the right.
Note that we take the values of the parameters, $p_\mathrm{M/K}$ and $q_\mathrm{M/K}$, from the results of numerical simulations in the literature only up to their fluctuations in time (which may be roughly $\lesssim 0.1$), so they are not very accurate.
Nevertheless, this plot strongly suggests that the new analysis with the Hosking integral describes reality better than the one in Banerjee and Jedamzik \cite{Banerjee+04}, which tends to predict too fast a decay of both the magnetic and the kinetic energy.
\begin{figure}[b]
\includegraphics[width=0.45\textwidth]{Residuals.pdf}
\caption{\label{fig:Residuals} Comparison between theoretical analyses (dots: the analysis in Sec.~\ref{sec:Nonlinear with shear viscosity}; crosses: Banerjee and Jedamzik \cite{Banerjee+04}) and the numerical results in the literature \cite{Brandenburg+17,zhou2022scaling, Brandenburg:2016odr}.
Since the power indices read off from numerical simulations may well contain errors around $\pm 0.1$, one should conclude that the theoretical model reproduces the numerical simulation well if the points are located
within a circle of a radius $\sim 0.14$ centered at the origin.}
\end{figure}
\begin{figure}[h]
\includegraphics[width=0.475\textwidth]{StrengthEWSB.pdf}
\caption{\label{fig:Strength} The evolution of the strength of the magnetic field. Solid lines are based on our analysis, while the dashed line is based on the naive extrapolation of the analysis by Banerjee and Jedamzik \cite{Banerjee+04}. Colored bars indicate which regime determines the scaling evolution at each temperature range. The black shaded region corresponds to over-closure of the universe.}
\end{figure}
\begin{figure}[h]
\includegraphics[width=0.475\textwidth]{LengthEWSB.pdf}
\caption{\label{fig:Length} The evolution of the coherence length of the magnetic field. Solid lines are based on our analysis, while the dashed line is based on the naive extrapolation of the analysis by Banerjee and Jedamzik \cite{Banerjee+04}. Colored bars indicate which regime determines the scaling evolution at each temperature range. The dotted line indicates the Hubble horizon.}
\end{figure}
Now we turn to the implication of the main topic of this letter, the cosmological evolution of the magnetic field that involves not only the non-linear regime but also the linear regime with dissipation due to the shear viscosity as well as dissipation due to the drag force.
As a demonstration, we investigate the cosmological evolution of the magnetic field generated at the time of electroweak symmetry breaking.
In this case, we need to take into account the shear viscosity due to electron-electron, electron-neutrino, proton-proton, electron-photon, and hydrogen-hydrogen collisions and the drag force due to the free-streaming neutrinos and photons, when their mean free path is larger than the relevant length scale of the system \cite{arnold2000transport, Banerjee+04}.
The Hubble friction in the matter domination and the ambipolar drag \cite{Banerjee+04} also serve as the drag forces.
These phenomena lead to the change of the regimes, where we can smoothly connect the evolution of magnetic field by using Table~\ref{tab:summary of the decay laws}.
Figures \ref{fig:Strength} and \ref{fig:Length} show an example of the evolution history of magnetic field strength and coherence length, respectively.
The initial condition is set such that the energy density of the magnetic field is several orders below the critical energy density, $B_\text{ini} = 10^{-10.3}\;\text{G}$, and that the magnetic coherence length is safely short, $\xi_\text{M,ini}=10^{-17.4}\;\text{Mpc}$, at the electroweak symmetry breaking temperature $\sim 100\;\text{GeV}$.
Note that if we assume that the primordial magnetic field is generated before the electroweak symmetry breaking, such a choice of the initial condition is motivated by the constraint that comes from the big-bang nucleosynthesis \cite{kamada+21}.
A stronger or longer-ranged $\text{U}(1)_Y$ magnetic field unavoidably generates baryon isocurvature perturbations through the chiral anomaly at the electroweak symmetry breaking, leading to deuterium overproduction at the big-bang nucleosynthesis, inconsistent with the present universe.
The initial condition chosen here almost saturates this constraint.
With this initial condition of the magnetic field, the reconnection is driven by the Sweet--Parker mechanism at the non-linear stage, and thus the analysis in the present letter applies.
At first, the system is in the nonlinear regimes for a while.
Since the dimensionful Fermi constant $G_\text{F}$ is involved in the electron-neutrino collision, the dominant contributor to the shear viscosity eventually changes from electron-electron collisions
to electron-neutrino collisions.
Since the neutrino viscosity evolves as $\eta_{\nu} \propto \tau^4$, which compensates the $\tau$ dependence in Eqs.~\eqref{eq:Magnetic field strength evolution for Sweet--Parker with shear viscosity} and \eqref{eq:Magnetic field coherence length evolution for Sweet--Parker with shear viscosity}, the time evolution is accidentally frozen.
When the neutrinos start free-streaming, neutrino drag begins to dissipate the energy of the system.
Noting that the neutrino drag $\alpha_\nu \propto \tau^{-4}$, the magnetic coherence length increases following Eq.~\eqref{eq:Magnetic field coherence length evolution for Sweet--Parker with drag force}.
Therefore, $r_\text{diss}$ decreases and the system enters the nonlinear regime with the electron viscosity again.
At around the QCD phase transition, the system becomes frozen due to the sudden change of the viscosity, which becomes dominated by the proton-proton collision.
The scaling evolution resumes when electrons become massive and the photon drag coefficient decreases.
Then $r_\text{diss}$ drops below unity, and the linear regime with proton viscosity begins.
After the recombination, the system becomes frozen again.
Let us demonstrate how our analysis is different from what is commonly accepted.
According to the analysis by Banerjee and Jedamzik \cite{Banerjee+04}, the evolution in the turbulent regime is $B\propto \tau^{-5/7}$ and $\xi_\text{M}\propto \tau^{2/7}$, assuming the Batchelor spectrum.
This scaling behavior is often just extrapolated throughout the history before the recombination, to connect initial conditions and resultant configurations.
We plot these naive extrapolations with dashed lines in Figs.~\ref{fig:Strength} and \ref{fig:Length}.
One can see that the evolution history is quite different.
First, based on our analysis, the resultant strength is much stronger and the coherence length is much longer than the previous expectations.
This is because the conserved quantity, the Hosking integral, leads to a slower decay of the magnetic energy with the inverse cascade.
Note that the resultant properties of the magnetic field are unlikely to account for the long-range intergalactic magnetic field suggested by the blazar observations~\cite{2020ApJ...902L..11A}, although the magnetic field strength becomes stronger than the previous estimate.
Second, it is impossible to approximate the evolution throughout the history before the recombination by a single power law.
This is because the time-dependent dissipation coefficients, $\sigma, \eta,$ and $\alpha$, play critical roles even in the nonlinear regime.
It is essential to note that the magnetic field is often frozen during its evolution history. When it is frozen, the initial conditions and the decay time scale alone are insufficient to determine the properties of the field at a given time.
To connect arbitrary initial conditions and the corresponding final configurations, one should draw the evolution history like Figs.~\ref{fig:Strength} and \ref{fig:Length} for every initial condition, since the evolution histories are highly diverse depending on the initial conditions.
Since the system experiences multiple regimes in general, integration of regime-dependent analysis given in Sec.~\ref{sec:Integartion of the analysis} is inevitable to follow this evolution.
In summary, we have clarified decisive elements to determine the evolution history of the cosmological magnetic field in the case it is non-helical and its energy surpasses the kinetic energy. As a result, the evolution may be classified into the four possible regimes summarized in Table \ref{tab:summary of the decay laws}.
Cases with helical magnetic fields and kinetically dominant regimes require some extension of the analysis.
Also, we have only considered the magnetic reconnection described by the Sweet--Parker model \cite{sweet58, Parker57}.
Different mechanisms of magnetic reconnection \cite{ji2011phase} should also be taken into account for a full description \cite{Hosking+22}.
Our forthcoming paper \cite{FKUYinPrep} will give a more general description of the evolution of the cosmological magneto-hydrodynamic system.
\begin{acknowledgments}
The work of FU was supported by the Forefront Physics and Mathematics Program to Drive Transformation (FoPM).
The work of MF was supported by the Collaborative Research Center SFB1258 and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2094 - 390783311, JSPS Core-to-Core Program (No.JPJSCCA20200002).
This work was partially supported by JSPS KAKENHI, Grant-in-Aid for Scientific Research Nos.\ (C)JP19K03842(KK), (S)20H05639(JY), the Grant-in-Aid for Innovative Areas Nos.\ 18H05542(MF) and 20H05248(JY).
\end{acknowledgments}
\bibliographystyle{unsrt}
\section{Introduction}
Software development is a very long and complicated process, involving many stages and multiple participants. Moreover, today's software systems have become large and highly complex. Given these characteristics of the software development process, it is inevitable that some defects will end up in applications. Software Quality Assurance procedures provide a means to try and locate these defects. These procedures are very time-consuming and incomplete, and usually involve intensive human intervention (unit test writing, for example). In order to help focus quality assurance efforts, many defect prediction and detection approaches were developed. In the last few decades, much progress has been made in using machine learning techniques to help in defect prediction and identification \cite{kamei_defect_2016-1}.
In the field of Defect Prediction, one can divide the different approaches into two main categories. The first is Cross Project Defect Prediction (\textbf{CPDP}), which performs transfer learning from one project to another. The main issue in CPDP is to find from which project to learn and how to transfer the learned model so that it will be valid for the target project.
The second category is Within Project Defect Prediction (\textbf{WPDP}), which aims at predicting defective code in a given project using the same project's past data. This approach can be divided into two subcategories: Inner Version Defect Prediction (\textbf{IVDP}) and Cross Version Defect Prediction (\textbf{CVDP}). In \textbf{IVDP}, data from the same project version is used as training data and test data, while in \textbf{CVDP}, data from past project versions is used as training data while the latest (``new'') version is the test data. The data in \textbf{IVDP} is usually more homogeneous, but this scenario is usually less realistic, and the data available is usually sparse. In the case of \textbf{CVDP}, there is usually more data, but the distribution tends to change between versions, making it hard to transfer the knowledge. The focus of our work is on the \textbf{CVDP} scenario.
Class Dependency data has been used in different studies to predict software defects (e.g., \cite{zimmermann_predicting_2008-1,qu_node2defect_2018,premraj_network_2011,ma_empirical_2016}). Most studies use manually crafted features over the Class Dependency Network, usually Social Network measures. In \cite{qu_node2defect_2018}, graph embedding was first used to generate automatic features from the CDN for the Defect Prediction process, specifically in an \textbf{IVDP} setup. In this work, we try to apply the same methodology but in the more complex \textbf{CVDP} setup. This higher complexity is because software modules exhibit different statistics in different versions\cite{xu_cross_2018}. In order to use embeddings in a cross-version setup, we need to align the embeddings learned on the test set with the embeddings from the train set. We use different alignment techniques and combine the aligned embeddings with traditional static code metrics to train a classifier, achieving an improvement over the state-of-the-art baseline method.
The main contributions of our study are:
\begin{itemize}
\item We develop a novel approach to module-level CVDP, based on a combination of static code metrics and CDN embeddings. We incorporate the use of embedding alignment techniques when moving from one version to another.
\item We developed two anchor selection techniques for the alignment process and evaluated them with two alignment procedures. We also used two different embedding frameworks during the experiments.
\item We performed an experimental analysis of our techniques on a public dataset, with nine software projects written in Java\cite{noauthor_java_nodate}. These projects contain a total of 24 (old version, new version) pairs. We calculated and analyzed two performance metrics, AUC and F1-score, and compared them to a state-of-the-art baseline technique.
\item We built a meta-model that combines the best models for each of the embedding techniques. Our meta-model achieves an improvement of 4.7\% in the AUC score over the baseline method.
\end{itemize}
\begin{figure*}[ht]
\centering
\includegraphics[width=\linewidth]{Flowchart.png}
\caption{A Flowchart of Our Proposed Solution }
\label{fig:flow}
\end{figure*}
\section{Related Work}
\subsection{Cross Version Defect Prediction}
There has been a lot of research in the area of Defect Prediction \cite{nam_survey_2014, hall_systematic_2012,dambros_extensive_2010}. Most of these studies were done in the areas of same-version and cross-project defect prediction. To the best of our knowledge, the field of cross-version defect prediction has been less studied. Zimmermann et al. \cite{zimmermann_predicting_2007} were among the first to show that models learned from past versions' data are useful in predicting defects in future releases. They conducted an experiment on Eclipse bug data over three releases and showed that models learned from past releases give consistent results compared to same-version models.
In \cite{bennin_empirical_2016}, the authors conducted a cross-version evaluation of 11 different prediction models. The dataset used was 25 open source projects, having two releases each. They used seven process metrics as the features, and no code features. Bennin et al. \cite{bennin_significant_2017} analyzed the impact of data sampling on CVDP. They used 20 pairs of software versions and analyzed the impact of six data sampling techniques on classification performance. They also used five different classifiers for the experiment. They concluded that data sampling is beneficial for defect prediction, given that many datasets are imbalanced. Xu et al. \cite{xu_cross-version_2018} tried to tackle the issue that software applications evolve, and with them, the distributions of the collected metrics change. To address this issue, they devised a two-phase framework for CVDP. The first phase (called HALKP) is a hybrid active learning technique which selects unlabeled modules from the current version, based on different measures, and consults an expert who assigns labels to them. Each labeled instance is merged with the previous version's dataset, and the process continues. When the process is stopped, the result is a dataset of combined previous and current version instances. The second phase performs Kernel PCA (KPCA) to map all instances (labeled and unlabeled) to a new feature space. The result of this process is fed into a regular classifier. They show that their technique improves classification performance over a baseline with just the original features. Amasaki \cite{amasaki_cross-version_2018} investigated the performance of CPDP techniques in a CVDP scenario. In this study, Amasaki evaluated the performance of 20 CPDP techniques in two different scenarios: \textit{Single Older Version (SOV)} and \textit{Multiple Older Versions (MOV)}. They used 11 different projects for the experiment, with 3 to 5 versions each. The experiment had two interesting results: \begin{enumerate*}\item CPDP techniques can be useful in a CVDP scenario\item using multiple past versions also improves the prediction performance.\end{enumerate*} Yang et al.~\cite{yang_ridge_2018} performed a different CVDP study, trying to sort software modules according to defect count prediction. The idea was to predict which modules might contain more bugs, rather than simply predicting whether a module has at least one bug. They investigated the performance of Ridge and Lasso Regression in this sorting problem. They concluded that although the datasets used have some issues, the analyzed methods perform better than linear regression and negative binomial regression on CVDP. Shukla et al.~\cite{shukla_multi-objective_2018} formulated the CVDP problem as a multi-objective optimization problem, trying to maximize recall while minimizing classification error and QA cost. They compared this method with basic models and showed improved recall and a smaller misclassification error. In \cite{xu_cross_2018} the authors tried to tackle the distribution dissimilarity between versions using a subset selection algorithm called \textit{Dissimilarity based Sparse Subset Selection (\textbf{DS3})}. The idea is to select a subset of modules from a past version that best represents the distribution of the current version, and use these modules as the training set.
They compared this technique with simply using all the original data as the training set, and showed improved performance in classification accuracy and effort-aware metrics. The authors of \cite{XU201959} tried to improve this technique further by first selecting a subset of modules from the previous version that represents its data well. This subset is later fed into \textbf{DS3}, similarly to the previous work. They show improved results for regular and effort-aware metrics. For this work, they use static code metrics only. In \cite{fan_software_2019}, the authors used an Attention Based Recurrent Neural Network to encode AST information and evaluated the performance of this method in a CVDP scenario. They show that this technique achieves promising results.
\subsection{Class Dependency Network}
The idea behind CDN is to try and capture the structural information of a given software application. Capturing structural information is achieved by creating a graph with different relations between software components or modules. Formally, a CDN is defined \cite{subelj_community_2011} as a multi-graph $G=(N,E)$ where $N$ is the set of nodes (modules), and $E$ is the set of edges (relations).
Class Dependency information has been used in different defect prediction studies. In their seminal work, Zimmermann et al. \cite{zimmermann_predicting_2008-1} studied defect prediction performance when using a combination of network measures and static code metrics. They concluded that network measures increase recall by 10\% when performing classification on Windows Server defect data. Premraj et al.~\cite{premraj_network_2011} replicated the experiment on different projects and in a CVDP setup and concluded that in a CVDP scenario, the network measures offered no added value. On the other hand, the authors of \cite{ma_empirical_2016} performed a different evaluation and concluded that network measures have a positive relation with defects and can be used in defect prediction. They also pointed out that these effects are not consistent in some projects. In \cite{qu_node2defect_2018}, the authors used CDN embeddings as the features in a same-version defect prediction scenario, combined with static code metrics, and showed that this can improve the results of defect prediction that uses only static code metrics. In a recent study, Qu et al. \cite{qu_using_2019} used a new approach to analyze the CDN for defect prediction. They used K-core Decomposition on the CDN and observed that modules within high k-cores have a higher probability of being buggy. They used this new observation in both IVDP and CVDP scenarios and showed an improvement over baseline methods in an effort-aware bug prediction scenario.
\subsection{Graph Embedding}\label{GraphEmb}
Graph (or network) embedding is the process of generating a representation of graph elements in a vector space that preserves some desired property. There has been much research in the area\cite{goyal_graph_2017}, and many techniques and applications have been proposed. Graph embedding techniques have been used in multiple domains and provided great results. One main advantage of these techniques is their ability to extract features from graph data automatically, usually with minor tuning. We will provide a short review of the two embedding techniques used in this work: \textit{Node2vec}\cite{grover_node2vec_2016} and \textit{LINE}\cite{tang_line_2015}.
\subsubsection{Node2vec}
\textit{Node2vec} is a framework for graph embedding. The basic idea comes from a Natural Language Processing model called Skip-gram \cite{mikolov_efficient_2013}. The idea in \textit{Node2vec} is to maximize the probability of observing a node's neighborhood, given its vector representation. The algorithm tries to learn a vector representation that tries to maximize that probability. A node's neighborhood is defined by a sampling strategy, meaning a node can have different neighborhoods.
The \textit{Node2vec} algorithm works as follows. For each node in the graph, we sample a set of random walks from it. The length of the walks, the number of walks, and the sampling strategy can be modified. After sampling the walks, each walk is used as a ``sentence'' in natural language, to represent a node's context. These walks are used in the learning process of the embedding function, which maps each node into a $k$-dimensional embedding ($k$ is also a parameter). The sampling strategy used in \textit{Node2vec} is based on a sampling rule the authors define as a $2^{nd}$ order random walk with two parameters: $p$ and $q$. Generally speaking, a sampling strategy can be biased towards ``walking'' farther from the start node (like a Depth First Search) or be biased towards ``staying'' close to the start node (like a Breadth-First Search). A high $p$ value (relative to $q$) causes a more DFS-like sampling, and a high $q$ value (relative to $p$) causes a more BFS-like sampling.
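As an illustration, the following minimal sketch (in Python, assuming the open-source \texttt{node2vec} package, which wraps \texttt{gensim}'s Skip-gram implementation) generates an embedding for a small stripped CDN such as the one in Figure~\ref{fig:sub2}; the parameter values shown are illustrative defaults, not the tuned values of our experiments.
\begin{lstlisting}[language=Python]
import networkx as nx
from node2vec import Node2Vec

# Stripped CDN of Fig. 2(b): one directed edge per dependent type pair
cdn = nx.DiGraph()
cdn.add_edges_from([("Ac", "Ifc"), ("Ac", "Cc"), ("Bc", "Ac"),
                    ("Bc", "Cc"), ("Bc", "Ifc")])

# Sample biased random walks; p and q control the BFS/DFS trade-off
n2v = Node2Vec(cdn, dimensions=32, walk_length=30, num_walks=200,
               p=1.0, q=1.0)

# Fit a Skip-gram model on the sampled walks (returns a gensim model)
model = n2v.fit(window=10, min_count=1)
embedding = {node: model.wv[node] for node in cdn.nodes()}
\end{lstlisting}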
\subsubsection{LINE}
The idea behind \textit{LINE} (\textbf{L}arge-scale \textbf{I}nformation \textbf{N}etwork \textbf{E}mbedding) is to generate an embedding that models node proximity similarity. We use second-order proximity because our graph is directed. Second-order proximity assumes that similar nodes have similar ``neighborhoods'', meaning connections to nodes. So the idea is to model these connections and to learn the node similarity based on them. Each node is modeled both as a node and as a ``context'' (modeling its connections). Later, we measure the probability of getting a node given a context (a different node), and we look for a representation that generates a distribution as similar as possible to the empirical distribution, as defined in the original paper.
\subsection {Embedding Alignment}
As described earlier, a graph embedding is a vector representation for each node or edge that preserves some target measure. When describing embeddings, we did not constrain the resulting embedding in any way, meaning there can be many different embeddings which have the same ``results''. For example, in a Euclidean vector space, every rotation of an embedding will preserve the Euclidean distances in that embedding. Even running the same embedding algorithm on the same dataset can result in a very different embedding space. In case we have two embeddings of ``similar'' items, for example, embeddings of words from two languages, we might want to align those two embeddings to the same coordinate system as closely as possible while preserving the relations between the data points. Performing the alignment gives us a unified representation of the embeddings, enabling operations between the datasets. These techniques have been used in Natural Language Processing\cite{smith_offline_2017, xing_normalized_2015}. We will describe an alignment procedure with parallel points between two embeddings.
We start with two embeddings, $E_{1}$ and $E_{2}$. Our goal is to build a linear transformation $T$ which maps from $E_{2}$ to $E_{1}$. We are also provided with a set of parallel points, $(x^{(i)},y^{(i)}), x^{(i)} \in E_{2}, y^{(i)} \in E_{1}$. We wish to build $T$ s.t. $T(x^{(i)}) \approx y^{(i)}$. Let $\textbf{X}$ and $\textbf{Y}$ be the matrices whose columns are the vectors $x^{(i)}$ and $y^{(i)}$ respectively. Then we wish to solve
\begin{equation}\label{eq:pro}
\min_{T}\|Y-TX\|_{F}
\end{equation}
where $\|A\| = \sqrt{\sum_{i,j}{{|a_{ij}|}^2}}$ is the Frobenius norm. The general problem is hard to solve, but in constraining the solution to be orthogonal matrices, we get the Orthogonal Procrustes problem, which has a closed-form solution. An orthogonal matrix $Q$ is defined by $Q^{T}Q = QQ^{T} = I$, where $I$ is the identity matrix. Schönemann \cite{schonemann_generalized_1966} found the closed-form solution. if $U\Sigma V$ is the SVD decomposition of $YX^{T}$, then the solution to \eqref{eq:pro} is given by $T = UV^{T}$. To the best of our knowledge, no one has tried using this technique in code embedding alignment.
\section{Proposed Framework for CVDP}
In this section, we will describe our solution framework for CVDP.
\subsection{Framework Overview}
Given a pair of software versions $(\mathcal{V}_{0}, \mathcal{V}_{1})$, where $\mathcal{V}_{0}$ is a prior version to $\mathcal{V}_{1}$, we would like to build a defect classifier for the modules in $\mathcal{V}_{1}$. Our solution is composed of a few steps. In the training phase, we do the following:
\begin{enumerate}
\item Calculate Static Code Metrics for $\mathcal{V}_{0}$ [Section \ref{StaticCodeMetrics}]
\item Extract CDN for $\mathcal{V}_{0}$, marked as $G_{0}=(V_{0},E_{0})$ [Section \ref{CDNE}].
\item Learn an embedding for $G_{0}$ [Section \ref{EmbedLearn}].
\item Learn a Classifier using all the data available for $\mathcal{V}_{0}$ [Section \ref{ClsLearn}].
\end{enumerate}
After we built our training model, these are the steps for the classification phase:
\begin{enumerate}
\item Calculate Static Code Metrics for $\mathcal{V}_{1}$.
\item Extract CDN for $\mathcal{V}_{1}$, marked as $G_{1}=(V_{1},E_{1})$.
\item Learn an embedding for $G_{1}$.
\item Perform embedding alignment between the embedding for $\mathcal{V}_{1}$ and the embedding for $V_{0}$ [Section \ref{EmbedAlign}].
\item Perform classification using the aligned embedding for $\mathcal{V}_{1}$.
\end{enumerate}
In the following sections we will describe in detail the different steps of our framework, and discuss different considerations that arose during the study.
\subsection{Static Code Metrics} \label{StaticCodeMetrics}
Static code metrics are the classic and state-of-the-art metrics used in defect prediction. These metrics have existed for decades, and many different metrics have been developed over the years\cite{fenton_software_2014}. Most of these metrics try to capture code complexity and size, poor class design, etc. We use the metrics defined by \cite{chidamber_metrics_1994,henderson-sellers_object-oriented_1996, bansiya_hierarchical_2002,martin_oo_1994,tang_empirical_1999,mccabe_complexity_1976}. The metrics used are described in Table~\ref{table:2}.
\begin{table}[ht]
\centering
\begin{tabular}{||c||}
\hline
Metric Name \\
\hline\hline
Weighted methods per class\cite{chidamber_metrics_1994}\\
\hline
Depth of Inheritance Tree\cite{chidamber_metrics_1994}\\
\hline
Number of Children\cite{chidamber_metrics_1994}\\
\hline
Coupling Between Object classes\cite{chidamber_metrics_1994}\\
\hline
Response for a Class\cite{chidamber_metrics_1994}\\
\hline
Lack of cohesion in methods\cite{chidamber_metrics_1994}\\
\hline
Lack of cohesion in methods 3\cite{henderson-sellers_object-oriented_1996}\\
\hline
Number of Public Methods\cite{bansiya_hierarchical_2002}\\
\hline
Data Access Metric\cite{bansiya_hierarchical_2002}\\
\hline
Measure of Aggregation \cite{bansiya_hierarchical_2002}\\
\hline
Measure of Functional Abstraction\cite{bansiya_hierarchical_2002}\\
\hline
Cohesion Among Methods of Class\cite{bansiya_hierarchical_2002}\\
\hline
Inheritance Coupling \cite{tang_empirical_1999}\\
\hline
Coupling Between Methods\cite{tang_empirical_1999}\\
\hline
Average Method Complexity\cite{tang_empirical_1999}\\
\hline
Afferent Couplings\cite{martin_oo_1994}\\
\hline
Efferent Couplings\cite{martin_oo_1994}\\
\hline
Average McCabe's Cyclomatic Complexity\cite{mccabe_complexity_1976}\\
\hline
Maximum McCabe's Cyclomatic Complexity\cite{mccabe_complexity_1976}\\
\hline
Lines of Code\\
\hline
\end{tabular}
\caption{Static Code Metrics list}
\label{table:2}
\end{table}
\subsection{Class Dependency Network Extraction}\label{CDNE}
We have defined the CDN formally in a previous section. We slightly modify this definition and define the CDN to be directed, unlike in \cite{subelj_community_2011}. Each edge $e_{i} \in E$ has a type associated with it, and the set of edge types is denoted by $T$.
Since we are analyzing Java programs, our components are classes, interfaces, annotations, and enumerations. We extracted a total of 10 relation types ($T$), described in Table~\ref{table:1}. These edge types are based on the interactions between types in the Java programming language. This list contains most relation types. We chose not to handle relations based on Generic types since these were not very common in our dataset. Figure~\ref{fig:test} shows example Java code with different software dependencies and the CDN generated from it. This example demonstrates a subset of our recognized types. We have written a tool that parses Java source code and builds the CDN based on references in the source code rather than the compiled version of the application, since the compiled code can change due to compiler optimizations. The resulting artifact is a single graph for each software version, containing the nodes and edges as described above.
The CDN extraction process runs in linear time and is composed of two passes. The first pass parses all Java files in a project repository and constructs a type dictionary for the project. The second pass traverses the ASTs of the code, analyzes the statements, and extracts type references. These references are looked up in the dictionary built in the first pass. The extracted relations are appended to the CDN.
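The sketch below outlines the two passes; \texttt{parse\_java} stands in for an arbitrary Java AST parser and is a hypothetical helper, not part of our tool's actual interface.
\begin{lstlisting}[language=Python]
import networkx as nx

def extract_cdn(java_files, parse_java):
    """Two-pass CDN extraction over a project's Java files."""
    cdn = nx.MultiDiGraph()

    # Pass 1: dictionary of all types declared in the project
    declared = set()
    for path in java_files:
        declared.update(parse_java(path).declared_types())

    # Pass 2: walk the ASTs and record references to known types
    for path in java_files:
        for src, dst, rel in parse_java(path).type_references():
            if dst in declared:
                cdn.add_edge(src, dst, type=rel)  # typed multi-edge
    return cdn
\end{lstlisting}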
\begin{table}[ht]
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{||c|c||}
\hline
Edge Type & Description \\ [0.5ex]
\hline\hline
Extends[E] & A class extends another class\\
\hline
Implements[I] & A class implements an interface\\
\hline
Return Type[R] & A Type appears as a return type in a function\\
\hline
Variable[V] & A Type appears as a variable type in a function\\
\hline
Class Member[CM] & A Type appears as a class member type\\
\hline
Object Instantiation[OI] & A Type appears in a "new" statement\\
\hline
Annotation[A] & A Type appears as an annotation\\
\hline
Parameter[P] & A Type appears as a parameter type in a function\\
\hline
Static Class Member[SCM] & A Type appears as a static class member type\\
\hline
Static Method Call[SMC] & A class calls a static method from another class\\
\hline
\end{tabular}}
\caption{List of edge types in our CDN}
\label{table:1}
\end{table}
\begin{figure}
\centering
\begin{subfigure}{.5\linewidth}
\centering
\begin{lstlisting}[language=java]
interface Ifc{
void f();
}
public class Ac implements Ifc{
private Cc c;
public void f(){
c = new Cc();
}
}
public class Bc extends Ac{
public Cc f2(Ifc i, Cc c2){
i.f();
return this.c;
}
}
public class Cc{
//...
}
\end{lstlisting}
\caption{}
\label{fig:sub1}
\end{subfigure}%
\begin{subfigure}{.5\linewidth}
\centering
\begin{tikzpicture}
\Vertex[y=2,x=0,label=Ac,size=1]{Ac}
\Vertex[y=4,x=1,label=Ifc,size=1]{Ifc}
\Vertex[y=2,x=2,label=Bc,size=1]{Bc}
\Vertex[y=0,x=1,label=Cc,size=1]{Cc}
\Edge[Direct,label=I](Ac)(Ifc)
\Edge[Direct,label={CM,OI}](Ac)(Cc)
\Edge[Direct,label={E}](Bc)(Ac)
\Edge[Direct,label={R,P}](Bc)(Cc)
\Edge[Direct,label={P}](Bc)(Ifc)
\end{tikzpicture}
\caption{}
\label{fig:sub2}
\end{subfigure}
\caption{Code and extracted CDN example }
\label{fig:test}
\end{figure}
\subsection{CDN Embedding Learning}\label{EmbedLearn}
As described earlier, we use two different graph embedding algorithms for the embedding process. We take the CDN extracted from the source code and generate a stripped graph that is directed and without types. The stripping means that if two types have multiple connections in the CDN (in the same direction), in the stripped graph they will have a single directed connection. In case two types point to each other, there will be two edges. We do not give weights to the edges. Some classes do not appear in the CDN, so they will not have an embedding. For the process of embedding generation, we use the algorithms described in Section \ref{GraphEmb}.
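A minimal sketch of the stripping step (assuming the typed CDN is stored as a \texttt{networkx} multi-graph):
\begin{lstlisting}[language=Python]
import networkx as nx

def strip_cdn(cdn):
    """Collapse the typed multi-graph into a plain directed graph:
    parallel same-direction edges become one unweighted edge."""
    g = nx.DiGraph()
    g.add_nodes_from(cdn.nodes())
    g.add_edges_from((u, v) for u, v, _ in cdn.edges(keys=True))
    return g
\end{lstlisting}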
\subsection{Embedding Alignment}\label{EmbedAlign}
As we discussed, the key to performing CVDP is to align the two versions' embeddings, so that their coordinate systems are as close as possible. For this, we used an Orthogonal Transformation to map between the embeddings. We also experimented with Linear Regression as a benchmark alignment technique. The relevant results will be discussed in Section~\ref{ExpRes}. The reason we chose to use an Orthogonal Transformation is that these transformations preserve angles between vectors and vector lengths. Because of these properties, (Euclidean) vector distances are preserved, and hence the relations between the embedded elements. Intuitively, an Orthogonal Transformation does not distort the embedding but rotates and reflects it.
An essential part of the alignment procedure is to select the parallel points (or anchors) correctly. Poorly selected anchors can degrade the results. In our setting, the nodes are software elements. Since we analyze a pair of versions of the same application, we expect most elements from the old version to be present in the newer version. Our goal is to select a subset of these types as our anchors. To do this, we used two techniques and compared them. For each technique, we calculate a score for a given node and select the $\mathcal{N}$ anchors with the highest scores. We performed experiments with different $\mathcal{N}$ values, and the results will be discussed in Section~\ref{ExpRes}.
\subsubsection{K-Nearest Neighbors Anchor Selection}
The motivation behind embedding is to generate a vector representation that preserves semantic relations. Thus, we expect nodes with similar structures in the graph to be located relatively close in the embedding space. We also assume that a node's close neighborhood has semantic meaning. Our assumption is as follows: given a node that exists in both graphs, if its structural behavior did not change between the graphs, its neighborhood will not change either. This means that a node with high similarity in its neighbor group should get a high score. Formally, for each node $v_{i}\in V_{0}\cap V_{1}$ we calculate the following KNN score:
$$S_{knn}(v_{i}) = \frac{|N^i_{0}\cap N^i_{1}|}{k}$$
Where $N^i_{0}$ and $N^i_{1}$ are node $i$'s $k$ nearest neighbors in $G_{0}$ and $G_{1}$ respectively. The closer this ratio is to 1, the greater the similarity between the versions for that specific node. Each neighborhood $N^i_{j}$ is the set of closest nodes in the respective graph's embedding space, using the Euclidean distance metric. We have experimented with other metrics, such as cosine similarity, and have achieved similar results.
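A minimal sketch of the score computation (assuming each embedding is given as a node-to-vector dictionary):
\begin{lstlisting}[language=Python]
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_scores(emb0, emb1, k=10):
    """S_knn for every node present in both embeddings."""
    def neighborhoods(emb):
        names = list(emb)
        vecs = np.stack([emb[n] for n in names])
        nn = NearestNeighbors(n_neighbors=k + 1).fit(vecs)
        _, idx = nn.kneighbors(vecs)  # column 0 is the node itself
        return {names[i]: {names[j] for j in row[1:]}
                for i, row in enumerate(idx)}

    n0, n1 = neighborhoods(emb0), neighborhoods(emb1)
    return {v: len(n0[v] & n1[v]) / k
            for v in emb0.keys() & emb1.keys()}
\end{lstlisting}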
\subsubsection{Graph Neighbors Similarity Anchor Selection}
The idea behind this technique is similar to the prior one, but from the original graph's point of view. Given a node that exists in both graphs, we extract its direct neighbors in each graph. We then look at the intersection of those groups and reward nodes with a large intersection. We also assumed that nodes with a high degree and a high similarity can be more important, and the experiments confirmed this. Formally, we define this measure as follows:
$$S_{GNS}(v_{i}) = \frac{|M^i_{0} \cap M^i_{1}|^2}{|M^i_{0} \cup M^i_{1}|}$$
Where $M^i_{0}$ and $M^i_{1}$ are node $i$'s immediate neighbor groups in $G_{0}$ and $G_{1}$ respectively. Formally,
$$M^i_{j} = \{v_{l} | (v_{i},v_{l}) \in E_{j} \vee (v_{l},v_{i}) \in E_{j}\}$$
Where
$$v_{l},v_{i} \in V_{j} , j \in \{0,1\}, i,l \in \{1, ... ,|V_{j}|\}$$
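A corresponding sketch over the raw CDNs (as \texttt{networkx} directed graphs):
\begin{lstlisting}[language=Python]
import networkx as nx

def gns_scores(g0, g1):
    """S_GNS for every node present in both CDNs."""
    def nbrs(g, v):  # immediate neighbors, ignoring edge direction
        return set(g.successors(v)) | set(g.predecessors(v))

    scores = {}
    for v in set(g0) & set(g1):
        m0, m1 = nbrs(g0, v), nbrs(g1, v)
        if m0 | m1:
            scores[v] = len(m0 & m1) ** 2 / len(m0 | m1)
    return scores
\end{lstlisting}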
\subsection{Classifier Learning}\label{ClsLearn}
In the previous sections, we demonstrated how we perform CDN extraction, embedding learning, and alignment. We also mentioned the use of static code metrics as additional features. The way we use both feature sets is straightforward: we concatenate the static code metrics and the embedding values into a unified feature set and use it as our training/test data, which is fed into a standard classifier. The classification goal is to predict whether a software module contains a bug or not. In the experiments, a Random Forest\cite{breiman_random_2001} classifier was used. Because Random Forest is based on multiple random decisions, each experiment was repeated 30 times and the results averaged, to reduce the variance introduced by this randomness. We chose the Random Forest classifier because of its popularity in Defect Prediction setups, and because it showed promising results in our experiments.
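The following sketch (with synthetic arrays standing in for the real feature sets) illustrates this pipeline, including the averaging over 30 runs:
\begin{verbatim}
# Sketch of the classification stage on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
d = 20 + 32                             # static metrics + embedding dims
X_old = rng.normal(size=(200, d))       # old version -> training set
y_old = rng.integers(0, 2, size=200)    # defect labels
X_new = rng.normal(size=(180, d))       # new version -> test set
y_new = rng.integers(0, 2, size=180)

aucs = []                               # average out forest randomness
for seed in range(30):
    clf = RandomForestClassifier(random_state=seed).fit(X_old, y_old)
    aucs.append(roc_auc_score(y_new, clf.predict_proba(X_new)[:, 1]))
print(np.mean(aucs))
\end{verbatim}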
\section{Experimental Setup}\label{ExpSetup}
To evaluate our methods, we experimented with real-world software applications, comparing our results against several baselines. First, we build a classifier that uses only static code metrics as features. Second, we build baseline techniques that use embeddings and alignment. For this purpose, we provide two models: the first uses the learned embeddings without performing alignment, which demonstrates the need for an alignment step; the second uses Linear Regression as a benchmark alignment technique.
The Linear Regression alignment is straightforward. Linear Regression learns a linear relation between a set of variables and a target variable. In our setup, we wish to represent the new version's embeddings as a linear function of the old version's embeddings. As discussed earlier, the idea is to learn a mapping $T$ between
$E_{1}$ and $E_{2}$. This can be broken down into $k$ independent linear regressions, where $k$ is the embedding dimension: given our anchor set, we build $k$ linear regression problems from $E_{1}$ to each of the dimensions of $E_{2}$. These regressions are constrained to have a zero intercept for simplicity. Our embedding alignment matrix $T$ is composed of the learned coefficients.
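A minimal sketch of this baseline (illustrative data; a single least-squares call solves all $k$ zero-intercept regressions at once):
\begin{verbatim}
# Sketch of the linear-regression alignment baseline.
import numpy as np

def align_linear(E_old, E_new):
    # Column j of T holds the coefficients of the j-th regression;
    # no intercept term is used.
    T, *_ = np.linalg.lstsq(E_old, E_new, rcond=None)
    return T

rng = np.random.default_rng(3)
E_old = rng.normal(size=(100, 32))          # anchor embeddings
E_new = E_old @ rng.normal(size=(32, 32)) \
        + 0.01 * rng.normal(size=(100, 32))
T = align_linear(E_old, E_new)              # (32, 32) mapping
\end{verbatim}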
In our experiments, we used static code metrics collected by Jureczko et al.\cite{jureczko_towards_2010}. Not all projects reported in this dataset have available source code, so we used only those for which we could locate the relevant sources. Also, only projects with more than one available version were used. We used data from a total of 9 projects and a total of 24 version pairs. Table \ref{table:3} describes the different projects and versions analyzed in the experiments. Version pairs were chosen based on the dataset, where consecutive versions were paired. The original dataset does not cover all versions, so version jumps are sometimes significant. We collected the source code of each version from the relevant project's website, including all peripheral code (tests, for example). This code is used during our CDN construction and provides additional knowledge on the structural dependencies in the core application code. During embedding generation, some software modules do not obtain an embedding, due to a lack of graph edges in the CDN. To make a fair analysis, we only keep modules that have both static code metrics and an embedding. In Table~\ref{table:4} we provide a summary of the dataset's statistics. For each project, we report average measures of CDN size (vertices and edges), the number of modules that have both an embedding and static code metrics ($|V \cap M_{DS}|$), and the average defect percentage in $|V \cap M_{DS}|$.
For the experiments, we used implementations available on Github\cite{github}. The \textit{Node2Vec} implementation is the one released by the authors\cite{node2vec}. The \textit{LINE} implementation is part of the OpenNE toolkit\cite{openne}. For all embedding methods we used an embedding dimension of 32 and the default parameters; specifically, we set $p$ and $q$ (for \textit{Node2Vec}) to 1.
To evaluate the performance of our methods and the baselines, we used two standard performance metrics: the Area Under the ROC Curve (AUC) and the F1-Score. The F1-Score is defined by
$$F_{1} = 2 \cdot \frac{precision \cdot recall}{precision + recall}$$
The results of the different setups were compared statistically using the Wilcoxon signed-rank test\cite{demsar_statistical_2006}.
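As a hedged illustration (the AUC values below are invented, not our measurements), such a paired comparison can be run with SciPy:
\begin{verbatim}
# Sketch of the paired significance test over per-project AUC scores.
from scipy.stats import wilcoxon

auc_static = [0.70, 0.65, 0.72, 0.68, 0.74, 0.66, 0.71, 0.69, 0.73]
auc_ours   = [0.73, 0.69, 0.74, 0.70, 0.76, 0.67, 0.74, 0.72, 0.75]
stat, p = wilcoxon(auc_ours, auc_static)
print(stat, p)        # small p -> the improvement is significant
\end{verbatim}
We performed experiments and compared the results on the following scenarios: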
\subsubsection{Static Code Metrics (Baseline)}
In this setup we simply trained a classifier only on the static code metrics (of the old version) and tested on the new version. This represents a baseline since most defect prediction studies rely on these features.
\subsubsection{Embedding with No Alignment (Baseline)}
In this setup we train a classifier on the static features together with the learned embeddings, but without performing the alignment process. This setup demonstrates that skipping the alignment usually degrades the performance of the model, because the embedding algorithm is not constrained to learn the same semantic meaning for each of the embedding dimensions (although a close solution might happen by chance).
\subsubsection{Embedding Alignment with Random Anchor Selection (Baseline)}
As another baseline, we evaluated the performance of Linear Regression and Orthogonal Transformation on a randomly selected set of anchors. The number of anchors selected was also modified to get a broader result.
\subsubsection{Embedding Alignment with KNN Anchor Selection}
For this scenario, we evaluated the performance of our KNN Anchor Selection algorithm. We experimented with different numbers of anchors and numbers of nearest neighbors to take into account. The experiments were done using both an Orthogonal Transformation and Linear Regression and on both embedding techniques.
\subsubsection{Embedding Alignment with Graph Neighbors Similarity Anchor Selection}
For this scenario, we evaluated the performance of our Graph Neighbors Similarity Anchor Selection algorithm. In this scenario, we modified the number of anchors selected. The experiments were done using both an Orthogonal Transformation and Linear Regression and on both embedding techniques.
\begin{table}[ht]
\begin{tabularx}{\linewidth}{||Y|Y|Y|| }
\hline
Project Name & Project Description & Version Pairs\\
\hline\hline
Apache Camel & Integration Framework & (1.0,1.2) (1.2,1.4) (1.4,1.6)\\
\hline
JEdit & Text Editor & (3.2,4.0) (4.0,4.1) (4.1,4.2) (4.2,4.3)\\
\hline
Apache Log4J & Logging Library & (1.0,1.1) (1.1,1.2)\\
\hline
Apache Lucene & Information Retrieval Library & (2.0,2.2) (2.2,2.4)\\
\hline
Apache POI & Microsoft Office processing library & (1.5,2.0) (2.0,2.5) (2.5,3.0)\\
\hline
Apache Synapse & Enterprise Service Bus & (1.0,1.1) (1.1,1.2)\\
\hline
Apache Velocity & Template Engine & (1.4,1.5) (1.5,1.6)\\
\hline
Apache Xalan & XSLT and XPath implementation & (2.4,2.5) (2.5,2.6) (2.6,2.7)\\
\hline
Apache Xerces & XML Processing Library & (init,1.2) (1.2,1.3) (1.3,1.4)\\
\hline
\end{tabularx}
\caption{Projects Analyzed In Our Experiment}
\label{table:3}
\end{table}
\begin{table}[ht]
\begin{tabularx}{\linewidth}{||Y|Y|Y|Y|Y|| }
\hline
Project Name & Average $|V|$ & Average $|E|$ & Average $|V \cap M_{DS}|$ & Average Defect Percentage \\ \hline
Apache Camel & 1312 & 4856 & 664 & 19.7 \\ \hline
JEdit & 662 & 2449 & 340 & 18.7 \\ \hline
Apache Log4J & 225 & 626 & 133 & 51.03 \\ \hline
Apache Lucene & 1069 & 5197 & 257 & 55.5 \\ \hline
Apache POI & 814 & 3282 & 341 & 50 \\ \hline
Apache Synapse & 414 & 1470 & 209 & 23.6 \\ \hline
Apache Velocity & 356 & 1311 & 209 & 59 \\ \hline
Apache Xalan & 979 & 4535 & 805 & 52.2 \\ \hline
Apache Xerces & 514 & 1940 & 336 & 35.4 \\ \hline
\end{tabularx}
\caption{Dataset Statistics}
\label{table:4}
\end{table}
\section{Experimental Results}\label{ExpRes}
During our experiments, many different techniques and setups were evaluated. We wish to provide a few points of view on the different results, so this section will be divided into a few subsections that each analyze a different aspect of the results.
\subsubsection{Best Results for Each Embedding Algorithm}
As described in an earlier section, we evaluated two embedding techniques, two anchor selection techniques, and two alignment techniques. In this section, we provide the results of our best model for each of the embedding techniques. For the \textit{LINE} embedding, we achieved the best results (in terms of AUC) using the Orthogonal Transformation alignment together with Graph Neighbors Similarity anchor selection. These results were statistically significant with $p = 0.01$, compared to the static metrics model. For \textit{Node2vec}, we again achieved the best results (in terms of AUC) with the Orthogonal Transformation, but using KNN anchor selection instead. The \textit{Node2vec} results were also statistically significant, with $p < 0.043$, compared to the static metrics model. Figure \ref{fig:best} shows the AUC scores for the two methods just described and three baseline methods. The \textit{Static} method is simply a classifier trained on static code metrics. The \textit{Not Aligned Embedding} method is a classifier trained on both the static code metrics and the embeddings, but without aligning the embeddings. The \textit{Linear Regression} method uses the same embeddings but aligns them with Linear Regression. It is also interesting to note that for the Linear Regression mapping, we achieved the best result when we used all code modules available in both versions as our anchors; this result is discussed in a later section. The results show that using CDN data provides an improvement in AUC performance. We also measured the F1-score of the different methods: in most cases it was within $\pm0.5\%$ of the baseline. The results further show that Orthogonal Transformation alignment performs better than Linear Regression, although this difference was not statistically significant. The not-aligned model generally yields the worst results, although in a few cases it outperforms the others; as discussed earlier, this appears to be an anomaly that can occur when the independently learned embeddings happen to be close by chance.
Another interesting phenomenon we can see from the results in Figure \ref{fig:best} is that the embedding techniques we used are somewhat complementary. In some projects, the results are similar, but there are projects for which one embedding technique is better than the other and vice versa. One possible explanation can be that different embedding techniques (and parameters) extract different information, specifically local vs. global information. Both local and global information have been shown to be relevant to defect prediction \cite{zimmermann_predicting_2008-1}. This difference in performance led us to create a meta-model using the two models we described in this section, and we will discuss it and its results in the next section.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{best.png}
\caption{AUC results of our best methods versus three baseline models}
\label{fig:best}
\includegraphics[width=\textwidth]{meta_AUC.png}
\caption{AUC results of the joint model versus the individual models}
\label{fig:meta}
\end{figure*}
\subsubsection{LINE and Node2vec joint model}
As described in the previous section, the results of the \textit{LINE} model and the \textit{Node2vec} model complement each other. To exploit this, we built a Logistic Regression model on top of the individual models. We extracted from each classifier the probability of a defect and used the Logistic Regression model to calculate a better probability estimate. On average, this model improves the individual models' results by about 1\%. The results are shown in Figure \ref{fig:meta}. A representative ROC curve for the meta-model versus the static metrics model is shown in Figure \ref{fig:roc}. We also checked the results for statistical significance and concluded that they are significant with $p < 0.002$, compared to the static metrics model. They were also statistically significant when compared to the Linear Regression results, with $p < 0.001$.
\subsubsection{KNN Anchor Selection Analysis}
As described earlier, KNN anchor selection has two parameters: the number of anchors to select and $K$, the number of nearest neighbors compared for each candidate anchor. From the experiments, the number of anchors appeared to be the more significant parameter, especially when using the Orthogonal Transformation embedding alignment. Figure \ref{fig:KNN} shows how the number of anchors impacts the classification performance for all the embedding techniques. For these experiments, $K$ was fixed at 10; larger values of $K$ gave similar or worse results in all experiments. It can be seen that using this setup, \textit{Node2vec} achieves the best performance.
\begin{figure}[ht]
\centering
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=8cm]{KNN_AUC.png}
\caption{}
\label{fig:KNN_AUC}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=8cm]{KNN_F1.png}
\caption{}
\label{fig:KNN_F1}
\end{subfigure}
\caption{KNN anchor selection method AUC (\ref{fig:KNN_AUC}) and F1 (\ref{fig:KNN_F1}) results for all embeddings }
\label{fig:KNN}
\end{figure}
\subsubsection{Graph Similarity Anchor Selection Analysis}
Similarly to the KNN analysis, we evaluated the impact of changing the number of anchors. The results of these experiments are shown in Figure \ref{fig:GraphSim}. In terms of AUC performance, it can be seen that \textit{LINE} achieves better results than the other embedding techniques. This result is interesting because of how the \textit{LINE} embedding works: as described earlier, the main idea of \textit{LINE} is to model the neighborhoods of nodes, which is precisely what this anchor selection technique examines. We try to find the nodes in both graphs that have the most similar neighborhoods, so it seems reasonable that this technique works best with \textit{LINE}, which in this setup achieves the best overall AUC performance.
\begin{figure}[ht]
\centering
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=8cm]{Graph_AUC.png}
\caption{}
\label{fig:GS_AUC}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=8cm]{Graph_F1.png}
\caption{}
\label{fig:GS_F1}
\end{subfigure}
\caption{Graph similarity anchor selection method AUC (\ref{fig:GS_AUC}) and F1 (\ref{fig:GS_F1}) results for all embeddings }
\label{fig:GraphSim}
\end{figure}
\begin{figure}[ht]
\includegraphics[width=\linewidth]{ROC.png}
\caption{ROC curve for the Meta Model vs Static Metrics model on the (Xerces-init, Xerces-1.2) version pair}
\label{fig:roc}
\end{figure}
\begin{figure}[ht]
\includegraphics[width=\linewidth]{lr-ot.png}
\caption{Average AUC for randomly selected anchors}
\label{fig:LR_OT}
\end{figure}
\subsubsection{Linear Regression and Orthogonal Transformation Comparison}
During this work, we used Linear Regression as a baseline method for the alignment process. Because Linear Regression learns a general linear transformation, some properties of the original space might be modified. For example, angles between vectors in the original domain can change after the transformation. This change in angles does not occur when an Orthogonal Transformation is used. Nevertheless, Linear Regression can still be a good approximation, and our experiments show this.
In the section on the best model for each embedding, we presented the performance of Linear Regression as an alignment technique for the \textit{LINE} embedding. The results with the \textit{Node2vec} embedding were very similar, though slightly worse. It can be seen that the performance of Linear Regression is usually worse than or similar to that of the Orthogonal Transformation. These results show that Linear Regression distorts some of the information available in the embedding, whereas an Orthogonal Transformation preserves more of it, hence achieving better results.
One more experiment we performed compares both alignment techniques in a random anchor selection setup. We performed 30 experiments for every anchor count, sampling randomly from the set of software modules that exist in both versions. Figure \ref{fig:LR_OT} shows the results of this experiment, for which we used the \textit{Node2vec} embedding. The results show an interesting phenomenon: as the number of anchors increases, Linear Regression achieves better results. On the other hand, randomly selecting anchors when using an Orthogonal Transformation performs noticeably worse than our selection techniques. It seems that Linear Regression achieves a better alignment as more data points are added.
\section{Discussion}
The results presented in the previous section show an improvement over the basic model, which is based on static code metrics. From analyzing the results, one can see that the improvement is not uniform: some projects exhibit better performance and some worse. As discussed earlier, the different embedding techniques appeared to be complementary, and when combined in a joint model, the overall performance improved. There still seems to be room for improvement, because in some cases the individual models beat the meta-model's performance. Analyzing these phenomena is left for future work.
As mentioned before, we achieved the best results for different embedding techniques using different anchor selection techniques. This is an interesting and somewhat surprising result: it suggests that different embeddings induce different notions of similarity and closeness. Because of this difference, there appears to be a connection between an embedding algorithm's notion of closeness and the anchor selection technique that works best with it. For this reason, a future direction will be to explore new anchor selection methods and match them to suitable embedding techniques.
\section{Threats to Validity}
Our work might suffer from threats to its validity. We discuss them briefly.
\subsection{Threats to Internal Validity}
In our work, we measured the performance of our techniques using the widely used AUC and F1-Score. First, other performance measures exist that we did not use, and they might rank the methods differently; for this reason, our results might not be relevant to some applications. Second, we observed a slight decrease in F1-Score and believe it to be negligible; in some scenarios it might not be, and our conclusions might then be mistaken. Nevertheless, this difference was not statistically significant.
\subsection{Threats to External Validity}
We have used the dataset collected by Jureczko et al. in our evaluations. This is a widely used dataset and contains several applications from different domains and of different sizes. It is possible that we would obtain different results on a different dataset. Moreover, this dataset dates back a few years and might not reflect recent changes in the software development community. Another issue is that we analyzed Java applications, and results might differ for other programming languages.
\section{Conclusion And Future Work}
In this work, we aimed at improving the results of CVDP using Class Dependency Network data. For this purpose, we developed a framework for embedding and aligning CDN data and used its results as inputs for a classifier. We also suggested two anchor selection techniques and used them in different embedding and alignment setups. We performed extensive experiments using two embedding techniques on a publicly available dataset. Our main contributions and findings are:
\begin{enumerate}
\item As previously shown, CDN data is beneficial for defect prediction. We showed that this is true for the CVDP scenario as well.
\item We developed a framework for the generation and alignment of embeddings of CDNs across versions.
\item We performed multiple experiments and showed that our framework achieves statistically better performance than the state-of-the-art baseline.
\end{enumerate}
For future work, we are considering the following directions:
\begin{enumerate}
\item Experiment with more embedding techniques and different parameter settings. These settings can provide more local vs. global information into the embedding process.
\item Experiment with new datasets, in order to provide a broader performance measure.
\item Try new approaches for CDN embedding that take into account labels and weights.
\item Try to use multiple old versions (MOV) in the learning process, instead of just a single old version (SOV).
\end{enumerate}
\section{Introduction}
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=13cm]{4423fig01.eps}
\caption{$R$-band VLT image of SDSS J0924+0219, where objects are labeled
following Inada et al. (\cite{inada}). The stars a, c, d, and e are
used to compute the PSF spectrum (see text). Only stars a, d and e
are used to derive the relative flux calibration between each MOS
mask. The field of view is $3.4 \arcmin \times 3.4 \arcmin$, North
is to the top, East to the left.}
\label{field}
\end{center}
\end{figure*}
COSMOGRAIL is a multi-site optical monitoring campaign of lensed
quasars. Following the original work by Refsdal (\cite{Refsdal64}),
its goal is to measure, with an accuracy close to one percent
(Eigenbrod et al. \cite{eigenbrod}), the so-called time delay between
the images of most gravitationally lensed quasars. These time delays
are used in combination with lens models and detailed observations of
individual systems to infer the value of the Hubble parameter H$_0$,
independent of any standard candle (e.g., reviews by Courbin et al.
\cite{courbin2002}, Kochanek \cite{koko_saasfee}).
The present work is devoted to the quadruply imaged quasar
SDSS J0924+0219\ (Inada et al. \cite{inada}) at z = 1.524, discovered in the course of the
Sloan Digital Sky Survey (SDSS). This object is particularly
interesting because of its anomalous image flux ratios, the origin of
which is unclear. It has been argued that the faintest image of SDSS J0924+0219,
which is located at a saddle point of the arrival-time surface, could
be demagnified either from star microlensing (Schechter et al.
\cite{Schech2004}, Keeton et al. \cite{keeton2005}) or subhalos
microlensing (Kochanek \& Dalal, \cite{koko2004}).
We analyse here our deep optical spectra of SDSS J0924+0219\ obtained with the
ESO Very Large Telescope (VLT). These spectra are used to: 1- measure
the redshift of the lensing galaxy, 2- estimate the spectral
variability of the quasar, 3- measure the flux ratio between images A
and B of SDSS J0924+0219, in the continuum and the broad emission lines. Hubble
Space Telescope (HST) ACS and NICMOS images from the STScI archives
are deconvolved using the MCS algorithm (Magain et al.
\cite{magain98}) which unveils two Einstein rings. One of the rings
corresponds to the host galaxy of the quasar source and is used to
constrain the lens models. The second one is probably due to a
star-forming region in the host galaxy of the quasar source or to
another unrelated object.
\section{VLT Spectroscopy}
\subsection{Observations}
Our spectroscopic observations of SDSS J0924+0219\ are part of a low dispersion
spectroscopic survey aimed at measuring all unknown lens redshifts.
They are acquired with the FOcal Reducer and low dispersion
Spectrograph (FORS1), mounted on the ESO Very Large Telescope, used the
MOS mode (Multi Object Spectroscopy) and the high resolution
collimator. This configuration allows the simultaneous observation of
a total of 8 objects over a field of view of
$3.4\arcmin\times3.4\arcmin$ with a pixel scale of $0.1\arcsec$
(Fig.~\ref{field}). The G300V grism, used in combination with the
GG435 order sorting filter, leads to the useful wavelength range 4450
\AA\ $ <\lambda< $ 8650 \AA\ and to a scale of $2.69$ \AA\ per pixel
in the spectral direction. This setup has a spectral resolution
$R=\lambda/\Delta \lambda \simeq 300$ at the central wavelength
$\lambda=5900$ \AA, which translates in velocity space to $\Delta
v=\textrm{c} \Delta \lambda / \lambda \simeq 1000$ km s$^{-1}$.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=8.8cm]{4423fig02.eps}
\caption{$R$-band images of SDSS J0924+0219. A short 30 sec
exposure is shown on the left, where the quasar images A, B, C and D
as well as the lensing galaxy, are indicated. The seeing is
$0.37\arcsec$ in this image and the pixel scale is $0.10\arcsec$.
The position of the $0.70\arcsec$ slitlets is also indicated. They
correspond to three epochs of observations with very different
seeings (see Table~\ref{refer}). The slit has not moved at all
between the exposures, even when taken 15 days apart.}
\label{slits}
\end{center}
\end{figure}
\begin{table}[t!]
\caption[]{Journal of the VLT spectroscopic observations of SDSS J0924+0219. The
seeing is measured on the spectrum of the PSF stars.
The exposure time is 1400 s for each of the 6 epochs.}
\label{refer}
\begin{flushleft}
\begin{tabular}{cccccc}
\hline
\hline
ID & Date & Seeing $[\arcsec]$ & Airmass & Weather \\
\hline
1 & 14/01/2005 & 0.66 & 1.188 & Photometric\\
2 & 14/01/2005 & 0.59 & 1.150 & Photometric\\
3 & 14/01/2005 & 0.46 & 1.124 & Light clouds\\
4 & 01/02/2005 & 0.83 & 1.181 & Photometric\\
5 & 01/02/2005 & 0.97 & 1.146 & Light clouds\\
6 & 01/02/2005 & 0.84 & 1.126 & Light clouds\\
\hline
\end{tabular}
\end{flushleft}
\end{table}
The slitlets of the MOS mask are all 19\arcsec\ long and $0.7\arcsec$
wide, a width that both avoids lateral contamination from quasar image C
and matches well the seeing values during the observations. Four
slits are centered on the foreground stars a, c, d, and e, while a fifth
is centered on images A and B of SDSS J0924+0219, after rotation of the mask
to a suitable Position Angle (PA) (Fig. \ref{slits}).
the stars are used both to compute the reference Point Spread Function
(PSF) needed for the deconvolution and to carry out a very accurate
relative flux calibration. ``Through-slit'' images acquired just
before exposures \# 1, \# 3, \# 4 in order to check the mask
alignment are displayed in Fig.~\ref{slits}.
\subsection{Reduction and Deconvolution}
The spectra are bias subtracted and flat-fielded using
IRAF\footnote{IRAF is distributed by the National Optical Astronomy
Observatories, which are operated by the Association of Universities
for Research in Astronomy, Inc., under cooperative agreement with the
National Science Foundation.}. The flat fields for each slitlet are
created from 5 dome exposures, using cosmic ray rejection. They are
normalized by averaging 60 lines along the spatial direction,
rejecting the 20 highest and 20 lowest pixels, then block replicating
the result to match the physical size of the individual flat fields.
Wavelength calibration is obtained from numerous emission lines in the
spectrum of Helium-Argon lamps. The wavelength solution is fitted in
two dimensions to each slitlet of the MOS mask. The fit uses a
fifth-order Chebyshev polynomial along the spectral direction and a
third-order Chebyshev polynomial along the spatial direction.
Each spectrum is interpolated following this fit, using a cubic
interpolation. This procedure ensures that the sky lines are well
aligned with the columns of the CCD after wavelength calibration. The
wavelength solution with respect to the reference lines is found to be
very good, with an rms scatter better than $0.2$ \AA\ for all spectra.
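The following sketch (with synthetic arc-line positions, not the actual FORS1 data) illustrates such a two-dimensional Chebyshev fit with numpy:
\begin{verbatim}
# Sketch of a 2-D Chebyshev wavelength solution on synthetic arc lines.
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(7)
x = rng.uniform(-1, 1, 400)      # normalized pixel coord., dispersion
y = rng.uniform(-1, 1, 400)      # normalized pixel coord., spatial
lam = 6550.0 + 2100.0 * x + 1.5 * y + rng.normal(0, 0.2, 400)

V = C.chebvander2d(x, y, [5, 3])              # degree 5 x 3 basis
coef, *_ = np.linalg.lstsq(V, lam, rcond=None)
rms = np.sqrt(np.mean((V @ coef - lam) ** 2))  # ~0.2 A residual scatter
\end{verbatim}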
The sky background is then removed by fitting a
second-order Chebyshev polynomial in the spatial direction to the
areas of the spectrum that are not illuminated by the object.
Finally, we perform the cosmic ray removal as follows. First, we shift
the spectra in order to align them spatially (this shift is only a few
tenths of a pixel). Second, we create a combined spectrum for each
object from the 6 exposures, removing the 2 lowest and 2 highest pixels,
after applying appropriate flux scaling. The combined spectrum
obtained in that way is cosmic ray cleaned and used as a reference
template to clean the individual spectra. We always check that neither
the variable seeing nor the variability of the quasar causes
artificial loss of data pixels.
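A minimal sketch of such a min/max-clipped combine (array shapes and values are illustrative):
\begin{verbatim}
# Sketch of the clipped combine used as a cosmic-ray template.
import numpy as np

def clipped_combine(stack, n_low=2, n_high=2):
    # stack: (n_exposures, ny, nx); mean after dropping the extremes
    # at each pixel.
    s = np.sort(stack, axis=0)
    return s[n_low:stack.shape[0] - n_high].mean(axis=0)

rng = np.random.default_rng(5)
stack = rng.normal(100.0, 5.0, size=(6, 64, 512))   # six exposures
stack[2, 30, 200] += 5000.0                         # inject a cosmic ray
template = clipped_combine(stack)                   # hit is rejected
\end{verbatim}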
Even though the seeing on most spectra is good, the lensing galaxy is
close enough to the brightest quasar images A and B to be affected by
significant contamination from the wings of the PSF. For this reason,
the spectral version of MCS deconvolution algorithm (Magain et al.
\cite{magain98}, Courbin et al. \cite{courbin}) is used in order to
separate the spectrum of the lensing galaxy from the spectra of the
quasar images. The MCS algorithm uses the spatial information
contained in the spectrum of a reference PSF, which is obtained from
the slitlets positioned on the four isolated stars a, c, d, and e
(Fig. \ref{field}). The final normalized PSF is a combination of the
four PSF spectra. The six individual spectra are deconvolved
separately, extracted, flux calibrated as explained in Section
\ref{section:flux} and combined. The spectrum of the lensing galaxy
is extracted from the ``extended channel'' of the deconvolved data,
while the spectra of the quasar images are extracted from the
``point-source channel'' (see Courbin et al. \cite{courbin}).
\subsection{Flux Calibration}
\label{section:flux}
Our absolute flux calibration is based on the spectrum of the
spectrophotometric standard star Feige 66 taken on the night of 2005
January 16. The response function of the grism is determined for this
single epoch. It is cross calibrated using stars observed in each MOS
mask in order to obtain a very accurate calibration across all epochs.
The spectra of four stars are displayed in Fig.~\ref{PSF_spectra},
without any deconvolution, using a 4\arcsec\ aperture for
extraction. We find significant differences in flux between the six
epochs that need to be corrected for. The main causes for these
differences are variable seeing and variable extinction due to thin
cirrus during some of the observations (Table~\ref{refer}). The
effect of mask misalignment is excluded, as can be seen from the
image-through-slit of Fig.~\ref{slits}.
Assuming that the intrinsic flux of the foreground stars has not varied
between the six exposures, and taking the data \# 1 of
Table~\ref{refer} as a reference, we derive the flux ratio between
this reference epoch and the six other dates, for each star. These
curves, fitted with a third-order polynomial, are shown in Fig.
\ref{PSF_ratio}. The corrections computed in this way are found to be
very stable across the mask: the curves obtained for two different
stars only showed slight oscillations with an amplitude below
2\%. This is also the accuracy of the flux correction between
different epochs. A mean correction curve is then computed for each epoch
from all stars, except star c which is much fainter than the others,
and is applied to the deconvolved spectra of the quasars and of the
lensing galaxy.
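Schematically (the band edges and ratio values below are placeholders, not those of the actual reduction), the correction amounts to:
\begin{verbatim}
# Sketch of the epoch-to-epoch flux correction from a star's flux ratio.
import numpy as np

wave = np.linspace(4450.0, 8650.0, 1562)            # wavelength grid [A]
rng = np.random.default_rng(4)
ratio = 1.1 + 1e-5 * (wave - 6000.0) + 0.01 * rng.normal(size=wave.size)

# mask regions of strong atmospheric absorption before fitting
telluric = ((wave > 6860) & (wave < 6920)) | ((wave > 7580) & (wave < 7700))
coeffs = np.polyfit(wave[~telluric], ratio[~telluric], deg=3)
correction = np.polyval(coeffs, wave)   # applied to the quasar spectra
\end{verbatim}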
\begin{figure}[t!]
\begin{center}
\includegraphics[width=8.8cm]{4423fig03.eps}
\caption{The spectra of the foreground stars. The index on the right
of each spectrum indicates the exposure number, following
Table~\ref{refer}. Flux differences are mainly due to the presence
of light clouds on observation dates \# 3, \# 5 and \# 6.}
\label{PSF_spectra}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=8.8cm]{4423fig04.eps}
\caption{Flux ratios between Date \# 1 and the 5 others, along with the
third-order polynomial fits. We use the ratios of the 3 stars: a, d
and e to determine the mean correction applied to the quasar. Star
c, which is much fainter than the others, is excluded from the
final calibration. The (small) parts of the spectra with strong
atmospheric absorption are masked during the polynomial fit. The
ratios computed using stars a, d and e agree to within 2\%
peak-to-peak.}
\label{PSF_ratio}
\end{center}
\end{figure}
\section{Extracted Spectra}
\subsection{The Lensing Galaxy}
The six deconvolved spectra of the lensing galaxy are extracted,
combined, and smoothed with a 5 \AA\ box (2 pixels). Fig. \ref{lens}
shows the final one-dimensional spectrum, where the Ca~II H \& K
absorption lines are obvious, as well as the $4000$ \AA $\,$ break,
the G-band typical of CH absorption, the Mg band, and the H$\beta$
and Fe~II absorption lines. These features yield a mean redshift of
z$_{\rm lens}$$=0.394 \pm 0.001$, where the 1-$\sigma$ error is the standard
deviation between all the measurements on the individual lines,
divided by the square root of the number of lines used. We do not
consider the $4000$ \AA $\,$ break in these calculations. This
spectroscopic redshift falls very close to the photometric estimate of
$z=0.4$ by Inada et al. (\cite{inada}), and agrees with the
spectroscopic redshift of Ofek et al.~(\cite{ofek05}). In addition,
the absence of emission lines confirms a gas-poor early-type galaxy.
No trace of the quasar broad emission lines is seen in the spectrum of
the lensing galaxy, indicative of an accurate decomposition of the
data into the extended (lens) and point source (quasar images)
channels.
\subsection{The Quasar Images}
The mean spectra of quasar images A and B are shown in
Fig.~\ref{quasars}, smoothed with a $5$ \AA $\,$ box. The Al~III],
Si~III], C~III], [Ne~IV] and Mg~II broad emission lines are clearly
identified. A Gaussian fit to these 5 lines yield a mean redshift of
$1.524 \pm 0.001$ for image A and $1.524 \pm 0.002$ for the fainter
image B. The standard deviation between the fits to the individual
lines, divided by the square root of the number of lines used, is
taken as the error bar. These results are in excellent agreement with
the values obtained by Inada et al. (\cite{inada}), as well as the
redshift from the SDSS database, who both report $z=1.524$.
\subsection{Variability of the Quasar Images}
The spectra of quasar images A and B are shown in Fig.~\ref{micro} for
2005 January 14 and February 1. These are the mean of the three
spectra obtained on each date, smoothed with a 5 \AA\ box. Although
the continuum shows little variation (only B has faded slightly
between our two observing dates), there are obvious changes in the
broad emission lines of each quasar image. In image A, the red wing
of the Mg~II emission line has brightened, as well as the C~II]
emission line, while in image B, the center of the C~III] emission
line has become double peaked and the Fe~II feature redwards of Mg~II
has faded. A zoom on these lines is shown in Fig.~\ref{line_zoom}.
The line variations are already visible before averaging the 3
individual spectra at a given date and in the not-so-blended quasar
images of the raw un-deconvolved spectra. We can therefore safely
rule out any deconvolution artefacts due to PSF variations in the MOS
mask. In addition, the residual images after deconvolution (see
Courbin et al. 2000 for more details) are particularly good,
indicative of little or no PSF variations across the slitlet mask.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=8.8cm]{4423fig05.eps}
\caption{Spectrum of the lensing galaxy in SDSS J0924+0219, as obtained by
combining the data for the 6 epochs, i.e., a total integration time
of 8400s. The template spectrum of an elliptical galaxy at z=0.394 is
also shown for comparison (Kinney et al. \cite{kinney}). All main
stellar absorption lines are well identified. Prospects for a future
determination of the galaxy's velocity dispersion are therefore
excellent.}
\label{lens}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=8.8cm]{4423fig06.eps}
\caption{Spectra of the quasar images A and B of SDSS J0924+0219, as extracted
from the deconvolved data. This figure shows the mean of the
spectra taken at the 6 epochs, after the flux calibration described
in Section~\ref{section:flux}.}
\label{quasars}
\end{center}
\end{figure}
\subsection{Image flux ratio}
Keeton et al. (\cite{keeton2005}) have recently observed that the flux
ratio between the images of SDSS J0924+0219\ is different in the continuum and in
the broad emission lines. In their slitless HST/ACS observations, the
flux ratio between A and B is 2.60 in the emission lines, and about
3.5 in the continuum, i.e., the two ratios differ by about 30\%.
We plot the flux ratio between quasar image B and A as a function of
wavelength at a given date (top panels in Figs.
\ref{BoverA_Jan} and \ref{BoverA_Feb}). This ratio is close to flat,
with some small differences in the broad emission lines.
We construct the spectrum $\alpha$B$+\beta$ and adjust the
parameters using a linear least squares fit so that it matches the
spectrum of quasar A. The result is shown in the middle panels of
Figs.~\ref{BoverA_Jan} and \ref{BoverA_Feb}. Almost no trace of the
emission lines are seen in the difference spectra in the bottom panels
of the figure. Our spectra indicate no strong differential
amplification of the continuum and broad emission lines in the
components A and B of SDSS J0924+0219, and the small residual seen in the
emission lines in the bottom panels of Figs.~\ref{BoverA_Jan} and
\ref{BoverA_Feb} are an order of magnitude smaller than reported in
Keeton et al. (\cite{keeton2005}).
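A minimal numerical sketch of this fit (synthetic spectra with a single Mg~II-like line; $\alpha$ and $\beta$ are chosen here to mimic the values quoted in Fig.~\ref{BoverA_Jan}):
\begin{verbatim}
# Sketch of the alpha*B + beta least-squares fit between two spectra.
import numpy as np

rng = np.random.default_rng(6)
wave = np.linspace(4450.0, 8650.0, 1500)
line = np.exp(-0.5 * ((wave - 7065.0) / 30.0) ** 2)  # Mg II at z=1.524
spec_B = 1.0 + 0.5 * line + rng.normal(0, 0.02, wave.size)
spec_A = 2.80 * spec_B + 0.37 + rng.normal(0, 0.02, wave.size)

alpha, beta = np.polyfit(spec_B, spec_A, deg=1)
residual = spec_A - (alpha * spec_B + beta)   # the bottom-panel curve
print(round(alpha, 2), round(beta, 2))        # ~2.80, ~0.37
\end{verbatim}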
In the 15 days separating the observations, $\alpha$ has changed by
only 2\%. For both dates the residuals of the fit are almost
perfectly flat, indicating no continuum change. Only asymmetric
changes in the emission lines are seen.
Finally, the flat flux ratio between image A and B shows that there is
no significant extinction by interstellar dust in the lensing galaxy.
\subsection{Intrinsic variations vs. microlensing}
\label{subsec:micro}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=8.8cm]{4423fig07.eps}
\caption{The spectra of images A and B on 14 January and 1
February 2005 show a stable continuum for both images, but the
broad emission lines do vary on a time-scale of two weeks (see
Fig.~\ref{line_zoom}).}
\label{micro}
\end{center}
\end{figure}
It is hard, with only two observing points, to infer the origin of the
spectral variations observed in SDSS J0924+0219. Nevertheless, we see rapid (15
days) and asymmetric changes in the emission lines of the quasar
images, and no strong changes in the continuum. Intrinsic variations
of quasars are usually stronger in the continuum than in the emission
lines, and they are also longer than the two-week span we observe
here. Such rapid variations due to microlensing have been seen in at
least one other lensed quasar: HE~1104-1805 (Schechter et
al.~\cite{Schechter03}). SDSS J0924+0219\ might be a second such case.
Microlensing variability is supported by the photometric broad-band
data by Kochanek et al. (\cite{kokoIAU}), showing that A and B have
very different light curves that are hard to match even after shifting
them by the expected time delay. However, microlensing usually acts
on the continuum rather than on the emission lines of quasar spectra,
because of the much smaller size of the continuum region.
Differential amplification of the continuum relative to the emission
lines, as observed by Keeton et al.~(\cite{keeton2005}), would be a
strong support to the microlensing hypothesis. Our spectra do not show
such a differential amplification, but we note that our wavelength
range is very different from that of Keeton et al.~(\cite{keeton2005})
and that they observed in May 2005, i.e., 3 months after our
observations.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=8.8cm]{4423fig08.eps}
\caption{Enlargements of Fig. \ref{micro} comparing the
broad emission lines of images A and B on 14 January (solid curve)
and 1 February 2005 (dotted curve). Obvious variations are seen in
the red wing of the Mg~II in image A, in the center of the C~III] in
image B. The Fe~II feature redwards of Mg~II in image B has also
changed by 20\%. These variations are asymmetric about the center
of the lines. The asymmetry is different in C~III] and Mg~II.}
\label{line_zoom}
\end{center}
\end{figure}
Assuming microlensing is the correct interpretation of the data, its
strength depends upon the scale-size of the source, with smaller
sources being more susceptible to large magnification (e.g.
Wambsganss \& Paczynski \cite{wambsganss}). The continuum emitting
region and the broad-line region (BLR) of a quasar can appear small
enough to undergo significant magnifications. The limiting source
size for microlensing to occur is given by the Einstein radius
projected onto the source plane. This means that only structures in
the source with sizes comparable to or smaller than this radius will
experience appreciable amplification. The Einstein radius, projected
onto the source plane for microlenses with masses in the range $0.1
\,M_{\odot}< M < 10 \,M_{\odot}$, is $7 < R_E < 70$ light-days for a cosmology
with $\Omega_m=0.3$, $\Omega_{\Lambda}=0.7$ and h$_{100}$=0.65.
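As a hedged cross-check of these numbers (using astropy and the cosmology just quoted; the point-mass formula $R_E=\sqrt{(4GM/c^2)\,D_{\rm os}D_{\rm ls}/D_{\rm ol}}$ is standard):
\begin{verbatim}
# Sketch: source-plane Einstein radius of a point-mass microlens.
import numpy as np
import astropy.units as u
from astropy.constants import G, c, M_sun
from astropy.cosmology import LambdaCDM

cosmo = LambdaCDM(H0=65, Om0=0.3, Ode0=0.7)
z_l, z_s = 0.394, 1.524
D_ol = cosmo.angular_diameter_distance(z_l)
D_os = cosmo.angular_diameter_distance(z_s)
D_ls = cosmo.angular_diameter_distance_z1z2(z_l, z_s)

for m in np.array([0.1, 1.0, 10.0]):
    R_E = np.sqrt(4 * G * m * M_sun / c**2 * D_os * D_ls / D_ol)
    print(m, (R_E / (c * u.day)).decompose())  # ~7, 22, 70 light-days
\end{verbatim}
which reproduces the 7--70 light-day range quoted above.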
Kaspi et al. (\cite{kaspi}) derived sizes for active galactic nuclei
from reverberation mapping of the Balmer lines. As a function of
intrinsic luminosity, they found a global scaling of the BLR
ranging from approximately $1$ to $300$ light days, which
compares well with the Einstein radius of the microlenses in the
lensing galaxy of SDSS J0924+0219.
The observations also reveal that the broad emission lines and the
continuum do not vary on the same time scale. Indeed, the continuum of
image A remains constant over the 15-day time span of the
observations, while the broad emission lines vary.
Detailed microlensing simulations by Lewis \& Ibata (\cite{lewis})
show that the correlation between the magnification of the BLR and
the continuum source exists, but is weak. Hence variations in the
broad emission lines need not be accompanied by variations in the
continuum. This argument has been confirmed through observations of
other gravitationally lensed quasars (Chartas et al. \cite{chartas},
Richards et al. \cite{richards}).
Another observational fact that deserves some explanation is the
asymmetric amplification of the broad emission lines (see
Fig. \ref{line_zoom}). Such an amplification occurs for the C~II] and
Mg~II emission lines in the spectra of image A. The red wings of
these lines are significantly more amplified than the blue ones. An
explanation for this is given by Abajas et al. (\cite{abajas}) and
Lewis \& Ibata (\cite{lewis}), who show that emission lines can be
affected by substantial centroid shifts and modification of the line
profile. Asymmetric modification of the line profile can be
indicative of a rotating source. Microlensing of the part of the BLR
that is rotating away from us would then explain the observed
asymmetric line amplifications. This would imply that a microlensing
caustic is passing at the edge of the broad line region, and is far
enough from the continuum to leave it unaffected.
\section{HST Imaging}
Optical and near-IR images of SDSS J0924+0219\ are available from the HST archive
in the F555W, F814W and F160W filters. The F555W and F814W
observations have been obtained on 18 November 2003 with the Advanced
Camera for Surveys (ACS) and the Wide Field Channel (WFC). The F555W
data consist of two dithered 1094 s exposures, each one being split in
two (CRSPLIT=2) in order to remove cosmic rays. Two consecutive 1148
s exposures have been taken through the F814W filter, one hour later,
again splitting the exposure time in two. Finally, the NICMOS2
observations, taken on 2003 November 23, consist of 8 dithered
exposures, for a total of 5312 s. The 5-day period separating the
optical and near-IR observations is of the order of the expected time
delay between images A and B of the quasar.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=8.8cm]{4423fig09.eps}
\caption{Comparison between the spectra of images A and B taken on
14 January 2005. The top panel shows the dimensionless ratio
B/A. The mean ratio is $0.32$. In the middle panel, a first-order
polynomial $\alpha$B$+\beta$ is fit to the spectra of image A. The
best fit is obtained with $\alpha= 2.80 \pm 0.05$ and $\beta =0.37$. The
difference in flux between A and the fitted $\alpha$B$+\beta$
polynomial is displayed in the bottom panel, and does not exceed a
few percent of the flux.}
\label{BoverA_Jan}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=8.8cm]{4423fig10.eps}
\caption{Same as in Fig. \ref{BoverA_Jan} but for the spectra taken on
1 February 2005. The mean B/A ratio is $0.31$, and the best fit of
image A is obtained with $\alpha=2.86 \pm 0.05$ and $\beta =0.43$.}
\label{BoverA_Feb}
\end{center}
\end{figure}
\subsection{Image Deconvolution}
\label{deconv}
\begin{figure*}[t!]
\leavevmode
\begin{center}
\includegraphics[width=8.8cm]{4423fig11.eps}
\includegraphics[width=8.8cm]{4423fig12.eps}
\caption{{\it Left:} composite HST image using the observations
through the F555W, F814W and F160W filters. The resolution is
respectively 0.10\arcsec\ in F555W and F814W, and 0.15\arcsec\ in
F160W. {\it Right:} deconvolved image. It has a pixel size of
0.025\arcsec\ and a resolution of 0.05\arcsec. The lensed host
galaxy of the quasar is clearly seen as red arcs
well centered on the quasar images. A second set of bluer
arcs inside and outside the area delimited by the
red arcs is also revealed. The field of
view is 3.0\arcsec\ on a side. The image is slightly rotated relative to
North, which is at PA=-2.67$^{\circ}$. East is to the left.
The white square shows the position of the perturber found
for the SIE and NFW models of Section~\ref{Simon}.}
\label{J0924_dec}
\end{center}
\end{figure*}
The MCS algorithm (Magain et al. \cite{magain98}) is used to
deconvolve all images. This algorithm sharpens the images and
preserves the flux of the original data. It also decomposes the data
into a set of analytical point sources (the quasar images) and a
numerical ``extended channel'' which contains all the features other
than point sources, i.e., the lensing galaxy and the Einstein ring.
All images are rebinned to a common pixel scale prior to deconvolution
and combined with cosmic ray rejection. The reference image adopted
to carry out the whole deconvolution work is the first image taken
through the F814W filter, i.e., image {\tt j8oi33031} in the HST
archive. The position angle of this reference image relative to the North is
PA=-2.67$^{\circ}$. All the astrometry in the following is given in the
coordinate system of this image. The data used here are the
pipeline-drizzled images available from the archive. The pixel scale
in the deconvolved image is half that of the original image, i.e.,
0.025\arcsec$\times$0.025\arcsec. The spatial resolution is the same
in all deconvolved images, i.e., 0.05\arcsec\ Full-Width-Half-Maximum
(FWHM).
As the HST PSF has significant spatial variations across the field of
view, stars located far away from SDSS J0924+0219\ on the plane of the sky are
not ideal for use in the image deconvolution. To circumvent this
problem we have devised an iterative procedure. We first deconvolve
the images with a fixed PSF, directly measured from stars. This gives
a deconvolved image of the lens and Einstein ring, that we reconvolve
with the PSF and subtract from the original data. A second PSF is
re-computed from this new lens- and ring-subtracted image, directly
from the quasar images, following the procedure described in Magain et
al.~(\cite{magain05}). This is similar to a blind-deconvolution,
where the PSF is modified during the deconvolution process. A new
deconvolved image is created with the improved PSF, as well as a new
lens- and ring-subtracted image. We repeat the procedure 4 times
in a row, until the residual map (Magain et al.~\cite{magain98},
Courbin et al.\cite{courbin98}) is flat and on average equal to 1
$\sigma$ after deconvolution, i.e., until the deconvolved image
becomes compatible with the data in the $\chi^2$ sense.
\begin{table}[t!]
\caption[]{Astrometry of SDSS J0924+0219\ and flux ratio between the images.
All positions are given relative to the lensing galaxy in the
coordinate system of our reference HST image {\tt j8oi33031}. The
1-$\sigma$ error bar on the astrometry is 0.005\arcsec, mainly
dominated by the error on the position of the lensing galaxy. The
error bar on the flux ratio is of the order of 10\% for images
B, C and 20\% for image D, and includes the systematic errors
due to the presence of the Einstein ring (see text).}
\label{astrom}
\begin{flushleft}
\begin{tabular}{lccccc}
\hline\hline
Object & X & Y & F555W & F814W & F160W \\
& (\arcsec) & (\arcsec) & & & \\
\hline
Lens & $+$0.000 & $+$0.000 & $-$ & $-$ & $-$ \\
A & $-$0.185 & $+$0.859 & 1.00 & 1.00 & 1.00 \\
B & $-$0.246 & $-$0.948 & 0.51 & 0.46 & 0.44 \\
C & $+$0.782 & $+$0.178 & 0.39 & 0.34 & 0.32 \\
D & $-$0.727 & $+$0.430 & 0.06 & 0.06 & 0.03 \\
\hline
\end{tabular}
\end{flushleft}
\end{table}
\subsection{Results}
The deconvolved images through the three filters are shown in
Fig.~\ref{J0924_dec}, as a colour composite image. Two sets of arcs
are clearly seen, corresponding to the host galaxy of the source
quasar, and to a bluer object not centered on the images of the
quasar. This arc is well explained by a second lensed source (see
Section~\ref{Simon}) which is either a star-forming region in the
source, or another unrelated object.
Instead of using the conventional version of the MCS deconvolution
algorithm, we use a version that involves a semi-analytical model for
the lensing galaxy. In this procedure, the analytical component of the
lensing galaxy is either a two-dimensional exponential disk, or a de
Vaucouleurs profile. All slight departures from these two profiles are
modeled in the form of a numerical array of pixels which includes the
arcs as well.
In all bands, we find that an exponential disk fits the data much
better than a de Vaucouleurs profile, which is surprising given that
the VLT spectra indicate an elliptical galaxy.
Table~\ref{astrom} gives a summary of our astrometry, relative to the
center of the fitted exponential disk. The mean position angle of the
lensing galaxy, in the orientation of the HST image, is $PA = -61.3
\pm 0.5^{\circ}$ (positive angles relative to the North in a
counter-clockwise sense) and the mean ellipticity is $e=0.12 \pm
0.01$, where the error bars are the dispersions between the
measurements in the three filters. We define the ellipticity as
$e=1-b/a$, where $a$ and $b$ are the semi-major and semi-minor axis
respectively. Note that although the formal error on the lens
ellipticity and PA is small, the data show evidence for isophote
twisting. The effective radius of the galaxy is
$R_e=0.50\pm0.05$\arcsec.
\begin{figure*}[t!]
\leavevmode
\begin{center}
\includegraphics[width=5.9cm]{4423fig13.eps}
\includegraphics[width=5.9cm]{4423fig14.eps}
\includegraphics[width=5.9cm]{4423fig15.eps}
\caption{The three plots give the reduced $\chi^2$ as a function of lens
ellipticity $e$ and external shear $\gamma$ for the three analytic
models used in the LENSMODEL package. No constraint is used on the
image flux ratios. The contours correspond to the 1, 2 and
3-$\sigma$ confidence levels. The degeneracy between ellipticity and
shear is clear. Only the NFW models are (marginally) compatible with
no external shear at all, as also suggested by the semi-linear
inversion of Section~\ref{Simon}. The black square in each panel
indicates the best-fit model, whose parameters are given in
Table~\ref{models}.}
\label{ellip_vs_shear}
\end{center}
\end{figure*}
\begin{table*}[t!]
\caption[]{Best-fit parametric models for SDSS J0924+0219, obtained with the LENSMODEL
package (Keeton~\cite{keeton_lensmodel}). The position angles of the
lens $\theta_e$ and of the external shear $\theta_{\gamma}$ are given
in degrees, positive angles being counted counter-clockwise relative
to the North. The coordinates $(x,y)$ of the centres of the models
are given in arcseconds, and the time delays $\Delta t$ are expressed
in days relative to the leading image B. The extreme values
for the time delays within the smallest 1-$\sigma$
region of Fig.~\ref{ellip_vs_shear} are also given.
We adopt a $(\Omega_m, \Omega_\Lambda)=(0.3, 0.7)$ cosmology
and h$_{100}$=0.65. All models have one degree of freedom.}
\label{models}
\begin{flushleft}
\begin{tabular}{llccccccccc}
\hline\hline
Model & Parameters & $(x, y)$ & $e$ & $\theta_e$ & $\gamma$
& $\theta_{\gamma}$ & $\Delta t_{AB}$ & $\Delta t_{CB}$ & $\Delta t_{DB}$ & $\chi^2$ \\
\hline
& & & & & & & & & & \\
SIE & $b^{\prime}=0.87$ & $(-0.003,\, 0.002)$ & 0.13 & -73.1 & 0.053 & 65.4 & $5.7^{6.7}_{5.1}$ & $9.1^{10.4}_{8.2}$ & $6.2^{7.2}_{5.5}$ & 0.91 \\
& & & & & & & & & & \\
de Vaucouleurs & $b=2.64$ & $(-0.004,\, 0.002)$ & 0.16 & -70.1 & 0.096 & 77.3 & $8.6^{8.9}_{8.1}$ & $13.8^{14.4}_{12.9}$ & $9.4^{9.7}_{8.8}$ & 1.41 \\
& $R_e=0.50$ & & & & & & & & & \\
& & & & & & & & & & \\
NFW & $\kappa_s=0.70$ & $(-0.003,\, 0.001)$ & 0.10 & -72.0 & 0.047 & 65.4 & $4.9^{8.0}_{3.6}$ & $7.8^{12.7}_{5.8}$ & $5.4^{8.7}_{4.0}$ & 0.72 \\
& $r_s=1.10$ & & & & & & & & & \\
\hline
\end{tabular}
\end{flushleft}
\end{table*}
The flux ratios of the quasar images are derived from the deconvolved
images. The MCS algorithm provides the user with the intensities of
all point sources in the image, decontaminated from the light of the
extended features, such as the ring in SDSS J0924+0219\ and the lensing galaxy.
The error on the quasar flux ratio is dominated by the contamination
by the Einstein ring. If the intensity of a quasar image is
overestimated, this will create a ``hole'' in the deconvolved Einstein
ring at the quasar image position. If it is underestimated, the local
$\chi^2$ at the position of the quasar image will become much larger
than 1 $\sigma$. The flux ratios in Table~\ref{astrom} are taken as
the ones giving at the same time a continuous Einstein ring without any
``hole'', and leading to a good $\chi^2$, close to 1, at the position
of the quasar images. The error bars quoted in Table~\ref{astrom}
are taken as the difference between these two extreme solutions,
divided by 2. They include both the random and systematic errors.
\section{Modeling}
Constraining the mass distribution in SDSS J0924+0219\ is not trivial. Firstly,
we do not have access to the true image magnifications due to
contamination by microlensing, and secondly, the light distribution of
the lensing galaxy is not very well constrained. The ellipticity and
position angle of the lens change with surface brightness, indicative
of isophote twisting. Measuring the faintest isophotes on the HST
data leads to PA $\simeq -25^{\circ}$, as adopted by Keeton et
al.~(\cite{keeton2005}) in their models. However, brighter isophotes
and the fit of a PSF-deconvolved exponential disk profile yield
PA $= -61.3^{\circ}$.
As a blind test for the shape of the mass distribution underlying the
light distribution, and without using any constraint on the
ellipticity or PA of the lens, we use the non-parametric models of
Saha \& Williams (\cite{saha04}). Fitting only the image positions
returns an asymmetric lens whose major axis is aligned approximately
East-West (i.e., PA $= 90^{\circ}$). Given the discrepancy between this
simple model and the observed light distribution, we test in the
following a range of models with differing levels of observational
constraints, in order to predict time delays.
\subsection{Parametric Models}
\subsubsection{Using the flux ratios}
\label{subsec:modA}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=8.8cm]{4423fig16.eps}
\caption{Annular mask applied to the F160W (left) and F555W (right)
data with point sources masked out. The annulus in the F555W image is
shifted by 0.1\arcsec\ to the left and 0.2\arcsec\ to the top with
respect to the F160W image, to properly encompass the blue arc seen in
Fig.~\ref{J0924_dec}.}
\label{masked_rings}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=8.8cm]{4423fig17.eps}
\caption{Reconstructed source from F160W data (top left) and its
lensed image (bottom left). A second source lying on the rightmost
cusp caustic (top right) is reconstructed from the F555W image
corresponding to the blue arc (bottom right).}
\label{recon_source}
\end{center}
\end{figure}
The LENSMODEL package (Keeton \cite{keeton_lensmodel}) is used to
carry out an analytical modeling of the lensing galaxy. Three
lensing galaxy models are considered: the Singular Isothermal
Ellipsoid (SIE), the Navarro, Frenk \& White (\cite{nfw97}) profile
(NFW), and the de Vaucouleurs (\cite{devauc}) profile. In a first
attempt, we constrain these models with the lensing galaxy position,
the relative positions of the lensed images (Table~\ref{astrom}) and
their flux ratios (taken as the mean of the ratios measured in the
three F555W, F814W, F160W filters). If no external shear is included in
the models, we find a lens ellipticity of $e\simeq 0.3$ with a PA
$\theta_e \simeq 85^{\circ}$ and an associated $\chi^2 \simeq 200$.
The ellipticity and PA agree well with the models obtained
from the semi-linear inversion method of Warren \& Dye
(\cite{warren03}) (see Section~\ref{Simon}).
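For concreteness, the figure of merit minimized in this first attempt
can be sketched as follows. This is not the internal code of
LENSMODEL, only a schematic Python rendering of a $\chi^2$ built from
the constraints listed above; the model function and the data arrays
are placeholders.
\begin{verbatim}
import numpy as np

def chi2(params, model, x_obs, sig_x, f_obs, sig_f, g_obs, sig_g):
    """Schematic lens-fit merit function: image positions, flux
    ratios and the lens-galaxy position, weighted by their errors."""
    x_mod, f_mod, g_mod = model(params)  # predicted observables
    c  = np.sum(((x_obs - x_mod) / sig_x) ** 2)  # image astrometry
    c += np.sum(((f_obs - f_mod) / sig_f) ** 2)  # flux ratios
    c += np.sum(((g_obs - g_mod) / sig_g) ** 2)  # lens position
    return c
\end{verbatim}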
Next, we add external shear to the model. The lens position angle
$\theta_e$, coordinates, and ellipticity then agree better with the
values measured in the HST images. The $\chi^2$ values remain poor
($\chi^2 \simeq 30$),
although improved with respect to the models without external
shear. The shear orientation is $\theta_{\gamma} \sim 60^{\circ}$
which is about in the direction of a bright galaxy located 9.5\arcsec\
away from SDSS J0924+0219\ and at PA $= 53^{\circ}$.
The main contribution to the total $\chi^2$ comes from the anomalous
flux ratios between the images of SDSS J0924+0219, in particular the extreme flux
ratio of $\sim 15$ between images A and D, which
are predicted to have approximately the same brightness. This is not
surprising given the evidence for microlensing in image A
(Sect.~\ref{subsec:micro}) and for possible milli-lensing induced by massive
substructures. This leads us to the considerations presented in the
next section.
\subsubsection{Discarding the flux ratios}
The modeling is similar to that of
Sect.~\ref{subsec:modA}. External shear is included but
the flux ratios are discarded. In order to use only models that have
one degree of freedom (DOF), we have fixed the effective radius of
the de Vaucouleurs model to the observed value. Given the number of
observational constraints, the NFW model would have zero DOF if all its
parameters were left free during the fit. We have therefore fixed
the orientation of the external shear in this model to the value we
found in the SIE+shear model. The best fit models are presented in
Table~\ref{models}, with (reduced) $\chi^2$ improved to values close
to 1.
We map the lens ellipticity vs. external shear plane in order to
estimate the degree of degeneracy between these two parameters. The
results are displayed in Fig.~\ref{ellip_vs_shear}. It is immediately
seen that the 1-$\sigma$ ellipses of the different models only
marginally overlap. This is confirmed by the time delay values
summarized in Table~\ref{models} where we also give the extreme
values of the time delays within the 68\% confidence interval. The
minimum difference between the extreme time delays predicted with a
constant mass-to-light ratio galaxy (de Vaucouleurs) and by the more
physically plausible SIE model is about 8\%. Since the measurement
error on the time delay propagates linearly in the error budget,
even a rough estimate of the three time delays in SDSS J0924+0219, with 8\%
accuracy, will already allow us to discriminate efficiently between constant
M/L models and the SIE. Distinguishing between the SIE and NFW models is more
difficult as time delays predicted by NFW models differ by only 1\%
from the SIE time delays. Such an accuracy will be hard to reach in
SDSS J0924+0219, which has short time delays and a short visibility period given
its equatorial position on the sky (see Eigenbrod et al.
2005).
\begin{table*}
\caption{Minimized lens model parameters and corresponding $\chi^2$.
Model parameters are: $\kappa_0$ = mass normalization in arbitrary
units, $(x,y)$ = offset of lens model centre from lens optical axis in
arcseconds, $e$ = ellipticity, $\gamma$ = external shear, $\theta_e$
and $\theta_{\gamma}$ = PA in degrees counted counter-clockwise from
North. In the case of the NFW, the scale radius is held fixed at
$6''$ in the minimization. The third column gives the number of
degrees of freedom (NDOF). Subscript '$b$' refers to the secondary
SIS in the dual component models (see text).}
\small
\begin{tabular}{l c c l}
\hline\hline
Model & $\chi^2_{min}$ & NDOF & Minimized parameters \\
\hline
SIE & 4280 & 3975 & $\kappa_0=100.0$, $(x,y)=(0.02, 0.04)$, $e=0.270$,
$\theta_e=86.0$ \\
NFW & 4011 & 3974 & $\kappa_0=100.0$, $(x,y)=(0.06, 0.06)$,
$e=0.187$, $\theta_e=84.9$ \\
Dual SIS & 4385 & 3974 & $\kappa_{0}=49.2$, $(x,y)=(0.00, 0.28)$,
$\kappa_{0b}=51.6$, $(x,y)_b=(-0.06, -0.33)$ \\
SIE$+$SIS & 4247 & 3972 & $\kappa_{0}=99.4$, $(x,y)=(0.04, 0.04)$,
$e=0.265$, $\theta_e=85.1$, $\kappa_{0b}=2.1$, $(x,y)_b=(-0.79, -0.03)$ \\
NFW$+$SIS & 3971 & 3971 & $\kappa_{0}=98.0$, $(x,y)=(0.05, 0.08)$,
$e=0.206$, $\theta_e=83.1$, $\kappa_{0b}=2.8$, $(x,y)_b=(-0.80, -0.09)$ \\
NFW$+ \, \gamma$ & 3992 & 3972 & $\kappa_{0}=100.0$, $(x,y)=(0.06, 0.06)$,
$e=0.168$, $\theta_e=86.0$, $\gamma=0.010$, $\theta_{\gamma}=78.3$\\
\hline
\end{tabular}
\normalsize
\label{tab_sl_results}
\end{table*}
\subsection{Using the Arcs of the Lensed Sources}
\label{Simon}
The HST images of SDSS J0924+0219\ reveal two sets of arcs. One is prominent in
the near-IR (in red in Fig.~\ref{J0924_dec}) and is well centered on
the quasar images. It is the lensed image of the quasar host galaxy. A
second set of bluer arcs is best seen in F555W. It is off-centered
with respect to the quasar images, indicating either a companion to
the quasar host, or an independent intervening object along the line
of sight.
We apply the semi-linear inversion method of Warren \& Dye
(\cite{warren03}) to the arcs observed in the F555W and F160W data.
The method incorporates a linear matrix inversion to obtain the source
surface brightness distribution that gives the best fit to the
observed lensed image for a given lens model. This linear step is
carried out per trial lens parametrisation in a standard non-linear
search for the global best fit.
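The linear step admits a compact matrix form. The sketch below
assumes that a lensing-plus-PSF matrix maps source-plane pixels onto
the observed image, in the spirit of Warren \& Dye (\cite{warren03});
constructing that matrix for each trial lens parametrisation is the
expensive step and is not shown here.
\begin{verbatim}
import numpy as np

def invert_source(M, d, sigma, lam=0.0, H=None):
    """Best-fit source pixels s minimizing
    |(d - M s)/sigma|^2 + lam * s^T H s  (optional regularization).
    M     : (n_image_pix, n_source_pix) lensing+PSF matrix
    d     : observed image pixels;  sigma : 1-sigma pixel noise."""
    W = M / sigma[:, None]              # noise-weighted matrix
    A = W.T @ W                         # normal-equation matrix
    if lam > 0.0 and H is not None:
        A = A + lam * H                 # first-order regularization
    s = np.linalg.solve(A, W.T @ (d / sigma))
    chi2 = np.sum(((d - M @ s) / sigma) ** 2)
    return s, chi2
\end{verbatim}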
Dye \& Warren (\cite{dye05}) successfully apply this technique to the
Einstein ring system 0047$-$2808. They demonstrate that the extra
constraints provided by the image of the ring result in smaller
errors on the reconstructed lens model, compared to using only the
centroids of the principal images as constraints in this system.
In the case of 0047$-$2808, the source is a star forming galaxy
without any point-like emission whereas the image of SDSS J0924+0219\ is clearly
dominated by the QSO's central point source. To prevent the
reconstruction of SDSS J0924+0219\ from being dominated by the point source and
because in this section only the reconstruction of the QSO host
emission is of interest, we masked out the four point source images in
the F555W and F160W data supplied to the semi-linear inversion code.
The astrometry of the quasar images is not used as a
constraint. Fig. \ref{masked_rings} shows the masked ring images.
\subsubsection{Reconstruction results}
The deconvolved F160W and F555W data are reconstructed with 6
different parametric lens models. Three of these are single mass
component models: the singular isothermal ellipsoid (SIE), the
elliptical NFW, and the elliptical NFW with external shear. The
remaining three test for asymmetry in the lens model by including a
secondary singular isothermal sphere (SIS) mass component that is also
free to move around in the lens plane and vary in normalization in the
minimization. These models are the dual SIS model, the SIE$+$SIS model
and the NFW$+$SIS model.
Since the F160W data have the highest signal-to-noise arcs, we base
our lens modeling on these data and apply our overall best fit model
to the F555W data to reconstruct the source. In all cases, we
reconstruct with a $0.5''\times0.5''$ source plane comprising
$10\times 10$ pixels. The reconstruction is not regularised, except in
Fig. \ref{recon_source} where first order regularisation (see Warren
\& Dye \cite{warren03}) is applied to enhance visualization of the
source.
Table~\ref{tab_sl_results} lists the minimized parameters for each
model and the corresponding values of $\chi^2$. The SIE$+$SIS and
NFW$+$SIS models clearly fare better than their single component
counterparts, implying the lens is asymmetric. For the SIE$+$SIS, a
decrease in $\chi^2$ of $\Delta \chi^2=33$ for 3 fewer degrees of
freedom has a significance of 5.1 $\sigma$. The decrease of $\Delta
\chi^2=40$ for the NFW$+$SIS has a significance of 5.7 $\sigma$. Both
models consistently place the secondary SIS mass component around
$(-0.80'',-0.05'')$ with a normalization of only $\sim 2.5$\% of the
main component.
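The quoted significances follow from the $\chi^2$ improvements for
three extra parameters. The short check below reproduces them,
assuming a two-sided Gaussian-equivalent convention:
\begin{verbatim}
from scipy.stats import chi2, norm

for dchi2 in (33.0, 40.0):        # SIE+SIS and NFW+SIS improvements
    p = chi2.sf(dchi2, df=3)      # chance probability, 3 extra params
    nsig = norm.isf(p / 2.0)      # two-sided Gaussian equivalent
    print(dchi2, round(nsig, 1))  # -> 5.1 and 5.7 sigma
\end{verbatim}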
Interestingly, the elliptical models listed in Table
\ref{tab_sl_results} have ellipticities close to those
obtained with the LENSMODEL software, when no external shear is
considered. When external shear is added to the NFW model, we do
indeed obtain a significantly better fit compared to the NFW on its
own, but the results differ from those listed in Table~\ref{models}.
While the ellipticity remains almost the same as in
Table~\ref{models}, its PA differs by approximately $25^{\circ}$. Moreover,
we find a ten times smaller amplitude for the shear using the
semi-linear inversion than using LENSMODEL. Note, however, that the
observed quasar image astrometry is used in the LENSMODEL analysis,
whereas it is not in the present semi-linear inversion. If we use the
lens model found by the semi-linear inversion to predict the position
of the quasar images, we find poor agreement between the predicted and
the measured positions. The global, large scale shape of the lens
found by the semi-linear inversion is well adapted to model the
Einstein rings, which are very sensitive to azimuthal asymmetry in the
lens, but additional smaller scale structures are needed to slightly
modify the positions of the quasar images and make them compatible
with the measured astrometry. The disagreement between the
astrometry predicted by LENSMODEL and the one predicted by the
semi-linear inversion adds support to the presence of multipole-type
substructures in the lens (e.g., Congdon \& Keeton~\cite{cong2005}).
The top left plot in Fig.~\ref{recon_source} shows the reconstructed
source corresponding to the best fit NFW$+$SIS model for the F160W
data. The observed arcs are explained by a single QSO host galaxy.
Note that in this figure, purely to aid visualization, we have
regularised the solution and plotted the surface brightness with a
pixel scale half that used in the quantitative reconstruction. The
bottom left corner of Fig.~\ref{recon_source} shows the image of the
reconstructed source lensed by the best fit NFW$+$SIS model.
We then take the best fit NFW$+$SIS model in order to reconstruct the
F555W data shown on the right in Fig.~\ref{masked_rings}. Note that
the annular mask is shifted slightly compared to the F160W data, to
properly encompass the blue arc. The reconstructed source and
corresponding lensed image are shown on the right hand side of
Fig.~\ref{recon_source}.
There are two distinct sources now visible. The QSO host identified
previously has again been reconstructed. This is because its dominant
image, the bright arc in the top left quadrant of the ring, is still
present in the F555W data. A second source, more diffuse and lying on
the rightmost cusp caustic is also visible. This second source is
responsible for the blue arcs.
The redshift of the second source remains unknown. It could be a star
forming object/region lying $0.2\arcsec \cdot D_s \simeq 1200\,
h_{100}^{-1}$ pc away from the quasar, i.e., it would be part of the
host galaxy.
It is, however, not excluded that this second source is at a
different redshift than the quasar, e.g.,
located between the quasar and the lens, as it is bluer than the
quasar host galaxy. If the latter is true, SDSS J0924+0219\ might be a unique
object with which to break the mass-sheet degeneracy. Unfortunately, the lens
modeling alone does not allow us to infer a redshift estimate.
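The projected separation quoted above follows from the small-angle
relation $d = \theta D_s$. The sketch below assumes an
angular-diameter distance $D_s \approx 1.24\times10^{9}\,
h_{100}^{-1}$ pc to the source, a value adopted purely for this
illustration:
\begin{verbatim}
theta = 0.2 / 206265.0   # 0.2 arcsec in radians
D_s   = 1.24e9           # assumed angular-diameter distance [h^-1 pc]
d     = theta * D_s      # transverse separation
print(round(d))          # -> ~1200 h^-1 pc, as quoted above
\end{verbatim}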
\subsection{Note on the different types of models}
The two methods used above differ in several respects. LENSMODEL
has a limited number of free parameters but uses only the constraints
on the astrometry of the quasar images. While a qualitative representation
of the lensed host galaxy of the quasar source can be attempted, the
method does not allow a genuine fitting of the Einstein rings assuming
a (simplified) shape for the quasar host.
The semi-linear inversion carries out a direct reconstruction of the
lensed source as a whole, where each pixel of the HST image is a free
parameter. As the quasar images largely dominate the total flux of
the source, they need to be masked before the reconstruction. For
this reason it is not possible with this method, at the present stage of
its development, to constrain the lens
model using {\it simultaneously} the astrometry of the quasar
images and the detailed shape of the Einstein rings.
Although the two methods used in the present work are fundamentally
different and use very different observational constraints,
they agree on the need for extra mass near image D of SDSS J0924+0219.
Smooth lenses like the ones implemented in LENSMODEL have
PAs that differ by 10$^{\circ}$ from the one measured in the HST images.
In the orientation of Fig.~\ref{J0924_dec}, the mass distribution found
by LENSMODEL is closer to horizontal (PA$=-90^{\circ}$) than the light distribution, hence
giving larger masses next to image D. In the semi-linear inversion,
the optimal position found for the SIS perturber is also close to image
D.
Given the above discussion, the poor determination of the lens PA is a major
limitation to the interpretation of the time delays in SDSS J0924+0219.
An alternative route is to determine the dynamical rotation axis
of the lens, a challenge which is now within the reach of integral field
spectroscopy with large telescopes and adaptive optics.
\section{Conclusions}
We have spatially deconvolved deep sharp VLT/FORS1 MOS spectra of
SDSS J0924+0219, and measured the redshift of the lensing galaxy,
z$_{\rm lens}$ $= 0.394\pm0.001$, from numerous stellar absorption lines. The
spectrum beautifully matches the elliptical galaxy template of Kinney
et al. (\cite{kinney}).
The flux ratio between images A and B is $F_A/F_B = 2.80 \pm 0.05$ on
2005 January 14, and $F_A/F_B = 2.86 \pm 0.05$ on 2005 February 1,
i.e., it has not changed between the two dates given the uncertainties
on the flux ratios (Table~\ref{refer}). For each date, this ratio is
mostly the same in the continuum and in the broad emission lines of
the quasar images A and B. This may seem in contradiction
with Keeton et al.~(\cite{keeton2005}) who see differential
amplification of the continuum relative to the lines, but our
observing dates and setup are very different from theirs.
While the continuum of images A and B has not changed in 15 days,
there are obvious and asymmetric changes in some of the quasar broad
emission lines. Microlensing of both A and B is compatible with this,
although somewhat ad hoc assumptions must be made about the position of
the microcaustics relative to the quasar, as well as about the relative
sizes of the continuum and broad-line regions.
Deep HST imaging reveals two sets of arcs. One corresponds to the red
lensed host galaxy of the quasar and defines an Einstein ring
connecting the quasar images. The other, fainter and bluer, is
off-centered with respect to the quasar images. It is either a
star-forming region in the quasar source host galaxy, or another
intervening object.
The lens ellipticity and PA measured in the HST images are hard to
reconcile with simple models without external shear. The model fits
improve when external shear is added, even though the predicted PA
differs from the measured one by approximately $25^{\circ}$.
Models of Section~\ref{Simon}, involving an additional small (SIS)
structure to the main lens always place it along the East-West axis,
about 0.8\arcsec\ to the East of the main lens, i.e., towards the
demagnified image D. In addition, the models reconstructed using only
the Einstein rings do not predict the correct astrometry for the
quasar images. Einstein rings constrain the overall, large-scale shape of
the lens. Small deviations from this large-scale shape are needed to
match the quasar image astrometry. The discrepancy between
the models using the rings and the ones using the quasar image
positions therefore adds support to the presence of multipole-like
substructures in the lens of SDSS J0924+0219.
Finally, the range of time delays predicted by the different lens
models is large and is very sensitive to the presence of external
shear and to the determination of the main lens ellipticity and PA.
The time delay measurement and the lens modeling, combined with
integral field spectroscopy of the lens in SDSS J0924+0219\ might therefore prove
extremely useful to map the mass-to-light ratio in the lens, by
comparing the lensing and dynamical masses, to the light distribution
inferred from the HST images.
\begin{acknowledgements}
The authors would like to thank Dr. Steve Warren for useful
discussions and the ESO staff at Paranal for the care taken with the
crucial slit alignment necessary to carry out the spectra
deconvolutions. The HST archive data used in this article were
obtained in the framework of the CfA-Arizona Space Telescope LEns
Survey (CASTLES, HST-GO-9744, PI: C.S. Kochanek). PM acknowledges
support from the PSS Science Policy (Belgium) and by PRODEX (ESA).
COSMOGRAIL is financially supported by the Swiss National Science
Foundation (SNSF).
\end{acknowledgements}
\section{Introduction}
Perhaps the most obvious relation between energy and time is given by the expression for
the energy of a single photon, $E = h\nu$. In higher dimensions
a similar energy-time relation
$$
\| H \|_2 =\frac{const.}{\| t \|_2}
$$
\noindent holds for the $L_2$ norms of state energies and characteristic times associated with a canonically distributed system. This relationship is made precise herein. A by-product of the result is the possibility of the determination of surfaces of constant temperature given sufficient details about the trajectory of the system through its path space.
As an initial value problem, system kinetics are determined once an initial state and energy function are specified.
In the classical setting, representative cell occupation numbers may be assigned to any compact region of position--momentum,
$(\mathbf{q}, \mathbf{p})$, space \cite{uhl}. An important model of a quantum system is provided by lattices
of the Ising type \cite{glauber}, \cite{mnv}. Here the state of the system is typically specified by the configuration of spins.
Importantly, two systems that share exactly the same state space may assign energy levels to those states differently.
In the classical context one may, for example, hold the momenta fixed and vary the energy by preparing a second system of slower moving but more massive particles. In the lattice example one might compare systems with the same number of sites but different coupling constants, etc.
Consider a single large system comprised of an ensemble
of smaller subsystems which all
share a common, finite state space. Let the state energy assignments vary from one
subsystem to the next.
Equivalently, one could consider a single, fixed member of the ensemble whose
Hamiltonian, $H_{subsystem}$, is somehow varied (perhaps by varying
external fields, potentials and the like \cite{schro}).
Two observers, A and B, monitoring two different members of the ensemble, $\mathcal{E}_A$
and $\mathcal{E}_B$, would accumulate the same lists of states visited
but different lists of state occupation times.
The totality of these characteristic time scales, when interpreted as a list of coordinates (one list per member
of the ensemble), sketches out a surface of constant temperature in the shared coordinate space.
\section{Restriction on the Variations of $H_{subsystem}$ }
From the point of view of simple arithmetic, any variation of $H_{subsystem}$
is permissible but recall there are constraints inherent in the construction of a
canonical ensemble. Once an energy reference for the subsystem has been declared,
the addition of a single constant energy uniformly to all subsystems states will not be allowed.
Translations of $H_{subsystem}$ are temperature changes in the bath.
The trajectory of the {\it total} system takes place in a thin energy shell. If the fluctuations of the subsystem are shifted uniformly then the fluctuations in the bath are also shifted uniformly (in the opposite direction).
This constitutes a change in temperature of the system. This seemingly banal observation is not
without its implications. The particulars of the situation are not unfamiliar.
A similar concept from Newtonian mechanics is the idea of describing the motion
of a system of point masses from the frame of reference of the mass center.
Let $\{ H_1, H_2, \ldots , H_N \}$ be the energies of an $N-$state system.
A different Hamiltonian
might assign energies to those same states differently, say $\{ \tilde H_1, \tilde H_2, \ldots , \tilde H_N \}$.
To describe the transition from the energy assignment $ \mathbf{H}$ to the assignment $\mathbf{\tilde H}$
one might first rearrange the values about the original `mass center'
\begin{equation}
\frac{ H_1+ H_2+ \ldots + H_N}{N}
\end{equation}
and then uniformly shift the entire assembly
to the new `mass center'
\begin{equation}
\frac{ \tilde H_1+ \tilde H_2+ \ldots + \tilde H_N}{N}.
\end{equation}
In the present context, the uniform translations of the subsystem state energies
are temperature changes in the bath.
As a result, the following convention is adopted.
For a given set of state energies
$\{ H_1, H_2, \ldots , H_N \}$,
only those changes to the state energy assignments that
leave the `mass center'
\noindent unchanged will be considered in the sequel.
The fixed energy value of the ``mass center'' serves
as a reference energy in what follows. For simplicity this reference is taken to be zero.
That is
\begin{equation}\label{zero}
H_1+ H_2+ \ldots + H_N = 0.
\end{equation}
\noindent Uniform translation will be treated as a temperature fluctuation in what follows.
An obvious consequence is that only $N-1$ subsystem state energies and the bath temperature
$\theta$ are required to describe the statistics of a canonically distributed system.
\section{Two One-Dimensional Subspaces }
In the event that a trajectory of the subsystem is observed
long enough so that each of the $N$ states
is visited many times, it is supposed that the vector of occupancy times spent in state,
$\{ \Delta t_1, \Delta t_2, \ldots, \Delta t_N \}$, is connected to any vector of N-1 independent
state energies and the common bath temperature, $\{ H_1, H_2, \ldots , H_{N-1}, \theta \}$,
by relations of the form
\begin{equation}\label{t&e_ratio1}
\frac{ \Delta t_{k} }{ \Delta t_{j} }=\frac{ e^{- \frac{H_{k}}{\theta}} }{e^{- \frac{H_{j}}{\theta}}}
\end{equation}
\noindent for any $k, j \in \{1,2,\ldots,N\}$. The value of the omitted state energy, $H_N$, is determined by
equation (\ref{zero}).
The number of discrete visits to at least one of these states will
be a minimum. Select one of these minimally visited states and label it the rare state.
The observed trajectory may be decomposed into cycles beginning and ending on visits to the rare state and
the statistics of a typical cycle may be computed. For each $k \in \{1,2,\ldots,N\}$,
let $\Delta t_k$ represent the amount of continuous time spent in the $k^{th}$ state during a typical cycle.
In the Markoff setting the $L_1$ norm
\begin{equation}\label{cd}
\sum_{k=1}^N \Delta t_{k} = \textrm{characteristic system time},
\end{equation}
\noindent may serve as the Carlson depth. These agreements do not affect the validity of
equation (\ref{t&e_ratio1}).
At finite temperature, it may be the case that the system is uniformly distributed.
That is, the observed subsystem trajectory is representative
of the limiting case where the interaction Hamiltonian has been turned off
and the subsystem dynamics take place on a surface of constant energy.
In the left hand panel of figure \ref{CLscale}, the $\theta-$axis coincides with the set of all state energies and
bath temperatures
corresponding to uniformly distributed systems.
In the time domain, the ray containing the vector $\mathbf{1}$ (see the right hand panel) depicts the set of state occupancy times that give rise to uniformly distributed systems.
\begin{figure}[htbp]
\begin{center}
\leavevmode
\includegraphics[width=60mm,keepaspectratio]{CLscale.eps}
\caption{Schematic of the approach to the uniform distribution for dilatation pairs in
both the energy and time domains.}
\label{CLscale}
\end{center}
\end{figure}
For real constants $c_{\Delta t}$ and $c_{E}$ scale transformations of the type
\begin{eqnarray*}
\Delta \mathbf{t} \longrightarrow &c_{\Delta t} \; \; \Delta \mathbf{t} \\
\{ \mathbf{H}, \theta \} \longrightarrow &c_{E} \; \{ \mathbf{H}, \theta \}
\end{eqnarray*}
\noindent dilate points along rays in their respective spaces and leave equation (\ref{t&e_ratio1}) invariant.
The left hand panel of figure \ref{CLscale} shows a pair of energy, temperature coordinates: A and B,
related by a dilatation scale factor $c_{E}$, rotated successively toward the coordinates $lim \, A$ and $lim \, B$
which lie on the line of uniform distribution (the $\theta$ axis) in the energy, temperature domain. Throughout the limit process (parameterized by the angle $\phi$) the scale factor $ c_{E}$ is held constant.
Consistent with the relations in equation (\ref{t&e_ratio1}), the points $t(A) $ and $t(B)$ (putative time domain images of the given energy, temperature domain points A and B) as well as the image of their approach to the
uniform distribution in time ($\phi ' = \cos^{-1}(\frac{1}{\sqrt{N}})$, where $N$ is the dimensionality of the system), are shown in the right hand panel of the same figure.
As the angle of rotation $\phi'$ (the putative image of $\phi$ in the time domain) is varied, there is the possibility of a consequent variation of the time domain dilatation scale factor $c_{\Delta t}$ that maps $t(A) $ into $t(B)$. That is,
$c_{\Delta t}$ is an unknown function of $\phi'$. However in the limit of zero
interaction between the subsystem and the bath
the unknown time domain scaling, $c_{\Delta t}$, consistent with the given energy, temperature
scaling, $ c_{E}$, is rather easily obtained.
At any step in the limit process, as $\phi'$ approaches $\cos^{-1}(\frac{1}{\sqrt{N}})$, equation (\ref{t&e_ratio1}) implies that
\begin{equation}\label{t&e_ratio2}
\frac{ \Delta t(B)_{k} }{ \Delta t(B)_{j} }=\frac{ \Delta t(A)_{k} }{ \Delta t(A)_{j} }
\end{equation}
\noindent for any $k, j \in \{1,2,\ldots,N\}$.
Assuming that there are no discontinuities in the dynamics
as the subsystem transitions from weakly interacting to conservative, equations (\ref{t&e_ratio1}) and (\ref{t&e_ratio2})
hold along the center line $\phi ' = \cos^{-1}(\frac{1}{\sqrt{N}})$ as well.
In the conservative case with constant energy $H_{ref}$, the set identity
\begin{widetext}
\begin{equation}\label{setidentity}
\{ (\mathbf{q},\mathbf{p}): \mathbf{H}(\mathbf{q},\mathbf{p}) - H_{ref} = 0 \} \equiv
\{ (\mathbf{q},\mathbf{p}): c_{E} \; ( \mathbf{H}(\mathbf{q},\mathbf{p}) - H_{ref} ) = 0 \}
\end{equation}
\end{widetext}
\noindent together with scaling behavior of the position and momentum velocities given by Hamilton's equations
\begin{equation}\label{spedup}
\begin{split}
\mathbf{ \dot{q}(A)} \rightarrow & c_{E} \, \mathbf{ \dot{q}(A)} \\
\mathbf{ \dot{p}(A)} \rightarrow & c_{E} \, \mathbf{ \dot{p}(A)}
\end{split}
\end{equation}
\noindent illustrate that the phase space trajectory associated with the energy, temperature domain point $lim B$ is simply the trajectory at the point $lim A$ with a time parameterization ``sped up'' by the scale factor $c_{E}$. See figure \ref{trajectory}.
This identifies the scale factor associated with
the points $t(lim B)$ and $t(lim A)$ as
\begin{equation}\label{limCt}
\lim_{\phi ' \rightarrow \cos^{-1}(\frac{1}{\sqrt{N}})} c_{\Delta t}(\phi') = \frac{1}{c_E}.
\end{equation}
\begin{figure}[htbp]
\begin{center}
\leavevmode
\includegraphics[width=60mm,keepaspectratio]{trajectory.eps}
\caption{The trajectory is everywhere tangent to both $\mathbf{H}$ and $\mathbf{c_E H}$ vector fields.}
\label{trajectory}
\end{center}
\end{figure}
\section{ Matched Invariants Principle and the Derivation of the Temperature Formula}
A single experiment is performed and two observers are present.
The output of the single experiment is two data points (one per observer): a single point
in the $\Delta t$ space and a single point in the $(H,\theta)$ space.
In the event that another experiment is performed and the observers repeat the
activity of the previous paragraph, the data points generated are either both the same
as the ones they produced as a result of the first experiment
or else both are different. If a series of experiments are under observation,
after many iterations the sequence of data points generated traces out a curve.
There will be one curve in each space.
{\it The principle follows}: in terms of probabilities, the two observers will
produce consistent results in the case when the data points
(in their respective spaces) have changed from the first experiment to the second
but the probabilities have not. That is, if one observer experiences a dilatation,
so does the other.
Of course, if the observers are able to agree that a dilatation has occurred, they are also able to agree
that it has not.
In terms of probability gradients, in either space the dilatation
direction is the direction in which all the probabilities are invariant.
In the setting of a system with $N$ possible states,
the $N-1$ dimensional space perpendicular to the dilatation is spanned by
any set of $N-1$ probability gradients. We turn next to an application
of the matched invariants principle (MIP).
Consider two points $\theta_1$ and $\theta_2$ along a ray colocated with the temperature axis in the $(H,\theta)$ space.
Suppose that the ray undergoes a rigid rotation (no dilatation) and that in this way the two points are
mapped to two new points $A$ and $B$ along a ray which makes an angle $\phi$ with the temperature axis.
See the left hand panel of figure \ref{arcs}.
\begin{figure}[htbp]
\begin{center}
\leavevmode
\includegraphics[width=60mm,keepaspectratio]{circarcs.eps}
\caption{The temperature ratio is invariant with respect to rotation in either space }
\label{arcs}
\end{center}
\end{figure}
The temperature ratio is clearly preserved throughout the motion: for any angle
$\phi$,
\begin{equation}
\frac{ \theta_1}{ \theta_2 }=\frac{ \theta_1 \;\cos(\phi)}{ \theta_2 \;\cos(\phi) }=\frac{ \theta(A)}{ \theta(B) }.
\end{equation}
Let $t(\theta_1)$ and $t(\theta_2)$ be the images in the time domain of the points $\theta_1$ and
$\theta_2$ in $(H, \theta)$ space. According to the matched invariants principle, since the rotation
in $(H, \theta)$ space was rigid so the corresponding motion as mapped to the time domain is also a rigid rotation
(no dilatations). See figure \ref{arcs}.
More precisely, to the generic point $A$ in $(H, \theta)$ space with coordinates $(H_1,H_2,\ldots,H_{N}, \theta)$
associate a magnitude, denoted $\| \mathbf{H}\|$, and a unit vector $\hat{ \mathbf{e}}_{\mathbf{H}}$.
Recall that the $H$'s lie on the hyperplane $H_1 + H_2 + \cdots + H_N =0.$
It will be convenient to express the unit vector in the form
\begin{equation}
\hat{ \mathbf{e}}_{\mathbf{H}} =\frac{ \{\frac{H_{1}}{\theta},\frac{H_{2}}{\theta},\ldots,\frac{H_{N}}{\theta},1\} }
{ \sqrt{ (\frac{H_1}{\theta})^2+(\frac{H_2}{\theta})^2+\cdots+(\frac{H_{N}}{\theta})^2+1 }}.
\end{equation}
The angle between that unit vector and the temperature axis is determined by
\begin{equation}
\cos(\phi) = \hat{ \mathbf{e}}_{\theta} \cdot \hat{ \mathbf{e}}_{\mathbf{H}}
\end{equation}
\noindent where $\hat{ \mathbf{e}}_{\theta} = \{0,0,\ldots,0,1\}$.
The temperature at the point $A$ is the projection of its magnitude, $\| \mathbf{H}_A\|$, onto
the temperature axis
\begin{equation}
\theta(A)= \| \mathbf{H}_A\| \,\cos(\phi).
\end{equation}
Another interpretation of the magnitude $\| \mathbf{H}_A\|$ is as the temperature at the point $\theta_1$,
the image of $A$ under a rigid rotation of the ray containing it,
on the temperature axis. See figure \ref{arcs}. With this interpretation
\begin{equation}\label{punchline}
\theta(A)= \theta_1 \,\cos(\phi).
\end{equation}
An easy consequence of equation (\ref{zero}) is
\begin{equation}\label{firstformula}
\frac{H_k}{\theta} = \log [ \frac{( \prod_{j=1}^N p_j )^{\frac{1}{N}}}{p_k} ].
\end{equation}
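\noindent Indeed, with $p_k = e^{-H_k/\theta}/Z$ and the convention of
equation (\ref{zero}),
\[
\Big( \prod_{j=1}^N p_j \Big)^{\frac{1}{N}}
= \frac{1}{Z}\, e^{-\frac{1}{N\theta}\sum_{j=1}^N H_j}
= \frac{1}{Z},
\]
\noindent so that the argument of the logarithm is simply $e^{H_k/\theta}$.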
In terms of the occupation times
\begin{equation}\label{firstformulaA}
\frac{H_k}{\theta} = \log [ \frac{( \prod_{j=1}^N \Delta t_j )^{\frac{1}{N}}}{\Delta t_k} ].
\end{equation}
An easy implication of equation (\ref {limCt}) is that
\begin{equation}\label{centerline}
\sqrt{\sum_{j=1}^N \Delta t_j^2}= \frac{\textrm{const.}}{\theta_1}.
\end{equation}
\noindent for an arbitrary but fixed constant carrying dimensions of $\textrm{time}\cdot\textrm{energy}$.
Together equations (\ref{punchline}), (\ref{firstformulaA}), and (\ref{centerline})
uniquely specify the surfaces of constant temperature in time
\begin{figure}[htbp]
\begin{center}
\leavevmode
\includegraphics[width=60mm,keepaspectratio]{greyisotemps.eps}
\caption{The constant temperature surfaces for a two dimensional system. }
\label{contours}
\end{center}
\end{figure}
\begin{widetext}
\begin{equation}\label{daformula}
\theta(\Delta \mathbf{t})= \frac{ \textrm{const.} }
{ \| t \|_2 \, \sqrt{ (\log [ \frac{ \prod }{\Delta t_1} ])^2+(\log [ \frac{ \prod }{\Delta t_2} ])^2+\cdots+
(\log [ \frac{ \prod }{\Delta t_{N}} ])^2+1 }}
\end{equation}
\end{widetext}
\noindent where,
\begin{equation}
\prod = (\Delta t_1\cdot \Delta t_2 \ldots \Delta t_N)^{\frac{1}{N}}.
\end{equation}
The temperature formula (\ref{daformula}) may be recast into the more familiar form
\begin{equation}
\| H \|_2 =\frac{const.}{\| t \|_2}
\end{equation}
With the temperature determined, equation (\ref{firstformulaA}) gives the state energies of a canonically
distributed subsystem. From these, a wealth of useful macroscopic properties of the dynamics may be computed \cite{Fo2}. Surfaces of constant temperature for a two-state system are shown in Fig.~\ref{contours}.
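As a consistency check of equation (\ref{daformula}), the sketch below
constructs occupation times for a canonically distributed three-state
system, normalized so that $\|\Delta t\|_2$ equals the constant
divided by the magnitude of $\{\mathbf{H},\theta\}$ as the
rigid-rotation argument requires, and recovers the input temperature.
The particular numbers are arbitrary.
\begin{verbatim}
import numpy as np

H, theta, C = np.array([0.4, -0.1, -0.3]), 2.0, 1.0  # sum(H) = 0
mag = np.sqrt(np.sum(H**2) + theta**2)   # ||{H, theta}||_2

w  = np.exp(-H / theta)                  # Boltzmann weights
dt = w * (C / mag) / np.linalg.norm(w)   # ||dt||_2 = C / mag

geo  = np.prod(dt) ** (1.0 / dt.size)    # geometric mean of the dt's
logs = np.log(geo / dt)                  # recovers H_k / theta
theta_rec = C / (np.linalg.norm(dt)
                 * np.sqrt(np.sum(logs**2) + 1.0))
assert np.isclose(theta_rec, theta)      # temperature is recovered
\end{verbatim}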
\section{Spectroscopic observations}
Two spectra of BAL224 ($\alpha$(2000), $\delta$(2000): 00h 56m 06.45s, $-72^{\circ}$ 28' 27.70'') were obtained at medium
resolution in setups LR02 (396 - 457 nm, R=6400) and LR06 (644 - 718 nm,
R=8600).
They are dominated by the double-peaked emission components of the Balmer lines,
which are strongly asymmetric with V$\gg$R. Due to the resolution used it was possible,
for the first time, to identify emission lines of [FeII], FeII, [CrI] as well as the nebular
lines [SII]6717, 6731 (see Fig~\ref{specL26}). The mean radial velocity of these lines (RV)
is 154 km~s$^{-1}$. The FWHMs of metallic emission lines are about 100
km~s$^{-1}$~ and correspond to the instrumental broadening. The low S/N ratio in the continuum
(S/N$\simeq$20) did not allow us to measure the radial velocity of the HeI lines present in the
spectra of BAL224. The mean RV of the shell components of H$\alpha$, H$\gamma$
and H$\delta$ (see Figs~\ref{specL26}, \ref{vitesses} and Table~\ref{vitrad}) is 187 km~s$^{-1}$.
\begin{figure}[ht]
\centering
\vskip 0.5cm
\includegraphics[width=5cm, height=10cm,angle=-90]{figure3vgimp.ps}
\caption{Radial velocities of H$\alpha$, H$\gamma$, H$\delta$ and H$\epsilon$ for
BAL224.}
\label{vitesses}
\end{figure}
\begin{table*}[tbph]
\caption{Observational indications such as radial velocities or intensities of lines in the spectra of BAL224.
The values between brackets come from Hummel et al. (1999).}
\centering
\begin{tabular}{@{\ }c@{\ \ \ }c@{\ \ \ }c@{\ \ \ }c@{\ \ \ }c@{\ \ \ }c@{\ \ \ }c@{\ \ \ }c@{\ \ \ }c@{\ \ \ }c@{\ }}
\hline
\hline
& H$\alpha$ & H$\gamma$ & H$\delta$ \\
\hline
RV$_{V}$ ($\pm$20) km~s$^{-1}$ & 104 [140 $\pm$50] & 86 & 62 \\
RV$_{shell}$ ($\pm$20) km~s$^{-1}$ & 171 & 198 & 204 \\
RV$_{R}$ ($\pm$20) km~s$^{-1}$ & 276 [301 $\pm$50] & 317 & 327 \\
FWHM ($\pm$20) km~s$^{-1}$ & 320 [443 $\pm$50] & 410 & 600 \\
I$_{V}$ & 41.8 & 2.2 & 1.4 \\
I$_{R}$ & 33.4 & 1.9 & 1.3 \\
Mean I & 37.6 & 2.1 & 1.4 \\
EW ($\pm$20) \AA & 360 [202 $\pm$20] & & \\
\hline
Ratios & H$\gamma$/H$\alpha$=0.055 & H$\delta$/H$\alpha$=0.036 & H$\delta$/H$\gamma$=0.66\\
\hline
\end{tabular}
\label{vitrad}
\end{table*}
\section{Photometric Variability}
According to Balona (1992), this star displayed fading of 0.2 mag and periods
close to 1 day, but none of these periods could fit the data satisfactorily.
Thanks to the MACHO and OGLE databases, 2 strong bursts (Fig.~\ref{photvar})
could be observed with an amplitude of 0.4 mag on a time scale of about 3100
days. Between these 2 strong bursts, smaller ones, which do not seem to be
periodic, could also be observed. We searched for short-term variability and,
as in Balona (1992), we find periods close to 1 day which do not give a
satisfactory fit to the data. But irregular short- and long-term variabilities
may also be explained by the presence of a multiple object.
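For reference, a period search of the kind described above can be
carried out with a standard Lomb-Scargle periodogram. The sketch below
uses the astropy implementation on a placeholder light-curve file; it
is illustrative only and is not the analysis actually performed here.
\begin{verbatim}
import numpy as np
from astropy.timeseries import LombScargle

# t [days], mag: placeholders for the MACHO/OGLE light curve
t, mag = np.loadtxt("bal224_lightcurve.txt", unpack=True)
freq, power = LombScargle(t, mag).autopower(
    minimum_frequency=1.0 / 3500.0,  # bracket the ~3100 d bursts
    maximum_frequency=5.0)           # and the ~1 d candidate periods
best_period = 1.0 / freq[np.argmax(power)]
\end{verbatim}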
\begin{figure*}[!ht]
\centering
\includegraphics[width=5cm, height=10cm,angle=-90]{figure4vgimp.ps}
\caption{Light-curve of BAL224 from MACHO database.}
\label{photvar}
\end{figure*}
\section{On the nature of BAL224}
From VLT-FORS1 low resolution spectroscopic observations, Hummel et al. (1999)
suggest that the absence of emission in the HeI lines and the strong Balmer decrement
may indicate that this star has a shell of gas cooler than 5000~K. Kucinskas et
al. (2000), from their photometric study, found a strong mid-IR excess
compatible with a dust shell at a very low temperature of 360~K. This infrared
excess is compatible with B[e] and Herbig stars, but the derived temperature is not
compatible with B[e] stars. We confirm a strong Balmer decrement. No emission components
can be observed in the HeI lines and some lines of neutral elements such as [CrI] are
present, so we can conclude that a cool dust shell is present (Table~\ref{vitrad},
Figs~\ref{specL26}, \ref{vitesses}). The presence of FeII and [FeII] lines and their
FWHM lower than 100 km~s$^{-1}$~ are common points between B[e] and Herbig B[e] stars.
However, we find an EW(H$\alpha$) smaller than 1000 \AA, which does not correspond to a
B[e] star. The H$\alpha$ spectrum seen in Hummel et al. (2000) and in this study clearly
shows a strong asymmetric double peak which may be explained by an accretion disk
(Fig.~\ref{vitesses}). This type of disk is a main characteristic of Herbig objects. Moreover,
the short- and long-term irregular variabilities are characteristic of Herbig
objects and may be explained by an aggregate of stars. Properties of B[e]
supergiants, HAeBe and isolated HAeBe (or HB[e]) are compared with properties of
BAL224 in Table~\ref{nature}. From this comparison, \textbf{we propose BAL224 as an isolated Herbig
B[e] object.}
\begin{table*}[tbph]
\scriptsize{
\caption{Comparisons between properties of: a B[e] supergiant (Sg), a Herbig Be (HAeBe), an isolated Herbig Be or HB[e] and BAL224.}
\centering
\begin{tabular}{@{\ }c@{\ \ \ }c@{\ \ \ }c@{\ \ \ }c@{\ \ \ }c@{\ \ \ }c@{\ \ \ }c@{\ \ \ }c@{\ \ \ }c@{\ \ \ }c@{\ }}
\hline
\hline
Properties & B[e] Sg & HAeBe & HB[e] & BAL224\\
\hline
FeII and [FeII] lines in emission & Yes & & Yes & Yes (this study)\\
FWHM FeII, [FeII]$<$100km~s$^{-1}$ & Yes & & & Yes (this study)\\
EW H$\alpha$ $>$1000\AA & Yes & & & No (this study + Hummel et al. 1999)\\
Near or far IR excess & Yes & Yes & Yes & Yes (Sebo \& Wood 1994)\\
IR excess, T$_{envelope}$$>$1000K & Yes & & & No (Kucinskas et al. 2000)\\
Excretion disk & Yes & & & No (this study + Hummel et al. 1999)\\
In obscure region & & Yes & & No (Balona 1992)\\
A-type or earlier & & & & \\
+ emission lines & & Yes & & Yes (this study + Hummel et al. 1999)\\
Star illuminates nebulosity & & & & \\
in immediate vicinity & & Yes & Yes & ? \\
Accretion disk & & Yes & Yes & Yes (this study + Hummel et al. 1999)\\
Irregular variations & & Yes & Yes & Yes (this study + Balona 1992)\\
Isolated object & & & Yes & Yes (Balona 1992)\\
Center of small aggregates & & & & \\
of low-mass stars & & & Yes & ? \\
\hline
\end{tabular}
\label{nature}
}
\end{table*}
\section{Introduction}
According to the most likely theory for Jovian planet formation,
Jupiter formed in a three stage process, lasting about 6~Myr. In the
classic model \citep{PP4_WGL}, a 5-15$M_\oplus$ rock/ice core forms at
a distance of $\sim$5~AU from the central star \citep{Boss95} over a
period of about 1/2 million years. It then begins to grow a
hydrostatic, gaseous envelope which slowly cools and contracts while
continuing to accrete both gas and solids until the total
core$+$envelope mass reach about 30 $M_\oplus$, after another $\sim6$
Myr have passed. In the final stage, the envelope begins to collapse
and a period of relatively rapid gas accretion follows, ending with
the planet in its final morphology. As stated, the timescale presents
a considerable problem for the model because circumstellar disks are
observed \citep{HLL01} to survive for only $\sim4$~Myr, though a large
dispersion in age remains and a few survive until much later. More
recent models \citep{IWI03,AMB04,HBL04} cut the timescale to
$\sim1$~Myr by invoking additional physical processes such as
migration or opacity modifications in the material in the forming
envelope.
A critical assumption in all versions of the core accretion model is
that the gaseous envelope is hydrostatic. We present a study designed
to investigate whether this assumption is in fact valid, and to
investigate the existence and character of the activity in the flow if
it is not. Our motivation for this study is to begin an exploration of
the possibility that the core accretion timescale may be further
shortened by the dynamical activity without the costs associated with
the other recent models. After finding that the flow is indeed quite
active, we propose that one consequence of the shocks resulting from
the activity is the production of chondrules and other annealed
silicates in the solar nebula.
\section{Initial Conditions and Physical Model}
We simulate the evolution of the gas flow in a 3 dimensional (3D)
Cartesian cutout region of a circumstellar disk in the neighborhood of
an embedded Jovian planet core. We derive the initial conditions in a
two stage process. First, we define a set of global conditions for the
disk as a whole, then we extract a small volume for which we simulate
the evolution.
The global conditions are similar to those described in \citet{JovI}
and assume that the disk orbits a 1$M_\odot$ star modeled as a point
mass. The disk extends from 0.5 to 20 AU and is described by surface
density and temperature power laws, each proportional to $r^{-1}$. We
assume that at an orbital radius of $a_{\rm pl}=5.2$~AU the surface
density and temperature are $\Sigma_{\rm pl}=500$~gm~cm$^{-2}$ and
$T_{\rm pl}=200$~K. We define the orbital velocities such that
centrifugal accelerations are exactly balanced by the combined
pressure and gravitational accelerations from the star and the disk.
Radial and vertical velocities are set to zero. With these dimensions,
the total implied disk mass of our underlying global model is
$M_D\approx.035M_\odot$. At the core's orbit radius, the implied
isothermal scale height is $H=c_s/\Omega\approx0.40$~AU, where $c_s$
and $\Omega$ are the local sound speed and rotation frequency
respectively. The disk is stable against the growth of gravitational
instabilities as quantified by the well known Toomre $Q$ parameter,
which takes a minimum value of $Q\approx5$ near the outer edge of the
disk. In the region near the core's orbit radius, its value is $Q>15$.
To simplify the local initial condition, we neglect the $z$ component
of stellar and global disk self gravity, but include the full
contribution of the core's gravity and local disk self gravity,
defined as the component of disk gravity originating from matter
inside our computational volume. This simplification allows us to
neglect the disk's vertical structure. Since we expect that the most
interesting part of the flow will be confined to the volume in and
near the core's Hill sphere, and both the grid dimensions and the disk
scale height are significantly larger, neglecting the vertical
stratification will have only limited impact on our results.
The origin of our coordinate grid is centered on a 10$M_\oplus$ core,
orbiting the star at $a_{\rm pl}=5.2$~AU. We use a modified `shearing
box' approximation to translate the cylindrical equations of motion
into the rotating Cartesian frame. Our modification includes
non-linear terms neglected in the standard form of
\citet{GLB-shearsheet}, allowing a closer correspondence between the
global and local conditions. Our modification allows the shear in the
$x$ direction, corresponding to the radial coordinate, first, to
include a non-zero offset of the corotation radius from the core's
position and, second, to vary non-linearly with $x$,
as occurs when pressure contributes to the disk's rotation curve.
We extract the local initial condition from the global condition by
mapping the radial and azimuth coordinates of the two dimensional
global initial condition directly onto the $x$ and $y$ coordinates of
the local Cartesian grid, centered on the core, using the mapping:
$x= r - a_{\rm pl}$ and $y = r\phi$. Quantities in the $z$ direction
are obtained by duplicating the midplane quantities at each altitude.
The $x$ and $z$ velocities are defined to be zero. We obtain the $y$
velocity by subtracting off the orbital motion of the core from the
azimuth velocity at each radius and mapping the remainder into our
Cartesian grid at the appropriate $x$ position.
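The mapping from the global cylindrical condition to the local grid
can be written compactly. In the sketch below the residual azimuthal
velocity is taken to be $v_\phi(r) - \Omega_{\rm pl}\, r$; this is our
reading of ``subtracting off the orbital motion of the core'' and is
an assumption of the illustration.
\begin{verbatim}
def global_to_local(r, phi, v_phi, a_pl, Omega_pl):
    """Map global midplane coordinates and velocities onto the
    core-centred Cartesian box: x = r - a_pl, y = r*phi,
    with vx = vz = 0 in the initial condition."""
    x  = r - a_pl
    y  = r * phi
    vy = v_phi - Omega_pl * r   # assumed rotating-frame residual
    return x, y, vy
\end{verbatim}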
Although we avoid complications associated with modeling the disk's
vertical structure because we neglect the $z$ component of stellar and
disk gravity, we still require a correspondence between the globally
defined disk surface density and the locally defined volume density
used in the actual calculations. To make the connection, we use the
conversion $\rho=\Sigma/H$, where $\rho$ and $\Sigma$ refer to the
volume and surface densities respectively, and the isothermal scale
height $H=c_s/\Omega$. This conversion introduces a small physical
inconsistency, since of course our physical model omits the physics
responsible for producing vertical structure in the first place. The
inconsistency means that the volume density will contain a small
systematic error in its value, however since the exact value of the
volume density in the Jovian planet environment is not well known, we
believe this inconsistency will not be important for the results of
our simulations.
We use an ideal gas equation of state with $\gamma=1.42$ and include
heating due to compression and shocks, but no radiative heating or
cooling. This value of $\gamma$ is chosen to be representative of
values found in the background, solar composition circumstellar disk,
for which temperatures imply that the rotational modes of hydrogen
will be active. We expect the gas to be optically thick in the region
of interest, so that thermal energy will remain with the fluid rather
than being radiated away. The core is modeled as a point mass, onto
which no gas may accrete. The conditions at the boundaries are fixed
to the values of the global initial condition, resulting in a steady
flow into and out of the grid that mimics the near-Keplerian flow of
the underlying circumstellar disk.
On a global scale, the disk will respond only weakly to the influence
of a 10$M_\oplus$ core and will never form a deep gap
\citep{DHK02,JovI}, so a time varying boundary condition is not
required. A more serious concern is whether the flow within the
simulation volume becomes sufficiently perturbed away from that inside
the boundaries, to cause an unphysical back reaction to develop. We
have monitored the flow for signs of such effects and have found that
for the simulation presented here, perturbations have become well
enough mixed with the background flow so that quantities near the
boundaries are not altered substantially from their initial values. We
believe effects from numerical perturbations of this sort will have
minimal impact on the results. We caution that we have not found the
same statement to be true throughout our parameter space, e.g., at
very low background temperatures.
We use a hydrodynamic code \citep{Ruf92} based on the PPM algorithm
\citep{ColWood84}, which has been adapted to use a set of nested grids
to evolve the flow at very high spatial resolution. Both smooth and
shocked flows are modeled with high accuracy because PPM solves a
Riemann problem at each zone interface to produce the fluxes at each
timestep. Shocks and shock heating are therefore included as an
integral part of the method. No additional artificial viscous heating
is required or included. Each successive grid is overlaid on top of
the central portion of its parent grid, but with one half of its
linear dimensions. Each grid in the nest contains an identical number
of zones, so that the effective linear spatial resolution is doubled
in the overlay region. In the model presented here, we use a nest of
six grids. The simulation volume extends 4 Hill radii ($R_H=a_{\rm
pl}(M_{\rm pl}/3M_\odot)^{1/3}$, corresponding to about $1.1H$) in
each direction, defining a volume of (0.897 AU)$^3$. Regions both
above and below the disk midplane are included in the simulation
volume. The finest grid covers $\pm1/8R_H$ in each direction with a
spacing of $\sim6.5\times 10^9$~cm per zone, corresponding to about
1.3 times the diameter of Neptune.
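The grid geometry is easily reproduced; the sketch below assumes
$64^3$ zones per grid, the value implied by the stated finest-grid
spacing.
\begin{verbatim}
Msun, Mearth, AU = 1.989e33, 5.974e27, 1.496e13   # cgs
a_pl = 5.2 * AU
R_H  = a_pl * (10 * Mearth / (3 * Msun)) ** (1.0 / 3.0)  # ~0.112 AU
box  = 8 * R_H                         # +/-4 R_H, ~0.897 AU across

nzones = 64                            # inferred zones per dimension
for level in range(6):                 # six nested grids
    extent = box / 2 ** level          # each half as large as parent
    dx = extent / nzones
    print(level, extent / AU, dx)      # finest: dx ~ 6.5e9 cm
\end{verbatim}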
\vspace{-5mm}
\section{Results of our simulations}
We have performed a large set of simulations covering a range of both
initial conditions and physical models and a paper describing each of
these studies is in preparation. For our purposes, it is sufficient to
summarize the results by examining one model in detail, whose initial
conditions were described in the last section, and which was run for a
total of 100~yr of simulation time. We consider it to be the most
realistic model of those we studied in the sense that it includes the
most complete inventory of physical processes.
\begin{figure}
\psfig{file=nelson-ruffert_f1top.eps,height=95mm,width=120mm,angle=-90}
\psfig{file=nelson-ruffert_f1bot.eps,height=95mm,width=120mm,angle=-90,rheight=91mm}
\caption{\label{fig:cutout-mid-dens}
The volume density in a 2D slice taken through the disk midplane for
the full simulation volume (top), and a blowup of the region within
$\pm1/2R_H$ of the core (bottom). Velocity vectors are shown projected
onto the plane on the coarsest grid in the top panel, and on the
fourth nested grid in the bottom panel. The white circles define the
radius of the accretion sphere $R_A=GM_{\rm pl}/c^2_s$ (small circle)
and the Hill radius (large circle). The grey scale is logarithmic and
extends from $\sim10^{-10}$ to $\sim10^{-8}$ gm~cm$^{-3}$. }
\end{figure}
\begin{figure}
\begin{center}
\psfig{file=nelson-ruffert_f2top.eps,height=100mm,width=120mm,angle=-90}
\psfig{file=nelson-ruffert_f2bot.eps,height=100mm,width=120mm,angle=-90,rheight=90mm}
\end{center}
\caption{\label{fig:cutout-mid-temp}
As in Figure \ref{fig:cutout-mid-dens}, but showing temperature. The
color scales are logarithmic and extend from 180 to 6500 K.
Temperatures as high as 3-5000~K are common very close to the core,
with temperatures decreasing rapidly to the background $\sim$ 200~K
value at increasing distance.}
\end{figure}
In Figures \ref{fig:cutout-mid-dens} and \ref{fig:cutout-mid-temp}, we
show 2D slices of the gas density and temperature, taken through the
disk midplane at a time 74~yr after the beginning of the simulation.
In both Figures, the structures are highly inhomogeneous and become
progressively more so closer to the core. Densities both above and
below that of the background flow develop due to shocks that produce
hot `bubbles', which then expand into the background flow. One such
bubble is particularly visible in the plots of the temperature
distribution, emerging to the lower right. Such structures are common
over the entire the duration of the simulation and emerge in all
directions, depending on details of the flow at each time. Activity
persists for the entire simulation, and for as long as we have
simulated the evolution without significant decay or growth. Lower
resolution models that were run for much longer ($\sim1600$~yr) also
display continuing activity. However, since we neglect cooling, we
cannot expect the flow to become much less active over time.
In conflict with the expectation from orbital mechanics that the flow
of material approaching the Hill volume will turn around on a
`horseshoe' orbit, matter approaching the outer portion of the Hill
volume is relatively unaffected, often passing completely through its
outer extent with only a small deflection. In contrast with this quiet
flow further away, material is very strongly perturbed on the scale of
the accretion radius, where large amplitude space and time varying
activity develops. This too conflicts with the orbital mechanics
picture, in which matter inside the Hill volume simply orbits the
core. Material can enter the Hill volume from the background flow and
shocked, high entropy material can escape and rejoin the background
flow. Changes in the flow pattern occur on timescales ranging from
hours to years, with a typical encounter time inside the accretion
radius of less than a month.
\vspace{-5mm}
\section{The new scenario for chondrule formation}\label{sec:chond}
As readers of this proceedings volume will be aware, the theory of
chondrule formation suffers from no lack of data, but rather from
insufficient understanding of what physical processes are present in
the solar nebula, where they are present and whether they produce
conditions appropriate for chondrule formation. Briefly summarized
from \citet{PP4_Jones}, we note for our purposes that chondrules
underwent both a very rapid heating event to $\sim$2000~K, followed
quickly by a rapid cooling event of magnitude 50-1000~K/hr. Among a
veritable zoo of models purporting to produce such conditions, passage
of solid material through nebular shocks is currently favored as among
the most likely (see e.g., Desch et~al., in these proceedings).
Among its drawbacks are, first, that shocks that have the right
density, temperature and velocity characteristics are hard to form
and, second, that it is difficult to arrange for these shocks to exist
for a long enough time to produce enough bodies to match the current
observations.
The parameter space for which chondrule production may occur in shocks
\citep[][Table 5]{DC02} is bounded by preshock densities within a
factor of a few of $10^{-9}$~gm~cm$^{-3}$, temperatures near 300~K and
shock speeds near 6--7~km~s$^{-1}$; that work also found that enhancements in the
particle density are important for formation models. Concurrent work
of \citet{CH02,iida01} comes to similar conclusions. Figures
\ref{fig:cutout-mid-dens} and \ref{fig:cutout-mid-temp} show that
appropriate background conditions exist in our simulations, and we
propose that dynamical activity in the Jovian envelope could provide a
source for both shocks and reversible compressive heating that remedy
the shortcomings noted above.
\begin{figure}
\psfig{file=nelson-ruffert_f3.eps,height=8.0in,width=6.0in,rheight=7.7in}
\caption{\label{fig:thermo-traj}
The temperature (left panels) and density (right panels) encountered
by three test particles as they are advected through a snapshot of the
simulation volume, each as functions of a fictitious time coordinate.
The zero point for time has been arbitrarily shifted in each case so
that the peak temperature occurs at $\sim10$~days.}
\end{figure}
Although the basic temperature and density conditions can be seen
in the Figures, ascertaining whether short duration heating and
cooling events are also present requires additional analysis. We
considered and discarded the option of including test particles
that could be passively advected with the local flow, because of
the significant added computational and storage cost they would
require. Instead, we rely on the similar but not identical solution
of `advecting' test particles through a snapshot of the conditions
at a specific time.
To that end, we have performed a streamline analysis of the
trajectories of an ensemble of test particles injected into the
simulation restart dump for the same time as that shown for Figures
\ref{fig:cutout-mid-dens} and \ref{fig:cutout-mid-temp}. Particles are
placed at a set of locations at the edge of the grid and allowed to
advect through the simulation volume using the local fluid velocity,
linearly interpolated between the grid zones adjacent to the particle
at each time. Each particle is given a timestep based on that used to advance the
gas at that location, so that it advances forwards through a
fictitious time coordinate, passing through the volume at the rate of
matter in the local flow. Similar linear interpolations of density and
internal energy are used to derive the temperature from the equation
of state.
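For readers interested in the mechanics of this procedure, the
following minimal sketch (ours, not the production analysis code; it
assumes a uniform Cartesian grid and uses SciPy's grid interpolator,
so the helper names and the CFL-like timestep factor are our own
choices) illustrates the advection of a single particle:
\begin{verbatim}
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def advect_streamline(x0, axes, vel, dens, eint, n_steps=100000, cfl=0.5):
    # x0   : starting position at the edge of the grid (3-vector)
    # axes : (x, y, z) 1d arrays of zone-center coordinates
    # vel  : (nx, ny, nz, 3) fluid velocity; dens, eint: (nx, ny, nz)
    v_of   = RegularGridInterpolator(axes, vel)
    rho_of = RegularGridInterpolator(axes, dens)
    e_of   = RegularGridInterpolator(axes, eint)
    h = min(np.diff(a).min() for a in axes)    # smallest zone width
    x, t, out = np.asarray(x0, float), 0.0, []
    for _ in range(n_steps):
        try:                                   # sample the frozen flow
            v = v_of(x)[0]
            out.append((t, float(rho_of(x)[0]), float(e_of(x)[0])))
        except ValueError:                     # particle left the grid
            break
        # timestep tied to the local zone-crossing time, as for the gas:
        dt = cfl * h / max(float(np.linalg.norm(v)), 1e-30)
        x = x + v * dt                         # move with the local flow
        t += dt                                # fictitious time coordinate
    return np.array(out)   # temperature follows from (rho, e) via the EOS
\end{verbatim}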
In Figure \ref{fig:thermo-traj}, we show temperatures and densities
for three test particles for which a passage through the environment
of the core has occurred. The particles shown were chosen to illustrate
the range of peak temperatures and densities that may be
encountered by slightly different trajectories through the envelope.
Each encountered a short-duration heating and cooling event as it
passed through the dynamically active region close to the core, but
the magnitudes and durations varied in each case. In the top example
(test particle 1), the peak temperature rose to well over 3000~K and
the density to nearly $10^{-8}$~gm~cm$^{-3}$ as the particle's trajectory
passed through the innermost regions of the envelope. The conditions
encountered by particle 2 were much more moderate, with peak
temperature and density values of $\sim1800$~K and
$4\times10^{-9}$~gm~cm$^{-3}$ respectively. Particle 3's trajectory took
it only through the outer portion of the envelope, so that it
encountered only much lower temperatures and densities, although in
this case for a much longer period of time than either of the other
two cases shown. All three particles encountered several days of
cooler processing near 500-800~K, and a visual scan of many other
similar events shows that such additional annealing is not uncommon.
The temperature peaks for test particles 1 and 2 have widths of $\la
1$ day, with both a very rapid rise and fall. Close examination of
their trajectories reveals that the widths reflect essentially the
crossing time for a single grid zone in the simulation. Therefore,
although already quite narrow, we believe that they are actually
overestimates of the true widths that would be obtained from the
models as realized at still higher resolution. The dynamical activity
in the envelope, coupled with the temporally very narrow
temperature/density peaks, especially in cases similar to that of
particle 2, offer evidence that chondrule production could occur
in the environment of Jovian planet formation.
\vspace{-5mm}
\section{Concluding comments, questions and skeptical remarks}
The scenario we present offers a number of attractive advantages over
other models for shock formation in the solar nebula. First, it
naturally provides a mechanism for producing very short duration
heating and cooling events with thermodynamic characteristics similar
to those expected to produce chondrules. Unlike models using global
gravitational instabilities in the circumstellar disk, it does not
require a massive disk for the activity to exist, and in particular,
for a massive disk to continue to exist for the long period of time
required to produce chondrules in large quantities. Production in this
scenario will endure for a significant fraction of the formation
timescale for Jovian planets (itself a significant fraction of the
disk lifetime), resulting both in a large yield of objects and
allowing both processing and reprocessing events to occur. Also,
because there were a number of similar proto-Jovian objects in the
solar system, processing will occur in many locations. If correct, our
results mean that the standard core accretion model for Jovian planet
formation will require significant revision, and will imply both
a link between the timescales for chondrule formation and planet
formation, and that chondrules represent a physically examinable link
to the processes present during the formation of Jovian planets.
There are still many unanswered questions contained in this scenario,
however. Before any detailed analysis of the conditions will be of
real use for either the theory of chondrule formation or planet
formation, we must perform simulations that include both radiative
transport and a non-ideal equation of state for the gas. Without them,
the densities and temperatures obtained in our simulations will
deviate significantly from those of the real systems they are
intended to model. Moreover, including them means the dynamical
properties of the system will change, perhaps eliminating the shocks
altogether. Preliminary indications with locally isothermal and
locally isentropic equations of state suggest that at least some
activity will remain, so we remain hopeful.
We have simulated only a 100~yr segment of the Jovian planet formation
history, during a time when the envelope did not contain much mass. We
cannot be certain that the activity will remain when the envelope
becomes more massive.
If we find that shocks are produced in more physically inclusive
models, it will be interesting to perform a significantly more
detailed analysis of the conditions in those shocks, including their
velocities relative to the fluid flow. Will such analysis show that
the shocks fit into the required density/temperature/velocity
parameter space? One concern already apparent is that the flow
velocities of material flowing through the shocks (1-2 km~s$^{-1}$, as
estimated from the directly available fluid flow velocities
themselves) are uncomfortably low compared to those quoted by
\cite{DC02} and \cite{iida01}. It seems unlikely that the
velocities will be increased as dramatically as required by any of the
improvements to the models we might make.
Although the results from our streamline analysis are promising, they
are no substitute for an investigation of the trajectories of specific
packets of material through the system. A detailed study of this issue
will be important on several levels. First, it is not clear that a
particle's thermodynamic trajectory will be the same when it is
advected through an actual time dependent flow, as opposed to the
fictitious advection through a fixed flow that we have performed. It
will also be important to understand what fraction of material that
approaches the core actually encounters conditions appropriate for
chondrule formation during its passage, in comparison to material that
instead encounters regions that are inappropriate. From a slightly
broader perspective, the same question becomes what fraction of the
total budget of solid material in the solar nebula undergoes such
processing? Second, after ejection from the envelope, it will be
important to understand how the processed materials get from where
they form (near 5 AU) to their final locations, in meteorites
throughout the inner solar system.
Finally, in our discussion we have focused solely on the conditions
required for the production of chondrules. On the other hand, they are
not the only meteoritic material that has undergone heating and
cooling events. \citet{HD02} discuss one such class of material,
composed of annealed silicates found in comets, for which the required
temperatures and densities are much lower. Will our scenario be able
to produce material of this sort as well?
In future work, we plan to implement successively more advanced models
to simulate the Jovian planet formation process. One important aspect
of this project will be to address questions important for the
formation of chondrules and other annealed silicates in much more
detail than our current models allow.
\acknowledgements{
AFN gratefully acknowledges support from NSF grant NSF/AST-0407070,
and from his UKAFF Fellowship. The computations reported here were
performed using the UK Astrophysical Fluids Facility (UKAFF). AFN
gratefully acknowledges conversations with A. Boley and R. Durisen
during the conference that led to the streamline discussions
surrounding Figure \ref{fig:thermo-traj}. }
\vspace{-5mm}
\section{Introduction}
\label{secintro}
The functioning of protein ion channels in biological lipid membranes is
a major frontier of biophysics~\cite{Hille,Doyle}. An ion channel
can be inserted in an artificial membrane in vitro and studied
with physical methods. For example, one can measure a
current--voltage response of a single water--filled channel
connecting two water reservoirs (Fig.~\ref{figshortchannel}) as
a function of the concentration of various salts in the bulk. It is
well known~\cite{Parsegian,Jordan,Teber,Kamenev} that a neutral
channel creates a high electrostatic self--energy barrier for ion
transport. The reason for this phenomena lies in the high ratio of
the dielectric constants of water, $\kappa_1 \simeq 80$, and the
surrounding lipids, $\kappa_2\simeq 2$. For narrow channels the
barrier significantly exceeds $k_BT$ and thus constitutes a
serious impediment for ion transport across the membrane. It is a
fascinating problem to understand mechanisms ``employed'' by
nature to overcome the barrier.
At a large concentration of salt in the surrounding water the
barrier can be suppressed by screening. However, for biological
salt concentrations and narrow channels the screening is too weak
for that \cite{Kamenev}. As a result, at the ambient salt
concentrations even the screened barrier is usually well above
$k_BT$. What seems to be the ``mechanism of choice'' in narrow
protein channels is ``{\em doping}''. Namely, there is a number
of amino-acids containing charged radicals. Amino-acids with, say,
negative charge are placed along the inner walls of the channels.
The static charged radicals are neutralized by the mobile cations
coming from the water solution. This provides a necessary high
concentration of mobile ions within the channel to suppress the
barrier~\cite{Zhang}.
Similar physics is at work in some artificial devices. For
example, water-filled nanopores are studied in silicon or silicon
oxide films~\cite{Li}. The dielectric constant of silicon oxide is
close to $4\ll 80$, so very narrow and long artificial channels
may have a large self-energy barrier. Ion transport through such a
channel can be facilitated by naturally appearing or intentionally
introduced wall charges, ``dopants''. Their concentration may be
tuned by the pH of the solution.
The aim of this paper is to point out that the doping may lead to
another remarkable phenomenon: ion exchange phase transitions.
An example of such a transition is provided by a negatively doped
channel in the solution of monovalent and divalent cations. At
small concentration of divalent cations, every dopant is
neutralized by a single monovalent cation. If the concentration of
divalent salt increases above a certain critical concentration the
monovalent cations leave the channel, while divalent ones enter
to preserve the charge neutrality. Since neutralization with the
divalent ions requires only half as many ions, it may be carried
out with a smaller entropy loss than the monovalent neutralization.
The specific feature of the 1d geometry is that this competition leads to
a first order phase transition rather than a crossover (as is
the case for neutralizing 2d charged surfaces in the solution). We
show that the doped channels exhibit rich phase diagrams in the
space of salt and dopant concentrations.
\begin{figure}[ht]
\begin{center}
\includegraphics[height=0.24\textheight]{channel9.eps}
\end{center}
\caption{ Electric field of a cation in a cylindrical channel
with the large dielectric constant $\kappa_1\gg \kappa_2$. $L$ is
the channel length, $a$ is its radius. The self--energy barrier is
shown as a function of the $x$ coordinate. }
\label{figshortchannel}
\end{figure}
Let us recall the origin of the electrostatic self--energy
barrier. Consider a single cation placed in the middle of the
channel with the length $L$ and the radius $a$,
Fig.~\ref{figshortchannel}. If the ratio of the dielectric
constants is large $\kappa_1/\kappa_2\gg 1$, the electric
displacement $D$ is confined within the channel. As a result, the
electric field lines of a charge are forced to propagate within
the channel until its mouth. According to the Gauss theorem the
electric field at a distance $x > a$ from the cation is uniform
and is given by $E_0 = 2e/(\kappa_1 a^{2})$. The energy of such a
field in the volume of the channel is:
\begin{equation}
U_L(0) = {\kappa_1 E_{0}^{2}\pi a^{2}L \over 8\pi} = {e^{2}L \over
2\kappa_1 a^{\, 2}}={eE_{0}L \over 4}\, , \label{short}
\end{equation}
where the zero argument is added to indicate that there are no
other charges in the channel. The bare barrier, $U_{L}(0)$, is
proportional to $L$ and (for a narrow channel) can be much larger
than $k_BT$, making the channel resistance exponentially large.
\begin{figure}[ht]
\begin{center}
\includegraphics[height=0.065\textheight]{pair.eps}
\end{center}
\caption{A cation (in thin circle) bound to a negative wall charge
(in thick circle). When the cation moves away from the host the
energy grows linearly with the separation $|x|$. }\label{figpair}
\end{figure}
If a dopant with the unit negative charge is attached to the
inner wall of the channel, it attracts a mobile cation from the
salt solution (Fig.~\ref{figpair}). There is a confining
interaction potential $\Phi(x)= E_0|x|$ between them. Condition
$e\Phi(x_T)=k_BT$ defines the characteristic thermal length of
such classical ``atom'', $x_T = k_BT/eE_0 = a^{2}/2l_{B}$, where
$l_B \equiv e^{2}/(\kappa_{1}k_{B}T)$ is the Bjerrum length (for
water at room temperature $l_B=0.7\,$nm). This ``atom'' is
similar to an acceptor in a semiconductor (the classical length
$x_T$ plays the role of the effective acceptor Bohr radius). It is
convenient to measure the one--dimensional concentrations of both
mobile salt and dopants in units of $1/x_T$. In such units the
small concentration corresponds to non--overlapping neutral pairs
(``atoms''), while the large one corresponds to the dense plasma
of mobile and immobile charges. These phases are similar to
lightly and heavily doped $p$-type semiconductors, respectively.
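To get a feeling for the scales involved (the numbers below are
illustrative choices of our own), note that Eq.~(\ref{short}) and the
definition of $x_T$ combine into
\[
U_L(0)=\frac{k_BT}{4}\,\frac{L}{x_T}\, .
\]
For a channel of radius $a=0.5$~nm in water at room temperature one
finds $x_T=a^2/2l_B\approx 0.18$~nm, so a length $L=5$~nm gives
$U_L(0)\approx 7\,k_BT$; a physiological salt concentration
$c\approx 0.1$~M ($c\approx 0.06$~nm$^{-3}$) then corresponds to
$\pi a^2 x_T c\approx 10^{-2}\ll 1$ ions per thermal length, i.e.\ to
the dilute limit considered below.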
It is important to notice that in both limits {\em all} the
charges interact with each other through the 1d Coulomb potential
\begin{equation}
\Phi(x_i-x_j)=\sigma_i\sigma_j E_0|x_i-x_j|, \label{longrangepot}
\end{equation}
where $x_i$ and $\sigma_i=\pm 1$ are coordinates and charges of
both dissociated ions and dopants. Another way to formulate the
same statement is to notice that the electric field experiences
the jump of $2E_0\sigma_i$ at the location of the charge
$\sigma_i$. Because all the charges inside the channel are
integers in unit of $e$, the electric field is {\em conserved}
modulo $2E_0$. It is thus convenient to define the {\em order
parameter} $q\equiv \mbox{frac}[E(x)/2E_0]$, which is the same at
every point along the channel. The physical meaning of $q\in
[0,1]$ is the image charge induced in the bulk solution to
terminate the electric field lines leaving the channel. One may
notice that the adiabatic transport of a unit charge across the
channel is always associated with $q$ spanning the interval
$[0,1]$. Indeed, a charge at a distance $x$ from one end of the
channel produces the fields $2E_0x/L$ and $2E_0(x/L-1)$ to the
right and left of $x$, correspondingly. Therefore $q=x/L$
continuously spans the interval $[0,1]$ as the charge moves from
$x=0$ to $x=L$.
To calculate the transport barrier (as well as the thermodynamics)
of the channel one needs to know the free energy $F_q$ of the
channel as a function of the order parameter. The equilibrium
(referred below as the {\em ground state}) free energy corresponds
to the minimum of this function $F_{\min}$. Transport of charge
and thus varying $q$ within $[0,1]$ interval is associated with
passing through the maximum of the $F_q$ function $F_{\max}$.
Throughout this paper we shall refer to such a maxima as the {\em
saddle point} state. The transport barrier is given by the
difference between the saddle point and the ground state free
energies: $U_L=F_{\max} - F_{\min}$. The equilibrium
concentrations of ions inside the channel are given by the
derivatives of $F_{\min}$ with respect to the corresponding
chemical potentials related to concentrations in the bulk
solution. We show below that the calculation of the partition
function of the channel may be mapped onto a fictitious quantum
mechanical problem with a periodic potential. The function
$F_q$ plays the role of the lowest Bloch band, where $q$ is mapped
onto the quasi-momentum. As a result, all information about
the thermodynamic as well as transport properties of the channel
may be obtained from the analytical or numerical diagonalization
of the proper ``quantum'' operator.
For an infinitely long channel with the long range interaction
potential Eq.~(\ref{longrangepot}) we arrive at true phase
transitions, in spite of the one--dimensional nature of the
problem. However, at finite ratio of dielectric constants electric
field lines exit from the channel at the distance $\xi \simeq
a(\kappa_1/\kappa_2)^{1/2}\approx 6.8\,a$. As a result, the
potential Eq.~(\ref{longrangepot}) is truncated and phase
transitions are smeared by fluctuations even in the infinite
channel. In practice all the channels have finite length which
leads to an additional smearing. We shall discuss the sharpness of
such smeared transitions below.
\begin{figure}[ht]
\begin{center}
\includegraphics[height=0.07\textheight]{dopea.eps} \hfill
\includegraphics[height=0.056\textheight]{dopeb.eps}
\end{center}
\caption{The ground state and the transport saddle point of the
channel with negative monovalent dopants. Only the right part of
the channel is shown. (a) The ground state ($q=0$): all dopants
(thick circles) bound mobile cations (thin circles). (b) The
transport saddle point ($q=1/2$): cations are free in sections
between two adjacent dopants.}\label{figacceptor}
\end{figure}
The outline of this paper is as follows: In section \ref{sec2} we
briefly review results for the simplest model of periodically
placed negative charges in the monovalent salt solution (Fig.
\ref{figacceptor}), published earlier in the short
communication~\cite{Zhang}. In this example the barrier is
gradually reduced with the increasing doping. Sections \ref{sec3}
-- \ref{sec5} are devoted to several modifications of the model
which, contrary to expectations raised by the results of section
\ref{sec2}, lead to ion exchange phase transitions. In section
\ref{sec3} we consider a channel with the alternating positive and
negative dopants in monovalent salt solution and study the phase
transition at which mobile ions leave the channel. In section
\ref{sec4} and \ref{sec5} we return to equidistant negative
dopants, but consider the role of divalent cations in the bulk
solution. In particular, in section \ref{sec4}, we assume that all
cations in the bulk solution are divalent and show that this does
{\em not} lead to a fourfold increase of the self--energy barrier.
The reason is that the divalent ions are effectively
fractionalized in two monovalent excitations. In section
\ref{sec5} we deal with a mixture of monovalent and divalent
cations and study their exchange phase transition. We discuss
possible implications of this transition for understanding the
Ca-vs.-Na selectivity of biological Ca channels. For all the
examples we present transport barrier, latent ion concentration
and phase diagram along with the simple estimates, explaining the
observed phenomenology. The details of the mapping onto the
effective quantum mechanics as well as of the ensuing numerical
scheme are delegated to sections \ref{secanalytical} and
\ref{secnumerical}. Results of sections \ref{sec3} -- \ref{sec5}
are valid only for very long channels and true 1d Coulomb
potential. In section \ref{secxi} we discuss the effects of
finite channel length and electric field leakage from the channel
on smearing of the phase transitions. In section \ref{Donnan} we
consider boundary effects at the channel ends leading to an
additional contact (Donnan) potential. We conclude in section
\ref{secconclusion} with a brief discussion of possible
nano-engineering applications of the presented models.
\section{Negatively doped channel in a monovalent solution }
\label{sec2}
As the simplest example of a doped channel we consider a channel
with negative unit--charge dopants periodically attached to the
inner walls at distance $x_T/\gamma$ from each other (Fig.
\ref{figacceptor}). Here $\gamma$ is the dimensionless
one--dimensional concentration of dopants. At both ends (mouths)
the channel is in equilibrium with a monovalent salt solution with
the bulk concentration $c$. It is convenient to introduce the
dimensionless monovalent salt concentration as $\alpha_1\equiv c
\pi a^2 x_T$. We shall restrict ourselves to the small salt
concentration (or narrow channels) such that $\alpha_1\ll 1$. In
this case the transport barrier of the undoped ($\gamma=0$)
channel is given by Eq.~(\ref{short}) (save for the small
screening reduction which scales as $1-4\alpha_1$,
[\onlinecite{Kamenev}]).
The calculations, described in details in sections
\ref{secanalytical} and \ref{secnumerical}, lead to the barrier
plotted in Fig.~\ref{figfgamma} as a function of the dopant
concentration $\gamma$. The barrier decreases sharply as $\gamma$
increases. For example, a very modest concentration of dopants
$\gamma=0.2$ is enough to suppress the barrier more than five
times (typically bringing it below $k_B T$). There are {\em no}
phase transitions in this system in the entire phase space of
concentrations $\gamma$ and $\alpha_1$.
\begin{figure}[ht]
\begin{center}
\includegraphics[height=0.16\textheight]{barrier9.eps}
\end{center}
\caption{The function $U_L(\gamma)/U_L(0)$ for $\alpha_1=10^{-5}$.
Its $\gamma \ll 1$ asymptotic, Eq.~(\ref{barriereg}), is shown by
the dotted line. } \label{figfgamma}
\end{figure}
For the small dopant concentration, $\gamma\ll 1$, the result may
be understood with simple reasoning. The ground state of the
channel corresponds to all negative dopants being locally
neutralized by the mobile cations from the solution
(Fig.~\ref{figacceptor} a). As a result, there is no electric
field within the channel and thus in the ground state $q=0$. The
corresponding free energy has only entropic component (the absence
of the electric fields implies no energy cost) which is given by:
\begin{equation}\label{groundeg}
F_0 = -\gamma L k_B T \ln (2\alpha_1)= U_{L}(0) \cdot 4\gamma \ln
[1/(2\alpha_1)]\, .
\end{equation}
Indeed, bringing one cation from the bulk solution into the
channel to compensate a dopant charge leads to the entropy
reduction $S_0=k_B \ln (\pi a^2 2x_T c)=k_B\ln(2\alpha_1)$. Here
$\pi a^2 2x_T$ is the allowed volume of the cation's thermal
motion within the channel, while $c^{-1}$ is the accessible volume
in the bulk.
The maximum of the free energy is associated with the state with
$q=1/2$, see Fig.~\ref{figacceptor} b. It can be viewed as a
result of putting a vacancy in the middle of the channel. The
latter creates the electric field $\pm E_{0}$ which orients the
dipole moments of all the ``atoms''. In other words, it orders all
the charges in an alternating sequence of positive and negative
ones. This unbinds the cations from the dopants and makes them
free to move between neighboring dopants (Fig.~\ref{figacceptor}
b). Indeed, upon such rearrangement the electric field is still
$E=\pm E_0$ everywhere in the channel, according to the Gauss
theorem. Therefore, the energy of $q=1/2$ state is still given by
Eq.~(\ref{short}). However, its entropy is dramatically increased
with respect to $q=0$ state due to the unbinding of the cations:
the available volume is now $\pi a^2 x_T/\gamma$ and the resulting
entropy per cation is $S_{1/2}=k_B \ln (\alpha_1/\gamma)$. The
corresponding free energy of the saddle point state is:
\begin{equation}\label{saddleeg}
F_{1/2} = U_{L}(0) [1 - 4\gamma \ln (\alpha_1/\gamma)]\, .
\end{equation}
Recalling that the transport barrier is given by the difference
between the saddle point and the ground state free energies, one
obtains:
\begin{equation}\label{barriereg}
U_L(\gamma)=U_{L}(0) [ 1 - 4\gamma \ln (1/2\gamma)]\, .
\end{equation}
This expression is plotted in Fig.~\ref{figfgamma} by the dotted
line. It provides a perfect fit for the transport barrier at small
dopant concentration. Equation~(\ref{barriereg}) is applicable for
$\alpha_1 < \gamma \ll 1$. In the opposite limit $\gamma <\alpha_1
\ll 1$ more free ions may enter the channel in the saddle point
state. As a result, the calculation of $S_{1/2}$ should be
slightly modified \cite{Zhang}, leading to:
\begin{equation}\label{barrieracueg}
U_L(\gamma) = U_{L}(0) \left[1-4\gamma\ln\left({1\over
2\alpha_1}\sinh{\alpha_1\over \gamma}\right)\right] \, .
\end{equation}
This result, exhibiting non--singular behavior in the small
concentration limit, is valid for an arbitrary relation between
$\alpha_1$ and $\gamma$ (both being small enough).
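As a quick numerical sanity check (a small sketch of ours, not part
of the original derivation), one may verify directly that
Eq.~(\ref{barrieracueg}) reduces to Eq.~(\ref{barriereg}) for
$\alpha_1\ll\gamma$:
\begin{verbatim}
import numpy as np

def U_general(gamma, alpha1):
    # Eq. (barrieracueg): U_L/U_L(0) for arbitrary alpha1/gamma
    return 1.0 - 4.0 * gamma * np.log(np.sinh(alpha1 / gamma)
                                      / (2.0 * alpha1))

def U_dilute(gamma):
    # Eq. (barriereg): the alpha1 << gamma limit
    return 1.0 - 4.0 * gamma * np.log(1.0 / (2.0 * gamma))

alpha1 = 1e-5
for gamma in (1e-3, 1e-2, 1e-1):
    print(gamma, U_general(gamma, alpha1), U_dilute(gamma))
# agreement follows from sinh(x) ~ x for x = alpha1/gamma << 1
\end{verbatim}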
\section{Channel with alternating positive and negative dopants}
\label{sec3}
As a first example exhibiting the ion--exchange (actually
ion--release) phase--transition we consider a model of a
``compensated'' channel (the word compensation is used here by
analogy with semiconductors, where acceptors can be compensated
by donors). This is the channel with positive and negative
unit--charge dopants alternating in a one-dimensional NaCl-type
lattice with the lattice constant $2 x_T/\gamma$
(Fig.~\ref{figdopealternative}).
\begin{figure}[ht]
\begin{center}
\includegraphics[height=0.08\textheight]{dopealternative.eps}
\end{center}
\caption{A channel with the alternating dopants (thick circles).
The mobile counterions are shown as thin circles. }
\label{figdopealternative}
\end{figure}
The channel is filled with the solution of the monovalent salt
with the bulk dimensionless concentration $\alpha_1=\pi a^2 x_T
c$. The transport barrier calculated for $\alpha_1=0.01$ as a
function of the dopant concentration $\gamma$ is depicted in
Fig.~\ref{figalternative}.
\begin{figure}[ht]
\begin{center}
\includegraphics[height=0.16\textheight]{alternative.eps}
\end{center}
\caption{ The transport barrier in units of $U_L(0)$ for the
``compensated'' channel of Fig.~\ref{figdopealternative} for
$\alpha_1=0.01$. The dotted line shows Eq.~(\ref{correctionb}).}
\label{figalternative}
\end{figure}
One observes a characteristic sharp dip in the vicinity of a certain
dopant concentration $\gamma_c\approx 0.06$. To clarify a reason
for such an unexpected behavior (cf. Fig.~\ref{figfgamma}) we plot
the free energy as a function of the order parameter $q$ for a few
values of $\gamma$ close to $\gamma_c$, see
Fig.~\ref{qalternative}. Notice that for small $\gamma$ the
minimum of the free energy is at $q=0$, corresponding to the
absence of the electric field inside the channel. The maximum is
at $q=1/2$, i.e. the electric field $\pm E_0$. However, once the
dopant concentration $\gamma$ increases the second minimum
develops at $q=1/2$, which eventually overcomes the $q=0$ minimum
at $\gamma=\gamma_c$. In the limit of large $\gamma$ the ground
state corresponds to $q=1/2$ (electric field $\pm E_0$), while the
saddle point state is at $q=0$ (no electric field). It is clear
from Fig.~\ref{qalternative} that the transition between the two
limits is of the first order.
\begin{figure}[ht]
\begin{center}
\includegraphics[height=0.15\textheight]{qalternative.eps}
\end{center}
\caption{Free energy in units of $U_L(0)$ as a function of $q$ for
$\gamma=0.02,~0.05,~0.07$ and $0.09$ (from top to bottom) at
$\alpha_1=0.01$. The lower three graphs are vertically offset for
clarity.} \label{qalternative}
\end{figure}
To understand the nature of this transition, consider two
candidates for the ground state. The $q=0$ state, referred below
as $0$, is depicted on Fig.~\ref{figdopecompensate} a. In this
state every dopant tightly binds a counterion from the solution.
Such a state does not involve an energy cost and has the negative
entropy $S_0=k_B\ln(2\alpha_1)$ per dopant. As a result, the
corresponding free energy is (cf. Eq.~(\ref{groundeg})):
\begin{equation}\label{ground1}
F_0 = U_{L}(0) \cdot 4\gamma \ln [1/(2\alpha_1)]\, .
\end{equation}
An alternative ground state is that of the channel free from any
dissociated ions, Fig.~\ref{figdopecompensate} b. There is an
electric field $\pm E_0$ alternating between the dopants. This is
$q=1/2$ state, or simply $1/2$ state. Since no mobile ions enter
the channel, there is no entropy lost in comparison with the bulk.
There is, however, an energy cost for having the electric field $\pm E_0$
across the entire channel. As a result, the free energy of the
$1/2$ state is (cf. Eq.~(\ref{short})):
\begin{equation}\label{ground2}
F_{1/2} = U_L(0)\, .
\end{equation}
Comparing Eqs.~(\ref{ground1}) and (\ref{ground2}), one expects
that the critical dopant concentration is given by
\begin{equation}\label{gammac}
\gamma_c = -\left[4 \ln(2 \alpha_1)\right]^{-1}\, .
\end{equation}
For $\gamma< \gamma_c$ the state $0$ is expected to be the ground
state, while for $\gamma>\gamma_c$ the state $1/2$ is preferable.
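For completeness, Eq.~(\ref{gammac}) is simply the balance condition
between Eqs.~(\ref{ground1}) and (\ref{ground2}):
\[
F_0=F_{1/2}\quad\Longrightarrow\quad
4\gamma_c\ln\frac{1}{2\alpha_1}=1\, .
\]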
\begin{figure}[ht]
\begin{center}
\includegraphics[height=0.045\textheight]{dopecompensatea.eps} \hfill
\includegraphics[height=0.045\textheight]{dopecompensateb.eps} \hfill
\includegraphics[height=0.045\textheight]{dopecompensatec.eps}
\end{center}
\caption{States $0$, $1/2$ and $00$. The corresponding free
energies are given by Eqs.~(\ref{ground1}), (\ref{ground2}) and
(\ref{line3}).}\label{figdopecompensate}
\end{figure}
In Fig.~\ref{figphase} we plot the phase diagram on the $(\gamma,
~\alpha_1)$ plane. The phase boundary between $0$ and $1/2$ states
is determined from the condition of having two degenerate minima
of the free energy $F_q$ at $q=0$ and $q=1/2$. In the small
concentration limit (see the inset in Fig.~\ref{figphase}) the
phase boundary is indeed perfectly fitted by Eq.~(\ref{gammac}).
For larger concentrations it crosses over to $\gamma_c\propto
\sqrt{\alpha_1}$. This can be understood as a result of a
competition between dopant separation $x_T/\gamma$ and the Debye
screening length $r_D\propto x_T/ \sqrt{\alpha_1}$. For $\gamma <
\sqrt{\alpha_1}$ we have $r_D < x_T/\gamma$, and thus each dopant
is screened locally by a cloud of mobile ions. As a result, the
neutral state $0$ is likely to be the ground state. In the
opposite case $\gamma > \sqrt{\alpha_1}$ the Debye length is
larger than the separation between oppositely charged dopants.
This is the limit of weak screening, when most dopants may not
have counterions to screen them. Thus the $1/2$ state has a chance
to have a lower free energy.
\begin{figure}[ht]
\begin{center}
\includegraphics[height=0.21\textheight]{transition.eps} \hfill
\end{center}
\vspace{-5.2cm} \hspace{-1.5cm}
\includegraphics[height=0.1\textheight]{phaseinset.eps}
\vspace{2.5cm} \caption{The phase diagram of the channel with
alternating doping (dotted lines). The phase boundary between the
$q=0$ and $q=1/2$ phases can be fitted as $\alpha_1\approx
5\gamma^2$ for $\alpha_1>1$. Inset: the small concentration part
of the phase boundary fitted by Eq.~(\ref{gammac}) (the full
line).} \label{figphase}
\end{figure}
Crossing the phase boundary in either direction is associated with
an abrupt change in the one-dimensional density of the salt ions
within the channel. The latter may be evaluated as the derivative
of the ground state free energy with respect to the chemical
potential of the salt: $n_{\mbox{ion}}= - {\alpha_1 \over L k_BT
}\, {\partial F_{\min}/ \partial \alpha_1}$. In
Fig.~\ref{figlatentjump} we plot concentration of ions within the
channel, $n_{\mbox{ion}}$, in units of dopant concentration
$\gamma/x_T$ as a function of the bulk salt concentration $\alpha_1$.
One clearly observes the latent concentration associated with the
first-order transition. As the bulk concentration increases past
the critical one, the mobile ions abruptly enter the channel. One
can monitor the latent concentration $\Delta n$ along the phase
transition line of Fig.~\ref{figphase}. In
Fig.~\ref{figlatentnumb} we plot the latent concentration along
the phase boundary as function of the critical $\alpha_1$. As
expected, in the dilute limit the latent concentration coincides
with the concentration of dopants (i.e. every dopant brings one
mobile ion). On the other hand, in the dense limit, the latent
concentration is exponentially small (but always finite).
\begin{figure}[ht]
\begin{center}
\includegraphics[height=0.17\textheight]{latentjump.eps}
\end{center}
\caption{The concentration of cations inside the channel in units
of $\gamma/x_T$ for $\gamma=0.1$. The discontinuous change occurs
at the phase transition point.} \label{figlatentjump}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[height=0.17\textheight]{latentnumb.eps}
\end{center}
\caption{The latent concentration of cations in units of
$\gamma/x_T$ along the phase boundary line.} \label{figlatentnumb}
\end{figure}
Let us return to the calculation of the transport barrier,
Fig.~\ref{figalternative}. To this end one needs to understand the
nature of the saddle point states (in addition to that of the
ground states, discussed above). Deep in the phase where $0$ is
the ground state, the role of the saddle point state is played by
the $1/2$ state. Correspondingly the transport barrier is
approximately given by the difference between Eq.~(\ref{ground2})
and Eq.~(\ref{ground1}). One may improve this estimate by taking
into account that in the $1/2$ state the free ions may enter the
channel (in even numbers to preserve the total charge neutrality).
This leads to the entropy of the $1/2$ state given by
$S_{1/2}=k_B\ln [ \sum_{k=0}^\infty (\alpha_1/\gamma)^{2k} /
(2k)!]$ per dopant. As a result, one finds for the transport
barrier in the $0$ state:
\begin{equation}
U_L(\alpha_1, \gamma) = U_L(0)
\left[1-4\gamma\ln\left({1\over 2\alpha_1}\cosh{\alpha_1\over
\gamma}\right)\right]\, . \label{correctionb}
\end{equation}
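In passing from the entropy sum quoted above to
Eq.~(\ref{correctionb}) we used the identity
\[
\sum_{k=0}^{\infty}\frac{(\alpha_1/\gamma)^{2k}}{(2k)!}
=\cosh\frac{\alpha_1}{\gamma}\, ,
\]
the restriction to even powers reflecting the fact that ions can
only enter in numbers preserving the total charge neutrality.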
This estimate is plotted in Fig.~\ref{figalternative} by the
dotted line.
\begin{figure}[ht]
\begin{center}
\includegraphics[height=0.19\textheight]{band.eps}
\end{center}
\caption{Free energy diagram for $\alpha_1=0.01$ as function of
dopant concentration $\gamma$. The solid lines are numerical
results for the ground and saddle point states.
Eqs.~(\ref{ground1}), (\ref{ground2}) and (\ref{line3}),
describing states $0$, $1/2$ and $00$, correspondingly, are shown
by dashed, dotted and dash-dotted lines. } \label{figband}
\end{figure}
In the $1/2$ state, Fig.~\ref{figdopecompensate} b, the channel is
almost empty in the ground state configuration. The saddle point
may be achieved by putting a single cation in the middle of the
channel. This will rearrange the pattern of the internal electric
field as depicted in Fig.~\ref{figdopecompensate} c. This state
corresponds to $q=0$ and is denoted as $00$ (to distinguish it
from the state $0$, Fig.~\ref{figdopecompensate} a). Its free
energy (coinciding with the energy) is given by:
\begin{equation}\label{line3}
F_{00} = 2 U_L(0)\, .
\end{equation}
In Fig.~\ref{figband} we plot the free energies of the three
states $0$, $1/2$ and $00$ (cf. Fig.~\ref{figdopecompensate} and
Eqs.~(\ref{ground1}), (\ref{ground2}) and (\ref{line3})) as
functions of the dopant concentration $\gamma$. On the same graph
we also plot the calculated ground state free energy $F_{\min}$
along with the saddle point free energy $F_{\max}$. It is clear
that the ground state undergoes the first order transition
between $0$ and $1/2$ states at $\gamma=\gamma_c$. On the other
hand, the saddle point state experiences two smooth crossovers:
first between $1/2$ and $0$ and second between $0$ and $00$
states. The difference between the saddle point and the ground
state free energies is the transport barrier, which exhibits
exactly the type of behavior observed in
Fig.~\ref{figalternative}. Curiously, at large concentration of
dopants the barrier approaches exactly the same value as for the
undoped channel, $U_L(\alpha_1)$, [\onlinecite{Kamenev}]. This
could be expected, since extremely closely packed alternating
dopants compensate each other, effectively restoring the undoped
situation.
\section{Negatively doped channel with divalent cations}
\label{sec4}
In the above sections all the mobile ions as well as dopants were
monovalent. In this section we study the effect of cations being
{\em divalent} (e.g. Ca$^{2+}$, or Ba$^{2+}$), while all negative
charges (both anions and dopants) are monovalent
(Fig.~\ref{figdopedivalent}). For example, one can imagine a
channel with negative wall charges in CaCl$_2$ solution. We denote
the dimensionless concentration of the divalent cations as
$\alpha_2$. The concentration of monovalent anions is simply
$\alpha_{-1}=2\alpha_2$.
\begin{figure}[ht]
\begin{center}
\includegraphics[height=0.08\textheight]{dopedivalent.eps}
\end{center}
\caption{A channel with periodic dopants (thick circles) and
divalent cations. Mobile ions (Ca$^{2+}$ and Cl$^-$) are in thin
circles.} \label{figdopedivalent}
\end{figure}
The transport barrier as function of the dopant concentration
$\gamma$ for $\alpha_2=5\cdot 10^{-7}$ is shown in
Fig.~\ref{figdoublefit}.
\begin{figure}[ht]
\begin{center}
\includegraphics[height=0.22\textheight]{doublefit.eps}
\end{center}
\caption{The transport barrier $U_L(\alpha_2,\gamma)$ in units of
$U_L(0)$ for $\alpha_2=5\cdot 10^{-7}$ (solid line). The dotted
line is $|F_{1/2}-F_0|$, and dashed line is $|F_{00}-F_{1/2}|$
calculated using Eqs.~(\ref{freeenergy0}), (\ref{freeenergy12})
and (\ref{freeenergy1}).} \label{figdoublefit}
\end{figure}
Similarly to the case of the alternating doping (cf.
Fig.~\ref{figalternative}) the barrier experiences a sudden dip at
some critical dopant concentration $\gamma_c\approx 10^{-2}$. To
understand this behavior we looked at the free energy $F_q$ as
function of the order parameter $q$ for several values of $\gamma$
in the vicinity of $\gamma_c$. The result is qualitatively similar
to that depicted in Fig.~\ref{qalternative}. Thus, it is again the
first order transition between two competing states that is
responsible for the behavior observed in Fig.~\ref{figdoublefit}.
\begin{figure}[ht]
\begin{center}
\includegraphics[height=0.046\textheight]{dopedoublea.eps} \hfill
\includegraphics[height=0.046\textheight]{dopedoubleb.eps} \hfill
\includegraphics[height=0.046\textheight]{dopedoublec.eps}
\end{center}
\caption{The states $0$, $1/2$, and $00$ for the channel with
divalent cations. The corresponding free energies are given by
Eqs.~(\ref{freeenergy0}), (\ref{freeenergy12}) and
(\ref{freeenergy1}). Dopants (thick circles) and anions (thin
circles) are monovalent, while cations (shown by $2+$ in thin
circles) are divalent. Electric fields are shown schematically,
one line per $E_0$. }\label{figdopedouble}
\end{figure}
The two candidates for the ground state, denoted, in accordance
with the fractional part of the internal electric field $q$, as
$0$ and $1/2$, are depicted in Fig.~\ref{figdopedouble} a,b. In
the state $0$ every dopant is screened locally by an anion and a
doubly charged cation (Fig.~\ref{figdopedouble} a). The
corresponding free energy (consisting solely of the entropic
part) is given by~\cite{ft6}:
\begin{equation}\label{freeenergy0}
F_{0} = U_L(0)\cdot 4\gamma\ln\left({1 \over 6 \alpha_2^2}\right).
\end{equation}
The other state, $1/2$, has every second dopant overscreened by a
divalent cation (Fig.~\ref{figdopedouble} b). The free energy of
this state is:
\begin{equation}\label{freeenergy12}
F_{1/2} = U_L(0) \left[1+{4\gamma\over2}\ln(1 / \alpha_2)\right]\,
.
\end{equation}
Comparing Eqs.~(\ref{freeenergy0}) and (\ref{freeenergy12}), one
finds for the critical dopant concentration:
\begin{equation}\label{gammacdiv}
\gamma_c=-\left[ 6\ln (\alpha_2) +4\ln6 \right]^{-1}\, .
\end{equation}
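This follows from equating the free energies of the two competing
states, Eqs.~(\ref{freeenergy0}) and (\ref{freeenergy12}):
\[
4\gamma_c\ln\frac{1}{6\alpha_2^2}=1+2\gamma_c\ln\frac{1}{\alpha_2}
\quad\Longrightarrow\quad
\gamma_c\left[-6\ln\alpha_2-4\ln 6\right]=1\, .
\]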
For $\alpha_2=5\cdot 10^{-7}$ it leads to $\gamma_c\approx 0.012$
in good agreement with Fig.~\ref{figdoublefit}.
To explain the transport barrier observed for small $\gamma$
(Fig.~\ref{figdoublefit}) one needs to know the saddle point
state. Such a state is depicted in Fig.~\ref{figdopedouble} c and
is denoted as $00$. It has one divalent cation trapped between
every other pair of dopants. It is easy to see that for such an
arrangement the cations are free to move within the ``cage''
defined by the two neighboring dopants. This results in a rather
large entropy of the $00$ state. Its free energy is given by
\begin{equation}\label{freeenergy1}
F_{00} = U_L(0) \left[2+{4\gamma\over2}\ln({\gamma/
\alpha_2})\right]\, .
\end{equation}
In Fig.~\ref{figdoubleband} we plot the free energies of the
states $0$, $1/2$ and $00$, given by Eqs.~(\ref{freeenergy0}),
(\ref{freeenergy12}) and (\ref{freeenergy1}) correspondingly, as
functions of $\gamma$. On the same graph we plot calculated ground
state free energy $F_{\min}$ along with the saddle point free
energy $F_{\max}$ (full lines). One observes that the ground state
indeed undergoes the first order transition between the states $0$
and $1/2$ upon increasing $\gamma$ (lower full line). On the other
hand, the saddle point evolves smoothly from $1/2$ to $0$ and
eventually to $00$ (upper full line). The difference between the
two gives the transport barrier depicted in
Fig.~\ref{figdoublefit}, where $|F_{1/2}-F_0|$ and
$|F_{00}-F_{1/2}|$ are also shown for comparison.
\begin{figure}[ht]
\begin{center}
\includegraphics[height=0.19\textheight]{doubleband.eps}
\end{center}
\caption{Free energy diagram for $\alpha_2=5\cdot 10^{-7}$ as a
function of dopant concentration $\gamma$. The full lines are
numerical results for the ground and saddle point states
respectively. Eqs.~(\ref{freeenergy12}), (\ref{freeenergy1}) and
(\ref{freeenergy0}) describing states $1/2$, $00$ and $0$, are
shown by dashed, dash-dotted and thin solid lines, correspondingly.
State $0$ is the ground state for $\gamma<\gamma_c$, state $1/2$
is the ground state for $\gamma>\gamma_c$, and the state $00$ is
the saddle point for $\gamma>\gamma_c'$. The ground state
undergoes the first order phase transition at $\gamma_c$, while
the saddle point state evolves in a continuous way. }
\label{figdoubleband}
\end{figure}
It is worth noticing that on both sides of the transition the
transport barrier is close to $U_L(0)$, characteristic for the
transfer of the {\em unit} charge $e$. One could expect that for
charges $2e$ the self-energy barrier should instead be $4U_L(0)$.
This apparent reduction of the charge seems natural for the very
small $\gamma$. Indeed, the corresponding ground state
(Fig.~\ref{figdopedouble} a) contains complex ions (CaCl)$^{1+}$
with the charge $e$. On the other hand, for $\gamma > \gamma_c$
the channel is free from Cl$^-$ ions and the current is provided
by Ca$^{2+}$ ions only. Thus, the observed barrier of $\leq
U_L(0)$ may be explained by fractionalization of charges $2e$ into
two charges $e$. To make such a fractionalization more transparent
one can redraw the saddle point state $00$ in a somewhat different
way, namely creating a soliton (domain wall, or defect) in the
state $1/2$.
Fig.~\ref{figdoublemore} shows the channel with a soliton in the
middle.
\begin{figure}[ht]
\begin{center}
\includegraphics[height=0.066\textheight]{dopedoublemore.eps}
\end{center}
\caption{The saddle point state $00$ represented as the unit
charge defect (soliton) within the ground state $1/2$. Ca$^{2+}$
ion added to the channel in the state $1/2$ fractionalizes in two
such solitons, each having an excess energy $\leq U_L(0)$. }
\label{figdoublemore}
\end{figure}
One can see that in this version of the $00$ state the fields
$2E_0$ and $0$ alternate similarly to Fig.~\ref{figdopedouble} c.
We can also see that the soliton in the middle has the charge
$e$. Fractionalization of a single Ca$^{2+}$ ion into two
charge--$e$ defects means that a Ca ion traverses the channel by
means of two solitons moving consecutively across the channel.
The self-energy of each soliton (and therefore the transport
barrier) does not exceed $U_L(0)$.
\begin{figure}[ht]
\begin{center}
\includegraphics[height=0.2\textheight]{doubleabsolute.eps}
\end{center}
\caption{ $\ln[U_L(\gamma)/U_L(0)]$ for $\alpha_2=5\cdot10^{-5}$. Each
dip signals the presence of a phase transition.}
\label{figdoubleabs}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[height=0.2\textheight]{phase.eps} \hfill
\end{center}
\vspace{-5.0cm} \hspace{2.8cm}
\includegraphics[height=0.1\textheight]{1stsmall.eps}
\vspace{2.5cm} \caption{The phase diagram of the channel with the
divalent cations on the $(\gamma,~\alpha_2)$ plane. The dotted
lines are the boundaries between phases. The ground states in each
region are labelled. Inset: magnification of the first phase
boundary line at small $\gamma$. The solid line is
Eq.~(\ref{gammacdiv}).} \label{figphases}
\end{figure}
So far the physics of the first order transition in the divalent
system was rather similar to that in the channel with the
alternating doping and a monovalent salt. There are, however,
important distinctions, taking place at larger dopant
concentration. The logarithm of the transport barrier in the wide
range of $\gamma$ is plotted in Fig.~\ref{figdoubleabs}. One
notices a series of additional dips at $\gamma_{c1}\approx 1.7$,
$\gamma_{c2}\approx 5.3$, $\gamma_{c3}\approx 10.8$, etc. (the
first order transition at $\gamma_{c}\approx 0.01$, discussed
above, is not visible at this scale). These dips are indications
of the sequence of reentrant phase transitions, taking place at
larger $\gamma$. The calculated phase diagram is plotted in
Fig.~\ref{figphases}. The leftmost phase boundary line
corresponds to the first order phase transition between $0$ and
$1/2$ states. Its low concentration part along with the fit with
Eq.~(\ref{gammacdiv}) is magnified in the inset. The other lines
are the transitions spotted in Fig.~\ref{figdoubleabs}. We discuss
them in Appendix \ref{app1}.
\section{Negatively doped channel in a solution with monovalent and
divalent cations}
\label{sec5}
We turn now to the study of the channel in a solution with the
mixture of monovalent cations with the dimensionless concentration
$\alpha_1$ and divalent cations with the concentration $\alpha_2$.
Neutrality of the solution is maintained by monovalent anions with
the concentration $\alpha_{-1}=\alpha_1+2\alpha_2$. The channel is
assumed to be doped with the unit charge negative dopants,
attached periodically with the concentration $\gamma$.
In Fig.~\ref{figselecbarrier} we plot the barrier as a function of
the divalent cation concentration $\alpha_2$ for $\gamma=0.1$ and
$\alpha_1=0.001$.
\begin{figure}[ht]
\begin{center}
\includegraphics[height=0.19\textheight]{selecbarrier.eps}
\end{center}
\caption{The transport barrier as a function of $\ln \alpha_2$ for
$\gamma=0.1$ and $\alpha_1=0.001$.} \label{figselecbarrier}
\end{figure}
The overall decrease of the barrier with the growing ion
concentration is interrupted by two sharp dips at
$\alpha_{2c}$ and $\alpha_{2c}'$. By plotting the $F_q$ function
for several $\alpha_2$ in the vicinity of $\alpha_{2c}$ and
$\alpha_{2c}'$, one observes that they correspond to the two
consecutive first order transitions. As $\alpha_2$ increases the
system goes from $q=0$ phase into $q=1/2$ phase and eventually
back into $q=0$ phase.
\begin{figure}[ht]
\begin{center}
\includegraphics[height=0.19\textheight]{sellatent.eps}
\end{center}
\caption{Concentration of Ca$^{2+}$ ions in the channel in units
of $\gamma/x_T$ as a function of $\ln \alpha_2$ for $\gamma=0.1$
and $\alpha_1=0.001$. There are two discontinuous changes in the Ca
concentration at $\alpha_{2c}$ and $\alpha_{2c}'$. Each of these
two latent changes is close to half the number of dopants.
} \label{figsellatent}
\end{figure}
The concentration of divalent cations within the channel is given
by
\begin{equation}\label{number2}
n_{{Ca}}= -{\alpha_2 \over k_B T L} \, \left. { \partial F_{\min}
\over \partial \alpha_2} \right|_{\alpha_1, \alpha_{-1}}.
\end{equation}
It is plotted as a function of the bulk concentration in
Fig.~\ref{figsellatent}. One observes that at the first transition
the number of divalent cations entering the channel is close to
half the number of dopants. When the second transition is
completed the number of divalent cations is approximately the same
as the number of dopants. This provides a clue to the nature of
the corresponding ground states. For $\alpha_2<\alpha_{2c}$ there
are almost no divalent cations in the channel. Therefore, both the
ground state and the saddle point state are the same as in
sec.~\ref{sec2}, shown in Fig.~\ref{figacceptor}. The ground state
free energy and the transport barrier are given by
Eqs.~(\ref{groundeg}) and (\ref{barriereg}) correspondingly. At
$\alpha_2 =\alpha_{2c}$ the first order ion--exchange phase
transition takes place, where every two monovalent cations are
substituted by a single divalent one. The system's
behavior at larger $\alpha_2$ is qualitatively similar to that
described in the previous section. For
$\alpha_{2c}<\alpha_2<\alpha_{2c}'$ the ground state is the $ 1/2$
state, pictured in Fig.~\ref{figdopedouble} b. The corresponding
free energy is given by Eq.~(\ref{freeenergy12}). The second phase
transition at $\alpha_{2c}'$ is similar to that taking place on
the leftmost phase boundary of Fig.~\ref{figphases} (which is
crossed now in the vertical direction). For
$\alpha_2>\alpha_{2c}'$ the ground state is the state $0$
(Fig.~\ref{figdopedouble} a) with the free energy given by
Eq.~(\ref{freeenergy0}).
\begin{figure}[ht]
\begin{center}
\includegraphics[height=0.19\textheight]{selecenergy.eps}
\end{center}
\caption{Free energies of the three competing ground states for
$\gamma=0.1$ and $\alpha_1=0.001$. The dash-dotted line is
Eq.~(\ref{groundeg}), the solid line is Eq.~(\ref{freeenergy12}),
and the dashed line is Eq.~(\ref{freeenergy0}). The actual ground
state is chosen as the lowest of them.
} \label{figselecenergy}
\end{figure}
The free energies of the three competing ground states as
functions of $\alpha_2$ are plotted in Fig.~\ref{figselecenergy}
for the same parameters as in Fig.~\ref{figselecbarrier}. They
indeed intersect at the concentrations close to $\alpha_{2c}$ and
$\alpha_{2c}'$. Since at each such intersection the symmetry of
the ground state (the $q$-value) changes, one expects that the
ground state changes via a first-order phase transition. The
critical value $\alpha_{2c}$ of the ion--exchange transition may
be estimated from Eqs.~(\ref{freeenergy12}) and (\ref{groundeg})
as:
\begin{equation}\label{crit}
\alpha_{2c} =(2\alpha_1)^2 e^{1/(2\gamma)} \, .
\end{equation}
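This is again a balance condition, now between
Eqs.~(\ref{groundeg}) and (\ref{freeenergy12}):
\[
4\gamma\ln\frac{1}{2\alpha_1}=1+2\gamma\ln\frac{1}{\alpha_{2c}}
\quad\Longrightarrow\quad
2\gamma\ln\frac{\alpha_{2c}}{(2\alpha_1)^2}=1\, .
\]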
Notice that it scales as $\alpha_1^2$ and therefore at small
concentrations the transition takes place at $\alpha_{2c}\ll
\alpha_1$. This is a manifestation of the law of mass action. The
second critical value may be estimated from Eq.~(\ref{gammacdiv})
as $\alpha_{2c}'=e^{-1/(6\gamma)}$ and is approximately
independent of $\alpha_1$. Comparing the two, one finds that the
transitions may take place only for small enough concentration of
the monovalent ions $\ln\alpha_1\lesssim -1/(3\gamma)-\ln 2$. For
larger $\alpha_1$ there is a smooth crossover between the two
$q=0$ states.
\begin{figure}[ht]
\begin{center}
\includegraphics[height=0.27\textheight]{3d21.eps}
\end{center}
\caption{Phase diagram in the space of cation and dopant
concentrations ($\ln \alpha_2-\gamma-\ln\alpha_1$). Inside the
tent-like shape the ground state is $q=1/2$, outside -- the
ground state is $q=0$. } \label{figselecphase}
\end{figure}
The phase diagram in the space of cation and dopant concentrations
($\ln \alpha_2-\gamma-\ln\alpha_1$) is plotted in
Fig.~\ref{figselecphase}. By fixing some $\gamma$ and not too
large $\alpha_1$ and varying $\alpha_2$ one crosses the phase
boundary twice. This way one observes two first order phase
transitions: from $0$ to $1/2$ and then from $1/2$ back to $0$. The
corresponding transport barrier is shown in
Fig.~\ref{figselecbarrier}.
\begin{figure}[ht]
\begin{center}
\includegraphics[height=0.17\textheight]{seleccurrent.eps}
\end{center}
\caption{Schematic plot of the current through a Ca channel as a
function of Ca$^{2+}$ concentration, cf. Ref.~\onlinecite{Almers}.
The Na$^+$ concentration is $\alpha_1 \approx 10^{-2}$, the wall
charge concentration is $\gamma \approx 0.3$, and the (smeared)
transition is at $\ln \alpha_{2c}\approx -10$.}
\label{figseleccurrent}
\end{figure}
The model presented in this section is a simple cartoon for
Ca$^{2+}$ selective channels. Let us consider the total current
through the channel, $I$, equal to the sum of sodium and calcium
currents. Each of these partial currents in turn is determined by
two series resistances: the channel resistance and the combined
contact (mouth) resistances (the latter are inversely proportional
to the concentration of a given cation in the bulk). We assume
that at the biological concentration of Na$^{+}$ the current $I =
I_0$ and discuss predictions of our model of a negatively doped
channel at $\gamma\approx 0.3$ regarding $I/I_0$ with growing
concentration $\alpha_2$ of Ca$^{2+}$ cations
(Fig.~\ref{figseleccurrent}). For $\alpha_2 < \alpha_{2c}$ the
channel is populated by Na$^+$ and the Na$^+$ current dominates. For
$\alpha_2 > \alpha_{2c}$ the Na$^+$ current is blocked because
Na$^+$ ions are expelled from the channel and substituted by the
Ca$^{2+}$. Indeed, a monovalent Na$^+$ cation cannot unbind the
divalent Ca$^{2+}$ from the dopants. Therefore, the transport
barrier for Na$^{+}$ ions is basically the bare barrier $U_L(0)
\gg k_BT$. In these conditions the Ca$^{2+}$ resistance of the
channel is small, because Ca$^{2+}$ concentration in the channel
is large. However, at the transition $\alpha_2 = \alpha_{2c}\ll
\alpha_1$ the concentration of Ca$^{2+}$ ions in the bulk water is
so small that the contact Ca$^{2+}$ resistance is very large.
Therefore, the Ca$^{2+}$ current is practically blocked and the
total current, $I$, drops sharply at $\alpha_{2c}$. As $\alpha_2$
grows, the Ca$^{2+}$ current increases proportionally to $\alpha_2$
due to decreasing contact resistance. As a result, one arrives at
the behavior of $I/I_0$ schematically shown in
Fig.~\ref{figseleccurrent}. This behavior is in qualitative
agreement with the experimental data of
Refs.~[\onlinecite{Almers,Hille}].
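A minimal way to formalize this series-resistance picture (a
schematic parametrization of ours, not a quantitative model of a
real channel) is
\[
I=\sum_{i=\mathrm{Na},\mathrm{Ca}}\frac{V}{R^{\mathrm{ch}}_i+R^{\mathrm{c}}_i}\, ,
\qquad
R^{\mathrm{ch}}_i\propto e^{U_i/k_BT}\, ,
\qquad
R^{\mathrm{c}}_i\propto 1/\alpha_i\, ,
\]
where $U_i$ is the transport barrier for species $i$ and $\alpha_i$
its bulk concentration. The sharp drop of $I$ at $\alpha_{2c}$ then
reflects the jump of $U_{\mathrm{Na}}$ to the bare value $U_L(0)$,
combined with the still very large contact resistance
$R^{\mathrm{c}}_{\mathrm{Ca}}$ at $\alpha_2=\alpha_{2c}\ll\alpha_1$.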
\section{Analytical approach}
\label{secanalytical}
Consider a gas consisting of $N$ mobile monovalent cations and
$N'$ mobile monovalent anions along with non--integer boundary
charges $q$ and $q'$ placed at $x=0$ and $x=L$ correspondingly. We
also consider a single negative unit dopant charge attached at
the point $x_0$ inside the channel: $0<x_0<L$. The resulting
charge density takes the form:
\begin{equation}\label{chargedensity}
\rho(x)\equiv \!\!\sum\limits_{j=1}^{N+N'}\!\! \sigma_j
\delta(x-x_j)+q\delta(x)+q'\delta(x-L) - \delta(x-x_0)\, ,
\end{equation}
where $x_j$ stand for the coordinates of the mobile charges and
$\sigma_j=\pm 1$ for their charges. The interaction energy of such
a plasma is given by:
\begin{equation}\label{totalenergy_q}
U = {e\over 2}\int\!\!\!\int\limits_{0}^{L}\!\! dxdx'
\rho(x)\Phi(x-x')\rho(x')\, ,
\end{equation}
where the 1d Coulomb potential $\Phi(x)= \Phi(0) - E_0|x|$ is the
solution of the Poisson equation: $\nabla^2\Phi = -2E_0\delta(x)$.
(The self-energy $e\Phi(0)$ will be eventually taken to infinity
to enforce charge neutrality).
We are interested in the grand-canonical partition function of the
gas defined as
\begin{eqnarray}\label{partition5}
&& Z_L(q,q') = \sum\limits_{N,N'=0}^\infty e^{\mu(N+N')/(k_BT)}{1\over
N!N'!} \\
&&\times \prod\limits_{j=1}^{N+N'}\left({\pi a^{\,2}\over
l_0^2}\int\limits_{0}^{L} {dx_j\over l_0} \right) e^{- U/(k_B T)}
\, ,\nonumber
\end{eqnarray}
where $\mu$ is the chemical potential (the same for cations and
anions) and $l_0$ is a microscopic scale related to the bulk salt
concentration as $c=e^{\mu/k_BT}/l_0^{3}$. The factor $\pi a^2/l_0^2$
originates from the integrations over transverse coordinates.
To proceed with the evaluation of $Z_L(q,q')$ we introduce the
resolution of unity written in the following way:
\begin{widetext}
\begin{eqnarray}\label{res_unity}
1&=&\int\!\!{\cal D}\! \rho(x)\,\, \delta\!\!\left(\rho(x) -
\sum\limits_{j=1}^{N+N'} \sigma_j
\delta(x-x_j)-q\delta(x)-q'\delta(x-L)+\delta(x-x_0) \right)
\nonumber \\
&=&\int\!\!\!\int{\cal D}\! \rho(x){\cal D} \theta(x)\,\,
\exp\left\{\,-i\left(\int\limits_{0}^{L} \!dx\, \theta(x)\rho(x) -
\sum\limits_{j=1}^{N+N'} \sigma_j
\theta(x_j)-q\theta(0)-q'\theta(L) + \theta(x_0) \right)\right\}~.\nonumber
\end{eqnarray}
Substituting this identity into Eq.~(\ref{partition5}), one
notices that the integrals over $x_j$ decouple and can be
performed independently. The result of such integration along with
the summation over $N$ and $N'$ is $\exp\{2\pi a^2 c\int_0^L dx
\cos\theta(x)\}$. Evaluation of the Gaussian integral over
$\rho(x)$ yields the exponent of
$\theta(x)\Phi^{-1}(x-x')\theta(x')$. According to the Poisson
equation the inverse potential is $\Phi^{-1}(x-x')=-(2E_0)^{-1}
\delta(x-x') \partial^2_x$. As a result, one obtains for the
partition function:
\begin{eqnarray}\label{partition1}
Z(q,q')= && \int\!\!\!\! \int\limits_{-\infty}^{\infty}
\frac{d\theta_0
d\theta_L d\theta_{x_0} }{(2\pi)^3}\,\, e^{ iq\theta_0+iq'\theta_L
-i\theta_{x_0} }\int\!\! {\cal D} \theta(x)\,\,
\exp\left\{ - \int\limits_0^L\!\! dx\left[{x_T\over 4} (\partial_x
\theta)^2 - {2\alpha_1\over
x_T} \cos \theta(x)\right] \right\} \, ,\nonumber
\end{eqnarray}
\end{widetext}
where $\alpha_1=\pi a^2 x_T c$.
The integral over $\theta(x)$ runs over all
functions with the boundary conditions $\theta(0)=\theta_0\,$,
$\theta(L)=\theta_L$ and $\theta(x_0)=\theta_{x_0}$.
It is easy to see that this expression represents the matrix
element of the following T-exponent (or rather X-exponent)
operator:
\begin{equation}\label{Texponent}
Z(q,q')=\langle q|e^{-{x_0\over x_T}\, \hat H}
e^{-i\theta} e^{-{L-x_0\over x_T} \, \hat H}|q'\rangle \, ,
\end{equation}
where the Hamiltonian is given by $\hat H=(i\hat\partial_\theta)^2
-2\alpha_1 \cos\theta$ and $|q\rangle$ is the eigenstate of the
momentum operator $i\hat\partial_\theta$. Since the Hamiltonian
conserves the momentum up to an integer value, $q'=q+M$, one can
restrict the Hilbert space down to the subspace with the fixed
{\em fractional} part of the boundary charge $0\leq q<1$. In this
subspace one can perform a gauge transformation, resulting in
the Mathieu Hamiltonian with a ``vector potential''
\cite{Lenard,Edwards,Kamenev}:
\begin{equation}\label{Mathieu}
\hat H_q = (i\hat \partial_\theta -q)^2 -2\,\alpha_1 \cos
\theta\, .
\end{equation}
It acts in the space of periodic functions:
$\Psi(\theta)=\Psi(\theta+2\pi)$. Finally, taking the
``democratic'' sum over all integer parts of the boundary charge
(with the fixed fractional part $q$), one obtains:
\begin{equation}\label{partition-final}
Z(q)=\mbox{Tr}\left\{ e^{-\hat H_q{x_0\over x_T}}\,e^{-i\theta}\,
e^{-\hat H_q{L-x_0\over x_T}} \right\} \, .
\end{equation}
In the more general situation the solution contains a set of ions
with charges (valences) $m\in {\cal Z}$ and the corresponding
dimensionless concentrations $\alpha_m$. The condition of total
electro-neutrality demands that:
\begin{equation}\label{neutrality}
\sum\limits_m m\, \alpha_m =0 \, .
\end{equation}
The Mathieu Hamiltonian (\ref{Mathieu}) should be generalized as
\cite{Edwards}:
\begin{equation}
\label{Mathieu-gen}
\hat H_q = (i\hat \partial_\theta -q)^2 -\sum\limits_m\alpha_m\,
e^{im\theta} \, .
\end{equation}
Despite being non--Hermitian, this Hamiltonian still possesses
a real band--structure~\cite{ftnt} $\epsilon_q^{(j)}$.
It is safe to assume that monovalent anions are always present in
the solution: $\alpha_{-1}>0$. This guarantees that the
band--structure has the unit period in $q$.
Consider first an undoped channel. Its partition function is given
by $Z(q)=\mbox{Tr}\{\exp(-\hat H_qL/x_T)\}$. In the long channel
limit, $L\gg x_T$, only the ground state, $\epsilon^{(0)}_q$, of
the Hamiltonian (\ref{Mathieu}) or (\ref{Mathieu-gen}) contributes
to the partition function. As a result, the free energy of the 1d
plasma is given by
\begin{equation}
\label{free-energy}
F_q=k_BT \epsilon^{(0)}_q(\alpha_m) L/x_T=4U_L(0)
\epsilon^{(0)}_q(\alpha_m) \, .
\end{equation}
The equilibrium ground state of the plasma corresponds to the
minimal value of the free energy. For the Mathieu Hamiltonian
(\ref{Mathieu}) $\epsilon^{(0)}_q$ has a single minimum at $q=0$
(no induced charge at the boundary). This is also the case for the
more general Hamiltonians (\ref{Mathieu-gen}). Therefore the
equilibrium state of a neutral 1d Coulomb plasma does {\em not}
have a dipole moment and possesses the reflection symmetry.
The adiabatic transfer of the unit charge across the channel is
associated with the slow change of $q$ from $q=0$ to $q=\pm 1$.
In this way the system must overcome the free energy maximum at
some value of $q$ (for the operators (\ref{Mathieu}) and
(\ref{Mathieu-gen}) the maximum is at $q = 1/2$). As a result,
the activation barrier for the charge transfer is proportional to
the band-width of the lowest Bloch band \cite{Kamenev}:
\begin{equation}
\label{barrier}
U_L(\alpha_m) = 4 U_L(0)
\left(\epsilon^{(0)}_{\mbox{max}}(\alpha_m) -
\epsilon^{(0)}_{\mbox{min}}(\alpha_m)\right) \, .
\end{equation}
Notice that in the ideal 1d Coulomb plasma the transport barrier
scales as the system size. It is also worth mentioning that both
the equilibrium free energy $F_0= 4 U_L(0)
\epsilon^{(0)}_{\mbox{min}}(\alpha_m)$ and the transport barrier,
Eq.~(\ref{barrier}), are smooth analytic functions of the
concentrations $\alpha_m$. Since there is a unique minimum and
maximum within the interval $0\leq q<1$, there are {\em no} phase
transitions in undoped channels.
Most biological ion channels have internal ``doping'' wall
charges within the channel. If integer charges $n_1, n_2,\ldots,
n_N$ are fixed along the channel at the coordinates
$0<x_1<x_2<\ldots< x_N<L$, the partition function is obtained by
straightforward generalization of Eq.~(\ref{partition-final}):
\begin{equation}\label{dopedpartition}
Z(q) =\mbox{Tr}\left\{ e^{-\hat H {x_1\over x_T} } e^{in_1\theta}
e^{-\hat H{x_2-x_1\over x_T} }\ldots e^{in_N\theta} e^{-\hat H{
L-x_N\over x_T} } \right\}\, .
\end{equation}
As long as all $n_k$ are integer, the boundary charge $q$ is a
good quantum number of the operator under the trace sign. As a
result, the partition function is again a periodic function of $q$
with the unit period.
For the sake of illustration we shall focus on systems with
periodic arrangements of the wall charges. In this case the
partition function (\ref{dopedpartition}) takes the form:
$Z(q)=\mbox{Tr}\{\left( \hat {\cal U}_q \right)^N \}$, where $N$
is the number of dopants in the channel and $\hat {\cal U}_q$ is
the single--period evolution operator. We shall define the
spectrum of this operator as:
\begin{equation}
\label{spectrum}
\hat {\cal U}_q\, \Psi^{(j)}_q(\theta) =
e^{-\epsilon_q^{(j)}/\gamma}\, \Psi^{(j)}_q(\theta) \, ,
\end{equation}
where $\gamma$ is the dimensionless concentration of dopants,
defined as $\gamma\equiv x_TN/L$. The evolution operator $\hat
{\cal U}_q$ is non--Hermitian. Its spectrum is nevertheless a real
and symmetric function of $q$. The proof of this statement may be
constructed in the same way as for the operator in
Eq.~(\ref{Mathieu-gen}) \cite{ftnt}. The free energy of a long
doped system is given by $F_q =k_BT\epsilon_q^{(0)}N/\gamma = 4
U_L(0) \epsilon_q^{(0)} $ (cf. Eq.~(\ref{free-energy})). The
equilibrium ground state is given by the absolute minimum of this
function. Similarly, the transport barrier is given by
Eq.~(\ref{barrier}).
The simplest example is the periodic sequence of unit--charge
negative dopants ($x_{k+1}-x_k=L/N$ and $n_k=-1$) in a
monovalent salt solution (section \ref{sec2}). The single--period
evolution operator takes the form:
\begin{equation}
\label{one-period}
\hat {\cal U}_q = e^{-i\theta} e^{-\hat H_q/\gamma } \, ,
\end{equation}
with $\hat H_q$ given by Eq.~(\ref{Mathieu}). As shown in
Ref.~\onlinecite{Zhang} its ground state
$\epsilon_q^{(0)}(\alpha_1,\gamma)$ is a function qualitatively
similar to the lowest Bloch band of the Mathieu operator in
Eq.~(\ref{Mathieu}). As a result, both the equilibrium free energy and
the transport barrier are smooth functions of the salt
concentration $\alpha_1$ and the dopant concentration $\gamma$.
The examples of sections \ref{sec4} and \ref{sec5} are described
by the evolution operators, which have the form of
Eq.~(\ref{one-period}) with the generalized Hamiltonian,
Eq.~(\ref{Mathieu-gen}). In the example of section \ref{sec4}
there are two non-zero concentrations: $\alpha_{-1}=2\alpha_2$,
while in section \ref{sec5} one deals with three types of ions:
$\alpha_{-1}=\alpha_1+2\alpha_2$. Finally, the alternating doping
example of section \ref{sec3} is described by the evolution
operator of the form
\begin{equation}
\label{uq2}
\hat {\cal U}_q = e^{-i\theta} e^{-\hat H_q/\gamma } e^{+i\theta}
e^{-\hat H_q/\gamma }
\end{equation}
with the Hamiltonian~(\ref{Mathieu}). Such operators may have a more
complicated structure of their lowest Bloch band. In particular,
the latter may have more than one minimum within the period of the
reciprocal lattice: $q\in [0,1]$. The competition between (and
splitting of) the minima results in the first (second) order phase
transitions.
The analytic investigation of the spectrum of the evolution
operators, such as Eq.~(\ref{one-period}), is possible in the
limits of small and large dopant concentrations $\gamma$. For
$\gamma\ll 1$ it follows from Eq.~(\ref{one-period}) that only the
lowest eigenvalues of $\hat H_q$ are important. It is then enough
to keep only the ground state of $\hat H_q$, save for the immediate
vicinity of $q=1/2$, where the ground and the first excited states
may be nearly degenerate. If also $\alpha_m\ll 1$, the spectrum of
$\hat H_q$ along with the matrix elements of $e^{\pm i\theta}$ may
be calculated in perturbation theory. Such calculations lead
to the free energies of the ground and saddle point states, which
are identical to those derived in sections \ref{sec2}--\ref{sec5}
using simple energy and entropy counting.
In the limit $\gamma\gg 1,\alpha_m$ one may develop a variant of
the WKB approximation in the plane of the complex $\theta$
\cite{Zhang}. It shows that the transport barrier and the latent
concentration scale as $\exp\{-c\sqrt{\gamma}\}$, where the
non-universal numbers $c$ are given by certain contour integrals
in the complex plane of $\theta$. In the example of section
\ref{sec2} we succeeded in quantitatively predicting the
coefficient $c$, see Ref.~[\onlinecite{Zhang}]. In the case of
section \ref{sec4} the coefficient in front of $\cos(2\pi q)$ (see
Eq.~(\ref{doublecos})) is an oscillatory function of $\gamma$.
This probably translates into a complex value of the
corresponding $c$--constant. We did not succeed, however, in its
analytical evaluation.
\section{Numerical calculations}
\label{secnumerical}
By far the simplest way to find the spectrum of $\hat {\cal
U}_q$ is numerical. In the basis of the angular momentum $e^{ik
\theta}$ the Hamiltonian (\ref{Mathieu-gen}) takes the form of the
matrix:
\begin{equation}
\label{hamiltonianmatrix}
\left[\hat H_q\right]_{k,k'} = \left[ (k+q)^2
\delta_{k,k'}-\sum\limits_m \alpha_m \delta_{k,k'+m}\right]\, .
\end{equation}
In the same basis the operator $e^{in\theta}$, which inserts a
dopant charge $n$, takes the matrix form: $\left[e^{in\theta} \right]_{k,k'}= \left[
\delta_{k,k'+n} \right]$, where $k=\ldots, -2,-1,0, 1,2, \ldots$ .
Truncating these infinite matrices with some large cutoff, one may
exponentiate the Hamiltonian and obtain the matrix form of
$\left[\hat {\cal U}_q\right]_{k,k'}$. The latter may be
numerically diagonalized to find the function
$\epsilon^{(0)}_q(\alpha_m,\gamma)$. The free energy and the
transport barrier are then given by Eqs.~(\ref{free-energy}) and
(\ref{barrier}).
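As an explicit illustration of this recipe, the short Python sketch
below (our own illustration; it assumes only {\tt numpy} and {\tt
scipy}, and the cutoff and all parameter values are placeholders)
builds the truncated matrices and extracts the lowest band
$\epsilon^{(0)}_q$ from the largest eigenvalue of $\hat {\cal U}_q$:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

K = 20                        # momentum cutoff: basis k = -K, ..., K
ks = np.arange(-K, K + 1)

def H_q(q, alphas):
    """Truncated matrix of Eq. (Mathieu-gen):
    [H_q]_{k,k'} = (k+q)^2 delta_{k,k'} - sum_m alpha_m delta_{k,k'+m}."""
    H = np.diag((ks + q) ** 2).astype(complex)
    for m, a_m in alphas.items():
        H -= a_m * np.eye(2 * K + 1, k=-m)   # delta_{k,k'+m}
    return H

def shift(n):
    """Matrix of exp(i n theta) in the angular-momentum basis."""
    return np.eye(2 * K + 1, k=-n)

def eps0(q, alphas, gamma):
    """Lowest band eps^(0)_q from the largest eigenvalue
    exp(-eps^(0)_q/gamma) of U_q = exp(-i theta) exp(-H_q/gamma)."""
    U = shift(-1) @ expm(-H_q(q, alphas) / gamma)
    return -gamma * np.log(np.abs(np.linalg.eigvals(U)).max())
\end{verbatim}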
We start from the simplest case of section \ref{sec2}. The
monovalent salt with the concentration $\alpha_{-1}=\alpha_{1}$
leads to the term $-\alpha_1 \delta_{k,k'+1}-\alpha_1
\delta_{k,k'-1}$ in the matrix $\left[\hat H_q\right]$. For
illustration we show the $4 \times 4$ truncation of $\left[\hat
H_q\right]$ and $\left[ e^{-i \theta} \right]$ matrices
$\left[\hat H_q\right] \rightarrow \left(\begin{array}{clcr}
(1+q)^2 & -\alpha_1 & 0 & 0 \\
-\alpha_1 & (0+q)^2 & -\alpha_1 & 0 \\
0 & -\alpha_1 & (-1+q)^2 & -\alpha_1 \\
0 & 0 & -\alpha_1 & (-2+q)^2 \\
\end{array}
\right)$ and $\left[ e^{-i \theta} \right] \rightarrow
\left(\begin{array}{clcr}
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 \\
\end{array}
\right)$. For reasonable precision the truncation size of the
numerical calculation has to be much larger. Typically we used $40
\times 40$ and checked that a further increase does not affect the
results.
To calculate the ``energy'' band $F_q$ of a long channel for
certain $\alpha_1$ and $\gamma$ one can use the matrix form of
$\hat H_q$ and $e^{-i \theta} $, equate the largest eigenvalue of
$\hat {\cal U}_q$ in Eq. (\ref{one-period})
to $e^{-\epsilon^{(0)}_q/\gamma}$, and calculate
$F_q=k_BT\epsilon^{(0)}_{q} L/x_T$. Such a calculation gives the
transport barrier $U_L(\alpha_1,\gamma) \equiv F_{max} -F_{min}$
shown in Fig. \ref{figfgamma}.
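Continuing the sketch above, the transport barrier of a long channel
is obtained by scanning $q$ over one period (the values of $\alpha_1$
and $\gamma$ below are sample placeholders):
\begin{verbatim}
alphas = {1: 0.01, -1: 0.01}      # monovalent salt: alpha_{-1}=alpha_1
gamma = 0.3
qs = np.linspace(0.0, 1.0, 101)
band = np.array([eps0(q, alphas, gamma) for q in qs])
# U_L = 4 U_L(0) (eps_max - eps_min), cf. Eq. (barrier)
print("U_L =", 4 * (band.max() - band.min()), "* U_L(0)")
\end{verbatim}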
The models in sections \ref{sec4} and \ref{sec5} are treated
similarly, except that they have salt ion terms $-{\alpha_2}
\delta_{k,k'+2}-2\alpha_2 \delta_{k,k'-1}$ and $-\alpha_1
\delta_{k,k'+1}-\alpha_2 \delta_{k,k'+2}-(\alpha_1+2\alpha_2)
\delta_{k,k'-1}$, respectively, in $\left[\hat H_q\right]$. For the
model of section \ref{sec3}, $\left[\hat H_q\right]$ contains
salt ion term $-\alpha_1 \delta_{k,k'+1}-\alpha_1
\delta_{k,k'-1}$. However, $\hat {\cal U}_q$ is of the form of
Eq.~(\ref{uq2}) and its largest eigenvalue is denoted as
$e^{-2\epsilon^{(0)}_q/\gamma}$.
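In the numerical sketch above this amounts merely to replacing the
single-period operator; a minimal fragment:
\begin{verbatim}
def eps0_alternating(q, alphas, gamma):
    """Lowest band for Eq. (uq2), whose largest eigenvalue is
    exp(-2 eps^(0)_q / gamma)."""
    E = expm(-H_q(q, alphas) / gamma)
    U = shift(-1) @ E @ shift(+1) @ E
    return -0.5 * gamma * np.log(np.abs(np.linalg.eigvals(U)).max())
\end{verbatim}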
\section{Effects of the finite length and the electric field escape}
\label{secxi}
The consideration of the previous sections was certainly an
idealization that neglected several important phenomena. The most
essential of them are: (i) the finite length $L$ of the channel;
(ii) the escape of the electric field lines from the water into
the media with a smaller dielectric constant. Each of these
phenomena leads to a smearing of the ion-exchange phase
transitions, transforming them into crossovers. The goal of this
section is to estimate the relative sharpness of these crossovers.
Consider first the effect of the finite length (still neglecting
the field escape). Close to the first order phase transition the
free energy admits two competing minima (typically at $q=0$ and
$q=1/2$) with the free energies $F_0(\alpha_1)$ and
$F_{1/2}(\alpha_1)$, see Fig.~\ref{qalternative} (we focus on the
alternating dopants example of section \ref{sec3}). Being an
extensive quantity, the free energy is proportional to the channel
length: $F_b\propto L$, where $b=0,1/2$. Each of these two minima
is characterized by a certain ion concentration
$n_b(\alpha_1)=-\alpha_1/(k_BTL)\partial F_b/\partial \alpha_1$.
In the vicinity of the phase transition at $\alpha_1=\alpha_c$ the
difference of the two free energies may be written as:
\begin{equation}\label{finiteL}
\frac{F_0(\alpha_1)-F_{1/2}(\alpha_1)}{k_BT} =
\Delta n_{\mbox{ion}} L \, \frac{\alpha_1-\alpha_c}{\alpha_c}\, ,
\end{equation}
where $\Delta n_{\mbox{ion}}=n_0(\alpha_c)-n_{1/2}(\alpha_c)$ is
the latent concentration of ions across the transition. Taking a
weighted sum of the two states, one finds that the concentration
change across the transition is given by the ``Fermi function'':
\begin{equation}\label{crossover}
\Delta n(\alpha_1)=\frac{\Delta n_{\mbox{ion}} }
{e^{\Delta N(\alpha_c-\alpha_1)/\alpha_c}+1}\, ,
\end{equation}
where $\Delta N\equiv \Delta n_{\mbox{ion}}L$ is the total latent
amount of ions in the finite length channel. This gives for the
transition width $(\alpha_1-\alpha_c)/\alpha_c \propto 1/\Delta
N$. Therefore the transition is relatively sharp as long as
$\Delta N\gg 1$. For small enough $\gamma$ the number of ions
entering or leaving the channel at the phase transition is almost
equal to the number of dopants: $\Delta N\lesssim
N_{\mbox{dopants}}=\gamma L/x_T$. The necessary condition of
having a sharp transition, therefore, is to have many dopants
inside the channel. For example, for the transition of
Fig.~\ref{figlatentjump}, where $\Delta n_{\mbox{ion}}\approx
0.8\gamma/x_T$ and $\gamma=0.1$, one finds $\Delta N\approx 0.08
L/x_T $. At larger $\gamma$ the number of dopants increases (for
fixed length $L$), but the relative latent concentration is
rapidly decreasing, see Fig.~\ref{figlatentnumb}. Employing
Eq.~(\ref{gammac}), one estimates $\Delta N\approx \gamma
(1-2e^{-1/(4\gamma)})L/x_T$; this expression is maximized at
$\gamma \approx 0.15$, where $\Delta N \approx 0.1L/x_T$. We
notice, in passing, that even for $\Delta N\lesssim 1$, when
plotted as a function of $\log \alpha_2$, the function
(\ref{crossover}) still looks like a rather sharp crossover.
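The quoted optimum is easily checked numerically; a minimal sketch:
\begin{verbatim}
import numpy as np
# Delta N * x_T / L = gamma * (1 - 2 exp(-1/(4 gamma))), cf. Eq. (gammac)
g = np.linspace(0.05, 0.5, 1000)
f = g * (1.0 - 2.0 * np.exp(-1.0 / (4.0 * g)))
print(g[f.argmax()], f.max())    # approx 0.15 and 0.09-0.1, as quoted
\end{verbatim}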
\begin{figure}[ht]
\begin{center}
\includegraphics[height=0.22\textheight]{potentialcurve.eps}
\end{center}
\caption{Electrostatic potential $\Phi(x)$ in units of $e/\kappa_1
a$ of a point charge in an infinitely long channel as a function
of the dimensionless distance $x/a$ along the channel axis. Here
$a$ is the radius of the cylinder, and $\kappa_1/\kappa_2=40$. The
full line is an exact solution of the Laplace equation
\cite{Smythe,Parsegian}. The dotted lines are 3d Coulomb
potentials: the lower one is $-e/\kappa_1 x$; the upper one is
$\Phi_{\infty}-e/\kappa_2 x$, where $e\Phi_{\infty}=2 U_{\infty}$
and $U_{\infty}$ is the self-energy of a charge in the infinite
channel. The dashed line corresponds to Eq.~(\ref{xipotential}).
} \label{figsingle}
\end{figure}
We turn now to the discussion of the electric field escape from
the channel, which happens due to the finite ratio of the
dielectric constants of the channel's interior, $\kappa_1$, and
exterior, $\kappa_2$. Fig.~\ref{figsingle} shows the electrostatic
potential of a unit point charge placed in the middle of the
channel with $\kappa_1/\kappa_2=40$, see Ref.~[\onlinecite{Smythe}]. The
potential interpolates between $-e/(\kappa_1 x)$ at small
distances, $x\lesssim a$, and $2U_\infty/e - e/\kappa_2 x$, where
$U_\infty = e^2\xi/(\kappa_1 a^2)$, at large distances $x>\xi$.
Here the length scale $\xi\simeq a\sqrt{\kappa_1/\kappa_2}\approx
6.8a$ is the characteristic escape length of the electric field
displacement from the interior of the channel into the surrounding
media with the smaller dielectric constant. The quantity $U_\infty
= U_L(0)2\xi/L$ is the excess self-energy of bringing a unit
charge inside the infinite channel.
In between the two limits the potential may be well approximated
by the following phenomenological expression (dashed line in
Fig.~\ref{figsingle}):
\begin{equation}\label{xipotential}
\Phi(x)=E_0\xi\left[1-e^{-|x|/\xi}-1.1\,{a\over\xi}\right]\, .
\end{equation}
Our previous considerations correspond to the limit $\xi\to
\infty$ (save for the last term). The last term in
Eq.~(\ref{xipotential}) originates from the fact that in the
immediate vicinity of the charge $|x|\lesssim a$ the electric
field is not disturbed by the presence of the channel walls. As a
result, a length of $\approx 1.1 a$ is excluded from paying the
excess self-energy price. This leads to a (typically slight)
renormalization of the effective concentrations $\alpha\to
\alpha_{eff}$. A more detailed discussion may be found in
Ref.~[\onlinecite{Kamenev}]; below we neglect the last term in
Eq.~(\ref{xipotential}).
One can repeat the derivation of section \ref{secanalytical} with
the potential Eq.~(\ref{xipotential}), employing the fact that
$\Phi^{-1}=(2E_0)^{-1}\delta(x-x')[\xi^2-\partial^2_x]$. As a
result, one arrives at Eq.~(\ref{Texponent}) with the modified
Hamiltonian $\hat H=(i\hat\partial_\theta)^2 -2\alpha_1
\cos\theta+ (x_T/2\xi)^2\theta^2$. Since the last term violates
the periodicity, the quasi-momentum $q$ is not conserved. However,
in the limit $x_T/\xi\ll 1$ one can develop the quasi-classical
approximation over this small parameter. Transforming the
Hamiltonian into the momentum representation, one notices that
$q(x)$ is a slow quasi-classical variable. As a result, the
partition function of the channel with the finite $\xi$ may be
written as:
\begin{equation}\label{slowq}
Z=\!\int\!\! {\cal D}q(x)\, \exp \left\{ -\int\limits_0^L\!\! dx\left[ {\xi^2\over x_T}
(\partial_x q(x))^2 + {1\over x_T}\, \epsilon_{q(x)}^{(0)} \right] \right\}\, ,
\end{equation}
where $F_q=k_BT \epsilon_{q}^{(0)}L/x_T $ is the free energy as a
function of $q$ in the $\xi \to \infty$ limit (no electric field
escape). This expression shows that there are no true phase
transitions even if $\epsilon_q^{(0)}$ possesses two separate
minima. Indeed, due to its finite rigidity the $q(x)$ field may
form domain walls and wander between the two. As a result, the
first order transition is transformed into a crossover. Formally
Eq.~(\ref{slowq}) defines a ``quantum mechanics'' with the
potential $\sim \epsilon_q^{(0)}$. The smearing of the transition
is equivalent to an avoided crossing due to
tunnelling between the two minima of the $\epsilon_q^{(0)}$
potential. Using this analogy, one finds for the concentration
change across the smeared transition:
\begin{equation}\label{crossoverxi}
\Delta n(\alpha_1)= \frac{\Delta n_{\mbox{ion}} }{2}\left[1+
\frac{\alpha_1-\alpha_c}{\sqrt{(\alpha_1-\alpha_c )^2
+\alpha_c^2\delta^2}}\right]\, ,
\end{equation}
where $\delta$ is the WKB tunnelling exponent:
\begin{equation}\label{tunneling}
\delta=\exp\left\{-{\xi\over x_T}\int\limits_0^{1/2}\!\!
dq\, \sqrt{\epsilon_q^{(0)}-\epsilon_0^{(0)}}\right\}\, .
\end{equation}
As a reasonable approximation for $\epsilon_q^{(0)}(\alpha_c)$ one
may use (c.f. Eq.~(\ref{doublecos})) $\epsilon_q^{(0)}(\alpha_c)=
U_c/(8U_L(0)) \cos (4\pi q)$, where $U_c$ is the transport barrier
at the critical point. Substituting this expression into
Eq.~(\ref{tunneling}), one estimates $\delta\approx
\exp\{-\xi/(2\pi x_T)\sqrt{U_c/U_L(0)}\}$. Using $\xi\approx 6.8\,
a$ and $U_c\approx 0.2\,U_L(0)$ (cf. Fig.~\ref{figalternative}),
and recalling that $x_T=a^2/(2l_B)$, one obtains $\delta\approx
\exp\{-l_B/a\}$. Thus for channels with
$a< l_B$ one obtains $\delta \lesssim 0.4$ and the crossover
Eq.~(\ref{crossoverxi}) is relatively sharp.
\section{Contact (Donnan) potential}
\label{Donnan}
Until now we concentrated on the barrier proportional to the
channel length $L$ (or the escape length $\xi$). If $\alpha \ll
\gamma$ there is an additional contribution to the transport
barrier, independent of $L$. It is related to the large difference in
cation concentrations inside and outside the channel. The
corresponding contact (Donnan) potential $U_D$ is created by
double layers at each end of the channel, consisting of one or more
uncompensated negative dopants and a positive screening charge near
the channel's mouth.
For $ \gamma \ll 1$ one finds $|U_{D}| \ll U_{L}(\gamma)$ and the
channel resistance remains exponentially large. When $\gamma$
grows the barrier $U_{L}(\gamma)$ decreases and becomes smaller
than $U_{D} =- k_B T \ln(\gamma/\alpha)$, which increases with
$\gamma$. In this case the measured resistance may be even smaller
than the naive geometrical diffusion resistance of the channel.
Let us, for example, consider a channel with $L=5$ nm, $a = 0.7$
nm, $x_T=0.35$ nm at $c= 0.1$ M (which corresponds to
$\alpha=0.035$) and $\gamma=0.3$ (5 dopant charges in the
channel). The bare barrier $U_L(0) = 3.5 k_{B}T$ is reduced down
to $U_L(\gamma) = 0.2 k_{B}T$. At the same time $U_{D} = -
2.5k_{B}T$. Thus, due to the 5 wall charges, instead of the bare
parabolic barrier of Fig.~\ref{figshortchannel} we arrive at a
wide well with an almost flat bottom (Fig.~\ref{figschottky}).
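The numbers of this example follow directly from the definitions used
above, as the short sketch below illustrates; the value of
$U_L(\gamma)$ itself comes from the band-structure calculation of
section~\ref{secnumerical} and is simply quoted, and the small
residual differences reflect the rounding of the original estimates.
\begin{verbatim}
import numpy as np
# L = 5 nm, a = 0.7 nm, x_T = 0.35 nm, c = 0.1 M, gamma = 0.3;
# lengths in nm, and 0.1 M corresponds to about 0.0602 ions/nm^3.
L, a, x_T, gamma = 5.0, 0.7, 0.35, 0.3
c = 0.0602                          # nm^-3

alpha = np.pi * a**2 * x_T * c      # ~0.03  (quoted as 0.035)
U_L0  = L / (4.0 * x_T)             # ~3.6 k_BT, bare barrier U_L(0)
U_D   = -np.log(gamma / alpha)      # ~-2.2 k_BT, Donnan well depth
N_dop = gamma * L / x_T             # ~4.3, i.e. about 5 wall charges
print(alpha, U_L0, U_D, N_dop)
\end{verbatim}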
\begin{figure}[ht]
\begin{center}
\includegraphics[height=0.12\textheight]{Uc9.eps}
\end{center}
\caption{The electrostatic potential for cations across the
channel with 5 dopants, considered in the text.}
\label{figschottky}
\end{figure}
Unlike the $L$-dependent self-energy barrier considered above, the
Donnan potential is charge sensitive. In the
negatively doped channel the Donnan potential well exists only for
the cations, while anions see an additional potential barrier.
Thus, the Donnan potential naturally explains why in negatively
doped biological channels the cation permeability is substantially
larger than the anion one.
The contact potential $U_D$ may be augmented by the negative
surface charge of the lipid membrane~\cite{Appel} or by affinity
of internal walls to a selected ion, due to ion--specific short
range interactions~\cite{Doyle,Hille}. It seems that biological
channels have evolved to compensate the large electrostatic
barrier by the combined effect of $U_{D}$ and short range
potentials. Our theory is helpful if one wants to study different
components of the barrier or to modify a channel. In narrow
artificial nanopores there is no reason for compensation of the
electrostatic barrier. In this case, our theory may be verified by
titration of wall charges.
\section{Conclusion}
\label{secconclusion}
In this paper we studied the role played by wall charges, which we
call dopants, in the charge transport through ion channels and
nanopores. We considered various distributions of dopant charges
and salt contents of the solution and showed that for all of them
doping reduces the electrostatic self-energy barrier for ion
transport. This conclusion is in qualitative agreement with the
general statement on the role of transitional binding inside the
channel~\cite{Bezrukov}. In the simplest case of identical
monovalent dopants and a monovalent salt solution such a reduction
is a monotonic and smooth function of the salt and dopant
concentrations. The phenomenon is similar to the low-temperature
Mott insulator-metal transition in doped semiconductors. However,
due to the inefficiency of screening in the one-dimensional
geometry, we arrive at a crossover rather than a transition even
in an infinite channel.
A remarkable observation of this paper is that the interplay of
the ion entropy and the electrostatic energy may lead to true
thermodynamic ion--exchange phase transitions. A necessary
condition for such a transition to take place is the competition
between several possible ground states. This in turn is
possible for compensated (e.g. alternating) doping or for a mixture
of cations of various valences. The ion--exchange transitions are
characterized by latent concentrations of ions. In other words,
upon crossing a critical bulk concentration a certain amount of
ions is suddenly absorbed or released by the channel. The phase
transitions also lead to non-monotonic dependencies of the
activation barrier as a function of the ion and dopant
concentrations. For simplicity we restricted ourselves to
periodic arrangements of dopants. The existence of the phase
transitions is a generic feature based only on the possibility of
having more than one ground state with global charge neutrality.
Thus they exist for arbitrarily positioned dopants. In reality the
phase transitions are smeared into relatively sharp crossovers due
to finite size effects along with the finite electric field
escape length, $\xi$.
We have also demonstrated that the doping can make the channels
selective to one sign of monovalent salt ions or to divalent
cations. This helps to understand how biological K, Na and Ca
channels select cations and how the Ca/Na channel selects Ca versus
Na. A surprising fact is that Ca$^{2+}$ ions, which could be
expected to have a four times larger self-energy barrier, actually
exhibit the same barrier as Na$^{+}$. This phenomenon is
explained by the fractionalization of Ca$^{2+}$ into two
unit-charge mobile solitons.
We study here only very simple models of a channel with charged
walls. This is the price for many asymptotically exact results.
Our results, of course, cannot replace the powerful numerical methods
used for the description of specific biological channels~\cite{Roux}.
In the future this theory may be used in nano-engineering projects
such as the modification of biological channels and the design of
long artificial nanopores. Another possible nano-engineering
application deals with the transport of charged polymers through
biological or artificial channels. A polymer moves slowly, so that
for ions its charges may be considered static. Therefore, for thin
and stiff polymers in the channel, the charges on the polymer can
play the same role as dopants. As a result, all of the above
discussion is directly applicable to the case of a long charged
polymer slowly moving through the channel. By changing the polymer
one can change the dopant density.
In a more complicated scenario, the polymer can be bulky and
occupy a substantial part of the channel's cross-section. An
important example of such a situation is the translocation of a
single-stranded DNA molecule through the $\alpha$-Hemolysin
channel~\cite{Meller}. In this case, the narrow part of the channel,
immersed in the lipid membrane (the $\beta$-barrel), can be
approximated as an empty cylinder, while the DNA may be considered
as a coaxial cylinder blocking approximately half of the channel
cross-section. The dielectric constant of DNA is of the same order
as that of lipids. Thus, the electric field lines of a charge
located in the water gap between the two cylinders are squeezed
much more strongly than in the empty channel. This may explain the
strong reduction of the ion current in the presence of DNA, which
is also different for poly-A and poly-C DNA~\cite{Meller}. The
latter remarkable observation inspires the hope that the
translocation of DNA may be used as a fast method of DNA
sequencing. We shall discuss the bulky polymer situation in a
future publication.
We are grateful to S. Bezrukov, A. I. Larkin and A. Parsegian for
interesting discussions. A.~K. is supported by the A.P. Sloan
foundation and the NSF grant DMR--0405212. B.~I.~S was supported
by NSF grant DMI-0210844.
\begin{appendix}
\section{Phase transitions at large dopant concentration}
\label{app1}
In this appendix we discuss some details of the phase transitions
at $\gamma_{c1}\approx 1.7$, $\gamma_{c2}\approx 5.3$,
$\gamma_{c3}\approx 10.8$, etc, visible in
Fig.~\ref{figdoubleabs}. The free energy $F_q$ as a function of the
order parameter $q$ for a few values of $\gamma$ in the vicinity
of $\gamma_{c1}$ is plotted in Fig.~\ref{figprocess}. The minimum,
initially at $q=1/2$ for $\gamma_c<\gamma<\gamma_{c1}$, splits
into two minima symmetric around $1/2$. These two gradually move
away from each other until they reach $q=0$ and $q=1$,
respectively, for $\gamma > \gamma_{c1}$.
\begin{figure}[ht]
\begin{center}
\includegraphics[height=0.2\textheight]{processg.eps}
\end{center}
\caption{Free energy in units of $10^{-10}U_L(0)$ as a function
of $q$ for $\gamma=1.7179330,~1.7179337,~1.7179341$ and
$1.7179346$ (from top to bottom) with the same $\alpha_2=5\cdot
10^{-5}$. The graphs are vertically offset for clarity. The ground
state continuously changes from $q=1/2$ to
$q=0$.}\label{figprocess}
\end{figure}
The continuous variation of the absolute minimum (as opposed to
a discrete switch between two fixed minima) suggests a
second order phase transition scenario taking place at
$\gamma_{c1}$. The situation is more intricate, however. Namely,
there are two (and not one) very closely spaced second order
transitions, situated symmetrically around the point
$\gamma_{c1}$. At the first transition the minima depart from
$q=1/2$ and start moving towards $q=0$ and $q=1$, while at the
second one they ``stick'' to $q=0,1$. Therefore, unless one has a
very fine resolution (in $\gamma$ and/or $\alpha_2$), the entire
behavior looks like a single first order transition.
The $F_q$ functions of Fig.~\ref{figprocess} are very well fitted
with the following phenomenological expression:
\begin{equation}\label{doublecos}
F_q(\gamma) = a (\gamma_{c1}-\gamma) \cos(2 \pi q) + b \cos(4 \pi
q)\, ,
\end{equation}
where $a \gg b > 0$. For any $\gamma$ the ground state corresponds
to the value $q_0$ which minimizes $F_q$. Therefore $q_0$ is
found from either $\cos (2 \pi
q_0)=(\gamma-\gamma_{c1}){a/(4b)}$, or $\sin(2\pi q_0)=0$. The
former equation has solutions only in the narrow interval
$\gamma_{c1}-4b/a<\gamma< \gamma_{c1}+4b/a $. In this interval of
dopant concentrations the minima move from $q=1/2$ to $q=0,1$.
The edges of this interval constitute two second-order phase
transitions located in close proximity to each other. Near the
first transition $|q_0-1/2|\simeq \sqrt{\gamma
+4b/a-\gamma_{c1}}$, while near the second one $|q_0|\simeq
\sqrt{\gamma_{c1}+4b/a-\gamma}$. Therefore the critical exponent
is the mean--field one: $\beta=1/2$. This could be anticipated for
a system with long-range interactions.
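The two closely spaced transitions are easily exhibited by a direct
numerical minimization of the form (\ref{doublecos}); in the sketch
below the coefficients and the offsets from $\gamma_{c1}$ are
illustrative placeholders with $a \gg b > 0$.
\begin{verbatim}
import numpy as np
# Minimize F_q = A (gamma_c1 - gamma) cos(2 pi q) + B cos(4 pi q)
# over one period; A >> B > 0 are placeholders, dg = gamma - gamma_c1.
A, B = 1.0, 1.0e-3
qs = np.linspace(0.0, 1.0, 20001)
for dg in [-6e-3, -2e-3, 0.0, 2e-3, 6e-3]:
    F = -A * dg * np.cos(2*np.pi*qs) + B * np.cos(4*np.pi*qs)
    print(f"gamma - gamma_c1 = {dg:+.0e}   q0 = {qs[F.argmin()]:.3f}")
# The minimum departs from q = 1/2 at dg = -4B/A and sticks to q = 0
# (or q = 1) at dg = +4B/A: two closely spaced second-order transitions.
\end{verbatim}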
It is interesting to notice that the first order transition,
discussed in the main text, may also be well fitted with
Eq.~(\ref{doublecos}), but with {\em negative} coefficients $a$ and
$b$. The accuracy of our calculations is not sufficient to
establish whether the subsequent transitions are first order
or pairs of very closely spaced second order transitions. As
far as we can see, the sequence of reentrant transitions
continues at larger dopant concentrations. Notice, however, that
the differences of the corresponding free energies (and thus the
associated latent concentrations) are exponentially small at large
$\gamma$.
\end{appendix}
\section{Introduction}
New stellar evolution models which include the effects of rotationally
induced mixing (Heger \& Langer 2000; Meynet \& Maeder 2000) have
considerably changed our understanding of the evolution of high-mass
stars, particularly during the early phases of core hydrogen burning.
Rotation is now recognized as an important physical effect which
substantially changes the lifetimes, chemical yields and stellar
evolution. Theoretical predictions can be observationally tested,
and some attempts at this have already been made (c.f.~Venn et al.\ 2002).
Chemical analysis of the components of binary stars with precisely
known fundamental stellar parameters allows a powerful comparison
with theory. However, the precision of empirical abundances from
doub\-le-lined binaries is hampered by increased line blending and by
dilution of the spectral lines in the composite spectra. The
techniques of spectral disentangling (Simon \& Sturm 1994; Hadrava
1995) and Doppler tomography (Bagnuolo \& Gies 1991) overcome these
difficulties by separating the spectra of the individual components
contained in a time-series of composite spectra taken over the
orbital cycle.
\begin{figure}
\begin{tabular}{ll}
\includegraphics[width=5.7cm]{pavlovskifig1a.ps} &
\includegraphics[width=5.7cm]{pavlovskifig1b.ps}
\end{tabular}
\caption{The best fit of the calculated profiles (thin black line) of the
H$\gamma$ line compared to the observed profiles (thick gray) for the
components of V453 Cyg, left panel, and the primary component
of V380 Cyg, right panel.}
\end{figure}
Pavlovski \& Hensberge (2005) have performed a detailed spectral line
analysis of disentangled component spectra of the eclipsing early-B
binary V578 Mon in the open cluster NGC~2244, which is embedded in the
Rosette Nebula. It is based on the disentangled spectra obtained by
Hensberge, Pavlovski \& Verschueren (2000) when deriving the orbit and
the fundamental stellar parameters of this eclipsing, detached,
double-lined system. V578 Mon consists of very young ($2.3\pm0.2
\times 10^6$ yr) high-mass stars, $M_A = 14.54\pm 0.08$ M$_\odot$ and
$M_B = 10.29 \pm 0.06$ M$_\odot$. The stars rotate moderately fast
($v \sin i \sim 100$ km$\,$s$^{-1}$). By comparison with spectra of
single stars in the same open cluster (Vrancken et al.\ 1997),
temperature-dependent, faint spectral features are shown to reproduce
well in the disentangled spectra, which validates a detailed
quantitative analysis of these component spectra. An abundance
analysis differential to a sharp-lined single star, as applied
earlier in this cluster to single stars rotating faster than the
components of V578 Mon, revealed abundances in agreement with the
cluster stars studied by Vrancken et al.\ (1997) and the large
inner-disk sample of Daflon et al.\ (2004). Pavlovski \& Hensberge
(2005) have concluded that methods applicable to observed single
star spectra perform well on disentangled spectra, given that the
latter are carefully normalised to their intrinsic continua.
Since the fundamental stellar and atmospheric parameters of eclipsing,
double-lined spectroscopic binaries are known with much better
accuracy than in the case of single stars, the comparison with
evolutionary models can be more direct and precise. The present work
is a continuation of an observational project to test rotationally
induced mixing in high-mass stars from disentangled component spectra
of close binary stars.
We will now present preliminary results on two interesting early-B
type systems, V453 Cyg and V380 Cyg. Both systems are detached,
eclipsing, double-lined spectroscopic binaries and have reliable
modern absolute dimensions, published by Southworth et al.\ (2004)
for V453 Cyg and Guinan et al.\ (2000) for V380 Cyg (Table I).
\begin{table}[!t]
\begin{tabular}{lrrrr} \hline
Quantity & V453 Cyg A$^1$ & V453 Cyg B$^1$ & V380 Cyg A$^2$ & V380 Cyg B$^2$ \\
\hline
$M$ [M$_{\odot}]$ & $14.36\pm0.20$ & $11.11\pm0.13$ & $11.1\pm0.5$ & $6.95\pm0.25$ \\
log $g$ [cgs] & $3.731\pm0.012$ & $4.005\pm0.015$ & $3.148\pm0.023$ & $4.133\pm0.023$ \\
$T_{\rm eff}$ [K] & $26\,600\pm500$ & $25\,500\pm800$ & $21\,350\pm400$ & $20\,500\pm500$ \\
$v \sin i$ [km\,s$^{-1}$] & $107\pm9$ & $97\pm20$ & $98\pm4$ & $32\pm6$ \\
$\epsilon_{\rm He}$$^3$ & $0.13\pm0.01$ & $0.09\pm0.01$ & $0.14\pm0.01$ & -- \\
\hline \end{tabular}
\caption[]{Fundamental parameters for the stars in V453 Cyg and V380 Cyg.}
Notes: (1) Southworth et al.\ (2004); (2) Guinan et al.\ (2000); (3) This work.
\end{table}
\begin{figure}
\begin{tabular}{cc}
\includegraphics[width=5.7cm]{pavlovskifig2a.ps} &
\includegraphics[width=5.7cm]{pavlovskifig2b.ps}
\end{tabular}
\caption{The best fitting calculated profiles (thin black) of the He~{\sc i}
4471 {\AA} and 6678 {\AA} lines compared to the observed profiles
(gray) in the primary component of V453 Cyg (left panel), and the
primary component of V380 Cyg (right panel). Light gray lines represent
profiles for the solar helium abundance.}
\end{figure}
\section{Spectroscopy and Method}
Several different sets of spectra were obtained for both binaries. We
will briefly describe these observations.
{\em V453 Cyg}: This binary was observed in 1991 and 1992 with the 2.2-m
telescope at the German-Spanish Astronomical Center on Calar Alto, Spain.
Four spectral windows were observed with the coud\'e spectrograph. A
total of 28 spectra were collected. These spectra were kindly put at
our disposal by Dr. Klaus Simon. Further description can be found in
Simon \& Sturm (1994). Another similar set, in two spectral windows,
was secured by one of the authors (JS) in 2001 with the 2.5-m Isaac Newton
Telescope at La Palma (Southworth et al.\ 2004). A total of 41 spectra
were obtained. An additional set of six spectra in the red region centred
on H$\alpha$ was secured by DH on the 1.2-m telescope at the DAO in 2001.
{\em V380 Cyg}: Eight spectra centred on H$\gamma$ were obtained by PK
and KP at the coud\'e spectrograph on the 2-m telescope in Ond\v{r}ejov
in 2004. An additional two spectra in the same region were obtained by
PK on the 1.2-m telescope at the DAO, Victoria, also in 2004. A
further set of eight red spectra centred on H$\alpha$, from the same
telescope, was obtained by SY in 2002 and is also used here.
To isolate the individual spectra of both components in V453 Cyg we
have made use of the spectral disentangling technique (Simon \& Sturm
1994, Hadrava 1995). The computer codes {\sc FDBinary} (Iliji\'{c} et
al.\ 2004) and {\sc cres} (Iliji\'{c} 2004), which rely on the Fourier
transform technique (Had\-ra\-va 1995), and the SVD technique in
wavelength space (Simon \& Sturm 1994), respectively, were used.
Spectral disentangling is a powerful method which has found a variety
of applications in close binary research (c.f.~Holmgren et al.\ 1998;
Hensberge et al.\ 2000; Harries et al.\ 2003; Harmanec et al.\ 2004).
The non-LTE line-formation calculations are performed using
{\sc Detail} and {\sc Surface} (Butler \& Giddings 1985), on top of
hydrostatic, plane-parallel, line-blanketed LTE model atmospheres
calculated with the {\sc Atlas9} code (Kurucz 1983). This hybrid
approach has been compared with state-of-the-art
non-LTE model atmosphere calculations and excellent agreement has been
found for the hydrogen and helium lines (Przybilla 2005).
\begin{figure}
\centerline{\includegraphics[width=7.0cm]{pavlovskifig3.ps}}
\caption{Abundances of helium in the components of the close binaries
(filled symbols; dark symbols show the results of this work)
overplotted on the results for large sample of single early-B
type stars (open symbols) of Lyubimkov et al.\ (2004).}
\end{figure}
\section{Results and Conclusion}
In the observed spectral ranges the helium abundance can be derived only
from the lines centred at 4378, 4471, 4718 and 6678 \AA. As discussed
by Lyubimkov et al.\ (2004), calculations for He I 4378 {\AA} are less
reliable using {\sc detail} since only transitions up to level $n = 4$
are considered explicitly. Since level populations can be affected by
the microturbulent parameter $V_{turb}$, it should also be included in
the calculations and adjusted to the observed line profiles.
First, a check and slight adjustment of the effective temperature has
been made for the individual component spectra of V453 Cyg and the
primary of V380 Cyg. As an example, the fit of the calculated to the
observed line profiles of the H$\gamma$ line is shown in Fig.\ 1. A
simultaneous fit of the helium abundance $\epsilon_{\rm He}$, and
microturbulent velocity $V_{turb}$ has then been performed from the
grid of the calculated spectra, while $T_{\rm eff}$ and $\log g$ have
been kept fixed.
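Schematically, this simultaneous fit amounts to a $\chi^2$ search
over a precomputed grid of synthetic profiles. The short Python
fragment below is only meant to illustrate the logic; the grid and
its axes are placeholders for the actual {\sc Detail}/{\sc Surface}
grids.
\begin{verbatim}
import numpy as np

def fit_helium(obs_flux, obs_err, model_grid, eps_He_axis, Vt_axis):
    """chi^2 grid search for (eps_He, V_turb) at fixed T_eff, log g.
    model_grid[i, j] holds the synthetic profile for eps_He_axis[i]
    and Vt_axis[j], sampled on the observed wavelength points."""
    chi2 = np.array([[np.sum(((obs_flux - m) / obs_err) ** 2)
                      for m in row] for row in model_grid])
    i, j = np.unravel_index(chi2.argmin(), chi2.shape)
    return eps_He_axis[i], Vt_axis[j], chi2[i, j]
\end{verbatim}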
Helium enrichment was found for the primary component of the
system V380 Cyg by Lyubimkov et al.\ (1996). The helium abundance they
derived, $\epsilon_{\rm He} = 0.19\pm0.05$, is considerably larger than
the value derived in the present work. The complete analysis and
discussion of possible sources of the discrepancy will be published
elsewhere.
Recently, Lyubimkov et al.\ (2004) derived the helium abundances
in a large sample of early-B type stars. Their results are plotted in
Fig.\ 3 as open symbols and confirm their finding that helium
becomes enriched in high-mass stars already on the main sequence.
However, due to the large errors in deriving the fundamental parameters
for the single stars, there is considerable scatter in their diagram.
Overplotted with filled symbols are the results for the components of
eclipsing, double-lined spectroscopic binaries, in light gray
(c.f.~Pavlovski 2004); the results of this work are shown in dark
gray. The general finding that helium is enriched in the later phases
on the main sequence is confirmed, but there is disagreement for the
early phases, for which the results from the close binaries are very
consistent and give a helium abundance close to the solar value.
However, the sample is still rather limited and more work is needed
to obtain a complete picture of the helium enrichment on the MS for
high-mass stars.
\section{Introduction}
At a distance of $\sim 4$ Mpc (Soria et~al.~\cite{soria96}, Rejkuba~\cite{rejkuba04},
Harris et~al.~\cite{harris04a}), \object{NGC\,5128} (Centaurus A)
is the closest giant elliptical (gE) galaxy (see Israel~\cite{israel98} for a review).
It possesses a rather low specific frequency of globular clusters (GCs), with
$S_N=1.4\pm0.2$ (Harris et~al.~\cite{harris04b}, hereafter HHG), yet still hosts around
1000 GCs. This makes its globular cluster system (GCS) the largest of any
galaxy known within $\sim 15$Mpc. It is therefore a prime target for studies of
extragalactic GCSs. This is especially important given that systematic
differences are suspected among GCSs depending on their host galaxy type and environment
(Fleming et~al.~\cite{fleming95}, Kissler-Patig~\cite{kissler00}).
\object{NGC\,5128} offers a unique opportunity to study in great detail the GCS of a gE and to
both compare it with closer systems, hosted only by late-type and dwarf galaxies, and
to use it as a prototype for the GCSs of more distant gEs.
However, the study of the \object{NGC\,5128} GCS has been hampered by a set of
observational circumstances which make further study far from straightforward. The
low galactic latitude ($b=+19^{o}$) makes the contamination by foreground stars a major issue. These,
together with background galaxies, vastly outnumber the cluster population and
many of them occupy a similar range in colour and magnitude, even if using a colour
like the Washington $C-T_{1}$ index, which has proven especially powerful in distinguishing
clusters from contaminating objects (Geisler et~al.~\cite{geisler96b}, Dirsch et~al.~\cite{dirsch03}).
In their wide-field Washington photometric investigation of \object{NGC\,5128}, Harris et~al.~(\cite{harris04a})
estimated that bonafide GCs constitute only $\sim1$\% of the $10^5$ objects they observed.
In addition, \object{NGC\,5128} is so close that its GCS is very spread out in angular size, and some
clusters have been found at distances as large as $40\arcmin$ from the optical center
(Harris et~al.~\cite{harris92}, Peng et~al.~\cite{peng04}), requiring the use of very wide
field of view detectors for a comprehensive study. Yet it is distant enough that GCs
cannot be easily told apart from the background and foreground population via
image resolution, at least with typical ground-based images. With characteristic
half-light radii of $0.3\arcsec - 1.0\arcsec$ (Harris et~al.~\cite{harris02}),
excellent seeing conditions are needed to resolve the majority of the clusters
from the ground.
The study of the structure of globular clusters has led to the discovery that they define a ``fundamental plane''
in analogy to that of elliptical galaxies. This is, they occupy a narrow region in multi-parameter space. This
has been shown for Milky Way clusters (Djorgovski~\cite{djorgovski95}) as well as for a few other GCSs in the Local
Group (Djorgovski et~al.~\cite{djorgovski97}, Barmby et~al.~\cite{barmby02}, Larsen et~al.~\cite{larsen02})
and a sample of \object{NGC\,5128} GCs studied with HST by Harris et~al.~(\cite{harris02}).
As the structure of a cluster is the
result of its dynamical history, it is of great importance to compare cluster structures from a variety of galaxies
along the Hubble sequence and to look for correlations with galaxy type. This is especially true in the case of
elliptical galaxies, which have presumably a more complex formation history and are likely to have experienced
several distinct formation events, as suggested by the usual bimodality in the colours of their GCSs
(Geisler et~al.~\cite{geisler96b}, Kundu \& Whitmore~\cite{kundu01},
Larsen et~al.~\cite{larsen01b}). The half-light radius $r_{h}$ should remain roughly
constant throughout the life of a GC (e.g. Spitzer \& Thuan~\cite{spitzer72}, Aarseth
\& Heggie~\cite{aarseth98}), so its current size should reflect conditions of the
proto-GC cloud. Any systematic variation of $r_{h}$ within or among galaxies
can provide insights into GC formation. For example, Mackey \& Gilmore~(\cite{mackey04}) found
very different $r_{h}$ distributions for disk/bulge, old halo and young halo Galactic GCs.
Therefore, studying structural parameters of more
GCSs and especially those of gE galaxies may help our understanding of galaxy formation.
In addition, a number of cluster subtypes have been suggested recently
on the basis of their distinct properties. For example, Larsen \& Brodie~(\cite{larsen00})
find a class of extended, intermediate luminosity clusters in NGC\,1023 which
they refer to as 'Faint Fuzzies'. They also find similar objects in NGC\,3384
(Larsen et~al.~\cite{larsen01b}). Since these objects have so far only been identified
in these two lenticular galaxies, it has recently been suggested (Burkert
et~al.~\cite{burkert05}) that these objects form {\em only\/} in such galaxies and indeed that their
formation may be intimately related to that of their parent galaxy. Similarly,
Huxor et~al.~(\cite{huxor05}) have discovered 3 very large, luminous GCs in the halo
of M31 which appear to be unique. In addition, a new type of Ultra Compact
Dwarf (UCD) galaxy now appears to exist (Hilker et~al.~\cite{hilker99}, Drinkwater
et~al.~\cite{drinkwater02}) which may or may not be related to GCs (Mieske et~al.~\cite{mieske02}).
Ha\c{s}egan et~al.~(\cite{hasegan05}) report the discovery of several bright
objects in the Virgo Cluster which they refer to as DGTOs (Dwarf-Globular Transition Objects)
and propose the use of the M/L ratio to distinguish between bright GCs and UCDs.
How unique are such objects? Do they exist in other galaxies, of other types?
Are their properties truly distinct from those of other GCs or do GCs populate
a continuum with no clear subclasses? What is the relation of GCs to the UCDs,
if any? Such questions can be addressed by obtaining high quality structural
parameters for as many different GCs in as many different types of galaxies as possible.
There have been intriguing hints (Kundu \& Whitmore~\cite{kundu01}, Larsen et~al.~\cite{larsen01b},
Larsen \& Brodie~\cite{larsen03}) that the blue clusters in gEs are systematically
larger by some $20\%$ on average than their red counterparts, based on WFPC2 data.
Recent Virgo Cluster ACS data (Jord\'an et~al.~\cite{jordan05}) strengthen this result.
It is still not clear how wide-spread this effect is and what its cause may be.
For example, it has been suggested that the effect may stem from real differences in
the formation and evolution of these distinct cluster subpopulations (Jord\'an~\cite{jordan04})
or may simply reflect the fact that the red clusters are generally more centrally
concentrated than their blue companions (e.g. Geisler et~al.~\cite{geisler96b}) and that
the larger tidal forces there lead to more compact clusters in the inner regions
(Larsen \& Brodie~\cite{larsen03}).
However, to date little is known about the structural parameters
of GCs in gEs.
The only exception is the \object{NGC\,5128} GCS.
Using WFPC2 data, Harris et~al.~(\cite{harris02}) obtained
images for 27 GCs and derived their structural parameters. Combining with similar data
for inner halo clusters from Holland et~al.~(\cite{holland99}), they found that
the light profiles fit classic King models very well and that their structural
parameters were similar to those of MW GCs, although their ellipticities were
substantially larger than those of MW GCs and much more like those of M31
clusters.
Recently, Martini \& Ho~(\cite{martini04}) have obtained the velocity dispersions for the brightest 14
clusters in \object{NGC\,5128}.
Combining these data with the Harris et~al.~(\cite{harris02}) structural parameters,
they were able to construct the fundamental plane for the clusters and showed that they follow approximately
the same relationships found for Local Group clusters. This, in spite of their
extreme masses and luminosities (about 10 times larger than nearby counterparts).
However, since the discovery of the first GC in \object{NGC\,5128} (Graham \& Phillips~\cite{graham80}), its known GC
population has steadily increased and is estimated today to be $\sim1000$ (Harris et~al.~\cite{harris04b}),
so one would of course like to extend such an analysis towards less luminous clusters and study a much more
representative sample before definitive conclusions can be reached.
In this paper, we report observations of a number of small fields around \object{NGC\,5128} under exceptional
seeing, obtained with the Magellan I telescope. These images allowed us to resolve known cluster candidates
(and thus confirm or discard their cluster nature on the basis of their resolution and shape)
and to detect a number of new ones. We have used these high-resolution images to derive structural parameters,
surface brightnesses and central mass densities, which we compare to those of other well-studied GCSs.
These are the first structural parameters derived for GCs beyond the Local Group using ground-based images.
The paper is organised as follows. In Section 2, we present the observations and reductions
and describe our procedure for identifying clusters. In Section 3, we derive the structural parameters
by fitting to models. In Section 4, we discuss our results and compare the derived structural parameters with
those of other GCSs. We discuss the contamination by background galaxies and its effects in Section 5.
Finally, we summarize our major findings in Section 6.
\section{The data}
\subsection{Observations}
The fields were selected from the sample of Harris et~al.~(\cite{harris04a}).
They present a list of 327 candidate GCs based on colour and incipient resolution on
their CTIO 4m BTC (Big Throughput Camera) frames with seeing $\sim1.2\arcsec$ and pixel
scale of $0.42\arcsec$/pix. On the basis of brightness, we selected 30 fields,
which contain about 45 of these candidates
within the field of view of MagIC (Magellan Instant Camera).
We concentrated on clusters in the distant halo of the galaxy in order to
explore their nature in greater detail. These candidates had not been observed in
previous spectroscopic work; thus their true nature was unknown.
Images were obtained with the Magellan~I 6.5m telescope at Las Campanas Observatory
on Jan. 30, 31 and Feb. 1, 2004 with MagIC.
This is a 2K $\times$ 2K CCD with a scale of $0.069\arcsec$/pix (very
similar to that of the PC on WFPC2), spanning a field of view of
$2.3\arcmin \times 2.3\arcmin$. All of the fields (overlaid in Fig.~\ref{fig.n5128_dss})
were observed at least once in $R$ and a few of them also in $B$. For these latter
observations (which were taken during the third night) the seeing was significantly
worse (over $0.7\arcsec$). Therefore, our results refer only to the $R$ frames, in which
we enjoyed superb seeing of $0.3 - 0.6\arcsec$.
Typical exposure times were 180-300 sec.
Tables~1 and 2 give details of the observations and observed clusters.
The nights were not photometric but none of our results depends on absolute
photometry acquired during this run.
\begin{figure}
\centering
\includegraphics[width=9cm]{3393f1.eps}
\caption{A one square-degree image from DSS. Overlaid are 29 of the 30 fields observed
with MagIC (one is slightly off this FOV) at actual size ($2.3\arcmin\times2.3\arcmin$).
The orientation is N up, E to the left.}
\label{fig.n5128_dss}
\end{figure}
\subsection{Data reduction}
The frames were bias-subtracted and flatfield corrected using the
IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatories,
which are operated by the Association of Universities for Research
in Astronomy, Inc., under cooperative agreement with the National
Science Foundation.} script
MagIC-tools, as described by Phillips~(\cite{phillips02}). In a few cases, the fields were observed twice.
Frames with comparable seeing were combined using {\it imcombine\/} after alignment
with {\it geomap\/} and {\it geotrans\/} with respect to the frame with the higher S/N. No cosmic
rays were detected near or on the cluster candidates, hence no attempt was made to remove
them. Bad pixels were replaced by interpolation with neighbouring pixels using the task
{\it fixpix\/}.
\subsection{Cluster identification}
The globular clusters in \object{NGC\,5128} have characteristic half-light radii
of $0.3''-1''$ (measured from HST imaging, Harris et~al.~\cite{harris02}), and so with sub-arcsecond
seeing quality the great majority of the clusters should be distinguishable from stars
and from large, irregular and/or elliptical galaxies.
Given our excellent seeing and
the high resolution of MagIC, we were not only able to achieve this but also
discovered many new faint candidates that serendipitously lie in the same fields.
This was done by subtracting the stellar PSF from
all sources in each frame and visually inspecting the residuals. DAOPHOT II (Stetson~\cite{stetson87})
was run under IRAF first to detect the sources with the {\it daofind\/} algorithm. Aperture
photometry was then performed with a radius of 4 pixels. This was the basis to build the PSF
by using $\sim$30 bright, isolated stars per frame. Cluster candidates had broader profiles
and we are confident that they were not used for the PSF. Anyway, the large number of
stars used for the PSF would blur any effect of including possible compact clusters into
this category.
The non-stellar appearance of the cluster candidates allowed a straightforward identification.
After subtraction of the PSF, resolved objects (both \object{NGC\,5128} GCs and
certain types of
background galaxies) leave a ``doughnut''-shaped residual, undersubtracted in the
wings and oversubtracted in the center, as shown in Fig.~\ref{fig.psf_subtraction}.
Resolved objects were further culled by eliminating those with very
large and/or irregular profiles, in order to weed out as
many galaxies as possible. However, relatively small, regular, round galaxies
will not be recognized as such and may contaminate our sample.
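The ``doughnut'' signature can also be cast as a simple quantitative statistic.
The sketch below is our own illustrative re-implementation of the idea (the actual
work used DAOPHOT~II under IRAF): it fits the PSF amplitude to a background-subtracted
cutout by linear least squares, subtracts it, and compares the mean residual in the
core with that in a surrounding ring. The pixel radii are arbitrary placeholders and
would need tuning to the MagIC plate scale and seeing.
\begin{verbatim}
import numpy as np

def doughnut_statistic(cutout, psf):
    """Residual core/ring statistic after scaled-PSF subtraction.

    cutout and psf must have the same shape and be centred alike.
    A resolved source is oversubtracted in the centre (negative
    core residual) and undersubtracted in the wings (positive
    ring residual); for a star both values are ~0.
    """
    a = np.sum(cutout * psf) / np.sum(psf**2)  # least-squares amplitude
    res = cutout - a * psf
    ny, nx = res.shape
    y, x = np.mgrid[:ny, :nx]
    r = np.hypot(y - ny // 2, x - nx // 2)
    core = res[r < 2.0].mean()                 # placeholder radius
    ring = res[(r >= 2.0) & (r < 6.0)].mean()  # placeholder radii
    return core, ring
\end{verbatim}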
\begin{figure}
\centering
\includegraphics[width=8.8cm]{3393f2.ps}
\caption{Illustration of how the resolution technique works in discriminating
resolved globular clusters in \object{NGC\,5128} from unresolved stars and also
background galaxies. On the left is a small area ($37\arcsec \times37\arcsec$) of one of our MagIC images.
On the right, one sees that after subtraction of a stellar PSF, stars disappear while globular
clusters (two of which are shown) appear as round doughnuts with oversubtracted
centers and undersubtracted wings.}
\label{fig.psf_subtraction}
\end{figure}
After a visual inspection of the resulting images, we found that, of the initial 44 cluster
candidates, only 17 (39\%) had cluster-like residuals. Twelve of the 44 were not resolved and showed a
star-like profile, while 15 turned out to be background galaxies. This illustrates the generally severe problem
of field contamination discussed above: even after careful selection of candidates
from their colors and (in some cases) barely extended appearance on good-quality
ground-based images, the total number of non-cluster objects in the list is larger than
the cluster population we are trying to find.
In addition, we could identify several new fainter candidates in our MagIC fields, typically 2-3
per frame. For the majority of them, Washington photometry already exists from
Harris et~al.~(\cite{harris04a}) and was used to apply the same colour selection
as they used to generate their candidate list. In all, 48 new candidates were found.
Of these, 15 have very red colours and are presumably background galaxies
and 11 had no photometry available.
This happened either because they were too faint or they were located in the gaps between the BTC
frames, where the photometry data come from. Thus, 22 new, good cluster
candidates have been added to the analysis.
The comparison with previous work shows that three of them were already in the confirmed GC
catalog of Peng et~al.~(\cite{peng04}), and one is listed as a GC by Hesser et~al.~(\cite{hesser84}).
They are marked in Tables 2 and 3 as `PFF' and `C' objects respectively and hence the number
of truly new clusters is 18.
This work therefore gives a total of 39 high-quality candidate clusters, 18 new
and 21 previously existing candidates.
After our imaging observations were procured, Woodley et~al.~(\cite{woodley05}) obtained spectra
for an independent sample of nearly a hundred of the HHG GC candidates and were able to classify them as
bona fide \object{NGC\,5128} GCs, foreground stars or background galaxies on the basis
of their radial velocities. Nine objects are in common between the two studies and are
accordingly labelled in Table 1.
Four of the objects identified as GCs by us are
also GCs from their radial velocities, while two objects we classified as GCs
(\#127 and \#94) are actually distant galaxies according to radial velocity (and
labelled 'WHH gal.' in Table 1). In addition, there is
perfect agreement in the (independent) classification of three foreground stars (see Table~1).
We were not able to perform this comparison for
our newly discovered cluster candidates, since no other data exist for them
besides the Harris et~al.~(\cite{harris04a}) photometry, other than the fact that four of
them were independently classified as GCs based on lower-resolution images.
This test indicates that galaxies still contaminate our final GC sample, as
expected. A much more detailed estimate of galaxy contamination is given in Section~5.
Although we have eliminated the two galaxies 94 and 127 from further consideration,
both display interesting properties.
Object 127 does not deviate significantly from a typical King profile, and all derived parameters
(as well as its photometry) are in the range of GCs. Perhaps its small core and effective radii are the
only indications of its peculiarity. Object 94 was taken to be a cluster seen in projection next
to a distant galaxy, but
judging from its velocity it might instead be a pair of galaxies separated by $\sim1.6''$.
In Table 2 we present observational data for our new cluster candidates, labelled
with an 'N' prefix, together with the full list of our final cluster candidates.
\begin{table*}
\caption{Washington photometry and classification of the 44 observed targets
(from Harris et~al.~\cite{harris04a}). The first column gives their
identification. Objects in common
with Woodley et~al.~(\cite{woodley05})
are indicated by WHH, together with their
classification (star/galaxy) or cluster number. The coordinates ($\alpha$,$\delta$)
are given in the second and third columns. The colours in the Washington system
from Harris et~al.~(\cite{harris04a}) are listed in the following three columns.
The last column indicates our classification, according to the morphology in our MagIC frames.}
\label{tab.list}
\centering
\begin{tabular}{l c c c c c l }
\hline\hline
object ID & $\alpha_{2000}$ & $\delta_{2000}$ & $T_{1}$ & $M-T_{1}$ & $C-T_{1}$ & Classification \\
(HHG) & & & & & & \\
\hline
017 (WHH star) & 201.586426 & -43.15366 & 17.447 & 0.644 & 1.176 & star \\
021 & 201.627686 & -43.01647 & 17.546 & 0.568 & 1.433 & star \\
022 (WHH001) & 201.089172 & -43.04356 & 17.630 & 0.704 & 1.484 & cluster + star \\
032 & 200.909683 & -42.77305 & 17.984 & 0.740 & 1.462 & cluster \\
034 & 201.629135 & -43.01801 & 18.001 & 0.636 & 1.524 & star \\
036 (WHH star) & 201.010620 & -42.82483 & 18.046 & 0.724 & 1.586 & star \\
037 & 200.983215 & -42.61264 & 18.083 & 1.134 & 2.588 & star \\
038 (WHH star) & 201.637817 & -43.05389 & 18.092 & 0.526 & 1.088 & star \\
039 & 201.670975 & -43.27308 & 18.108 & 0.803 & 1.563 & star \\
049 & 201.773285 & -43.25498 & 18.365 & 0.923 & 2.461 & galaxy \\
050 & 200.926392 & -43.16050 & 18.490 & 0.791 & 1.630 & star \\
051 (WHH004) & 201.169159 & -43.22168 & 18.531 & 0.868 & 1.927 & cluster \\
060 & 201.638031 & -43.04905 & 18.667 & 1.078 & 2.082 & star \\
068 & 201.231567 & -42.74451 & 18.765 & 0.743 & 1.770 & galaxy \\
069 & 201.639236 & -43.01849 & 18.770 & 0.541 & 1.198 & star \\
074 & 202.025970 & -43.06270 & 18.894 & 1.074 & 2.653 & galaxy \\
080 & 201.435043 & -42.64040 & 18.972 & 1.048 & 2.754 & cluster \\
081 & 201.134018 & -43.18247 & 18.992 & 0.764 & 1.470 & galaxy \\
084 & 200.956879 & -43.12678 & 19.023 & 0.885 & 2.166 & galaxy \\
086 (WHH031) & 201.672623 & -43.19029 & 19.048 & 0.846 & 1.618 & cluster \\
093 & 201.606171 & -42.95170 & 19.173 & 0.801 & 1.792 & cluster \\
094 (WHH gal.) & 201.117294 & -42.88461 & 19.191 & 0.978 & 1.965 & cluster + galaxy \\
098 & 201.188309 & -43.38884 & 19.255 & 0.824 & 1.660 & galaxy \\
099 & 201.099915 & -42.90296 & 19.266 & 0.679 & 1.270 & galaxy \\
102 & 201.263916 & -43.48138 & 19.351 & 0.916 & 2.200 & cluster \\
104 & 201.942627 & -43.03859 & 19.374 & 0.921 & 1.943 & cluster \\
105 & 201.576157 & -42.75731 & 19.377 & 0.879 & 1.865 & galaxy \\
106 (WHH029) & 201.591995 & -43.15297 & 19.379 & 0.788 & 1.538 & cluster \\
120 & 201.935272 & -42.97654 & 19.521 & 0.806 & 1.799 & galaxy \\
127 (WHH gal.) & 201.144211 & -43.21404 & 19.588 & 0.949 & 1.932 & cluster \\
128 & 201.100601 & -42.90571 & 19.598 & 0.903 & 1.939 & cluster \\
129 & 201.949738 & -42.91334 & 19.600 & 0.972 & 2.599 & cluster \\
130 & 202.090408 & -42.90478 & 19.602 & 0.753 & 1.649 & galaxy \\
141 & 201.157684 & -43.16919 & 19.720 & 1.000 & 2.053 & cluster \\
145 & 201.120895 & -43.11254 & 19.776 & 0.835 & 1.377 & galaxy \\
147 & 201.864517 & -43.01892 & 19.794 & 0.670 & 1.197 & galaxy \\
200 & 201.201157 & -43.47463 & 20.398 & 0.656 & 1.460 & star \\
208 & 201.398788 & -42.64296 & 20.467 & 1.176 & 2.235 & star \\
210 & 201.201355 & -43.47885 & 20.485 & 0.974 & 2.225 & cluster \\
225 & 201.375885 & -43.45494 & 20.725 & 1.001 & 1.745 & cluster \\
228 & 201.665817 & -43.27797 & 20.740 & 1.009 & 1.819 & galaxy \\
244 & 201.200668 & -43.50775 & 21.004 & 0.799 & 1.390 & galaxy \\
246 & 201.927948 & -43.32676 & 21.017 & 1.028 & & galaxy \\
327 & 201.620071 & -43.00220 & & & & cluster + star \\
\hline
\end{tabular}
\end{table*}
\begin{table*}
\caption{Washington photometry and observational details for all of our final GC
candidates. Clusters with no prefix are candidates from the Harris et~al.~(\cite{harris04a}) database (from Table 1). Those
labelled with an 'N' are newly identified objects in this study (see text for details). WHH objects are those also present
in Woodley et~al.~(\cite{woodley05}).
Note that clusters 022 and N3
were observed twice.}
\label{tab.observations}
\centering
\begin{tabular}{l c c c c c c l c }
\hline\hline
cluster ID & $\alpha_{2000}$ & $\delta_{2000}$ & $T_{1}$ & $C-T_{1}$ & Date & Airmass & Exp. time & Seeing \\
(HHG) & & & & & & & (sec.) & ('') \\
\hline
022 (WHH001) & 201.089172 & -43.043560 & 17.630 & 1.484 & 30.Jan & 1.08 & 180 & 0.32 \\
022 (WHH001) & 201.089172 & -43.043560 & 17.630 & 1.484 & 30.Jan & 1.07 & 300 & 0.35 \\
032 & 200.909683 & -42.773048 & 17.984 & 1.462 & 30.Jan & 1.14 & 300 & 0.38 \\
051 (WHH004) & 201.169159 & -43.221680 & 18.531 & 1.927 & 30.Jan & 1.05 & 180 & 0.38 \\
080 & 201.435043 & -42.640400 & 18.972 & 2.754 & 31.Jan & 1.14 & 180 & 0.53 \\
086 (WHH031) & 201.672623 & -43.190289 & 19.048 & 1.618 & 31.Jan & 1.09 & 180 & 0.59 \\
093 & 201.606171 & -42.951698 & 19.173 & 1.792 & 31.Jan & 1.11 & 180 & 0.54 \\
102 & 201.263916 & -43.481380 & 19.351 & 2.200 & 30.Jan & 1.04 & 300 & 0.35 \\
104 & 201.942627 & -43.038589 & 19.374 & 1.943 & 31.Jan & 1.06 & 180 & 0.44 \\
106 (WHH029) & 201.591995 & -43.152969 & 19.379 & 1.538 & 31.Jan & 1.12 & 180 & 0.50 \\
128 & 201.100601 & -42.905708 & 19.598 & 1.939 & 30.Jan & 1.07 & 180 & 0.30 \\
129 & 201.949738 & -42.913342 & 19.600 & 2.599 & 31.Jan & 1.06 & 240 & 0.43 \\
141 & 201.157684 & -43.169189 & 19.720 & 2.053 & 30.Jan & 1.05 & 180 & 0.41 \\
210 & 201.201355 & -43.478851 & 20.485 & 2.225 & 30.Jan & 1.04 & 180 & 0.39 \\
225 & 201.375885 & -43.454941 & 20.725 & 1.745 & 31.Jan & 1.15 & 180 & 0.57 \\
327 & 201.620071 & -43.002201 & & & 31.Jan & 1.11 & 180 & 0.53 \\
N1 (C40) & 200.926392 & -43.160500 & 18.490 & 1.630 & 30.Jan & 1.12 & 180 & 0.34 \\
N3 & 201.072800 & -43.039169 & 20.897 & 1.848 & 30.Jan & 1.08 & 180 & 0.32 \\
N3 & 201.072800 & -43.039169 & 20.897 & 1.848 & 30.Jan & 1.07 & 300 & 0.35 \\
N5 & 201.136230 & -43.094910 & 20.774 & 1.680 & 30.Jan & 1.06 & 300 & 0.30 \\
N7 (PFF06) & 201.098740 & -43.131149 & 18.731 & 1.373 & 30.Jan & 1.06 & 300 & 0.30 \\
N8 & 201.103271 & -43.119518 & 20.057 & 1.654 & 30.Jan & 1.06 & 300 & 0.30 \\
N9 & 201.116486 & -43.109959 & 21.933 & 1.871 & 30.Jan & 1.06 & 300 & 0.30 \\
N10 (PFF09) & 201.130554 & -43.190781 & 19.258 & 1.474 & 30.Jan & 1.05 & 180 & 0.41 \\
N11 & 201.172958 & -43.214771 & 20.296 & 1.464 & 30.Jan & 1.05 & 180 & 0.38 \\
N12 & 201.140121 & -43.200439 & 20.119 & 1.568 & 30.Jan & 1.05 & 180 & 0.38 \\
N21 & 201.437943 & -42.649410 & 19.845 & 2.331 & 31.Jan & 1.14 & 180 & 0.53 \\
N23 & 201.561600 & -42.757729 & 20.072 & 0.815 & 31.Jan & 1.13 & 240 & 0.49 \\
N24 & 201.598083 & -43.151272 & 21.290 & 1.691 & 31.Jan & 1.12 & 180 & 0.50 \\
N25 & 201.571945 & -43.166100 & 20.914 & 2.207 & 31.Jan & 1.12 & 180 & 0.50 \\
N26 (PFF092) & 201.588715 & -42.955292 & 19.458 & 1.727 & 31.Jan & 1.11 & 180 & 0.54 \\
N30 & 201.675735 & -43.286640 & 19.916 & 1.247 & 31.Jan & 1.09 & 180 & 0.57 \\
N32 & 201.691513 & -43.194920 & 20.034 & 1.366 & 31.Jan & 1.09 & 180 & 0.59 \\
N33 & 201.686783 & -43.204910 & 20.156 & 1.887 & 31.Jan & 1.09 & 180 & 0.59 \\
N34 & 201.766739 & -43.259491 & 20.267 & 1.626 & 31.Jan & 1.08 & 180 & 0.60 \\
N35 & 201.787048 & -43.271809 & 21.005 & 1.957 & 31.Jan & 1.08 & 180 & 0.60 \\
N37 & 201.854248 & -43.002270 & 21.318 & 1.993 & 31.Jan & 1.08 & 240 & 0.52 \\
N41 & 201.974548 & -42.922680 & 19.988 & 2.515 & 31.Jan & 1.06 & 240 & 0.43 \\
N42 & 201.955078 & -42.932850 & 18.436 & 2.591 & 31.Jan & 1.06 & 240 & 0.43 \\
\hline
\end{tabular}
\end{table*}
\section{Analysis}
\subsection{Fit to the light profiles}
All of our final cluster candidates have FWHMs significantly larger than those of stars, thus the
residuals are conspicuous. No identification was attempted for faint objects, as they were beyond the
limiting magnitude of the Harris et~al.~(\cite{harris04a}) database
($R\sim 22$ mag).
For all objects listed in Table~2, a 2-D fit was performed to derive their morphological parameters after
deconvolution with the stellar PSF. This was done using the task {\it ishape\/} under
BAOLAB\footnote{BAOLAB is available at http://www.astro.ku.dk/$\sim$soeren/baolab} (Larsen~\cite{larsen99}).
In two cases, a star was located $\sim 2-3''$ away from the cluster candidate. These stars were first removed
using PSF fitting, and {\it ishape\/} was then run on the cleaned image.
{\it Ishape\/} deconvolves the observed light distribution with the PSF and then performs a 2-D fit
by means of a weighted least-squares minimization, assuming
an intrinsic profile. For the latter we have chosen a King profile (King~\cite{king62}), which has the form:
\begin{equation}
\mu(r) = k \Bigl[\frac{1}{\sqrt{1 + r^{2}/r_{c}^{2}}} - \frac{1}{\sqrt{1 + r_{t}^{2}/r_{c}^{2}}}\Bigr]^{2} ~~, r < r_{t}
\end{equation}
{\it Ishape\/} also offers other profiles. Among them, the `Moffat-type' is the most
frequently used (see Elson et~al.~\cite{elson87},
Larsen~\cite{larsen01a}, Mackey \& Gilmore~\cite{mackey03}). Although we extensively tried different
models and parameters, the best results were obtained with King profiles, except for a very few cases
which were better fit by a Moffat model and which are presumably background galaxies (see discussion).
King profiles are known to provide excellent fits to Milky Way GCs and are characterised
by a tidal radius $r_{t}$ and a core radius $r_{c}$. Alternatively, a concentration parameter $c=r_t/r_c$
can be defined, which we use here. Note that this definition differs from the more familiar
$c=\log(r_t/r_c)$. We prefer
the first definition for consistency with the output of {\it ishape\/}. The $c$ parameter
can be kept fixed or allowed to vary during the fitting process and is the most uncertain of the fitted
parameters (Larsen~\cite{larsen01a}). However, a number of tests with our images show that, in fact,
$c$ is stable typically within 20\% when the radius of the fit increases from 15 to 25 pixels. This
is encouraging given the large errors found in the literature. On the other hand, one should keep in mind the
corresponding uncertainty in the tidal radii, which are derived directly from $c$ and $r_{c}$.
The fit radius was 25 pixels or $1.725\arcsec$, corresponding to 33 pc if a distance of 4 Mpc is assumed ($1\arcsec \sim19$ pc).
Smaller radii were
tried, but the subtracted images showed noticeable residuals in the wings. With the sole exception of $c$, the derived
parameters do not change significantly with fitting radius. Fig.~\ref{fig.ishape} shows a typical example for one of
the known cluster candidates (HHG128).
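For reference, the angular-to-physical conversions used above follow from simple
geometry at the adopted distance:
\[
1\arcsec \;\simeq\; \frac{4\times10^{6}~{\rm pc}}{206265} \;\simeq\; 19.4~{\rm pc},
\qquad
25~{\rm pix}\times0.069\arcsec/{\rm pix} = 1.725\arcsec \;\simeq\; 33.5~{\rm pc}.
\]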
\begin{figure}
\centering
\includegraphics[width=8cm]{3393f3.ps}
\caption{{\bf Upper left:} Original image of the cluster candidate HHG128. {\bf Upper right:} the model constructed
using ishape and assuming a King profile, after the convolution with the PSF. {\bf Lower left:} The weights assigned by
ishape to each pixel. {\bf Lower right:} The residuals after subtracting the model from the original image. Little
structure is seen beyond the photon noise associated with the cluster's light.}
\label{fig.ishape}
\end{figure}
Formally, the lowest $\chi^{2}$ is achieved with $c$ as a free parameter. However, as this is the most
uncertain parameter when left free, we also ran fits with $c$ fixed at 30 and 100, values that are
plausible for clusters in our Galaxy (Djorgovski \& Meylan~\cite{djorgovski93}), in order to estimate the
actual errors. Hence, all derived sizes, ellipticities and position angles (PA) are averages over the fits with King models
with $c=30$, $c=100$ and $c$ free, and the corresponding errors are the $\sigma$ of each set of three values.
\begin{table*}
\caption{Results from the fitting procedure. The first column is the identification of the cluster. Then follow
the core radius $r_{c}$ in pc (assuming $d=4$ Mpc), ellipticity and the position angle (PA). Column~5 gives the
concentration parameter $c~(=r_{t}/r_{c})$.
The effective or half-light radius $r_{e}$ (in pc) is given in Column~6. Column~7 lists the projected galactocentric
distance in kpc, using $\alpha=201.364995$ and $\delta=-43.01917$ as the center of \object{NGC\,5128}.
The central surface brightness in
$V$, $\mu_{0}(V)$ is listed in Column~8 (see text).}
\label{tab.results}
\centering
\begin{tabular}{l c c r c c l c c}
\hline\hline
cluster ID & $r_{c}$ & e & PA & c & $r_{e}$ & $R_{gc}$ & $\mu_{0}(V)$ \\
(HHG) & (pc) & & (deg.) & & (pc) & (kpc) & (mag arcsec$^{-2}$) \\
\hline
022 (WHH001) & 0.73 $\pm$ 0.39 & 0.12 $\pm$ 0.00 & 2. $\pm$ 0.6 & 61.9 & 2.96 $\pm$ 1.58 & 14.2 & 13.85 \\
022 (WHH001) & 0.75 $\pm$ 0.39 & 0.12 $\pm$ 0.01 & 1. $\pm$ 1.4 & 58.0 & 2.94 $\pm$ 1.53 & 14.2 & 13.88 \\
032 & 1.25 $\pm$ 0.44 & 0.30 $\pm$ 0.00 & -21. $\pm$ 0.1 & 44.8 & 4.36 $\pm$ 1.53 & 29.0 & 15.23 \\
051 (WHH004) & 0.41 $\pm$ 0.54 & 0.00 $\pm$ 0.01 & 53. $\pm$ 13.7 & 317.5 & 3.73 $\pm$ 4.81 & 17.4 & 14.01 \\
080 & 3.32 $\pm$ 0.32 & 0.08 $\pm$ 0.01 & 32. $\pm$ 4.6 & 36.4 & 10.42 $\pm$ 1.01 & 26.7 & 18.38 \\
086 (WHH031) & 2.14 $\pm$ 0.37 & 0.24 $\pm$ 0.00 & 31. $\pm$ 0.5 & 35.5 & 6.62 $\pm$ 1.16 & 19.7 & 17.34 \\
093 & 2.67 $\pm$ 0.29 & 0.18 $\pm$ 0.00 & -48. $\pm$ 0.1 & 43.1 & 9.08 $\pm$ 1.00 & 13.3 & 17.96 \\
102 & 1.99 $\pm$ 2.95 & 0.38 $\pm$ 0.02 & 11. $\pm$ 0.7 & 30.0 & 5.69 $\pm$ 8.41 & 32.7 & 17.61 \\
104 & 3.18 $\pm$ 0.25 & 0.11 $\pm$ 0.00 & -18. $\pm$ 1.0 & 21.4 & 7.71 $\pm$ 0.61 & 29.6 & 18.39 \\
106 (WHH029) & 0.63 $\pm$ 0.46 & 0.00 $\pm$ 0.01 & -28. $\pm$ 22.6 & 62.9 & 2.58 $\pm$ 1.86 & 14.9 & 15.31 \\
128 & 0.93 $\pm$ 0.31 & 0.07 $\pm$ 0.00 & 42. $\pm$ 2.8 & 30.9 & 2.69 $\pm$ 0.89 & 15.7 & 16.20 \\
129 & 3.21 $\pm$ 0.20 & 0.28 $\pm$ 0.00 & -90. $\pm$ 0.9 & 75.0 & 14.33 $\pm$ 0.90 & 30.8 & 18.99 \\
141 & 5.21 $\pm$ 0.21 & 0.06 $\pm$ 0.01 & -65. $\pm$ 8.1 & 46.5 & 18.42 $\pm$ 0.77 & 14.9 & 19.75 \\
210 & 3.97 $\pm$ 0.37 & 0.06 $\pm$ 0.02 & -11. $\pm$ 23.8 & 33.1 & 11.90 $\pm$ 1.12 & 33.2 & 20.06 \\
225 & 5.40 $\pm$ 0.41 & 0.19 $\pm$ 0.02 & -17. $\pm$ 1.3 & 43.9 & 18.55 $\pm$ 1.42 & 30.5 & 20.73 \\
327 & 0.71 $\pm$ 0.51 & 0.17 $\pm$ 0.04 & -64. $\pm$ 2.4 & 39.2 & 2.32 $\pm$ 1.65 & 13.1 & \\
N1 (C40) & 1.38 $\pm$ 0.38 & 0.29 $\pm$ 0.00 & 79.8 $\pm$ 0.4 & 40.3 & 4.54 $\pm$ 1.26 & 24.5 & 15.94 \\
N3 & 2.19 $\pm$ 0.08 & 0.02 $\pm$ 0.01 & 55.7 $\pm$ 30.0 & 34.6 & 6.69 $\pm$ 0.25 & 15.1 & 19.28 \\
N3 & 2.09 $\pm$ 0.16 & 0.06 $\pm$ 0.03 & -54.6 $\pm$ 10.4 & 50.8 & 7.71 $\pm$ 0.59 & 15.1 & 19.27 \\
N5 & 1.62 $\pm$ 0.21 & 0.09 $\pm$ 0.02 & -23.6 $\pm$ 9.7 & 22.4 & 4.00 $\pm$ 0.53 & 12.9 & 18.36 \\
N7 (PFF06) & 1.06 $\pm$ 0.44 & 0.36 $\pm$ 0.00 & 88.8 $\pm$ 0.8 & 29.2 & 2.99 $\pm$ 1.25 & 15.7 & 15.45 \\
N8 & 1.71 $\pm$ 0.59 & 0.38 $\pm$ 0.01 & -81.7 $\pm$ 0.0 & 17.6 & 3.76 $\pm$ 1.30 & 15.1 & 17.64 \\
N9 & 1.34 $\pm$ 0.31 & 0.00 $\pm$ 0.04 & -90. $\pm$ 21.8 & 24.6 & 3.47 $\pm$ 0.80 & 14.2 & 19.21 \\
N10 (PFF09) & 1.13 $\pm$ 0.37 & 0.14 $\pm$ 0.01 & 85.3 $\pm$ 0.9 & 44.8 & 3.92 $\pm$ 1.30 & 17.0 & 16.29 \\
N11 & 1.20 $\pm$ 0.25 & 0.13 $\pm$ 0.01 & 23.2 $\pm$ 2.8 & 34.1 & 3.64 $\pm$ 0.78 & 16.9 & 17.36 \\
N12 & 1.25 $\pm$ 0.28 & 0.14 $\pm$ 0.02 & -25.8 $\pm$ 1.2 & 22.9 & 3.14 $\pm$ 0.70 & 17.1 & 17.14 \\
N21 & 3.32 $\pm$ 0.32 & 0.08 $\pm$ 0.01 & 31.7 $\pm$ 4.6 & 36.4 & 10.42 $\pm$ 1.01 & 26.1 & 19.15 \\
N23 & 2.61 $\pm$ 0.24 & 0.23 $\pm$ 0.00 & 47.9 $\pm$ 0.2 & 43.2 & 8.89 $\pm$ 0.82 & 20.9 & 18.57 \\
N24 & 2.40 $\pm$ 0.17 & 0.12 $\pm$ 0.03 & -73. $\pm$ 20.2 & 56.8 & 9.37 $\pm$ 0.68 & 15.1 & 19.90 \\
N25 & 2.98 $\pm$ 0.21 & 0.08 $\pm$ 0.02 & 44. $\pm$ 13.7 & 29.1 & 8.38 $\pm$ 0.60 & 14.8 & 19.94 \\
N26 (PFF092) & 0.98 $\pm$ 0.35 & 0.07 $\pm$ 0.01 & 44.5 $\pm$ 4.2 & 55.8 & 3.79 $\pm$ 1.34 & 12.3 & 16.33 \\
N30 & 2.27 $\pm$ 0.26 & 0.10 $\pm$ 0.03 & 11.2 $\pm$ 19.9 & 31.3 & 6.61 $\pm$ 0.78 & 24.5 & 18.20 \\
N32 & 3.94 $\pm$ 0.32 & 0.12 $\pm$ 0.01 & -8.1 $\pm$ 3.1 & 51.4 & 14.61 $\pm$ 1.19 & 20.7 & 19.43 \\
N33 & 0.99 $\pm$ 0.46 & 0.01 $\pm$ 0.03 & -3.2 $\pm$ 26.4 & 24.0 & 2.53 $\pm$ 1.16 & 21.0 & 16.76 \\
N34 & 4.39 $\pm$ 0.12 & 0.09 $\pm$ 0.00 & -23.4 $\pm$ 9.1 & 51.0 & 16.24 $\pm$ 0.45 & 26.5 & 19.91 \\
N35 & 3.65 $\pm$ 0.76 & 0.17 $\pm$ 0.03 & -77.3 $\pm$ 4.2 & 71.1 & 15.84 $\pm$ 3.31 & 27.8 & 20.45 \\
N37 & 4.10 $\pm$ 0.21 & 0.09 $\pm$ 0.04 & 52.6 $\pm$ 21.2 & 39.3 & 13.35 $\pm$ 0.70 & 25.1 & 20.91 \\
N41 & 2.41 $\pm$ 0.16 & 0.24 $\pm$ 0.01 & -40.9 $\pm$ 0.0 & 49.7 & 8.80 $\pm$ 0.60 & 31.9 & 18.80 \\
N42 & 2.64 $\pm$ 0.20 & 0.09 $\pm$ 0.00 & -39.1 $\pm$ 4.3 & 31.0 & 7.65 $\pm$ 0.58 & 30.7 & 17.35 \\
\hline
\end{tabular}
\end{table*}
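The projected galactocentric distances in Column~7 follow directly from the angular
separation between each cluster and the adopted centre, converted at the assumed
distance of 4 Mpc. The following short Python sketch (our own; the code originally
used is not specified in the text) reproduces them, e.g. $\sim$14.2 kpc for
cluster 022:
\begin{verbatim}
from astropy import units as u
from astropy.coordinates import SkyCoord

D_N5128 = 4.0 * u.Mpc                      # adopted distance
center = SkyCoord(201.364995, -43.01917, unit='deg')

def projected_rgc(ra_deg, dec_deg):
    """Projected galactocentric distance (small-angle limit)."""
    sep = SkyCoord(ra_deg, dec_deg, unit='deg').separation(center)
    return (sep.radian * D_N5128).to(u.kpc)

# projected_rgc(201.089172, -43.043560)  ->  ~14.2 kpc (cluster 022)
\end{verbatim}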
{\it Ishape\/} returns neither the core radius $r_{c}$ nor the tidal radius $r_{t}$ directly. Instead, the fitted FWHM
and concentration parameter must be converted into these more familiar quantities. A discussion of this can be found
in Larsen~(\cite{larsen01a}) and Larsen~(\cite{larsen04}), where both a relation between the FWHM and $r_{c}$ and a
numerical approximation for the effective (half-light) radius $r_{e}$ are presented:
\begin{equation}
FWHM = 2 \left[ \biggl[ \sqrt{1/2} + \frac{1-\sqrt{1/2}}{\sqrt{1 + c^{2}}}\biggr]^{-2} -1 \right]^{1/2} r_{c}
\end{equation}
\begin{equation}
r_{e} / r_{c} \approx 0.547 c^{0.486}
\end{equation}
\noindent
The latter is good to $\pm$2\% when $c>4$. These relations are valid when the King profiles are circularly symmetric.
In our case (2-D fit) the average of the FWHM along the minor and major axis was used to compute $r_{c}$ and $r_{e}$.
The values are listed in Table~\ref{tab.results}.
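For convenience, the conversion from the {\it ishape\/} outputs to physical radii can
be written compactly. The following Python sketch (ours) implements Eqs.~(2) and (3);
for our 2-D fits the input FWHM is the mean of the major- and minor-axis values, and
the plate scale and distance scale are those quoted above:
\begin{verbatim}
import numpy as np

ARCSEC_PER_PIX = 0.069   # MagIC plate scale
PC_PER_ARCSEC = 19.4     # at the adopted distance of 4 Mpc

def rc_re_from_ishape(fwhm_pix, c):
    """Core and half-light radii (pc) from the ishape FWHM (pixels)
    and the concentration parameter c = r_t/r_c."""
    s = np.sqrt(0.5)
    # Eq. (2): FWHM/r_c as a function of c
    fwhm_over_rc = 2.0 * np.sqrt(
        (s + (1.0 - s) / np.sqrt(1.0 + c**2))**(-2) - 1.0)
    r_c = fwhm_pix / fwhm_over_rc * ARCSEC_PER_PIX * PC_PER_ARCSEC
    r_e = 0.547 * c**0.486 * r_c   # Eq. (3), good to ~2% for c > 4
    return r_c, r_e
\end{verbatim}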
The ellipticities (Column~3 in Table~\ref{tab.results}) are measured at the FWHM as $e=1 - b/a$, where $a$ and $b$ are
the semi-major and semi-minor axes respectively. Their values and comparison with other GCSs will be discussed in Sect.~4.1.
The central surface brightness of the clusters can be derived assuming that a King profile remains a good representation
of the light distribution of the cluster towards the center, even if this part cannot be resolved by our images.
The central surface brightness $\mu_{0}$ of a King profile and total luminosity $L(R)$ within a radius $R<r_{t}$
are related by (Larsen~\cite{larsen01a}):
\begin{equation}
\mu_{0} = k \left( 1 - \frac{1}{\sqrt{1 + c^{2}}} \right)^{2}
\end{equation}
\begin{equation}
L(R) = \pi k \left( r_c^2 \mathrm{ln} \left(1 + \frac{R^2}{r_c^2} \right) + \frac{R^2}{1+c^2} - \frac{4r_c^2}{\sqrt{1+c^2}}\left(\sqrt{1+\frac{R^2}{r_c^2}}-1\right)\right)
\end{equation}
Note that Eq.~(4) is essentially a definition of the magnitude zeropoint parameter $k$;
for the range of $c-$values of interest here, we have $k \simeq \mu_0$.
$R$ was set to 20 pixels ($1.38\arcsec$), and aperture photometry was performed to compute the total luminosity $L(R)$
within this radius, which is roughly three times larger than the typical FWHM. As no Landolt standards were observed
(since the nights were not photometric), we calibrated our photometry with the Washington photometry of Harris et~al.~(\cite{harris04a}), which
includes virtually everything in the field of \object{NGC\,5128}. For this
purpose, an aperture correction between their aperture radius and ours had to be computed. The curves of growth (i.e. the $R$
magnitude vs. the aperture radius) show that this correction is $\Delta R = -0.130 \pm 0.005$ mag. Our $R$ photometry can be
directly compared to the $T_{1}$ magnitudes, as the difference is $\sim 0.01$ mag for
old, globular cluster-like objects (see Geisler~\cite{geisler96a}).
In addition to the aperture correction, we used $E(B-V) = 0.11$ (Schlegel et~al.~\cite{schlegel98})
to correct for Galactic absorption. Since all of our clusters are well away from the
central dust lane, the reddening should be uniform.
The central surface brightness in the $R$ band for each cluster is thus derived using Eqs.~(4) and (5). However, most values
found in the literature are given only for the $V$ filter. Hence, we have adopted an average colour of $V-R=0.5$ to estimate
$\mu_0(V)$. The results are listed in Column~8 of Table~\ref{tab.results}. The spread in the intrinsic colours remains
the main source of uncertainty in the listed values.
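The chain from the calibrated aperture magnitude to $\mu_0(V)$ can be made explicit.
Since $L(R)$ in Eq.~(5) is linear in $k$, Eqs.~(4) and (5) combine to give
$\mu_0 = m_R + 2.5\log_{10}({\cal A}/{\cal D})$, where ${\cal A}$ is the bracketed
term of Eq.~(5) evaluated in arcsec$^2$ (with $k$ factored out) and
${\cal D} = (1-1/\sqrt{1+c^{2}})^{2}$ is the factor of Eq.~(4). The sketch below is
ours; the input $m_R$ is assumed to already include the aperture and reddening
corrections described above, and the final step adopts the mean colour $V-R=0.5$:
\begin{verbatim}
import numpy as np

def mu0_V(m_R, r_c_arcsec, c, R_ap=1.38):
    """Central surface brightness mu_0(V) in mag/arcsec^2 from the
    corrected R magnitude within the aperture R_ap (arcsec)."""
    x2 = (R_ap / r_c_arcsec)**2
    s = np.sqrt(1.0 + c**2)
    # Bracketed term of Eq. (5), k factored out (units: arcsec^2)
    area = np.pi * (r_c_arcsec**2 * np.log(1.0 + x2)
                    + R_ap**2 / (1.0 + c**2)
                    - 4.0 * r_c_arcsec**2 / s * (np.sqrt(1.0 + x2) - 1.0))
    dilution = (1.0 - 1.0 / s)**2       # Eq. (4): mu_0 = k * dilution
    mu0_R = m_R + 2.5 * np.log10(area / dilution)
    return mu0_R + 0.5                  # adopt <V-R> = 0.5
\end{verbatim}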
Two clusters (HHG022 and N3, see Tables 1-3) were observed twice under similar conditions, which allowed us to test the
internal accuracy of the derived solutions. As can be seen from these Tables, the results are encouraging given the very
good agreement between the listed values. Furthermore, a much more telling, and fully independent, comparison is that with
Harris et~al.~(\cite{harris02}): their typical sizes and ellipticities are comparable to those derived by us.
\section{Discussion}
\subsection{Ellipticity}
The Galactic GCs show, on average, little elongation, apart from a few cases (e.g. $\omega$~Cen with $e=0.19$,
Frenk \& Fall~\cite{frenk82}). The range of ellipticities of GCs is interesting to study
given that systematic differences have been found for other systems in the Local Group. Geisler \&
Hodge~(\cite{geisler80}) concluded that only a few massive (especially intermediate age) clusters
in the LMC are round, the large majority showing significant
ellipticities. A similar feature was observed for GCs in M31 (Barmby et~al.~\cite{barmby02}),
where the brightest member G1 (=Mayall II) has $e=0.25\pm0.02$ measured from HST images
(Rich et~al.~\cite{rich96}). A common property of all these
very elongated clusters is their high luminosity. In fact, the brightest GCs in our Galaxy, as well as
in M31 and in the LMC, are the most flattened in their respective galaxies (van~den~Bergh~\cite{vandenbergh84}).
Hesser et~al.~(\cite{hesser84}) also noted that 4 of the 6 brightest known \object{NGC\,5128} GCs
were noticeably flattened.
Although our sample is still small, we do observe a similar tendency of bright clusters to have
higher ellipticities, as shown in Fig.~\ref{fig.MV_ellip}. It is intriguing to suppose that
such clusters may be the stripped nuclei of former nucleated dwarf elliptical galaxies
(e.g. Martini \& Ho~\cite{martini04}), as is often proposed for $\omega$~Cen. However,
note that the second most luminous Galactic GC (M54) is nearly spherical, although it is now
generally regarded as the former nucleus of the Sgr dSph (e.g. Layden \& Sarajedini~\cite{layden00}).
\begin{figure}
\centering
\includegraphics[width=8cm]{3393f4.eps}
\caption{The ellipticity $(e=1-b/a)$ as a function of the cluster's luminosity in the $V$ band, $M_{V}$.
The same behaviour is observed for Galactic and M31 clusters, in the sense that brighter clusters tend
to be more flattened and there are no relatively faint, elliptical clusters. Our data are indicated
by filled circles, while open circles represent the clusters from Harris et~al.(~\cite{harris02}).}
\label{fig.MV_ellip}
\end{figure}
The histogram of our ellipticities (Fig.~\ref{fig.histo_ellip}) is in very good agreement with
the samples studied by Barmby et~al.~(\cite{barmby02}) for M31 GCs and Holland
et~al.~(\cite{holland99}) and Harris et~al.~(\cite{harris02}) for Cen A GCs, although it goes
to somewhat larger values.
The bump in the interval $e=0.35-0.4$ is not present in any of the samples described above,
although H2002 find a cluster with $e=0.33$. Objects having this extreme elongation should not
necessarily be discarded as GCs. Indeed, the H2002 object is certainly a cluster. In addition,
Larsen~(\cite{larsen01a}), using HST images, studied a bright
cluster in NGC\,1023 and estimated its ellipticity as $0.37\pm0.01$, making it one of the most
flattened GCs known so far. On the other hand, we must recognize from the above discussion that
galaxies certainly must contaminate our sample and these may well help to skew the distribution
to higher ellipticity (see Section~5 for a discussion on the background contamination).
\begin{figure}
\centering
\includegraphics[width=8cm]{3393f5.eps}
\caption{Histogram of the ellipticities. The reader is referred to Fig.~7 of Harris et~al.~(\cite{harris02})
for a similar plot for other GCSs. The strikingly high ellipticities observed in a few cases in our sample put them among
the most flattened GCs observed so far.}
\label{fig.histo_ellip}
\end{figure}
\subsection{Size, luminosity and surface brightness}
Both the effective radii $r_e$ and core radii $r_c$ are listed in Table~3. The core radii agree very well with the
range of values for GCs in \object{NGC\,5128} given in
Harris et~al.~(\cite{harris02}) and, as can be seen from Fig.~\ref{fig.rc_comp}, are also well within the range of Galactic GCs.
The dip observed at $r_c < 0.5$ pc is probably due to a selection effect and is also present in the H2002 sample,
i.e. the known GCs tend to be the largest, and hence the most easily resolved.
\begin{figure}
\centering
\includegraphics[width=8cm]{3393f6.eps}
\caption{Histogram of the core radius $r_{c}$ for Galactic clusters (top panel, 85 objects) and \object{NGC\,5128} (bottom panel, 39 objects).}
\label{fig.rc_comp}
\end{figure}
Fig.~\ref{fig.rc_muV} compares the core radii and the central surface brightness $\mu_0(V)$ between clusters of
\object{NGC\,5128} (filled circles) and Galactic ones (open circles). No major systematic difference is observed and both groups show
a similar spread in their central surface brightness at a given $r_c$.
However, the tendency of the smaller clusters in \object{NGC\,5128} to have higher central
surface brightness could indicate a selection bias. This is very plausible, as our detection is based on the presence of
residuals, and faint compact clusters would barely be noticeable.
Although the core radii are not surprising, the effective radii are generally somewhat larger than the
typical 2-7 pc observed for MW GCs, with a few extreme clusters reaching $r_e \sim$15 pc.
However, clusters with sizes of $\sim$30 pc have been observed in M31
(Huxor et~al.~\cite{huxor05}) at galactocentric distances between 15 and 35 kpc.
Again, we are certainly biased against the smaller, more compact clusters.
In addition to these large GCs, Martini \& Ho~(\cite{martini04}) have derived masses and sizes for a sample of 14 bright clusters in \object{NGC\,5128}.
Their masses are in the range $10^6 - 10^7 M_{\odot}$ and thus some clusters are even more massive than some dwarf galaxies.
In our Galaxy, the most massive GC ($\omega$~Cen) may well be a stripped dwarf nucleus (see Martini \& Ho~\cite{martini04}
and references therein). A similar origin has been proposed for G1 in M31 (e.g. Bekki \& Freeman~\cite{bekki03},
Bekki \& Chiba~\cite{bekki04}). By implication, these very massive \object{NGC\,5128} GCs may also have had such an origin.
On the other hand, de~Propris et~al.~(\cite{depropris05}) have compared surface brightness profiles
of the nuclei of Virgo dwarf galaxies with those of ultra-compact dwarfs (UCDs). They concluded that UCDs are more extended and brighter
than the dwarf nuclei, so that the ``threshing scenario'' is unlikely.
Whatever the physical processes that led to the formation of these very
massive clusters, their location in $r_h - M_V$ parameter space
is quite different from what is observed for MW GCs, as is clear in Fig.~\ref{fig.MV_rh}.
The solid line shows the equation derived by van~den~Bergh \&
Mackey~(\cite{vandenbergh04}) who found that only 2 MW GCs lie above this line:
$\omega$~Cen and NGC\,2419.
The data have been primarily taken from Huxor et~al.~(\cite{huxor05}), to which
we have added the samples of de~Propris et~al.~(\cite{depropris05}), Richtler et~al.~(\cite{richtler05})
and Ha\c{s}egan et~al.~(\cite{hasegan05}). The references for each dataset and their
symbols are given in Fig.~\ref{fig.MV_rh}.
In spite of the difference between the most massive NGC 5128 GCs and
their MW counterparts, the most striking feature of this diagram is that
{\em there are essentially no longer any gaps in the distribution of the ensemble of GCs.\/}
When Huxor et al. first presented this diagram, they used it to point to
the unique position of their newly-discovered M31 clusters. However,
the addition of our data and that of H2002 now nicely fills the gaps that
were present in the Huxor et~al.~(\cite{huxor05}) version. Many \object{NGC\,5128}
GCs are found above the van~den~Bergh \& Mackey~(\cite{vandenbergh04}) line.
Clearly, this line no longer appears to have any special significance.
In particular, we find a number of \object{NGC\,5128} clusters that are large and of
intermediate luminosity, falling only slightly below the Huxor et~al. M31 clusters.
In addition, several \object{NGC\,5128} clusters are found in the region
formerly inhabited almost exclusively by the Faint Fuzzies (FFs) of Larsen \& Brodie~(\cite{larsen00}).
Note that the majority of the \object{NGC\,5128} clusters in this region are
also red and presumably metal-rich, as are the Faint Fuzzies. The 3 M31 clusters,
on the other hand, are rather blue.
Therefore, large, faint clusters are not exclusive to lenticular galaxies.
However, note that FFs are also distinguished by their disk-like kinematics.
This cannot be further studied in our sample without spectroscopic observations
and we are in the process of obtaining such data.
Thus, our data serve to illustrate that, although they
exhibit a range of 1-2 orders of magnitude in both luminosity and size,
{\em GCs cannot be broken down into well-separated, distinct subtypes but instead
form a continuum\/}. This is in keeping with the assessment of Larsen~(\cite{larsen02b}).
This continuum now extends to the realm of the Ultra Compact Dwarfs (e.g. Hilker
et~al.~\cite{hilker99}, Drinkwater et~al.~\cite{drinkwater02}), lending support to the idea
that these objects are indeed similar (Martini \& Ho~\cite{martini04}) and
have similar origins (Mieske et~al.~\cite{mieske02}). Note that Ha\c{s}egan
et~al.~(\cite{hasegan05}) find several objects in their Virgo Cluster ACS survey
that they term ``dwarf-globular transition objects'' (DGTOs). Several of these fall close
to the main locus of GCs in Fig.~\ref{fig.MV_rh} and several lie closer to that
of previously identified UCDs. The existence of such objects further serves to
fill in the parameter space in this diagram. Note, however, that Ha\c{s}egan
et~al.~(\cite{hasegan05}) suggest that at least UCDs and GCs may be best
distinguished via other parameters, e.g. M/L ratio.
\begin{figure}
\centering
\includegraphics[width=8cm]{3393f7.eps}
\caption{Comparison of the central surface brightness in the V-band $\mu_{0}(V)$
vs. $r_c$ for
clusters in \object{NGC\,5128} (filled circles) and Galactic GCs (open circles).}
\label{fig.rc_muV}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=13.8cm]{f8.eps}
\caption{Half-light radius $r_{h}$ (in pc) versus $M_{V}$. A distance of 4 Mpc for
\object{NGC\,5128} has been assumed. The references for the different symbols follow. Open circles
are the GCs analysed in this work. Filled circles are the luminous clusters found in the
halo of M31 by Huxor et~al.~(\cite{huxor05}). Downward-pointing empty triangles represent the data
for GCs in \object{NGC\,5128} using HST images from Harris et~al.~(\cite{harris02}). The plus signs
are dwarf Spheroidals associated with the Milky Way (Irwin \& Hatzidimitriou~\cite{irwin95}).
The sample of massive clusters in \object{NGC\,5128} studied by Martini \& Ho (\cite{martini04}) are plotted
using filled squares. Open squares show the dwarf Spheroidals associated with M31 (Caldwell
et~al.~\cite{caldwell92} and Caldwell~\cite{caldwell99}).
UCDs from Mieske et~al.~(\cite{mieske02}) are represented by the upward-pointing empty triangles.
The MW GCs are the asterisks (data taken from the catalog of Harris~\cite{harris96}).
G1 (=Mayall II), the brightest GC in M31, is shown with a cross. The data from de~Propris
et~al.~(\cite{depropris05}) and Ha\c{s}egan et~al.~(\cite{hasegan05}) are shown with the
filled upward and downward pointing triangles, respectively. Finally, the empty diamonds
represent the striking clusters/UCDs found by Richtler et~al.~(\cite{richtler05}) around NGC\,1399.
The equation $\log r_h = 0.2 M_V + 2.6$ from van~den~Bergh \& Mackey~(\cite{vandenbergh04})
is the solid line and the dashed line shows a value of constant average surface brightness
with $r_h$. The solid L-shape indicates the region (above and to the right) where FFs are
found. GCs form a continuum in this diagram and even approach the region occupied by UCDs.}
\label{fig.MV_rh}
\end{figure*}
\subsection{Correlations with galactocentric distance}
Several of the structural parameters may depend on the galactocentric distance $R_{gc}$, including the size,
brightness and concentration. For the Milky Way system, van~den~Bergh et~al.~(\cite{vandenbergh91}) have noticed that
$r_h$ is correlated with $R_{gc}$, with larger clusters observed at larger distances.
A similar correlation was observed for the GCS of M31 (Barmby et~al.~\cite{barmby02}), but it is not so clear for
the H2002 sample. This is partly because of the different cameras used for inner and outer clusters and, most
importantly, because of projection effects in a small sample, which can blur any subtle trend with the actual,
three-dimensional galactocentric distance.
Our data show, nevertheless, a clear agreement with the trend defined by the Galactic GCs (see Fig.~\ref{fig.sep_rh}).
On the logarithmic scale, it is apparent that the observed range of $r_h$ in \object{NGC\,5128} matches that in the Milky Way,
although our clusters have on average larger projected distances (see also Fig.~\ref{fig.n5128_dss}).
More clusters at smaller distances need to be observed
before a better comparison can be made. Regarding the luminosity, we do not see any correlation with $R_{gc}$.
In analogy to Fig.~10 of H2002, we searched for possible correlations of the concentration parameter and the
ellipticity with $R_{gc}$, but none were found.
\begin{figure}
\centering
\includegraphics[width=8cm]{3393f9.eps}
\caption{Dependence of the half-light radius $r_{h}$ on the projected distance of the clusters to
the optical center of \object{NGC\,5128} (filled circles). A distance of 4 Mpc has been assumed for the galaxy. For
comparison, the open circles represent Milky Way GCs, taken from the catalog of Harris~(\cite{harris96}).}
\label{fig.sep_rh}
\end{figure}
\subsection{Metallicity and size}
To estimate the metallicity of the clusters, we have used the relation from Harris \&
Harris~(\cite{harris02b}) between the Washington system $(C-T_1)$ colour and metallicity,
and assumed a uniform reddening of $E(B-V)$=0.11 (Schlegel et~al.~\cite{schlegel98})
for all clusters. Given the very large distances of our clusters from
the central dust lane, this assumption should be valid.
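In practice the procedure amounts to de-reddening the colour and applying the
colour--metallicity calibration. The sketch below is only schematic: the value 1.97
is the commonly adopted Washington-system reddening ratio $E(C-T_1)/E(B-V)$ (our
assumption, not quoted in the text), and {\tt hh02\_relation} is a placeholder for
the actual Harris \& Harris~(\cite{harris02b}) calibration, whose coefficients we do
not reproduce here:
\begin{verbatim}
def feh_from_washington(c_t1, calib, ebv=0.11):
    """[Fe/H] from the observed (C-T1) colour.

    calib : callable implementing the adopted colour-metallicity
            relation (placeholder for the Harris & Harris 2002 fit)
    """
    c_t1_0 = c_t1 - 1.97 * ebv   # de-reddened colour (assumed ratio)
    return calib(c_t1_0)

# metal-rich if feh_from_washington(colour, hh02_relation) > -1.0
\end{verbatim}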
No trend is seen between the derived metallicities and $R_{gc}$, but given the still small
sample, we do not comment further on this. Instead, we have divided the sample into metal-poor and
metal-rich subgroups at [Fe/H]$=-1$ (see Harris et~al.~\cite{harris04a}) and looked for systematic differences in cluster size.
Analyses performed on a number of galaxies (see Kundu et~al.~\cite{kundu99}, Kundu \& Whitmore~\cite{kundu01},
Larsen et~al.~\cite{larsen01b}, Jord\'an~\cite{jordan04},
Jord\'an et~al.~\cite{jordan05}) indicate that metal--poor (blue)
clusters have mean half-light radii $\sim20\%$ larger
than those of metal--rich (red) ones. However, this is {\em not\/} the case for the H2002 sample. Using the
same metallicity cut, they find mean radii of $r_{h}=7.37 \pm 1.03$ pc and $r_{h}=7.14 \pm 0.76$ pc for the
metal--poor and metal--rich groups respectively, i.e. indistinguishable.
Our results are in better agreement with those of H2002 than with other studies. Fig.~\ref{fig.rc_met}
shows the histogram of half-light radii $r_h$ for both subpopulations and it is clear that, if a
relation between metallicity and cluster size exists, then it is opposite to that
reported by Jord\'an~(\cite{jordan04}) and most previous WFPC2 studies of barely resolved clusters.
The returned median values are $r_{h}=4.1 \pm 1.2$ and $r_{h}=8.0 \pm 0.9$ pc
for the metal--poor and metal--rich
subgroups respectively. The paucity of large metal--poor clusters is apparent.
It is striking that the Barmby et~al.~(\cite{barmby02}) histogram of $r_h$
for M31 GCs is broader for the metal--poor than for the metal--rich sample, which is contrary
to what we see. This also holds if metallicities derived from broad-band colours are included.
The general agreement of our results with those of H2002 lends credence to our
findings. Although the combined sample size is small, these data are
the best available in terms of number of pixels per resolved image and the
results should be the most robust. The possibility exists that some background galaxies
still contaminate our sample and only a few of them would be needed to affect the shape of the
size histograms. However, galaxy contamination should only be a small effect (see Section~5).
In fact, contaminating galaxies from the Woodley et~al.~(\cite{woodley05}) sample
are preferentially in the blue (metal-poor) part
of our GC colour selection. This is opposite to the effect required
to explain the size difference between metal-rich and metal-poor
clusters by galaxy contamination.
Another possibility is that clusters in \object{NGC\,5128} are different from those observed in
other galaxies. Clearly, more observations are required to resolve this intriguing issue.
\begin{figure}
\centering
\includegraphics[width=8cm]{3393f10.eps}
\caption{Half-light radii $r_h$ for the metal--poor and metal--rich subgroups. The metallicity is estimated
from the Washington $C-T_{1}$ colour according to Harris \& Harris~(\cite{harris02b}), and the subgroups are divided at [Fe/H]$=-1$.}
\label{fig.rc_met}
\end{figure}
\subsection{Total Cluster Population}
Harris et~al.~(\cite{harris04b}) found that the \object{NGC\,5128} GCs follow a steep projected
radial distribution, of the form $\sigma \sim r^{-2}$, and that the clusters
were mostly confined to the inner $\sim 10\arcmin$. However, note that {\em all\/} of
our fields are {\em beyond\/} this radius, extending from $11\arcmin$
out to $29\arcmin$, with most outside $15\arcmin$.
In the 30 fields we observed, we confirmed the cluster-like nature of about one third (17/44) of the original
Harris et~al.~(\cite{harris04a}) candidates
and found 22 additional GC candidates, roughly 1 per field. The success of
our GC search in these distant regions of the galaxy's halo leads us to believe
both that the radial extent of the \object{NGC\,5128} GC population is indeed large,
with a significant population out to $30\arcmin$ and beyond, and that the number
of clusters derived by Harris et~al.~(\cite{harris04a}) may be an underestimate.
A crude calculation derives a mean GC surface density for our sample that
agrees well with the value derived by Harris et~al.~(\cite{harris04b}), at the
mean galactocentric distance of our sample. However, our sample only includes
a few of the {\em known} GCs at this radius. In addition, for their total population
calculation, Harris et~al.~(\cite{harris04b}) assumed a value derived from
the halo {\em light} profile, which fits well their GC profile out to
$\sim13\arcmin$ but beyond this their GC densities appear higher than those of the
galaxy itself, as is generally observed.
If their surface density of clusters beyond $\sim 10\arcmin$ is only slightly
underestimated, this could lead to a significant increase in their
total GC population estimate of $\sim 1000$ clusters and this indeed
appears to be the case.
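For concreteness, and assuming that all 39 high-quality candidates are genuine
clusters, our mean surface density over the 30 MagIC fields is roughly
\[
\sigma_{\rm GC} \;\simeq\; \frac{39}{30\times(2.3\arcmin)^{2}}
\;\simeq\; 0.25~{\rm arcmin}^{-2},
\]
which is the crude estimate referred to above.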
Clearly, the final census of \object{NGC\,5128} GCs will not be achieved until a complete
search of the outer halo to at least $45\arcmin$ is conducted. Radial velocity
surveys would be very inefficient and time-consuming given the huge ($\sim$100 to 1)
ratio of field interlopers to genuine GCs.
For optimum efficiency, such a search requires
the widest possible field coverage combined with very high spatial resolution
in order to resolve the clusters. Such an optimum combination is provided
by the IMACS instrument at Magellan and we are in the process of obtaining
these data.
\section{Background Contamination}
Certainly, even after the selection criteria applied to our cluster candidates,
a (hopefully small) fraction of our objects consists of background galaxies.
Foreground stars can surely be discarded as part of the contaminating population,
because they do not appear as extended objects in our frames. The purpose of this
section is to give an idea of the contamination level and of its effects on our results.
\subsection{Effects of including galaxies in our sample}
To see how galaxies behave in our diagrams (i.e. Figs.~\ref{fig.rc_muV}
and \ref{fig.MV_rh}), we have re-analysed our frames and selected objects which are
either spectroscopically confirmed background galaxies or objects that fall without doubt into this category
after visual inspection. All of the latter are well-resolved objects, at least twice
as large as the clusters, and many show substructure such as disks or spiral arms.
Fainter objects were included only if they were spectroscopically confirmed galaxies.
This is the case for WHH094, WHH127 and WHH43, all from Woodley et~al.~(\cite{woodley05}).
Our images of these galaxies were analysed in exactly the same way as with the cluster
candidates, i.e. light profiles were fitted via {\it ishape\/} and using the same stellar
PSF for the corresponding frame. The fitting radius and all parameters were handled
as was done for the cluster candidates, the only difference being that we are now working with
a purely galaxy sample, so that `normal' Washington colours and magnitudes and reasonable
fits with King profiles (i.e. little residual in the subtracted image) are not expected.
The derived structural parameters were then combined with the photometry from Harris
et~al.~(\cite{harris04a}).
After correcting for aperture size and reddening, and placing the galaxies at the
assumed distance of \object{NGC\,5128} (4 Mpc), we derived their surface brightnesses (a quantity
which is in any case independent of the distance), absolute magnitudes, and half-light and core radii in
parsecs.
Strikingly, the King profile did in fact fit the majority of galaxies with no visible
substructure quite well, at least within the inner 25 pixels. For disky or spiral galaxies,
the residuals were clear. Moreover, the galaxies occupy a similar region to the GCs in the
$\mu_{0}- r_{c}$ parameter space, as Fig.~\ref{fig.background} shows
(upper panel), with perhaps a mild tendency towards larger sizes and lower surface
brightnesses. We stress that, in many respects, the comparison between galaxies and clusters
based on their structural parameters has no physical meaning. In this case, the derivation
of $r_{c}$ via a King profile is no more than an artifact, but it is adequate for the
purpose of assessing the contamination level and illustrating its behaviour.
Similarly, there is little observed difference in the $r_{h} - M_{V}$ space
(Fig.~\ref{fig.background}, lower panel). Indeed, the galaxies extend to larger sizes and span a region
which lies a bit above the `normal' GCs of \object{NGC\,5128} and of our Galaxy on this logarithmic scale.
Note, however, that the smallest galaxies in our sample
(again, small in the sense of the $r_h$ obtained by brute force) are comparable to the mid-size/large
clusters. Truly large galaxies have no counterparts in our sample of GCs.
\subsection{The level of contamination}
To assess the level at which our diagrams and results could be affected by contamination
is the aim of this section.
Without doubt, the most reliable way to determine if a cluster candidate is actually a
background galaxy is via spectroscopy. Nevertheless, an independent image-based analysis
can also be used as a reasonable first measurement. For this, we made use of images
taken independently with the Gemini 8m Telescope and the GMOS Camera (Harris et~al.~2006).
These have poorer seeing than the MagIC frames (0.7''--0.8'') and cover the central
region of \object{NGC\,5128}, so there are no objects in common. However, the published spectroscopic work
concentrates almost exclusively on this region, and therefore dozens of confirmed clusters and
galaxies are available for study.
We apply the same reduction, analysis and selection criteria to the cluster-like
candidates within these GMOS fields and look for existing velocities for this sample.
Of course, this is only meaningful if one does not know {\it a priori\/} which objects are
the clusters. Thus, the correlation with spectra was done as the very last step.
First, the stellar PSF was built from 20-30 stars per frame. Then, this PSF was subtracted
from all objects, and GC candidates were chosen among those which left a `doughnut'-shaped
residual, exactly as was done for our original sample.
After this, we looked up their Washington photometry in Harris et~al.~(\cite{harris04a}) and applied
the same colour and magnitude cuts. We then ran {\it ishape\/} again with the same parameters
as before and in exactly the same way.
All in all, we obtained 87 cluster-like objects which passed the selection criteria and for which structural
parameters could be derived. Of these, 42 had been previously observed
spectroscopically (Peng et~al.~\cite{peng04} and Woodley et~al.~\cite{woodley05}): 39 of
the 42 GC candidates turned out to be clusters, while 3 are actually background galaxies.
There are, in addition, several GCs which we failed to detect, presumably due to the lower
resolution of the images and the compactness of the objects.
This shows that the galaxy contamination level is roughly 7\% for the GMOS images and
could be even lower for the MagIC study, which benefitted from much better resolution.
On the one hand, we have shown that background galaxies cannot easily be told apart from the clusters
based on their structural parameters and their location in the diagrams, so that a risk of contamination exists
if no careful selection is performed; even in that case, however, one should not be
led to erroneous conclusions, since the derived properties are similar.
On the other hand, an independent test shows that this contamination is {\it at most\/} 7\% when
the morphology and Washington colours are used as selection criteria. Thus, our results
overwhelmingly reflect the behaviour of GCs and {\it not\/} that of background galaxies. Note that
the ratio of galaxies to GCs certainly increases with galactocentric distance, due to
the strong central concentration of the latter, and thus one might naively expect
the contamination level of our sample to be higher, since it is more distant than
the GMOS fields. However, we have shown that, given sufficiently good seeing and our
analysis techniques, one should be able to keep the contamination at a low level
irrespective of galactocentric distance.
As explained in Section 2.3, Woodley et~al.~(\cite{woodley05}) find that two of our
cluster candidates are actually background galaxies. For one of them (HHG94),
our much better seeing shows that it is in fact a cluster candidate
projected $1.6\arcsec$ from a galaxy. This separation is close to
the typical seeing of the Harris et~al.~(\cite{harris04a}) database, which was the
basis of the Woodley et~al.~(\cite{woodley05}) work. It is therefore plausible
that the velocity obtained actually corresponds to that of the galaxy,
or to a blend of the two velocities.
If this is indeed the case, then there is only one galaxy in the common sample
of 6 cluster candidates, which is roughly what we should expect from our
results, given the small numbers involved. Our GMOS test is more
definitive, so we give more weight to its result and estimate our final
galaxy contamination level at about 10\%.
\begin{figure}
\centering
\includegraphics[width=8.8cm]{3393f11.eps}
\includegraphics[width=8.8cm]{3393f12.eps}
\caption{{\bf Upper Panel:} Same as Fig.~7, with background galaxies represented by plus signs.
Apart from a tendency to have larger $r_{c}$ and perhaps lower surface brightnesses than the
\object{NGC\,5128} (filled circles) and MW (open circles) GCs, galaxies cannot be clearly
distinguished in this space. {\bf Lower Panel:} Same as Fig.~8, but the analysed sample of galaxies
has been included and is represented by the grey circles. Open circles are the GC candidates and
filled circles represent all objects from the references given in Fig.~8.}
\label{fig.background}
\end{figure}
\section{Conclusions}
We have obtained very high spatial resolution images, in excellent seeing
conditions, with the Magellan telescope + MagIC camera of 44 GC candidates in
the outer regions of \object{NGC\,5128} from the list provided by Harris et~al.~(\cite{harris04a}).
These data not only allow us to determine the nature of these candidates via spatial
resolution, but also to derive their detailed structural parameters.
This is the first time such parameters have been determined for GCs beyond the
Local Group from ground-based images. About one third of the candidates appear to be
bona fide GCs. In addition, we serendipitously
discovered 18 new GC candidates and derived their structural parameters as well.
We compare our cluster sample in detail with GC samples in other
galaxies for which similar information is available. We find that, in general, our
clusters are similar in size, ellipticity, core radius and central surface
brightness to their counterparts in other galaxies, in particular those in
\object{NGC\,5128} observed with HST by Harris et~al.~(\cite{harris02}).
However, our clusters extend to higher ellipticities and larger half-light
radii than their Galactic counterparts, confirming the results of H2002.
Combining our results with those of Harris et~al.~(\cite{harris02}) fills in the gaps
previously existing in the $r_h - M_V$ parameter space and indicates that any
substantial distinction between presumed cluster types in this diagram,
including for example the Faint Fuzzies of Larsen \& Brodie~(\cite{larsen00})
and the `extended, luminous' M31 clusters of Huxor et~al.~(\cite{huxor05}),
is now removed and that clusters form a continuum. Indeed, this continuum now
extends to the realm of the Ultra Compact Dwarfs.
The metal-rich clusters in our sample have half-light radii in
the mean that are almost twice as large as their metal-poor counterparts,
at odds with the generally accepted trend.
Finally, our discovery of a substantial number of new cluster candidates
in the relatively distant regions of the \object{NGC\,5128} halo suggests that current
estimates of the total number of globular clusters may be too low.
We have performed extensive tests to study the effect of background galaxies
on our results and the expected amount of contamination. They show that
galaxies and clusters cannot be clearly distinguished from their loci in the
$r_h - M_V$ and $\mu_{0,V} - r_{c}$ parameter spaces. However, if high-resolution
images are combined with an appropriate colour index such as the Washington $(C-T_{1})$,
the level at which the GC sample is contaminated by background galaxies is only
about 10\%. Therefore, we expect that our results largely reflect the
physical properties of actual clusters rather than those of background galaxies.
\begin{acknowledgements}
M.G. thanks S\o ren Larsen for his help with {\it ishape\/} and comments, as
well as Avon Huxor for the data in Fig.~\ref{fig.MV_rh}. D.G. gratefully acknowledges
support from the Chilean {\sl Centro de Astrof\'\i sica} FONDAP No. 15010003.
We thank the referee for his/her comments and suggestions which greatly improved
this paper, especially the discussion about the galaxy contamination.
\end{acknowledgements}
\section{Introduction}\label{introduction}
Over the past few years there has been a heightened interest in
testing the statistical properties of Cosmic Microwave Background
(CMB) data. The process has been accelerated by the release of the
WMAP first year results. The WMAP data provide the first-ever
full-sky maps which are signal dominated up to scales of a few
degrees. Thus, for the first time we can test the Gaussianity and
isotropy assumptions of the cosmological signal over large scales in
the sample variance limit.
Ever since the release of the COBE-DMR results \citep{cobe} a
consensus has been hard to reach on tests of non-Gaussianity with some
studies reporting null results \citep{kog96a,contaldi2000,sandvik}
while others claimed detections of non-Gaussian features
\citep{fmg,joao,novikov, pando}. With the release of the WMAP first
year results a limit on the non-Gaussianity of primordial
perturbations in the form of an estimate of the non-linear factor
$f_{\rm NL}$ was obtained by \cite{wmapKomatsu}. However a number of
authors \citep{hbg,erik1,coles,park,copi,teglarge,kate,efstathiou,
football,npoint,jaffe} have also reported analyses of the maps that
suggest violations of the Gaussian or isotropic nature of the signal.
One of the problems with testing Gaussianity is that one can devise a
plethora of tests to probe the infinite degrees of non-Gaussianity;
different tests therefore represent different perspectives on the
statistical patterns of the signal. For WMAP there are already a
number of detections of so called anomalies, most pointing to
different unexpected features in the microwave sky. The most
documented case \citep{peiris,efstathiou,slos,teglarge,erik1}
is the low amplitude of the quadrupole and octupole in comparison to
the inflationary prediction, something we can categorize as
{\sl amplitude anomalies}. Although it is simple to design inflationary
spectra with sharp features which reproduce, more or less closely, the
amplitude anomaly (see e.g. \cite{contaldi2003,bridle,salopek}) these
invariably suffer fine tuning problems. Another approach is to relate
the anomaly to the breakdown of statistical isotropy or Gaussianity.
Other reported features relate to the correlation of phases in the
multipole coefficients which are an indication of
non-Gaussianity. These can be dubbed {\sl phase anomalies}. One
example is the hemisphere asymmetries \citep{erik1}; the northern
ecliptic hemisphere is practically flat while the southern hemisphere
displays relatively high fluctuations in the power spectrum. Other
functions, such as the bispectrum \citep{kate} and n-point correlation
functions \citep{npoint} also show related asymmetries. Furthermore,
there is the anomalous morphology of several multipoles, in
particular, the striking planarity of the quadrupole and octupole and
the strong alignment between their preferred directions
\citep{teglarge}. Overall, there is a strong motivation to continue
probing the statistical properties of the data and find possible
sources for these signals, be it instrumental, astrophysical or
cosmological.
The first test to have provided indications of possible non-Gaussian
features in the CMB data was reported by \cite{fmg} and \cite{joao}
using a bispectrum estimator, the Fourier analog of the three-point
function. Both those detections were later found to be caused by
systematic effects rather than by a cosmological source, as reported by
\cite{banday}. For the case of the bispectrum signal detected by
\cite{joao}, which used an estimator tuned to detect correlations
between neighbouring angular scales, finding the source of the signal
had to wait for the release of the high precision WMAP data
\citep{wmap} which was able to provide a comparative test of the
cosmological signal. The WMAP data did not reproduce COBE's result and
systematic errors were found to be the cause \citep{joaoes}. The WMAP
data was later analysed with the bispectrum in more detail
by \cite{kate}. In that paper, the bispectrum of the clean, coadded maps
was analysed and a connection between the hemisphere asymmetries in
the 3-point correlation function and the bispectrum was established,
although the full sky as a whole was found to be consistent with
Gaussianity.
In this paper, we study the effect that foreground contaminations have
on bispectrum estimators. In section~\ref{sec:def} we define a
set of bispectrum estimators with set $\ell$ configurations. In
section~\ref{sec:foregrounds} we describe the template dust, free-free
and synchrotron maps used to characterize the effect on the
bispectrum. In section~\ref{sec:method} we determine the distribution
of the estimators in the presence of residual foregrounds with
different amplitudes and discuss the application of this method to
detect residuals in the data by introducing a number of statistical
and correlation measures. In section~\ref{sec:application} we discuss
the application of the statistical tools developed in the previous
sections to the raw and cleaned WMAP first year maps. We conclude with
a discussion of our method and results in section~\ref{sec:disc}.
\section{The Angular Bispectrum}\label{sec:def}
We now introduce the angular bispectrum estimator \citep{fmg}. The
bispectrum is related to the third order moment of the spherical
harmonic coefficients $a_{\ell m}$ of a temperature fluctuation map
$\Delta T({\bf \hat n})/T$. The coefficients describe the usual
expansion of the map over the set of spherical harmonics $Y_{\ell
m}({\bf \hat n})$ as
\begin{eqnarray}
\frac{\Delta T}{T}({\bf \hat n})=
\sum_{\ell m}a_{\ell m}Y_{\ell m}({\bf \hat n}).
\end{eqnarray}
Given a map, either in pixel space or harmonic space, and assuming
statistical isotropy, one can construct a hierarchy of
rotationally invariant statistical quantities characterizing the
pattern of fluctuations in the maps. These are the n-point correlation
functions in the temperature fluctuations $\langle\frac{\Delta
T}{T}({\bf \hat m})\frac{\Delta T}{T}({\bf \hat n})...\frac{\Delta
T}{T}({\bf \hat p})\rangle$ or in the spherical harmonic coefficients,
$\langle a_{\ell_1 m_1}a_{\ell_2m_2}...a_{\ell_nm_n}\rangle$.
The unique {\it quadratic} invariant is the angular power spectrum
defined as
$\langle a_{\ell_1 m_1} a^\star_{\ell_2 m_2}\rangle = \delta_{\ell_1\ell_2}\delta_{m_1m_2}C_\ell$, whose estimator can be written as ${\hat
C}_\ell=\frac{1}{2\ell+1}\sum_m|a_{\ell m}|^2$. This gives a measure
of the overall intensity for each multipole $\ell$. Following
\cite{fmg}, the most general {\it cubic} invariant defines the angle
averaged bispectrum,
\begin{equation}
\left<a_{\ell_1 m_1}a_{\ell_2 m_2}a_{\ell_3 m_3}\right>=
B_{\ell_1\ell_2\ell_3}\left (
\begin{array}{ccc} \ell_1 & \ell_2 & \ell_3 \\ m_1 & m_2 & m_3
\end{array} \right ),
\end{equation}
where the $(\ldots)$ is the Wigner $3J$ symbol. Parity invariance of
the spherical harmonic functions dictates that the bispectrum be
non-zero only for multipole combinations where the sum
$\ell_1+\ell_2+\ell_3$ is even. An unbiased estimator (for the full
sky) can be evaluated as
\begin{eqnarray}
{\hat B}_{\ell_1\ell_2\ell_3}&=&\frac{{\cal N}^{-1}_{\ell_1\ell_2\ell_3}}{\sqrt{4\pi}}\sum_{m_1m_2m_3}\left ( \begin{array}{ccc} \ell_1 & \ell_2 & \ell_3
\\ m_1 & m_2 & m_3
\end{array} \right )\times\\
&&a_{\ell_1 m_1}a_{\ell_2 m_2} a_{\ell_3 m_3},\nonumber
\end{eqnarray}
with the normalization factor defined as
\begin{eqnarray}
{\cal N}_{\ell_1\ell_2\ell_3}&=&{\left
(\begin{array}{ccc} \ell_1 & \ell_2 & \ell_3 \\ 0 & 0 & 0\end{array}
\right
)}\times\\&&\sqrt{\frac{(2\ell_1+1)(2\ell_2+1)(2\ell_3+1)}{4\pi}}.\nonumber
\end{eqnarray}
The bispectrum can be related to the three-point correlation functions
of the map just as the power spectrum $C_\ell$ can be related to the
correlation function $C(\theta)$ through the well known expression
\begin{equation}
C(\theta) = \frac{1}{4\pi}\sum_\ell (2\ell+1)C_\ell P_\ell(\cos\theta).
\end{equation}
For example, the pseudo-collapsed
three-point correlation function, $C^{(3)}(\theta)=\langle
\frac{\Delta T}{T}({\bf \hat n})^2\frac{\Delta T}{T}({\bf \hat m})
\rangle$, is related to our definition of the bispectrum $
B_{\ell_1\ell_2\ell_3}$ as
\begin{equation}\label{3pt}
C^{(3)}(\theta)= \frac{1}{4\pi}\sum_{\ell_1\ell_2\ell_3}{\cal N}_{\ell_1\ell_2\ell_3}B_{\ell_1\ell_2\ell_3}P_{\ell_3}(\cos\theta),
\end{equation}
where ${\bf \hat n}\cdot{\bf \hat m}=\cos\theta$.
It is important to use both tools, the bispectrum and the three-point
correlation function, to probe the sky maps as they have the capacity
to highlight different features of the data. In principle, harmonic
space based methods are preferred for the study of primordial
fluctuations whereas real space methods are more sensitive to
systematics and foregrounds, which are strongly localized in real
space. In addition, the three-point correlation function is intrinsically very
sensitive to the low-$\ell$ modes, whereas the bispectrum can pick up
different degrees of freedom with respect to the different mode
correlations we want to probe.
For the choice $\ell_1=\ell_2=\ell_3=\ell$ we can define the
single-$\ell$ bispectrum ${\hat B_\ell}=\hat B_{\ell\, \ell \,\ell}$
\citep{fmg}, which probes correlations between different $m$'s. Other
bispectrum components are sensitive to correlations between different
angular scales $\ell$. The simplest of these is the $\Delta\ell=1$
inter-$\ell$ bispectrum between neighbouring multipoles defined as
$\hat B_{\ell-1\, \ell \,\ell+1}$ \citep{joao}. It is convenient to
consider estimators normalized by their expected Gaussian variance
$\hat C_{\ell_1}\hat C_{\ell_2}\hat C_{\ell_3}$ which have been shown
to be more optimal and more Gaussian distributed than the unnormalized
estimators, and are not sensitive to the overall power in the
maps. Here we introduce the $I^3_\ell$, $J^3_\ell$, and
$K^3_\ell$ bispectra, defined as
\begin{equation}\label{i3j3}
I^3_\ell = { {\hat B}_{\ell} \over ({\hat C}_{\ell})^{3/2}} , \ \
J^3_\ell = { \hat B_{\ell-1\, \ell \,\ell+1} \over ({\hat C}_{\ell-1}{\hat
C}_{\ell} {\hat C}_{\ell+1})^{1/2}},
\end{equation}
and
\begin{equation}\label{k3}
K^3_\ell = { \hat B_{\ell-2\, \ell \,\ell+2}
\over ({\hat C}_{\ell-2}{\hat C}_{\ell}
{\hat C}_{\ell+2})^{1/2}},
\end{equation}
where we have extended the formalism to a separation $\Delta\ell=2$ to
probe signals with both odd and even parity in the inter-$\ell$
correlations.
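As a concrete illustration, a minimal Python sketch of these
estimators is given below. This is our own illustration rather than
the pipeline used for the results in this paper; it assumes the
publicly available \texttt{healpy} and \texttt{sympy} packages,
\texttt{masked\_map} is a placeholder for the input sky map, and the
estimators are evaluated only where the parity condition
($\ell_1+\ell_2+\ell_3$ even) allows a non-zero bispectrum.
\begin{verbatim}
import numpy as np
import healpy as hp
from sympy.physics.wigner import wigner_3j

def get_alm(alm, lmax, l, m):
    # healpy stores m >= 0 only; use a_{l,-m} = (-1)^m conj(a_{lm})
    if m >= 0:
        return alm[hp.Alm.getidx(lmax, l, m)]
    return (-1) ** (-m) * np.conj(alm[hp.Alm.getidx(lmax, l, -m)])

def B_hat(alm, lmax, l1, l2, l3):
    # angle-averaged bispectrum estimator (full sky); it vanishes
    # for odd l1 + l2 + l3 by parity
    if (l1 + l2 + l3) % 2 == 1:
        return 0.0
    s = 0.0
    for m1 in range(-l1, l1 + 1):
        for m2 in range(-l2, l2 + 1):
            m3 = -m1 - m2          # the 3j symbol vanishes otherwise
            if abs(m3) > l3:
                continue
            w = float(wigner_3j(l1, l2, l3, m1, m2, m3))
            if w == 0.0:
                continue
            s += w * (get_alm(alm, lmax, l1, m1) *
                      get_alm(alm, lmax, l2, m2) *
                      get_alm(alm, lmax, l3, m3)).real
    norm = float(wigner_3j(l1, l2, l3, 0, 0, 0)) * np.sqrt(
        (2*l1 + 1) * (2*l2 + 1) * (2*l3 + 1) / (4.0 * np.pi))
    return s / (norm * np.sqrt(4.0 * np.pi))

# normalized bispectra of a (masked) map at an even multipole l
lmax = 30
alm = hp.map2alm(masked_map, lmax=lmax)   # masked_map: Kp2-masked sky
cl = hp.anafast(masked_map, lmax=lmax)
l = 10
I3 = B_hat(alm, lmax, l, l, l) / cl[l] ** 1.5
J3 = B_hat(alm, lmax, l - 1, l, l + 1) / np.sqrt(cl[l-1]*cl[l]*cl[l+1])
K3 = B_hat(alm, lmax, l - 2, l, l + 2) / np.sqrt(cl[l-2]*cl[l]*cl[l+2])
\end{verbatim}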
\section{Foreground Templates}\label{sec:foregrounds}
The standard method of foreground removal used by cosmologists
makes use of a set of template maps for each of the dominant
sources of foreground contamination in the CMB frequency maps.
These are maps obtained from independent astronomical full-sky
observations at frequencies where the respective mechanisms of
emission are supposed to be dominant. These templates are the H
$\alpha$ map \citep{halpha}, for the free-free emission, the 408
MHz Haslam map \citep{haslam}, for the synchrotron emission, and
the FDS 94 GHz dust map \citep{FDS}. These are then subtracted
from the WMAP data with coupling coefficients determined by cross
correlating with the observed maps in the Q (41 GHz), V (61 GHz),
and W (94 GHz) bands. Nevertheless the templates are a poor
approximation of the real sky near the galactic plane, so a
Kp2 mask must still be used in the analysis. The method is
described in \cite{wmapfor} and \cite{komatsu};
\begin{eqnarray}\label{eq:amp}
\overline{T}_{Q} &=& T_{Q} - 1.044\,[1.036\, T^{\rm FDS} +
\frac{1.923}{\eta}\,T^{H\alpha} \nonumber\\ &&+1.006\,T^{\rm
Sync}],\nonumber\\ \overline{T}_{V} &=& T_{V} - 1.100\,[0.619\,
T^{\rm FDS} + \frac{1.923}{\eta} \,\left(\frac{\nu_{V}}{
\nu_{Q}}\right)^{-2.15}\,T^{H\alpha} \nonumber\\ && +1.006
\,\left(\frac{\nu_{V}}{ \nu_{Q}}\right)^{-2.7}\,T^{\rm Sync}],\\
\overline{T}_{W} &=& T_{W}- 1.251[0.873\, T^{\rm FDS} +
\frac{1.923}{\eta}\, \left(\frac{\nu_{W}}{
\nu_{Q}}\right)^{-2.15}\,T^{H\alpha} \nonumber \\
&&+1.006\,\left(\frac{\nu_{W}}{ \nu_{Q}}\right)^{-2.7}\,T^{\rm
Sync}],\nonumber
\end{eqnarray}
where $ \eta$ is a correction factor due to reddening in the free-free
template and $\nu_{Q}=40.7$ GHz, $\nu_{V}=60.8$ GHz and $\nu_{W}=93.5$
GHz. The factors in front of the brackets convert the detectors'
temperature to thermodynamic temperature. It is considered that this
is a sufficiently good method to remove the foregrounds outside the
Kp2 plane since it matches the correct amplitudes quite well, however
the usual doubts remain, especially in the light of the alignment/low
multipoles controversies. Another point one can make is that
whereas this may be a satisfactory technique to correct the
foregrounds at the power spectrum level, its effect on higher order
statistics is unknown and may actually induce unexpected correlations.
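To make the procedure explicit, a schematic Python sketch of the
Q-band line of the cleaning equations above is given here. It is only
an illustration under assumed inputs: the file names are hypothetical,
the templates are assumed to be at a common resolution and in
thermodynamic temperature units, and the reddening correction $\eta$
is left as a placeholder.
\begin{verbatim}
import healpy as hp

eta = 1.0                                   # placeholder value
T_Q    = hp.read_map("wmap_q_raw.fits")     # hypothetical file names
dust   = hp.read_map("fds_94ghz.fits")
halpha = hp.read_map("halpha.fits")
sync   = hp.read_map("haslam_408mhz.fits")

# Q-band cleaning; the other bands add the frequency scaling factors
T_Q_clean = T_Q - 1.044 * (1.036 * dust
                           + (1.923 / eta) * halpha
                           + 1.006 * sync)
\end{verbatim}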
\section{The effect of foregrounds on the bispectrum}\label{sec:method}
\begin{figure*}
\centerline{\psfig{file=fig1.ps,angle=270,width=15cm}}
\caption{ The average functions for the angular spectra of the
simulations. Black (solid) is Gaussian, red (short-dashed) is for the
contaminated simulations with $\alpha=0.5$ and blue (long-dashed) is
for the contaminated simulations with $\alpha=1.0$. The top panel
shows the power spectrum $\hat C_\ell$, second panel shows the
single-$\ell$ bispectrum $I^3_\ell$, the third panel the
$\Delta\ell=1$ inter-$\ell$ bispectrum $J^3_\ell$ and the bottom
panel shows the $\Delta\ell=2$ inter-$\ell$ bispectrum
$K^3_\ell$. The shaded regions represent the Gaussian variance
measured directly from the ensemble of simulations. For the Gaussian
simulations, the average power spectrum is just the input $\Lambda$CDM
power spectrum and the average bispectra are effectively zero. On the
other hand, the average angular spectra of the contaminated
simulations have an emerging pattern of intermittency in both second-
and third-order statistics. This intermittent pattern comes about
due to the even parity of galactic foregrounds, i.e., even modes
are enhanced relative to the odd modes. This can be seen in the
significant increase of power in the even modes of the power
spectrum. In terms of the bispectrum, we see that the $\Delta\ell=0$
and the $\Delta\ell=2$ inter-$\ell$ components will be more
significantly enhanced than the $\Delta\ell=1$ inter-$\ell$
bispectrum because the latter includes correlations between even and
odd modes.}\label{fig:bisp}
\end{figure*}
We have generated a set of 3000 Gaussian CMB simulations of the WMAP
first year Q, V, and W maps in
\textsc{HEALPix}\footnote{http://healpix.jpl.nasa.gov}\citep{healpix}
format with a resolution parameter $N_{\rm side}=512$. Each simulation
is smoothed with the Q, V and W frequency channel beams and channel
specific noise is added. We adopted the WMAP best-fit $\Lambda$CDM
with running index power spectrum\footnote{http://lambda.gsfc.nasa.gov} to generate the $a_{\ell m}$
coefficients of the maps. The Kp2 galactic mask is imposed on each
map. The masked maps are then decomposed into spherical harmonic
coefficients $a_{\ell m}$ using the \textsc{Anafast} routine. We then
calculate the four spectra, namely the power spectrum $\hat
C_\ell$, single-$\ell$ bispectrum $I^3_\ell$, $\Delta\ell=1$
inter-$\ell$ bispectrum $J^3_\ell$ and $\Delta\ell=2$ inter-$\ell$
bispectrum $K^3_\ell$ as described in section~\ref{sec:def}.
We then add channel-specific foregrounds outside the Kp2 zone to the
same set of Gaussian simulations with amplitudes set as in
Eqn.~(\ref{eq:amp}). The addition of the foreground is scaled linearly
by a factor $\alpha$ as
\begin{equation}
T_{\rm \{Q,V,W\}} = T_{\rm CMB} + \alpha\overline T_{\rm \{Q,V,W\}},
\end{equation}
which we use to check the sensitivity of the bispectra to the
foregrounds (typically $\alpha=1.0 $ or $ \alpha=0.5$). The power
spectrum and bispectra are then calculated for the set of
contaminated maps.
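A schematic of a single Monte Carlo realization is sketched below
(our own illustration; the input $C_\ell$ file, noise level, beam
width, template map and mask file are all placeholders rather than
the actual pipeline inputs).
\begin{verbatim}
import numpy as np
import healpy as hp

nside = 512
cl = np.loadtxt("lcdm_running_cls.txt")        # hypothetical C_l file
cmb = hp.synfast(cl, nside)                    # Gaussian realization
cmb = hp.smoothing(cmb, fwhm=np.radians(0.5))  # approximate Q beam
sigma0 = 0.1                                   # placeholder noise level
noise = sigma0 * np.random.randn(hp.nside2npix(nside))

alpha = 1.0                                    # foreground scaling
T_fg = hp.read_map("q_band_template_sum.fits") # hypothetical template
kp2 = hp.read_map("kp2_mask.fits")             # hypothetical mask
sim = cmb + noise + alpha * T_fg
sim[kp2 == 0] = hp.UNSEEN                      # impose the Kp2 mask
\end{verbatim}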
In Fig.~\ref{fig:bisp} we show the mean angular spectra of the
simulations obtained by averaging over the ensembles. We show the mean
spectra for the Gaussian (solid, black) and the contaminated
simulations for $\alpha=0.5$ (short-dashed, red) and $\alpha=1.0$
(long-dashed, blue). The shaded area shows the variance of the three
bispectra obtained directly from the Gaussian simulations.
We see that even for the fully contaminated set of maps ($\alpha=1.0$)
the average signal is not significantly larger than the expected
Gaussian variance indicating that a detection would require averaging
over a large number of modes. However we see some important
distinguishing features in the signal in that it is sensitive to the
parity of the multipole, being suppressed for odd $\ell$. This is due
to the approximate symmetry of the foreground emission about the
galactic plane which means that most of the signal will be in even
$\ell$ modes since these have the same symmetry. This effect can be
seen in all the spectra but most significant is the suppression of the
odd inter-$\ell$ bispectrum $J^3_{\ell}$ with respect to the even
inter-$\ell$ bispectra $I^3_{\ell}$ and $K^3_{\ell}$.
Another obvious feature of the even parity nature of the signal
is the correlation between the spectra. In particular the absolute
values of the $I^3_\ell$ and the $K^3_\ell$ are correlated with
the structure visible in the fully contaminated power
spectrum.
Overall the $K^3_\ell$ is the most sensitive statistic with the
largest amplitude with respect to the Gaussian variance although
still quite small even at $50\%$ contamination. We now describe a
number of statistical estimators we use to test the detectability
of the template matched foregrounds in the Q, V, and W channel
maps.
\subsection{Chi-Squared Test}\label{sec:chisq}
Having seen how foregrounds affect the angular statistics of CMB
maps, we can now devise specific tests to probe these properties on
the bispectrum and test their sensitivity. The standard way to use
the bispectrum as a test of general non-Gaussianity is to use a
reduced $\chi ^2 $ statistic \citep{joao,joaoes,kate}. This is defined
as
\begin{equation}
\chi^2={1\over N_\ell}{\sum_{\ell = \ell_{\rm min}}^{\ell_{\rm max}}
\chi_\ell^2}= {1\over N_\ell} \sum_{\ell = \ell_{\rm min}}^{\ell_{\rm
max}} { {( X_\ell-\langle X_\ell\rangle)^2} \over
{\sigma_\ell^2} }.
\end{equation}
where $ X_{\ell} $ is a given bispectrum statistic, $\langle
X_\ell\rangle$ is its mean value computed over the Monte Carlo
ensembles, and $\sigma_\ell^2$ is the variance for each angular
scale. The $\chi^2 $ test is a measure of the deviation of the
observed data from the expected mean, weighted by the Gaussian
variance of the estimator.
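In code, the statistic amounts to a few lines (a sketch under the
assumption that the spectra are stored as arrays indexed by $\ell$,
with the $\ell$ range matching that used in the analysis):
\begin{verbatim}
import numpy as np

def chi2_stat(X, X_mean, X_var, lmin=2, lmax=30):
    # reduced chi^2 against the Gaussian ensemble mean and variance
    l = np.arange(lmin, lmax + 1)
    return np.mean((X[l] - X_mean[l]) ** 2 / X_var[l])
\end{verbatim}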
Foregrounds increase the amplitude of the bispectra but, as shown in
Fig.~\ref{fig:bisp}, only $K^3_\ell$ seems to stand a chance of
yielding significant detections, since the average
amplitude of the signal is comparable to the variance, unlike the
other components of the bispectrum.
The detectability of the template matched signals using any of the
bispectra can be tested by comparing the distribution of the $\chi
^2 $ values obtained from the contaminated simulations with that
obtained from Gaussian simulations. We compute the $\chi^2 $
values for the contaminated maps using the mean and the variance
obtained from the Gaussian simulations, i.e., the expected Gaussian
functions.
We compare the distribution of the $\chi^2 $ values for the Gaussian
simulations against the distribution obtained for the simulations with
contamination ($\alpha=1.0$). We concentrate on the Q band since it is
the most contaminated frequency. The histograms of the $\chi^2$ are
shown in the left column of Fig.~\ref{fig:histo}. For the $I^3_\ell$
and $J^3_\ell$ spectra the histograms overlap completely. This means
that the probability of finding contaminated simulations with a high
$\chi^2$ is the same as for the Gaussian simulations indicating that
the $\chi ^2 $ test is insensitive to the presence of foreground
contaminations at this level. However the $K^3_\ell$ spectrum tells a
different story. There is a significant shift between the two
distributions which implies that this component of the bispectrum has
more sensitivity to foregrounds.
The sensitivity can be quantified in terms of the fraction of the
contaminated simulations (with $\alpha=1.0$) with a $\chi^2$ value
larger than that of 95.45\% (i.e. $2\sigma$) of the Gaussian simulations
(with $\alpha=0$). The sensitivity for $I^3_\ell$ and $J^3_\ell$
is low ($<0.05$), whereas for $K^3_\ell$ the fraction increases to
0.355.
\subsection{Template Correlation Test}\label{sec:correlation}
A template matched statistic can be defined by correlating the
observed bispectra in the data with those of the foreground
templates. This is more sensitive to the structure in the template
signal as opposed to the $\chi^2$ test introduced above. We define
a cross correlation statistic $\rho$ as
\begin{equation}\label{corr_coef}
\rho=\frac{\sum_{\ell = \ell_{\rm min}}^{\ell_{\rm max}}{ X_{\ell} X^{F}_{\ell} }}{\left(\sum_{\ell = \ell_{\rm
min}}^{\ell_{\rm max}} X_{\ell}^2\sum_{\ell = \ell_{\rm
min}}^{\ell_{\rm max}} X^{F\,2}_{\ell}\right)^{1/2}}
\end{equation}
where $X_{\ell}$ are the bispectra obtained from the data and
the $X_{\ell}^{F}$ are those obtained from the foreground
templates.
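A direct transcription of this estimator (a sketch, with the same
array convention as above) is:
\begin{verbatim}
import numpy as np

def rho_stat(X_data, X_templ, lmin=2, lmax=30):
    # normalized cross-correlation of data and template bispectra
    l = np.arange(lmin, lmax + 1)
    return (np.sum(X_data[l] * X_templ[l]) /
            np.sqrt(np.sum(X_data[l] ** 2) * np.sum(X_templ[l] ** 2)))
\end{verbatim}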
In the middle column of Fig.~\ref{fig:histo} we display the histograms
for the $\rho $ values for the Gaussian simulations against the
distribution obtained for the contaminated ($\alpha=1$) simulations of
the Q band maps. The sensitivity has improved over the $\chi^2 $ test,
with the histograms of the Gaussian and contaminated data sets being clearly
shifted, meaning that there is a higher probability of detection of
foregrounds using this method. Again the effect is stronger in the
$K^3_\ell$. This result simply quantifies the statement that a matched
template search for a contamination signal is more sensitive than a `blind'
statistic such as the $\chi^2$ test. The values for the sensitivity of the
test are given in table~\ref{tab:corr} for all three WMAP bands.
\subsection{Power Spectrum and Bispectra Cross-Correlation Test}\label{sec:rstat}
For a Gaussian field, the normalized bispectrum is statistically
uncorrelated with the power spectrum \citep{conf}. However,
foreground residuals in the map induce non-Gaussian correlations which
in turn will induce correlations between the normalized bispectra and
the power spectrum of the maps. This can provide another specific
signature that one can use to detect the presence of foreground
contamination.
For Gaussian simulations, the average power spectrum
is just the input $\Lambda$CDM power spectrum and the bispectrum is
effectively zero. On the other hand, the average angular spectra of
the contaminated simulations have an emerging pattern of intermittency
in both second- and third-order statistics. Correlations between the
power spectrum and the bispectra therefore come about due to the even
parity induced by the characteristic galactic foregrounds. This means
that the even modes of the power spectrum will be correlated
with the even modes of the bispectra, whereas odd modes will remain
uncorrelated. In order to test this effect on the maps, we introduce
the ${\rm R}$ correlation statistic defined as
\begin{equation}
{\rm R}^X = \sum^{\ell_{\rm max}}_{\ell=\ell_{\rm min}}(-1)^{{\rm
int}[\frac{\ell}{2}]+1}\, \hat C_\ell\, |X_\ell|
\end{equation}
where $\hat C_\ell$ is the observed power spectrum. We have chosen
$\ell_{\rm max}=30$ and $\ell_{\rm min}=4$ as we are interested in the
large angular scales where the effects of foreground contamination
will dominate. We use the absolute value of the bispectrum in order to
avoid the discrimination between negative and positive correlations
which would affect our sum. We are only interested in the
discrimination between the existence of absolute correlations against
null correlations between the $\hat C_\ell $ and the bispectra $X_\ell$.
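A sketch of this statistic, with $\ell_{\rm min}=4$ and
$\ell_{\rm max}=30$ as above, is:
\begin{verbatim}
import numpy as np

def R_stat(cl, X, lmin=4, lmax=30):
    # parity-weighted correlation of the power spectrum with the
    # absolute value of a normalized bispectrum component
    l = np.arange(lmin, lmax + 1)
    sign = (-1.0) ** (l // 2 + 1)    # (-1)^(int[l/2] + 1)
    return np.sum(sign * cl[l] * np.abs(X[l]))
\end{verbatim}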
Again we test the sensitivity of this method by computing a
distribution of ${\rm R}$ for Gaussian ensembles against the
contaminated ensembles. We make sure that for Gaussian ensembles we
use the correlation of $\hat C^{\rm S+F}_\ell$ with $X^{\rm S}_\ell$
and for the contaminated ensemble the correlation of $\hat C^{\rm
S+F}_\ell$ with $X^{\rm S+F}_\ell$ where $\rm S$ stands for the
Gaussian CMB signal and $\rm S+F$ indicates contaminated
ensembles. This allows us to cancel the effect of the increase of
power due to foregrounds in the correlation of the two statistics
between the two tests. The results for the contaminated ensemble,
$\alpha=1$, are plotted in the right column of Fig.~\ref{fig:histo}
and are summarized in table~\ref{tab:corr} for all three bands.
\begin{table}
\caption{ Sensitivity of the $\rho$ and ${\rm R}$ tests in terms of
the fraction of the contaminated simulations ($\alpha=1.0$) with a
value larger than that of 95.45\%, i.e. $2\sigma$, of the Gaussian
simulations ($\alpha=0$). We present values for the Q, V and W
frequency channels and for the $I^3_\ell$, $J^3_\ell$ and
$K^3_\ell$. Note that $\rho(X_\ell)$, where $X_\ell$ is a given
bispectrum component, stands for the correlation of $X_\ell(\rm
data)$ with $X_\ell(\rm template)$, whereas ${\rm R}(X_\ell)$,
represent the correlation of that specific bispectrum component with
the respective power spectrum of the map. The values in the table
quantify what can be seen in the histograms in
figure~\ref{fig:histo}. Applying the tests for the $K^3_\ell$
component provides better sensitivity to the foregrounds. Between
the two tests $\rho$ seems to provide a marginally better
sensitivity. Also, the Q channel, being the most foreground
contaminated, yields the highest chance of detection.}
\begin{tabular}[b]{|c|c|c|c||c|c|c}
\hline & \multicolumn{1}{|c|}{ $\rho_{\rm Q}$ } &
\multicolumn{1}{|c|}{ $\rho_{\rm V}$ } & \multicolumn{1}{|c|}{
$\rho_{\rm W}$ } & \multicolumn{1}{|c|}{ ${\rm R}_{\rm Q}$ } &
\multicolumn{1}{|c|}{ ${\rm R}_{\rm V}$} & \multicolumn{1}{|c|}{
${\rm R}_{\rm W}$ } \\
\hline\hline
$I^3_\ell$ & 0.541& 0.085 & 0.139 & 0.280& 0.030 & 0.080 \\
$J^3_\ell$ & 0.225& 0.100 & 0.091 & 0.080& 0.060 & 0.060 \\
$K^3_\ell$ & 0.714& 0.072& 0.113& 0.690& 0.290 & 0.110 \\
\hline
\end{tabular}
\label{tab:corr}
\end{table}
\begin{figure*}
\centerline{\psfig{file=fig2.ps,width=15cm}}
\caption{ Distributions of values obtained for the three different
tests applied to the Q channel to detect the presence of foregrounds
($\chi^2$, $ \rho $ and ${\rm R}$), for Gaussian simulations (black)
and contaminated ($\alpha=1.0$) simulations (grey). The level of
sensitivity of a given method can be determined in terms of the shift
between the histograms for the Gaussian and the contaminated case. We
see that the $\rho^{K}$ and ${\rm R}^K$ statistics are the most
sensitive channels to probe the existence of foregrounds.}
\label{fig:histo}
\end{figure*}
\begin{table}
\caption{ Results for the WMAP data for $\rho$ and ${\rm R}$. The
results are shown as the fraction of Gaussian simulations below the
level observed in the data. We have highlighted values greater
than 98\% in the foreground-cleaned maps.}
\begin{tabular}[b]{|c|c|c|c|c|c|c}
\hline & \multicolumn{3}{|c|}{ RAW } & \multicolumn{3}{|c|}{
CLEANED}\\
\hline & \multicolumn{1}{|c|}{
${\rm Q}$ } & \multicolumn{1}{|c|}{ ${\rm V}$} &
\multicolumn{1}{|c|}{ ${\rm W}$ }& \multicolumn{1}{|c|}{ ${\rm Q}$ }
& \multicolumn{1}{|c|}{ ${\rm V}$} & \multicolumn{1}{|c|}{ ${\rm W}$
}\\
\hline\hline
${\rm R}^I$ & 0.726 & 0.178 & 0.328 & 0.475 & 0.421 & 0.775 \\
${\rm R}^J$ & 0.758 & 0.802 & 0.749 & 0.822 & 0.869 & $\fbox{0.983}$\\
${\rm R}^K$ & 0.983 & 0.450 & 0.486 & 0.362 & 0.364 & 0.188 \\
\hline
$\rho^{I}$ & 0.998 & 0.408 & 0.762 & 0.491& 0.550& 0.452\\
$\rho^{J}$ & 0.933 & 0.906 & 0.856 & $\fbox{0.998}$& $\fbox{0.985}$ & $\fbox{0.986}$\\
$\rho^{K}$ & 0.922 & 0.166 & 0.272 & 0.013 & 0.021 & 0.044 \\
\hline
\end{tabular}
\label{tab:data}
\end{table}
\section{Application to the WMAP data}\label{sec:application}
We have applied the statistical tools described above to the WMAP
first year data \citep{wmap}. We considered both the raw and
cleaned maps of the Q, V, and W channels
using the Kp2 exclusion mask. We summarise the results in
table~\ref{tab:data} showing the separate confidence limits from
each channel for both the raw and cleaned maps.
For the raw maps we find that only the Q channel ${\rm R}^K$
result is above the 95\% threshold while for the Q channel $\rho$
statistic, all confidence levels are above the 90\% level with the $\rho^I$
above the 95\% level. This is consistent with there being a component
most correlated to the foreground templates at the lowest frequencies
and with significant correlations between the $\Delta\ell= 2$ inter-$\ell$
bispectrum and the power spectrum. Since the raw maps do not have any
foreground subtracted from them this is not a surprise although the
confidence level suggests that the correlations are larger than what
was found for the expected amplitude ($\alpha=1$) of the foregrounds.
For all $I^3_\ell$ and $K^3_\ell$ statistics the cleaned map results
show confidence levels below the 95\% level and indeed show an overall
reduction in the significance of the correlations, indicating that the
cleaning has removed a component correlated to the foreground
templates, as one would expect. However for the $J^3_\ell$ statistics,
which should in principle be the least sensitive to the foregrounds
considered, we see that the confidence levels have all
increased. Indeed all three channels now have correlations significant
above the 95\% level in the $\rho$ statistic, with the W channel also
having a~$>95\%$ confidence level in the ${\rm R}$ statistic. The cleaning algorithm appears to
have introduced significant correlations with the foreground templates
in the $\Delta\ell=1$ inter-$\ell$ bispectra and significant correlations
between the $\Delta\ell=1$ inter-$\ell$ bispectrum and power spectrum of
the W channel which is indicative of a non-Gaussian component.
In figure~\ref{fig:bisp_data} we show the bispectra for each cleaned
channel map and compare to the bispectra of the foreground
template ($\alpha=1$) for each channel. This shows the nature of
the result above. For both the $I^3_\ell$ and $K^3_\ell$ the
cleaned map bispectra are anti-correlated with the foreground
templates. In addition the $K^3_\ell$ for all channels are
heavily suppressed in the cleaned maps for multipoles $\ell < 20$
compared to the expected Gaussian variance shown in
figure~\ref{fig:bisp}. The $J^3_\ell$ gives the only bispectra that
are correlated with those of the templates.
Figure~\ref{fig:data} shows the breakdown of the $R^J$ result into
individual multipole contributions for each of the three bands. In
particular it is interesting to note how the W band ${\rm R}^J$ result
shown in table~\ref{tab:data} is dominated by an outlier at $\ell=26$.
\begin{figure*}
\centerline{\psfig{file=fig3.ps,angle=270,width=15cm}}
\caption{ The bispectra for each cleaned channel map (solid line)
against the bispectra of the foreground template ($\alpha=1.0$)
(dashed lines). The three frequency channels are shown as Q (red), V
(black) and W (blue). Both the cleaned map $I^3_\ell$ and
$K^3_\ell$ bispectra are anti-correlated with the foreground
templates. In addition the $K^3_\ell$ for all channels are heavily
suppressed in the cleaned maps for multipoles $\ell < 20$ compared to
the expected Gaussian variance shown in figure~\ref{fig:bisp}. The
$J^3_\ell$ gives the only bispectra that are correlated with those
of the templates.} \label{fig:bisp_data}
\end{figure*}
\begin{figure*}
\centerline{\psfig{file=fig4.ps,angle=270,width=15cm}}
\caption{ ${\rm R}^J$ as a function of angular scale, $\ell$. The
values displayed correspond to the foreground cleaned Q (red squares),
V (black triangles) and W (blue circles) frequency channels. The
values are offset by $\ell=0.25$ and $\ell=0.5$ for the Q and W bands
respectively. The $3 \sigma$ detection in the W foreground-cleaned
channel is dominated mainly by the $\ell=26$ mode. The error bars are
computed from 3000 Gaussian simulations assuming the specific channel
noise and beam.}
\label{fig:data}
\end{figure*}
\section{Discussion}\label{sec:disc}
At first sight our results appear contradictory. We have studied the
effect of foreground contamination on the maps and concluded that
foregrounds mainly affect the $I^3_\ell$ and $K^3_\ell$ components
of the bispectrum due to their parity. By comparing the results for
the raw and the foreground-cleaned maps, we are able to verify that
the amplitude of $I^3_\ell$ and $K^3_\ell$ reduces as expected after
foreground subtraction.
On the other hand, as shown in table~\ref{tab:data}, the correlations
induced in the $J^3_\ell$ appear close to inconsistent with a
Gaussian hypothesis, with the correlation with the foreground templates
at a significance above the $3\sigma$ level for the Q-band, cleaned
map. It is also of interest to note that the cleaned maps do worse in
all bands for the $\rho$ measure.
This is not what we naively expected since the foregrounds considered
here have the wrong parity and their $J^3_\ell$ signal is heavily
suppressed. However the cleaning procedure used by the WMAP team {\sl
does} appear to increase the correlations $\rho$ of $J^3_\ell$
bispectrum with the foreground templates and its correlation $\rm R$ with the
power spectrum. Recall that we expect the normalized bispectra to be
independent of the power spectrum only in the Gaussian case.
The possibility of the foregrounds being more complex than accounted
for in this type of treatment is to be considered carefully as this
work has shown. The results shown here would suggest that the
procedure used to go from the {\sl raw} to {\sl cleaned} WMAP maps is
under- or over-subtracting a component with $\ell\pm 1$ parity in the
bispectrum. This is probably not an indication that the procedure is
faulty but rather that the templates used are not accurate enough to
subtract the foregrounds. One source of inaccuracy is the simple
scaling of the templates with respect to frequency. The {\sl cleaned}
maps are obtained assuming uniform spectral index and \cite{wmapfor}
acknowledge that this is a bad approximation particularly for the 408
MHz Haslam (synchrotron) template. This is seen when producing the
Internal Linear Combination (ILC) map which accounts for variation of
the spectral index of the various components. Unfortunately ILC maps
cannot be used in quantitative studies as their noise attributes are
complicated by the fitting procedure and one cannot simulate them
accurately.
Future WMAP ILC maps or equivalent ones obtained by
`blind' foreground subtraction \citep{tegclean,erik2} may be
better suited for this kind of analysis once their statistical
properties are well determined. It is expected that the impending
second release of WMAP data will allow more accurate foreground
analysis and the statistical tools outlined in this work will be useful in
determining the success of foreground subtraction.
It may be worthwhile to include information from the higher order
statistics when carrying out the foreground subtraction itself, for
example by extending the ILC method to minimise higher order map
quantities such as the skewness and kurtosis of the maps.
\section*{Acknowledgments}
We thank H. K. Eriksen for advice and for making the
simulations available to us. We are also grateful to Jo\~ao
Magueijo, Kate Land and A.J. Banday for useful conversations
throughout the preparation of this work. Some of the results in
this paper have been derived using the HEALPix package. J.
Medeiros acknowledges the financial support of Fundacao para a
Ciencia e Tecnologia (Portugal).
\section{Introduction}
Recent observations of Type Ia Supernovae (SNIa)\cite{sn}, the
Cosmic Microwave Background Radiation (CMB)\cite{map} and the Large
Scale Structure (LSS)\cite{sdss} all suggest that the Universe
mainly consists of dark energy (73\%), dark matter (23\%) and
baryon matter (4\%). Understanding the physics of the dark energy,
which has an EoS of $\omega<-1/3$ and drives the recent accelerating
expansion of the Universe, is an important task in modern
cosmology. Several scenarios have been put forward
as a possible explanation of it. A positive cosmological constant
is the simplest candidate, but it needs the extreme fine tuning to
account for the observed accelerating expansion of the Universe.
This fact has led to models where the dark energy component varies
with time, such as quintessence models\cite{quint}, which assume
the dark energy is made of a single (light) scalar field. Despite
some pleasing features, these models are not entirely
satisfactory, since in order to achieve $\Omega_{de}\sim\Omega_m$
(where $\Omega_{de}$ and $\Omega_m$ are the dark energy and matter
energy densities at present, respectively) some fine tuning is
also required. Many other possibilities have been considered for
the origin of this dark energy component such as a scalar field
with a non-standard kinetic term, the k-essence models\cite{k}; it
is also possible to construct models which have the EoS of
$\omega=p/\rho<-1$, the so-called phantom\cite{phantom}. Some
other models such as the generalized Chaplygin gas (GCG)
models\cite{GCG}, the vector field models\cite{vec} also have been
studied by a lot of authors. Although these models achieve some
success, some problems also exist. One essential step toward
understanding the nature of the dark energy is to determine the value
and evolution of its EoS. The observational data show that the cosmological
constant is a good candidate\cite{seljak}, which has the effective
equation $p=-\rho$, i.e. $\omega\equiv-1$. However, there is
evidence to show that the dark energy might evolve from
$\omega>-1$ in the past to $\omega<-1$ today, crossing the
critical state of $\omega=-1$ at intermediate
redshift\cite{trans}. If such a result holds up with the accumulation
of observational data, this would be a great challenge to the
current models of dark energy. It is obvious that the cosmological
constant as a candidate will be excluded, and dark energy must be
dynamical. But normal models such as the quintessence fields can
only give the state of $-1<\omega<0$. Although the k-essence
models and the phantom models can reach the state of $\omega<-1$,
the behavior of $\omega$ crossing $-1$ cannot be realized in them,
and they lead to theoretical problems in field theory. To account
for this crossing behavior of $\omega$, several authors have
proposed more complex models, such as the quintom
models\cite{quintom,quintom1}, which are made of a quintessence
field and a phantom field. A model with a higher derivative term
has been suggested in Ref.\cite{lmz}, which can also evolve from
$\omega>-1$ to $\omega<-1$, but it too leads to theoretical
difficulties in field theory.
We have proposed that the YM field\cite{Zhang,zhao} can be used to
describe the dark energy. There are two major reasons that prompt
us to study this system. First, for the normal scalar models the
connection of the field to particle physics models has not been
made clear so far. The second reason is that the weak energy
condition cannot be violated by such fields. The YM field we have
proposed has the desired interesting features: YM fields are the
indispensable cornerstone of any particle physics model with
interactions mediated by gauge bosons, so they can be incorporated
into a sensible unified theory of particle physics. Besides, the EoS of
matter for the effective YM condensate is different from that of
ordinary matter as well as the scalar fields, and the state of
$-1<\omega<0$ and $\omega<-1$ can also be naturally realized. But
is it possible to build a YM field model with an EoS crossing
$-1$? In this paper, we focus on this question. First we consider the
YM field with a general lagrangian, and find the state of
$\omega\sim-1$ is easily realized, as long as it satisfies some
constraint. From the kinetic equation of the YM field, we find
that $\omega+1\propto a^{-2}$ with the expansion of the Universe.
But no matter what kind of lagrangian and initial condition we
choose, this model cannot produce a behavior of $\omega$ crossing
$-1$. But such behavior can easily be obtained in models with two YM fields,
one with the initial condition of $\omega>-1$, which is like a
quintessence field, and the other with $\omega<-1$ like a phantom
field.
This paper is organized as follows. In section 2 we discuss the
general YM field model, and study the evolution of its EoS by
solving its kinetic equation. But we find that this kind of model
cannot reach the state of $\omega$ crossing $-1$. Then we study the
two YM fields model in section 3, and solve the evolution of
$\omega$ with scale factor for an example model. We find that
$\omega$ crossing $-1$ can be easily realized in this model, which
is very similar to the quintom models. Finally, we give a conclusion and
discussion in section 4.
\section{ Single YM Field Model}
In Ref.\cite{zhao}, we have discussed the EoS of the YM field
dark energy model, which has the effective
lagrangian\cite{pagels, adler}
\begin{equation}
\L_{eff}=\frac{F}{2g^2}.
\end{equation}
here $F=-(1/2)F^a_{\mu\nu}F^{a\mu\nu}$ plays the role of the order
parameter of the YM condensate, and $g$ is the running coupling
constant which, up to 1-loop order, is given by
\begin{equation}
\frac{1}{g^2}=b\ln\left|\frac{F}{e\kappa^2}\right|.
\end{equation}
Thus the effective lagrangian is
\begin{equation}
\L_{eff}=\frac{b}{2}F\ln|\frac{F}{e\kappa^2}|, \label{L}
\end{equation}
where $e\simeq2.72$. $b=11N/24\pi^2$ for the generic gauge group
$SU(N)$ is the Callan-Symanzik coefficient\cite{Pol}, $\kappa$ is
the renormalization scale with the dimension of squared mass, the
only model parameter. The attractive features of this effective YM
action model include the gauge invariance, the Lorentz invariance,
the correct trace anomaly, and the asymptotic
freedom\cite{pagels}. With the logarithmic dependence on the field
strength, $\L_{eff}$ has a form similar to the Coleman-Weinberg
scalar effective potential\cite{coleman}, and the Parker-Raval
effective gravity lagrangian\cite{parker}.
It is straightforward to extend this model to the expanding
Robertson-Walker (R-W) spacetime. For simplicity we will work in a
spatially flat R-W spacetime with a metric
\begin{equation}
ds^2=a^2(\tau)(d\tau^2-\gamma_{ij}dx^idx^j),\label{me}
\end{equation}
where we have set the speed of light $c\equiv1$,
$\gamma_{ij}=\delta^i_j$ denoting the background space is flat,
and $\tau=\int(a_0/a)dt$ is the conformal time. Consider the
dominant YM condensate minimally coupled to the general relativity
with the effective action,
\begin{equation}
S=\int \sqrt{-\tilde{g}}~[-\frac{R}{16\pi G}+\L_{eff}] ~d^{4}x,
\label{S}
\end{equation}
where $\tilde{g}$ is the determinant of the metric $g_{\mu\nu}$.
By variation of $S$ with respect to the metric $g^{\mu\nu}$, one
obtains the Einstein equation $G_{\mu\nu}=8\pi GT_{\mu\nu}$, where
the energy-momentum tensor is given by
\begin{equation}
T_{\mu\nu}=\sum_{a}~\frac{g_{\mu\nu}}{4g^2}F_{\sigma\delta}^a
F^{a\sigma\delta}+\epsilon F_{\mu\sigma}^aF^{a\sigma}_{\nu}.
\label{T}
\end{equation}
The dielectric constant is defined by
$\epsilon=2\partial\L_{eff}/\partial F$, and in this one-loop
order it is given by
\begin{equation}
\epsilon=b\ln|\frac{F}{\kappa^2}|.\label{epsilon}
\end{equation}
This energy-momentum tensor is the sum of several different
energy-momentum tensors of the vectors,
$T_{\mu\nu}=\sum_a~^{(a)}T_{\mu\nu}$, none of which is of
perfect-fluid form; this can make the YM field anisotropic.
This is one of the most important features of the vector field
dark energy models\cite{vec}. If such an anisotropic YM field were
dominant in the Universe, it would make the Universe anisotropic,
and one would expect an anisotropic expansion of the Universe, in
conflict with the significant isotropy of the CMB\cite{isotropy}.
On the other hand, there also appear to be hints of statistical
anisotropy in the CMB perturbations\cite{fluctuate}. Here, however,
we only consider the isotropic case. To keep the total
energy-momentum tensor $T_{\mu\nu}$ homogeneous and isotropic, we
assume the gauge fields are functions of time $t$ only, and
$A_{\mu}=\frac{i}{2}\sigma_aA_{\mu}^a(t)$ (here $\sigma_a$ are the
Pauli matrices) are given by $A_0=0$ and $A_i^a=\delta_i^aA(t)$.
Define the YM field tensor as usual:
\begin{equation}
F^{a}_{\mu\nu}=\partial_{\mu}A_{\nu}^a-\partial_{\nu}A_{\mu}^a+f^{abc}A_{\mu}^{b}A_{\nu}^{c},
\end{equation}
where $f^{abc}$ is the structure constant of gauge group and
$f^{abc}=\epsilon^{abc}$ for the $SU(2)$ case. This tensor can be
written in the form with the electric and magnetic field as
\begin{equation}
F^{a\mu}_{~~\nu}=\left(
\begin{array}{cccc}
0 & E_1 & E_2 & E_3\\
-E_1 & 0 & B_3 & -B_2\\
-E_2 & -B_3 & 0 & B_1\\
-E_3 & B_2 & -B_1 & 0
\end{array}
\right).
\end{equation}
It can be easily found that $E_1^2=E_2^2=E_3^2$, and
$B_1^2=B_2^2=B_3^2$. Thus $F$ has a simple form with $F=E^2-B^2$,
where $E^2=\sum_{i=1}^3E_i^2$ and $B^2=\sum_{i=1}^3B_i^2$. In this
case, each component of the energy-momentum tensor is
\begin{equation}
^{(a)}T_{\mu}^{0}=\frac{1}{6g^2}(B^2-E^2)\delta^{0}_{\mu}+\frac{\epsilon}{3}
E^2\delta^{0}_{\mu},
\end{equation}
\begin{equation}
^{(a)}T_{j}^{i}=\frac{1}{6g^2}(B^2-E^2)\delta^i_j+\frac{\epsilon}{3}E^2\delta^i_j\delta^a_j
-\frac{\epsilon}{3}B^2\delta^i_j(1-\delta^a_j).
\end{equation}
This tensor is not isotropic: its value along the $j=a$
direction is different from the one along the directions
perpendicular to it. Nevertheless, the total energy-momentum
tensor $T_{\mu\nu}=\sum_{a=1}^3~^{(a)}T_{\mu\nu}$ has isotropic
stresses, and the corresponding energy density and pressure are
given by (here we only consider the condition of
$B^2\equiv0$)\cite{zhao}
\begin{equation}
\rho=\frac{E^2}{2}(\epsilon+b),~~~~p=\frac{E^2}{2}(\frac{\epsilon}{3}-b),
\end{equation}
and its EoS is
\begin{equation}
\omega=\frac{\epsilon-3b}{3\epsilon+3b}.\label{13}
\end{equation}
It is easily found that at the critical point $\epsilon=0$,
which implies $\omega=-1$, the Universe undergoes an exact de Sitter
expansion. Near this point, $\epsilon<0$ gives $\omega<-1$,
and $\epsilon>0$ gives $\omega>-1$. So in these models, the states
of $-1<\omega<0$ and $\omega<-1$ can both be naturally realized.
To study the evolution of this EoS, we should solve the YM
field equations, which is equivalent to solving the Einstein
equation\cite{zhao}. By variation of $S$ with respect to
$A_{\mu}^a$, one obtains the effective YM equations
\begin{equation}
\partial_{\mu}(a^4\epsilon~
F^{a\mu\nu})+f^{abc}A_{\mu}^{b}(a^4\epsilon~F^{c\mu\nu})=0.
\label{F1}
\end{equation}
Since we have assumed the YM condensate is homogeneous and
isotropic, from the definition of $F^{a}_{\mu\nu}$, it is easily
found that the $\nu=0$ component of YM equations is an identity
and the $i=1,2,3$ spatial components are:
\begin{equation}
\partial_{\tau}(a^2\epsilon E)=0.
\end{equation}
If $\epsilon=0$, this equation is also an identity. When
$\epsilon\neq 0$, this equation implies\cite{zhao}
\begin{equation}
\beta~ e^{\beta/2}\propto a^{-2},\label{16}
\end{equation}
where we have defined $\beta\equiv\epsilon/b$, and used the
expression of $\epsilon$ in Eq.(\ref{epsilon}). In this equation,
the proportionality factor can be fixed by the initial condition. This
is the main equation, which determines the evolution of $\beta$,
and $\beta$ is directly related to the EoS of the YM field. Combining
Eqs.(\ref{13}) and (\ref{16}), one obtains the evolution of the
EoS in the YM field dark energy Universe. In Fig.[1], we plot
the evolution of $\omega$ in the YM field dark energy models
with the present values $\omega_0=-1.2$ and $\omega_0=-0.8$, and
find that the former is very similar to the evolution of a phantom
field, and the latter to that of a quintessence field. They both have
the same attractor solution $\omega=-1$. So in these models, the
Big Rip is naturally avoided. This is the most attractive feature
of the YM field models.
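In fact, since $\beta e^{\beta/2}=x$ is solved by $\beta=2W(x/2)$,
with $W$ the Lambert function, the evolution in Fig.[1] can be
written in closed form. A minimal Python sketch (our own, not the
code used to produce the figure) is:
\begin{verbatim}
import numpy as np
from scipy.special import lambertw

def omega_of_a(a, omega0):
    # invert Eq.(13) at a = 1 to fix the constant in Eq.(16)
    beta0 = 3.0 * (1.0 + omega0) / (1.0 - 3.0 * omega0)
    C = beta0 * np.exp(beta0 / 2.0)
    # principal branch; valid while C/(2 a^2) >= -1/e
    beta = 2.0 * np.real(lambertw(C / (2.0 * a ** 2)))
    return (beta - 3.0) / (3.0 * beta + 3.0)

a = np.logspace(-0.3, 1.0, 200)    # scale factor, a = 1 today
w_quint = omega_of_a(a, -0.8)      # quintessence-like branch
w_phant = omega_of_a(a, -1.2)      # phantom-like branch
# both branches approach the attractor w = -1 as a grows
\end{verbatim}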
In Eq.(\ref{16}), the undetermined factor can be fixed by the
present value of the EoS, $\omega_0$, which must be determined by
observations of SNIa, the CMB or LSS. In this paper, we will only show
that the observation of the CMB power spectrum is an effective way to
determine it. The dark energy can influence the CMB temperature
anisotropy power spectrum (especially at the large scale) by the
integrated Sachs-Wolfe (ISW) effect\cite{isw}. Consider the flat R-W
metric with the scalar perturbation in the conformal Newtonian
gauge,
\begin{equation}
ds^2=a^{2}(\tau)[(1+2\phi)d\tau^2-(1-2\psi)\gamma_{ij}dx^idx^j].\label{metric}
\end{equation}
The gauge-invariant metric perturbation $\psi$ is the Newtonian
potential and $\phi$ is the perturbation to the intrinsic spatial
curvature. The background matter components in the Universe are
usually perfect fluids without anisotropic stress, which implies that
$\phi=\psi$. So there is only one perturbation function $\phi$ in
the metric of (\ref{metric}), and its evolution is determined
by\cite{evolution}
\begin{equation}
\phi''+3\H(1+\frac{p'}{\rho'})\phi'-\frac{p'}{\rho'}\nabla^2\phi
+[(1+3\frac{p'}{\rho'})\H^2+2\H']\phi=4\pi
Ga^2(\delta p-\frac{p'}{\rho'}\delta\rho),\label{Phi}
\end{equation}
where $\H\equiv a'/a$, and the prime denotes $d/d\tau$. The
pressure $p=\sum_ip_i$, and energy density $\rho=\sum_i\rho_i$,
which should include the contributions of baryons, photons, neutrinos,
cold dark matter, and the dark energy. Especially at late time of
the Universe, the effect of the dark energy is very important. We
recall that the ISW effect stems from the time variation of the
metric perturbations,
\begin{equation}
C_l^{ISW}\propto\int\frac{dk}{k}[\int_0^{\chi_{LSS}}d\chi~(\phi'+\psi')j_l(k\chi)]^2,
\end{equation}
where $\chi_{LSS}$ is the conformal distance to the last
scattering surface and $j_l$ is the $l$-th spherical Bessel function.
The ISW effect occurs because photons can gain energy as they
travel through time-varying gravitational wells. The CMB power
spectrum is usually computed with numerical
methods\cite{cmbfast,camp}. In Fig.[2], we plot the CMB power
spectrum at large scales for these two kinds of YM dark energy
models, where we have chosen the cosmological parameters as: the
Hubble parameter $h=0.72$, the energy density of baryon
$\Omega_bh^2=0.024$, and dark matter $\Omega_{dm}h^2=0.14$, the
reionization optical depth $\tau=0.17$, the spectrum index and
amplitude of the primordial perturbation spectrum being $n_s=0.99$
without running and $A=0.9$. Here we have not considered the
perturbation of the dark energy. From this figure, one finds
that the CMB power spectra depend very sensitively on $\omega_0$.
Compared with the $\Lambda$CDM model (which is equivalent to
the YM model with $\omega_0=-1$), the model with
$\omega_0=-0.8>-1.0$, which is like a quintessence field model,
has smaller CMB spectra, and at scales of $l<10$ the difference is
very obvious; the model with $\omega_0=-1.2<-1.0$, which is like a
phantom field model, has larger CMB spectra. Since the evolution of
the EoS is determined solely by $\omega_0$, its value can be
determined by fitting the CMB observations. Of course, the recent
observations of the CMB power spectra at large scales from the WMAP
satellite have large errors. Further results will depend on the
forthcoming observations of the WMAP and Planck satellites.
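For reference, a rough modern route to a figure of this kind is
sketched below with the publicly available \texttt{camb} Python
package (not the code used here), using a constant-$w$ fluid as a
crude stand-in for the evolving YM EoS and the amplitude \texttt{As}
as a rough stand-in for the quoted $A=0.9$:
\begin{verbatim}
import camb

pars = camb.CAMBparams()
pars.set_cosmology(H0=72, ombh2=0.024, omch2=0.14, tau=0.17)
pars.InitPower.set_params(ns=0.99, As=2.1e-9)  # As: rough stand-in
pars.set_dark_energy(w=-0.8, wa=0, dark_energy_model='ppf')
pars.set_for_lmax(100)

results = camb.get_results(pars)
cls = results.get_cmb_power_spectra(pars, CMB_unit='muK')['total']
# cls[l, 0] is the TT spectrum; repeat with w = -1.0 and w = -1.2
# and compare the low-l values to see the ISW-driven differences
\end{verbatim}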
Now let's return to the evolution of $\omega$. From Fig.[1], one
finds that $\omega$ crossing $-1$ cannot be realized in these
models with a single YM field, no matter what value of $\omega_0$
we choose. To study this more clearly, assume the YM field
has an initial state with $|\omega+1|\ll1$, which implies
$\beta\ll1$; then Eq.(\ref{16}) becomes
\begin{equation}
\beta\propto a^{-2}.
\end{equation}
The value of $\beta$ will go to zero with the expansion of the
Universe. This means that $E$ will go to a critical state of
$E^2=\kappa^2$. And the EoS is
\begin{equation}
\omega+1\simeq\frac{4\beta}{3}\propto a^{-2}.\label{o+1}
\end{equation}
This result has two important features: i) with the expansion of
the Universe, $\omega$ approaches the critical point
$\omega=-1$. This is the most important feature of this dark
energy model, which is very similar to the behavior of the vacuum
energy with $\omega\equiv-1$; ii) the states $\omega>-1$ and
$\omega<-1$ can both be realized, but $\omega$ cannot cross $-1$
from one region to the other. This feature is shared with scalar
field models such as the quintessence, k-essence and phantom
models.
It is interesting to ask whether these features hold only for
the YM model with the lagrangian (\ref{L}), and whether one can
build a model whose EoS can cross $-1$. So let us
consider the YM field model with a general effective lagrangian
as:
\begin{equation}
\L_{eff}=G(F)F/2,
\end{equation}
where $G(F)$ is the running coupling constant, which is a general
function of $F$. If we choose $G(F)=b\ln|\frac{F}{e\kappa^2}|$,
this effective lagrangian returns to the form in Eq.(\ref{L}). The
dielectric constant also can be defined by
$\epsilon=2\partial\L_{eff}/\partial F$, which is
\begin{equation}
\epsilon=G+FG_{F}.
\end{equation}
Here $G_F$ represents $dG/dF$. We again discuss the homogeneous and
isotropic YM field with a purely electric field $(B=0)$; then the energy
density and the pressure of the YM field are:
\begin{equation}
\rho=E^2(\epsilon-\frac{G}{2}),
\end{equation}
\begin{equation}
p=-E^2(\frac{\epsilon}{3}-\frac{G}{2}).
\end{equation}
The requirement $\rho>0$ imposes the constraint $G>-2FG_F$. The
EoS of this YM field is
\begin{equation}
\omega=-\frac{3-2\gamma}{3-6\gamma},\label{omega}
\end{equation}
where we have defined $\gamma\equiv\epsilon/G$. When the
condition $\gamma=0$ can be reached at some state with $E^2\neq0$
and $G(F)\neq0$, the state $\omega=-1$ is naturally realized.
This condition can be easily satisfied. In the discussion below
we only consider this kind of YM field. For example, in the
model with the lagrangian (\ref{L}), $\gamma=0$ is reached at the
state $E^2=\kappa^2$. Near this state, $\gamma>0$ leads to
$\omega<-1$, and $\gamma<0$ leads to $\omega>-1$. But if the YM
field has a trivial lagrangian with $G={\rm constant}$, then
$\gamma\equiv1$ and $\omega\equiv1/3$. This is exactly the
EoS of relativistic matter, and it cannot generate the state
of $\omega<0$.
To study the evolution of EoS, we also consider the YM equation,
which can be obtained by variation of $S$ with respect to
$A_{\mu}^{a}$,
\begin{equation}
\partial_{\mu}(a^4\epsilon~
F^{a\mu\nu})+f^{abc}A_{\mu}^{b}(a^4\epsilon~F^{c\mu\nu})=0,
\label{F1}
\end{equation}
from the definition of $F^{a}_{\mu\nu}$, it is found that these
equations become a simple relation:
\begin{equation}
\partial_{\tau}(a^2\epsilon E)=0,
\end{equation}
where $E$ is defined by $E^2=\Sigma_{i=1}^3E_i^2$. If
$\epsilon=0$, this equation is an identity, and from
(\ref{omega}), we know $\omega=-1$, which cannot be distinguished
from a cosmological constant. When $\epsilon\neq
0$, this equation can be integrated to give
\begin{equation}
a^2\epsilon E=constant.\label{k1}
\end{equation}
Since we want to study whether or not the EoS of this YM field can
cross $\omega=-1$, we assume its initial state is
$\omega\sim-1$. In this case, from the expressions for $p$ and
$\rho$, it follows that $\epsilon\sim0$, and $E$ and $G(F)$ remain
nearly constant, since the Universe is nearly de Sitter and
$\rho\sim-G(F)E^2/2$ is nearly constant in such a Universe. So the
YM equation implies that
\begin{equation}
\epsilon\propto a^{-2}.
\end{equation}
From the EoS of (\ref{omega}), one knows that
\begin{equation}
\omega+1\propto a^{-2}.\label{o1+1}
\end{equation}
This is the EoS evolution equation of the general YM field dark
energy models. It is exactly the same as the special case of
Eq.(\ref{o+1}). So it retains the features of the special
case with the lagrangian (\ref{L}): $\omega$ will run to the
critical point $\omega=-1$ with the expansion of the Universe, but
it cannot cross this critical point. These are the general
features of this kind of YM field dark energy model. To show this
more clearly, we discuss two example models.
First we consider the YM field with the running coupling constant
\begin{equation}
G(F)=B(F^n-F_c^n),
\end{equation}
where $B$ and $F_c$ are positive constants, and $n$ is a
positive number. The constraint $\rho>0$ requires that
\begin{equation}
F>\frac{F_c}{\sqrt[n]{1+2n}}.
\end{equation}
The dielectric constant is easily obtained,
\begin{equation}
\epsilon=G+FG_F=B(n+1)F^n-BF_c^n,
\end{equation}
and
\begin{equation}
\gamma=(n+1)+\frac{nF_c^n}{F^n-F_c^n}.
\end{equation}
Obviously, $\gamma=0$ is satisfied when $F=F_c/\sqrt[n]{n+1}$, which
leads to $\omega=-1$. Near this critical state,
$E\sim\sqrt[2n]{F_c^n/(n+1)}$, so the YM equation (\ref{k1}) becomes
\begin{equation}
\frac{Bn}{n+1}\sqrt[2n]{\frac{F_c^{3n}}{n+1}} \gamma\propto
a^{-2},
\end{equation}
which implies $\gamma\propto a^{-2}$. From the expression for
$\omega$ in Eq.(\ref{omega}), one easily obtains
\begin{equation}
\omega+1\simeq-\frac{4\gamma}{3}\propto a^{-2}.
\end{equation}
This is exactly the evolution behavior shown in formula
(\ref{o1+1}).
As another example, we consider a YM field with the coupling
\begin{equation}
G(F)=1-\exp(1-\frac{F}{F_c}),
\end{equation}
where the constant $F_c\neq0$. When $F\gg F_c$, this Lagrangian
reduces to the trivial case with $G(F)=1$, but when $F$ is near $F_c$
the nonlinear effect is significant. Then
\[
\epsilon=1+(\frac{F}{F_c}-1)\exp(1-\frac{F}{F_c}).
\]
The critical state $F\simeq0.433F_c$ then leads to $\gamma=0$ and
$\omega=-1$. By a similar argument, from the YM equation (\ref{k1})
one also finds $\gamma\propto a^{-2}$ near this critical state, which
again generates $\omega+1\propto a^{-2}$.
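The critical value quoted above is easily verified numerically; a
minimal Python check of the expression for $\epsilon$, with
$x\equiv F/F_c$, is:
\begin{verbatim}
# Solve eps(x) = 1 + (x-1)*exp(1-x) = 0 for x = F/F_c.
import numpy as np
from scipy.optimize import brentq

eps = lambda x: 1 + (x - 1) * np.exp(1 - x)
print(brentq(eps, 0.0, 1.0))   # ~ 0.4329, i.e. F ~ 0.433 F_c
\end{verbatim}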
\section{ Two YM Fields Model}
In the previous section we showed that dark energy models with a
single YM field cannot produce a state in which $\omega$ crosses $-1$,
no matter what Lagrangian or initial condition is chosen. We should,
however, note another feature: the YM field has the EoS
$\omega+1\propto a^{-2}$ when its initial value is near the critical
state $\omega=-1$. Thus if the YM field has an initial state with
$\omega>-1$, it will maintain this state as the Universe evolves, like
a quintessence model; if its initial state has $\omega<-1$, it will
likewise maintain it, like a phantom model. This allows us to build a
model with two different free YM fields, one having an initial state
with $\omega>-1$ and the other with $\omega<-1$. In this kind of
model, the behavior of $\omega$ crossing $-1$ is easily obtained. The
idea is similar to the quintom models\cite{quintom}, where the authors
built a model with a quintessence field and a phantom field.
In the discussion below, we build a toy example of this kind of
model. Assume the dark energy consists of two YM fields with effective
Lagrangians of the form of Eq.(\ref{L}),
\begin{equation}
\L_{i}=\frac{b}{2}F_i\ln|\frac{F_i}{e\kappa_i^2}|,~~~(i=1,2)
\end{equation}
where $F_i=E_i^2 (i=1,2)$, and $\kappa_1\neq\kappa_2$. Their
dielectric constants are
\begin{equation}
\epsilon_i\equiv\frac{2\partial\L_{i}}{\partial F_i}=b\ln|\frac{F_i}{\kappa_i^2}|.
\end{equation}
From the YM field kinetic equations, we can also obtain the
relations:
\begin{equation}
a^2\epsilon_iE_i=C_i,\label{kin}
\end{equation}
where $C_i$ $(i=1,2)$ are integration constants determined by the
initial states of the YM fields. If a YM field is phantom-like with
$\omega_i<-1$, then $\epsilon_i<0$ and $C_i<0$; a quintessence-like
YM field instead has $C_i>0$. Here we choose the field $\L_1$ to be
the phantom-like field with $C_1<0$, and $\L_2$ to be the
quintessence-like field with $C_2>0$. The energy densities and
pressures are
\begin{equation}
\rho_i=\frac{E_i^2}{2}(\epsilon_i+b),~~p_i=\frac{E_i^2}{2}(\frac{\epsilon_i}{3}-b),
\end{equation}
so the total EoS is:
\begin{equation}
\omega\equiv\frac{p_1+p_2}{\rho_1+\rho_2}=\frac{E_1^2(\frac{\beta_1}{3}-1)+E_2^2(\frac{\beta_2}{3}-1)}
{E_1^2(\beta_1+1)+E_2^2(\beta_2+1)},
\end{equation}
where we have also defined $\beta_i\equiv\epsilon_i/b$. Using the
relation between $\beta_i$ and $E_i$, we can simplify the equation of
state to
\begin{equation}
\omega+1=\frac{4}{3}\frac{e^{\beta_1}\beta_1\alpha+e^{\beta_2}\beta_2}
{e^{\beta_1}(\beta_1+1)\alpha+e^{\beta_2}(\beta_2+1)},\label{ome}
\end{equation}
where $\alpha\equiv\kappa_1^2/\kappa_2^2$. We require this dark
energy to have an initial state with $\omega>-1$, which means the
field with density $\rho_2$ must dominate at the initial time. This is
easily arranged as long as $E_1^2(\beta_1+1)<E_2^2(\beta_2+1)$ is
satisfied at that time. In the final state we require $\omega<-1$,
which means that $\rho_1$ must dominate, with the crossing of $-1$
realized at some intermediate time. How is this achieved? From the
previous discussion we know that, in a Universe with only one kind of
YM field ($i=1$ or $2$), the YM equation implies $\epsilon_i\propto
a^{-2}$, so each field approaches the critical state $\epsilon_i=0$
with the expansion of the Universe. At this state $E_i^2=\kappa_i^2$,
and $\rho_i=bE_i^2/2=b\kappa_i^2/2$ remains constant. Thus in this
two-field model, choosing $\kappa_1^2>\kappa_2^2~(\alpha>1)$ leads to
final energy densities with $\rho_1>\rho_2$, so that $\rho_1$
dominates.
To this end, we build the model with the following conditions. We
choose $\alpha=1.5$, which ensures that in the final state the first
YM field $(i=1)$ is the dominant component. At the present time,
corresponding to the scale factor $a_0=1$, we choose
$\beta_1=-0.4<0$ and $\beta_2=0.2>0$, so that the first field always
has $\omega_1<-1$ (like a phantom) and the second field has
$\omega_2>-1$ (like quintessence). This choice of $\beta_i$ leads to
the present EoS
\begin{equation}
\omega=-1+\frac{4}{3}\frac{e^{\beta_1}\beta_1\alpha+e^{\beta_2}\beta_2}
{e^{\beta_1}(\beta_1+1)\alpha+e^{\beta_2}(\beta_2+1)}=-1.10<-1,
\end{equation}
which is phantom-like. Since $\rho_1$ increases and $\rho_2$
decreases with the expansion of the Universe, there must exist a time
before which $\rho_2$ dominates, giving a total EoS $\omega>-1$ at
those earlier times.
Combining Eqs.(\ref{kin}) and (\ref{ome}), we can solve numerically
for the evolution of the EoS $\omega$ with the scale factor, where the
ratio of $C_1$ to $C_2$ is easily obtained:
\[
\frac{C_1}{C_2}=\frac{\beta_1e^{\beta_1/2}}{\beta_2e^{\beta_2/2}}=-1.48.
\]
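As an illustration, the following minimal Python sketch (not part of
the original calculation; it assumes the normalization in which
Eq.(\ref{kin}) takes the form $\beta_i e^{\beta_i/2}\propto a^{-2}$,
consistent with the ratio $C_1/C_2$ above) solves for $\beta_i(a)$ and
inserts the result into Eq.(\ref{ome}):
\begin{verbatim}
# Two-field EoS evolution: beta_i*exp(beta_i/2) scales as 1/a^2.
# Since |x*exp(x/2)| <= 2/e for x > -2, the phantom branch (beta_1 < 0)
# only admits a solution for a not too small.
import numpy as np
from scipy.optimize import brentq

alpha, beta0 = 1.5, (-0.4, 0.2)      # alpha and present-day beta_1, beta_2

def beta_of_a(b0, a):
    target = b0 * np.exp(b0 / 2) / a**2
    return brentq(lambda x: x * np.exp(x / 2) - target, -1.99, 10.0)

def omega(a):
    b1, b2 = (beta_of_a(b, a) for b in beta0)
    num = np.exp(b1) * b1 * alpha + np.exp(b2) * b2
    den = np.exp(b1) * (b1 + 1) * alpha + np.exp(b2) * (b2 + 1)
    return -1 + 4.0 / 3.0 * num / den

for a in (0.7, 0.8, 1.0, 2.0, 5.0):
    print(a, omega(a))  # crosses -1 between a=0.7 and 0.8; omega(1) ~ -1.10
\end{verbatim}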
For each kind of YM field, its EoS is
\begin{equation}
\omega_i=\frac{\beta_i-3}{3\beta_i+3}~~~(i=1,2).
\end{equation}
The condition $\beta_i>0~(\beta_i<0)$ generates
$\omega_i>-1~(\omega_i<-1)$. Their evolution is shown in Fig.[3]. This
is exactly the result we expect: $\beta_i$ runs to the critical point
$\beta_i=0$ with the expansion of the Universe, which drives
$\omega_i$ to $\omega_i=-1$ regardless of the initial values, just as
in the single YM field model.
As $\beta_i\rightarrow 0$, the field strength $E_i^2$ also approaches
its critical value $E_i^2=\kappa_i^2$, as shown in Fig.[4]. Since we
have chosen $\alpha\equiv\kappa_1^2/\kappa_2^2>1$, at some time
$E_1^2>E_2^2$ must hold; the first YM field then becomes dominant and
the total EoS $\omega<-1$ is realized, as shown in Fig.[1]. In
Fig.[2], we also plot the CMB power spectrum in a Universe with this
kind of YM field dark energy, and find that it is difficult to
distinguish from the $\Lambda$CDM Universe. This is because the effect
of the dark energy on the CMB power spectrum is an effect integrated
from the decoupling epoch to the present, so the detailed evolution of
$\omega$ is not apparent. This is the disadvantage of probing dark
energy in this way.
Let us now summarize this dark energy model, which consists of two YM
fields: one with EoS $\omega_1<-1$ and the other with $\omega_2>-1$.
At the initial time, the conditions are chosen so that $\rho_1<\rho_2$
and the second YM field dominates, making the total EoS $\omega>-1$ at
that time, like a quintessence model. Since $\omega_1<-1$ holds at all
times, the Friedmann equations imply that the energy density of the
first field grows with the expansion of the Universe, eventually
running to its critical value $\rho_1=b\kappa_1^2/2$. At the same
time, $\rho_2$ decreases to its critical value $\rho_2=b\kappa_2^2/2$.
Since we have chosen $\kappa_1^2/\kappa_2^2>1$, there must be a time
at which $\rho_1=\rho_2$; after this, $\rho_1$ dominates and the total
EoS satisfies $\omega<-1$. Thus the crossing of $\omega=-1$ by the
equation of state is realized. It is easily seen that this kind of
crossing must proceed from $\omega>-1$ to $\omega<-1$, which is
exactly what observations suggest; the opposite transition, from
$\omega<-1$ to $\omega>-1$, cannot be realized in this kind of model.
\section{Conclusion and Discussion}
In summary, in this letter we have studied the possibility of
$\omega$ crossing $-1$ in YM field dark energy models, and found that
single YM field models cannot realize such a crossing, regardless of
the form of the effective Lagrangian, although these models can
naturally give a state with $\omega>-1$ or $\omega<-1$, depending on
the initial state. Near the critical state $\omega=-1$, the evolution
of the EoS with the expansion of the Universe is always the same,
$\omega+1\propto a^{-2}$, which means the Universe approaches a nearly
de Sitter expansion. This is the most attractive feature of this kind
of model, and it makes the model very similar to a cosmological
constant; in particular, the Big Rip is naturally avoided. However,
this evolution also shows that single-field models cannot realize
$\omega$ crossing $-1$, just as for single scalar field models.
In these models, however, both $\omega>-1$ and $\omega<-1$ can easily
be obtained; the former behavior resembles a quintessence field, and
the latter a phantom field. One can therefore build a model with two
YM fields, one with $\omega<-1$ and the other with $\omega>-1$, an
idea very similar to the quintom models. We give an example of such a
model and find that the property of crossing the cosmological constant
boundary is naturally realized, and that this crossing must proceed
from $\omega>-1$ to $\omega<-1$, which is exactly the observational
result. In this model, the state also approaches the critical state
$\omega=-1$ with the expansion of the Universe, as in the single YM
field models. This is the main characteristic of YM field dark energy
models, and it ensures that the Big Rip is avoided. The models
discussed in this paper stay within the nearly standard framework of
physics, e.g., general relativity in four dimensions. There is no
phantom or higher-derivative term in the model, terms which would lead
to theoretical problems in field theory. Instead, the YM field of
(\ref{L}) is introduced, which possesses gauge invariance, Lorentz
invariance, the correct trace anomaly, and asymptotic freedom. These
are the advantages of this kind of dark energy model. However, these
models also have some disadvantages. First, what is the origin of the
YM field, and why is its renormalization scale $\kappa^2$ as low as
the present density of the dark energy? Moreover, in the two-field
model we must choose $\alpha>1$ to realize the crossing of
$\omega=-1$, which is a mild fine-tuning problem. All of these make
this kind of model somewhat unnatural; such problems are, however,
common to most dark energy models. A possible interaction between the
YM field and other matter, especially dark matter, may introduce new
features \cite{zhang2}. This topic has been discussed in depth for
scalar field dark energy models \cite{inter}, but is not considered in
this paper.
\section*{Acknowledgements}
Y. Zhang's research work has been
supported by the Chinese NSF (10173008) and by NKBRSF (G19990754).
\baselineskip=12truept
\section{Introduction}
The coupling of photons and baryons by Thompson scattering in the early
universe results in gravity driven acoustic oscillations of the photon-baryon
fluid.
The features that appear in both the Cosmic Microwave Background (CMB)
anisotropies and matter power spectra are snapshots of the phases of these
oscillations at the time of decoupling, and provide important clues
used to constrain a host of cosmological parameters. Features in
the matter power spectrum, referred to as baryon (acoustic)
oscillations, have the potential to strongly constrain the expansion
history of the universe and the nature of the dark energy.
These features in the matter power spectrum induce correlations in the
large-scale clustering of the IGM, clusters or galaxies.
Indeed a large, high redshift, spectroscopic galaxy survey encompassing a
million objects has been proposed \cite{WhitePaper} as a dark energy probe,
building on the successful detection (at low redshift) of the features in
the clustering of the luminous red galaxy sample of the Sloan Digital Sky
Survey \cite{SDSS}.
Key to this program is the means to localize the primordial features in
the galaxy power spectra, which necessarily involves theoretical understanding
of such complex issues as galaxy bias, non-linear structure evolution, and
redshift space distortions. These effects can shift the observed
scales of peaks and troughs in observations of the baryon oscillations,
affecting the transverse and radial measurements differently. Marginalizing
over this uncertain scale dependence can be a large piece of the error budget
of proposed surveys.
Inroads into modeling galaxy bias, non-linear evolution, and redshift
space distortions have been made by several groups using N-body
simulations \cite{Sims}, but these simulations are complex and often
mask the underlying physical mechanisms. A route to understanding
the sophisticated simulations is provided by an analytic tool known as
the halo model \cite{Halo}.
The halo model makes the assumption that all matter in the universe lives
in virialized halos, and that the power spectrum of this matter can be
divided up into two distinct contributions; the 1-halo term arising from
the correlation of matter with other matter existing in the same halo,
and the 2-halo term quantifying the correlation of matter with matter
that lives in a different halo.
The halo model is extended to calculate galaxy power spectra through the
introduction of a Halo Occupation Distribution (HOD), that describes the
number of galaxies that exist in a halo of a given mass, and their
distribution inside that halo.
The HOD will naturally depend on the details of galaxy formation, but is
not in itself a theory of galaxy formation. Rather it is a description of
the effects of galaxy formation on the number and distribution of the
galaxies. The impact of galaxy formation on cosmological observables such
as the baryon oscillation can be studied in the halo model by investigating
the sensitivity of the observable to changes made to the HOD.
In this paper, we employ a simple version of the analytic halo model
to study the origins of scale dependence in galaxy bias.
We investigate the impact of changing the HOD on the observed spectra,
and show that the scale dependence of the bias arises in a natural way from
extending the dark matter description to a description of galaxies.
Specifically, in generalizing the description to an ensemble of rare
tracers of the dark matter, the 1-halo and 2-halo terms in the power
spectrum are shifted by different amounts.
The scale dependence in the galaxy bias arises from this difference.
We find that for small $k$, the galaxy power spectrum is sensitive to
the number of galaxies that occupy a halo, but not to their positions
within the halo. We quantify the impact of redshift space distortions,
which find a natural description in the halo model, and discuss the
implications for large galaxy redshift surveys.
\section{The Halo Model}
We will try to understand the scale-dependence of the galaxy bias by
examining a simple model. Our goal is not to make precise predictions,
but rather to use simple analytic approximations to help interpret the
results of N-body simulations, which are much more appropriate for studying
the fine details.
Our investigation makes use of the halo model \cite{Halo}, which assumes that
all of the mass and galaxies in the universe live in virialized halos,
whose clustering and number density are characterized by their mass.
This model can be used to approximate the two point correlation function of
the mass, and of various biased tracers of the mass, such as luminous galaxies.
In this framework, there are two contributions to the power spectrum:
one arises from the correlation of objects that reside in the same halo
(the 1-halo term), and the other comes from the correlation of objects that
live in separate halos (the 2-halo term). For the dark matter, for example,
the dimensionless power spectrum can be written
\begin{align}
\Delta^2_{\rm dm} \equiv \frac{k^3 \, P_{\rm dm}(k)}{2\pi^2}
\, =\, {}_{\rm 1h}\Delta^2_{\rm dm} + \,_{\rm 2h}\Delta^2_{\rm dm}
\end{align}
where \cite{Halo}:
\begin{align}
\label{dm1h}
{}_{\rm 1h}\Delta^2_{\rm dm}&=\frac{k^3}{2 \pi^2}\, \frac{1}{\bar{\rho}^2}\,
\int_{0}^\infty dM \, n_h(M) M^2\, |y(M,k)|^2
\\
\label{dm2h}
{}_{\rm 2h}\Delta^2_{\rm dm}&=\Delta^2_{\rm lin}\left[\frac{1}{\bar{\rho}}\,
\int_{0}^\infty dM \, \,n_h(M) \, b_h(M,k) \, M \, y(M,k) \right]^2
\end{align}
with $M$ the virial mass of the halo, $\bar{\rho}$ the mean background
density, $n_h(M)$ the number density of halos of a given virial mass,
and $b_h(M,k)$ the halo bias \cite{PeaksBias}.
The function $y(M,k)$ is the Fourier transform of the halo profile which
describes how the dark matter is spatially distributed within the halo.
Expressed this way, Eqs.~(\ref{dm1h}) and (\ref{dm2h}) lend themselves to a
fairly intuitive interpretation.
In the two halo term, the dark matter being correlated lives in widely
separated halos, of different masses.
For each of the two halos, the mass is multiplied by the function $y$ that
governs its spatial distribution within a halo, weighted by the number
density of halos $n_h$ with bias $b_h$, and integrated over all possible
halo masses. The one halo term is even simpler -- correlating two bits of
dark matter residing in the same halo. Thus there are two factors of
$M\times y$ weighted with the number density of halos $n_h$, and integrated
over all halo masses.
We can generalize this framework to compute the 1- and 2-halo terms for
a galaxy population that traces the density field \cite{Halo}.
We divide the galaxies into two sub-populations, centrals and satellites.
The centrals will reside at the center of the host halo, while the
satellites will trace the dark matter.
The Halo Occupation Distribution (HOD) sets the number of tracers in a
halo of mass M. We assume there is either a central galaxy or not, and
the number of satellites is Poisson distributed \cite{KBWKGAP}.
For our model we will use
\begin{align}
\left\langle N_c \right\rangle &= \Theta(M-M_{\rm min}) \\
\left\langle N_s \right\rangle &= \Theta(M-M_{\rm min})
\left(\frac{M}{M_{\rm sat}}\right)^a
\end{align}
where $\Theta$ is the Heaviside function, and $M_{\rm min} < M_{\rm sat}$.
Note that the central galaxies do not trace the halo profile, and are not
weighted by $y$.
The generalization of the 1-halo and 2-halo terms is given by \cite{Halo}
\begin{align}
_{\rm 2h}\Delta^2_{\rm g}&=\Delta^2_{\rm lin}
\left[ \frac{1}{\bar{n}_g} \int_{M_{\rm min}}^\infty dM \,
n_h(M) \, b_h(M) \,
\left(1+\left( \frac{M}{M_{\rm sat}}\right)^a y(M,k) \right) \right]^2 \\
_{\rm 1h}\Delta^2_g&=\frac{k^3}{2 \pi^2}\frac{1}{\bar{n}_g^2}
\int_{M_{\rm min}}^\infty dM\, n_h(M) \,
\left( 2 \left( \frac{M}{M_{\rm sat}}\right)^a y(M,k)
+ \left( \frac{M}{M_{\rm sat}}\right)^{2 a} |y(M,k)|^2 \right)
\end{align}
where
\begin{align}
\bar{n}_{\rm gal}=\int_{M_{\rm min}}^\infty dM \,n_h(M) \, \left( 1+
\left(\frac{M}{M_{\rm sat}}\right)^a \right)
\end{align}
is the number density of galaxies.
The interpretation of these expressions is similar to that of the dark matter.
For the purposes of this toy model, we shall adopt a power law for the
linear power spectrum,
\begin{align}
\Delta^2_{\rm lin}=\left(\frac{k}{k_\star}\right)^{3+n}=\kappa^{3+n}
\end{align}
and define a dimensionless wavenumber $\kappa\equiv k/k_\star$.
In order to simplify many of the expressions in the calculation, we find
it useful to change variables from $M$ to a dimensionless quantity, $\nu$,
related to the peak height of the overdensity. For the power law model
considered here $\nu$ is a simple function of the mass:
\begin{align}\label{nudef}
\nu(M)=\left(\frac{\delta_c}{\sigma(M)}\right)^2 =
\left(\frac{M}{M_\star}\right)^{(n+3)/3} = m^{(n+3)/3} \quad .
\end{align}
Here, $\delta_c=1.686$ and $\sigma(M)$ is the linear theory variance
in top hat spheres of radius $R=(3M/4\pi\bar{\rho})^{1/3}$.
We have introduced a dimensionless mass $m$ in terms of the scale
mass $M_\star$, the mass for which $\sigma(M_\star)=\delta_c$ and $\nu=1$.
Note $M_\star$ is a function of the power spectrum normalization $k_\star$
and the index $n$, and can be computed using the relation
\begin{align} \label{eqn:sigdef}
\sigma^2(R)=\int_0^\infty \frac{dk}{k}\ \Delta^2_{\rm lin} \left[
\frac{3j_1(kR)}{kR}\right]^2
\end{align}
where $j_1$ is the spherical Bessel function of order 1.
The mass function $n_h(M)$ takes a simple form when expressed in terms of
the multiplicity function, $f(\nu)$. The multiplicity function is a
normalized number density of halos at a given mass:
\begin{align}
f(\nu) d\nu = \frac{M}{\bar{\rho}}\,n_h(M)\,dM
\qquad {\rm with} \qquad
\int f(\nu) \, d\nu=1 \quad .
\end{align}
For the Press-Schechter (P-S) mass function \cite{PreSch}
\begin{align}
f(\nu) = \frac{e^{-\nu/2}}{\sqrt{2\pi\nu}}
\end{align}
and the halos form biased tracers of the density field.
On large scales, for small fluctuations, the bias is \cite{PeaksBias}
\begin{align}
b_h(\nu) &= 1+\frac{\nu-1}{\delta_c}
\label{kfreehbias}
\end{align}
which satisfies $\int d\nu f(\nu)b(\nu)=1$.
In detail of course the halos do not provide a linearly biased tracer of
the linear density field. On smaller scales both higher order terms and
halo exclusion effects give rise to a scale-dependence.
Both analytic calculations \cite{PeaksBias} and simulations \cite{Sims}
suggest that this is a few percent correction on the scales of interest
to us and we shall henceforth neglect it.
When looking at large scales, such as those relevant to the baryon wiggles
in the linear power spectrum, the function $y(M,k)$ can be accurately
approximated by a Taylor expansion into powers of $kr_v$, where $r_v$ is the
virial radius, which depends upon the mass.
Assuming an NFW form \cite{NFW} for $y(M,k)$ and expressing the mass
dependence explicitly, the expression is
\begin{align}
y(M,k)&= 1+c_2\,(kr_v)^2+c_4\,(kr_v)^4 + \cdots \nonumber \\
&=1+c_2 (k_\star r_\star)^2 \kappa^2 m^{2/3}
+c_4 (k_\star r_\star)^4 \kappa^4 m^{4/3} + \cdots
\end{align}
Here we have introduced another quantity, the virial radius of an $M_\star$
halo, $r_\star\equiv r_v(M_\star)= (3 M_\star/4\pi\Delta\bar{\rho})^{1/3}$,
where $\Delta$ is the virialization overdensity, which we will take
to be $\Delta=200$.
The expansion coefficients $c_2$ and $c_4$ are functions of the halo
concentration, and for the NFW model are ratios of gamma functions.
For cluster sized halos we expect $c\simeq 5$ which leads to $c_2\simeq-0.049$
and $c_4 \simeq 0.0014$, while for galaxies $c\simeq 10$ making
$c_2\simeq -0.04$ and $c_4\simeq 0.0011$.
The quantity $k_\star r_\star$ can be computed using the relation in
Eq.~(\ref{eqn:sigdef}), and turns out to be a function only of the index $n$.
We have tabulated the expressions and values of $k_\star r_\star$ in
Table \ref{tab:nquant}.
On large scales, where $k<k_\star$, we see that the coefficients of the last
two terms in the expression for $y(M,k)$ are extremely small.
We have repeated our analysis neglecting these terms and find that these
are insignificant corrections to the scale dependence we are studying.
This is consistent with the results of \cite{SchWein}, who found that the
local relation between galaxies and mass within a halo
does not significantly impact
the large scale galaxy correlation function.
For simplicity we shall set $y(M,k)=1$ for the remainder of the paper.
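The values of $c_2$ and $c_4$ quoted above can be checked numerically
from the moment expansion
$y=1-k^2\langle r^2\rangle/6+k^4\langle r^4\rangle/120-\cdots$ of an
NFW profile truncated at the virial radius. A short Python sketch
(illustrative only; $x=r/r_s$, and $c=r_v/r_s$ is the concentration)
is:
\begin{verbatim}
# c_2 and c_4 from radial moments of a truncated NFW profile.
from scipy.integrate import quad

def c2_c4(c):
    w = lambda x: x / (1 + x)**2        # NFW rho(x) * x^2, up to constants
    norm = quad(w, 0, c)[0]
    x2 = quad(lambda x: x**2 * w(x), 0, c)[0] / norm
    x4 = quad(lambda x: x**4 * w(x), 0, c)[0] / norm
    return -x2 / (6 * c**2), x4 / (120 * c**4)

print(c2_c4(5))    # ~ (-0.049, 0.0014): clusters
print(c2_c4(10))   # ~ (-0.040, 0.0011): galaxies
\end{verbatim}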
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|} \hline
$n$ & \multicolumn{2}{|c|}{$k_\star r_\star$} &
\multicolumn{2}{|c|}{$A_\star$} & $\gamma_{\rm dm}$ \\
\hline
$0$ & $\Delta^{-1/3}\left(3 \pi/2 \delta_c^{2}\right)^{1/3}$
& 0.2023 & $\delta_c^{-2}$ & 0.3518 & 1 \\
$-\frac{1}{2}$ & $\Delta^{-1/3}\left(12 \sqrt{\pi}/7\delta_c^2\right)^{2/5}$
& 0.1756 & $8/7 \left(12/7 \pi^2\right)^{2/5} \delta_c^{-12/5}$ & 0.2299
& 1.18 \\
$-1$ & $\Delta^{-1/3}\left(3/2 \delta_c\right)$
& 0.1521 & $\left(9/4 \pi\right) \delta_c^{-3}$ & 0.1494 & 1.60 \\
$-\frac{3}{2}$ & $\Delta^{-1/3}\left(16 \sqrt{\pi}/15\delta_c^2\right)^{2/3}$
& 0.1303 & $\left(512/675\right)\delta_c^{-4}$ & 0.0939 & 3 \\
$-2$ & $\Delta^{-1/3}\left(3 \pi/5\delta_c^2\right)$
& 0.1134 & $(18 \pi^2 / 125) \delta_c^{-6}$ & 0.0619 & 15 \\ \hline
\end{tabular}
\end{center}
\caption{Expressions and values for $k_\star r_\star$ and $A_\star$ in terms
of $\delta_c=1.686$ and the virialization overdensity $\Delta=200$.
Here $k_\star$ is the normalization of the dark matter power spectrum, and
$r_\star=r_v(M_\star)$ is the virial radius of a halo of mass $M_\star$.
The factor $A_\star=k_\star^3 M_\star/(2 \pi^2 \bar{\rho})$ relates the
amplitude of the 1- and 2-halo terms and $\gamma_{\rm dm}$ is defined in
Eq.~(\protect\ref{eqn:gdm}).}
\label{tab:nquant}
\end{table}
\section{Results -- real space}
Having argued that we can safely approximate $y$ and $b_h$ as scale independent
quantities when studying clustering at large scales, our expressions simplify
dramatically. The mass power spectrum is simply\footnote{If appropriate halo
profiles, e.g.~Gaussians, are chosen the full $k$-dependent integrals can also
be done in terms of special functions.}
\begin{equation}
\Delta^2_{\rm dm}(k) = \kappa^{3}
\left( \kappa^n + A_\star \gamma_{\rm dm} \right)
\qquad \kappa \ll 1
\end{equation}
where $A_\star$ and $\gamma_{\rm dm}$ are $n$-dependent constants
\begin{align}
A_\star &=\frac{k_\star^3 \, M_\star}{2 \pi^2 \bar{\rho}} \\
\gamma_{\rm dm} &= \int_0^\infty m(\nu) f(\nu) d\nu =
2^{3/(3+n)}\pi^{-1/2}\ \Gamma\left[1/2 + 3/(n+3)\right] \label{eqn:gdm}
\end{align}
We list the values of $A_\star$ and $\gamma_{\rm dm}$ for some values of $n$
in Table \ref{tab:nquant}.
Referring to the Table we see that, for $n$ near $-1$, the 1-halo term
dominates only for $k>k_\star$, outside of the range of relevance for us.
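The entries of Table \ref{tab:nquant} are straightforward to verify;
for instance, the following Python check (illustrative only) confirms
the normalizations of $f(\nu)$ and $b_h(\nu)$ and the
$\gamma_{\rm dm}$ column:
\begin{verbatim}
# Normalizations of f(nu) and b_h(nu), and gamma_dm of Eq. (gdm).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as G

delta_c = 1.686
f = lambda nu: np.exp(-nu / 2) / np.sqrt(2 * np.pi * nu)
b = lambda nu: 1 + (nu - 1) / delta_c

print(quad(f, 0, np.inf)[0])                         # = 1
print(quad(lambda nu: f(nu) * b(nu), 0, np.inf)[0])  # = 1
for n in (0, -0.5, -1, -1.5, -2):
    num = quad(lambda nu: nu**(3.0 / (n + 3)) * f(nu), 0, np.inf)[0]
    closed = 2**(3.0 / (3 + n)) / np.sqrt(np.pi) * G(0.5 + 3.0 / (n + 3))
    print(n, num, closed)   # 1, 1.18, 1.60, 3, 15
\end{verbatim}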
The scale-dependent bias can be defined as
\begin{align}\label{genb}
B^2(k)\equiv \frac{_{\rm 2h}\Delta^2_g+_{\rm 1h}\Delta^2_g}
{_{\rm 2h}\Delta^2_{\rm dm}+_{\rm 1h}\Delta^2_{\rm dm}}
\end{align}
which can be re-written to explicitly exhibit its scale dependence as
\begin{align}
B^2(k)&=\left(\frac{1}{\alpha_g^2}\right)
\frac{\beta_g^2+A_\star \gamma_g\kappa^{-n}}
{1+A_\star \gamma_{\rm dm}\kappa^{-n}} \label{eqn:b1} \\
&\simeq (\beta_g/\alpha_g)^2 \, \left(1+\zeta\, \kappa^{-n} + \cdots\right)
\label{eqn:b2}
\end{align}
where $\alpha_g$, $\beta_g$ and $\gamma_g$ are dimensionless integrals of
$\nu$, $\alpha_{g}$ is the galaxy number density in dimensionless units,
$\beta_g/\alpha_g$ is the galaxy weighted halo bias and $\gamma_{g}$ counts
the number of galaxy pairs in a single halo\footnote{In the limit
$y(\nu,k)=1$ we have $\alpha_{\rm dm}=\beta_{\rm dm}=1$. The expression
for $\gamma_{\rm dm}$ is given in Eq.~(\protect\ref{eqn:gdm}).}.
We have neglected terms higher order in $\kappa$.
The term $\kappa^{-n}$ encodes the leading order scale dependence and is
proportional to the inverse of the linear dark matter power spectrum.
Choosing $a=1$ in our HOD as a representative example the relevant
integrals are
\begin{align}
\alpha_g&=\int_{\nu_{\rm min}}^\infty m(\nu)^{-1} f(\nu)
\left[1+m(\nu)/m_{\rm sat} \right] \,d\nu \\
\beta_g&=\int_{\nu_{\rm min}}^\infty m(\nu)^{-1} f(\nu)
\left[1+m(\nu)/m_{\rm sat}\right] b_h(\nu) \, d\nu \\
\gamma_g&=\int_{\nu_{\rm min}}^\infty
m(\nu)^{-1} f(\nu) \left[2m(\nu)/m_{\rm sat} +
(m(\nu)/m_{\rm sat})^2\,\right]\, \, d\nu
\label{eqn:defs}
\end{align}
and the factor governing the scale-dependence is
\begin{equation}
\zeta(\nu_{\rm min},m_{\rm sat}, n)=
A_\star \left(\gamma_g/\beta_g^2 - \gamma_{\rm dm} \right)
\end{equation}
Note $\zeta$ depends on the number of pairs of galaxies divided by the
square of the large-scale bias.
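As a concrete illustration, the following minimal Python sketch (not
part of the original calculation) evaluates $\zeta$ from these
integrals for the representative case $n=-1$ with HOD slope $a=1$,
taking $A_\star$ and $\gamma_{\rm dm}$ from Table \ref{tab:nquant}:
\begin{verbatim}
# zeta(nu_min, m_sat) for n = -1 and HOD slope a = 1 (illustrative).
import numpy as np
from scipy.integrate import quad

delta_c = 1.686
A_star, gamma_dm = 0.1494, 1.60        # Table 1, n = -1

def zeta(nu_min, m_sat):
    m = lambda nu: nu**1.5             # m(nu) = nu^{3/(n+3)}
    f = lambda nu: np.exp(-nu / 2) / np.sqrt(2 * np.pi * nu)
    b = lambda nu: 1 + (nu - 1) / delta_c
    beta_g = quad(lambda nu: f(nu) / m(nu) * (1 + m(nu) / m_sat) * b(nu),
                  nu_min, np.inf)[0]
    gamma_g = quad(lambda nu: f(nu) / m(nu) * (2 * m(nu) / m_sat
                   + (m(nu) / m_sat)**2), nu_min, np.inf)[0]
    return A_star * (gamma_g / beta_g**2 - gamma_dm)

for nu_min in (0.5, 1.0, 2.0):
    print(nu_min, zeta(nu_min, m_sat=20.0))
\end{verbatim}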
If one wishes to reintroduce the halo profiles there is a simple modification
to the integrals. In $\alpha_g$, $\beta_g$, and $\gamma_g$, every occurrence
of $m(\nu)/m_{\rm sat}$ will be multiplied by $y(\nu,k)$.
For the dark matter, $\gamma_{\rm dm}$ will have an extra factor of
$|y(\nu,k)|^2$ in the integrand, and the $1$ in the denominator of
Eq.~(\ref{eqn:b1}) will be replaced by
$\int b_h(\nu)f(\nu)y(\nu,k) d\nu$ squared.
In this form it is clear that the scale-dependent bias arises because
the 1- and 2-halo terms for the galaxies are different multiples of their
respective dark matter counterparts. Typically the 1-halo term is enhanced
more than the 2-halo term, leading to an increase in the bias with decreasing
scale. A cartoon of this is shown in Fig.~\ref{fig:cartoon}.
The relative enhancements of the 1 and 2-halo terms depend on the HOD
parameters for the galaxy population used as tracers.
Note also that in our simple model the 2-halo term retains the oscillations
of the linear theory spectrum, while in the 1-halo term they are absent.
This provides a partial\footnote{In a more complex/realistic model the 2-halo
term involves the non-linear power spectrum and non-linear bias of the
tracers, including halo exclusion effects. Mode coupling thus appears to
reduce the baryon signal even at the 2-halo level.} explanation of the
reduction of the contrast of the oscillations with increasing $k$.
Figure \ref{fig:zeta} shows $\zeta$ vs.~$\nu_{\rm min}$ for several different
values of the other HOD parameter $M_{\rm sat}$.
We see that the scale dependence is more prominent as the number of satellite
galaxies is increased, and that a higher threshold halo mass for containing
a central galaxy leads to a more scale dependent bias at large scales.
At fixed number density $\zeta$ increases with increasing bias and is
more rapidly increasing for rarer objects. At fixed bias $\zeta$ is larger
the rarer the object.
Our model is not sophisticated enough for us to expect detailed
agreement with large N-body simulations; however, comparing to the
work of \cite{Sims} we
find good qualitative agreement in the scale-dependence of the bias for
$0.01<k/(h\,{\rm Mpc}^{-1})<0.1$.
\begin{figure}
\begin{center}
\resizebox{4.5in}{!}{\includegraphics{cartoon.eps}}
\end{center}
\caption{A cartoon illustrating the difference in the shift of the
1-halo and 2-halo terms (dashed lines)
of the galaxy power spectra with respect
to the dark matter. Because of this difference, the 1-halo term
dominates on larger scales for the galaxy spectrum. This leads to
a change in the ratio of the total power (solid curves), which leads
to a scale dependent galaxy bias.}
\label{fig:cartoon}
\end{figure}
\begin{figure}
\begin{center}
\resizebox{4.5in}{!}{\includegraphics{coeff.ps}}
\end{center}
\caption{The factor $\zeta$ that governs the strength of the scale dependent
part of the galaxy bias.}
\label{fig:zeta}
\end{figure}
We note that the 1- and 2-halo decomposition leads us to a new
parameterization of the scale-dependent bias.
In the limit where halo profiles and scale-dependent halo bias can be
neglected the most natural description of the galaxy spectrum is
\begin{equation}
\Delta_g^2 = b^2 \Delta_{\rm lin}^2(k) + \left( \frac{k}{k_1} \right)^3
\end{equation}
which has two free parameters, $b$ and $k_1$. We expect this will describe
the largest part of the scale-dependent bias. Non-linear bias, halo
exclusion and profiles will show up as smaller corrections to this formula,
such as a scale-dependence in $b$. It is difficult to compare
the scale dependence in this framework to
other treatments of scale dependent bias (e.g. \cite{soccfry}) where the galaxy
density contrast is expanded in moments of the matter
density contrast ($b_1$, $b_2$, etc.),
because the matter density contrast itself has both 1-halo
and 2-halo contributions, and furthermore
we have not extended our analysis to the bispectrum
or higher order.
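As a simple illustration of fitting this two-parameter form, the
following Python sketch uses entirely synthetic inputs (the values
$b=2$ and $k_1=0.7$, in units of $k_\star$, are assumed only for the
mock data):
\begin{verbatim}
# Fit Delta_g^2 = b^2 Delta_lin^2 + (k/k1)^3 to a mock spectrum (n = -1).
import numpy as np
from scipy.optimize import curve_fit

np.random.seed(0)
k = np.logspace(-2, -0.5, 40)                # k in units of k_star
mock = 4.0 * k**2 + (k / 0.7)**3             # "data" with b = 2, k1 = 0.7
mock *= 1 + 0.02 * np.random.randn(k.size)   # 2% scatter

model = lambda k, b, k1: b**2 * k**2 + (k / k1)**3
(b_fit, k1_fit), _ = curve_fit(model, k, mock, p0=(1.0, 1.0))
print(b_fit, k1_fit)                         # ~ 2 and ~ 0.7
\end{verbatim}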
\section{Results -- redshift space}
Keeping to our philosophy of examining qualitative behavior in a simple
model, we can extend these results to redshift space.
The 2-point function in redshift space differs from that in real space due
to two effects \cite{RedReview}.
The first, effective primarily on very large scales, accounts for the fact
that dark matter and the galaxies that trace it have a tendency to flow
toward overdensities as the structure in the universe is being assembled,
enhancing the fluctuations in redshift space \cite{Kaiser}.
The second comes into play inside virialized structures, where random
motions within the halo reduce fluctuations in redshift space.
These corrections impact the 1-halo and 2-halo terms. The inflow effect
primarily impacts the 2-halo term while virial motions primarily affect
small scales which are dominated by the 1-halo term \cite{HaloRed}.
The boost in the observed density contrast, $\delta_k$, due to instreaming
is given\footnote{It has been argued in Ref.~\cite{Sco} that the form
$1+{\rm f}\mu^2$ is not highly accurate on the scales relevant to observations
and higher order corrections apply. Since it is our intent to gain qualitative
understanding rather than quantitative accuracy we shall use the simplest
form: $1+{\rm f}\mu^2$. Deviations from this will be yet another source of
scale-dependence, but numerical simulations suggest it is small.}
by $(1+{\rm f}\mu^2)$ where $\mu=\hat{r}\cdot\hat{k}$ and
${\rm f}\simeq\Omega_m^{0.6}$ \cite{Kaiser}.
The small scale suppression we take to be Gaussian.
In general, when extending the model to galaxies tracing the dark matter,
one should distinguish between central and satellite galaxies,
since the latter have much larger virial motions and will therefore suffer
more distortion in redshift space. We approximate this by taking
$\sigma^2_{v,{\rm cen}}\approx 0$ and $\sigma^2_{v,{\rm sat}}=GM/2r_{\rm vir}$.
Converting from velocity to distance we have for an $M_\star$ halo
$\sigma_\star\to \sqrt{\Delta/4}\,r_\star\simeq 7\,r_\star$.
Defining $y_s(\nu,k)=y(\nu,k)\,e^{-(k\sigma_v\mu)^2/2}$ the 1- and 2-halo
terms are then given by \cite{HaloRed}
\begin{align}
_{1h}\Delta^2_{\rm dm}
&=\frac{k^3}{2\pi^2} \frac{M_\star}{\bar{\rho}}
\int_0^\infty m(\nu)\,f(\nu)\, |y_s(\nu,k)|^2\, d\nu \\
_{2h}\Delta^2_{\rm dm}
&=\Delta^2_{\rm lin} \left[
\int_0^\infty f(\nu)\,(1+{\rm f}\mu^2)\,b_h(\nu)\,y_s(\nu,k)\, d\nu\right]^2
\end{align}
for the dark matter and
\begin{align}
_{1h}\Delta^2_g &=
\frac{k^3}{2\pi^2} \frac{\bar{\rho}}{n^2_g\,M_\star}
\int_{\nu_{\rm min}}^\infty m^{-1}(\nu)\, f(\nu) \nonumber \\
&\left[
2\left(\frac{m(\nu)}{m_{\rm sat}}\right)\,y_s(\nu,k)
\,+\left(\frac{m(\nu)}{m_{\rm sat}}\right)^2 |y_s(\nu,k)|^2 \right] d\nu \\
_{2h}\Delta^2_g &=
\Delta^2_{\rm lin} \left[ \frac{\bar{\rho}}{n_g\,M_\star}
\int_{\nu_{\rm min}}^\infty m^{-1}(\nu)\,f(\nu) \,b_h(\nu)
\left(1+\frac{m(\nu)}{m_{\rm sat}}\,y_s(\nu,k)\right)
\ d\nu\right. + \nonumber \\
& \left. {\rm f}\mu^2 \int_{0}^\infty f(\nu) \,b_h(\nu)
\, y_s(\nu,k) d\nu \right]^2
\end{align}
for the galaxies. As discussed in \cite{HaloRed},
in the 2-halo term the effect of peculiar velocities,
going as ${\rm f}\mu^2$, is governed by the mass rather than the galaxy
density field, requiring the addition of
a separate integral over $\nu$. Conceptually,
this term is added to account for extra clustering in redshift
space induced by
the bulk peculiar flow of the galaxies in one halo under the influence of
the dark matter in other halos.
For some purposes it is useful to average over orientations of the
galaxy separations (i.e. integrate over $\mu$) but in the case of studying
baryon oscillations, doing so throws away valuable information.
As before we note that $y(\nu,k)\approx 1$ and
$\exp[-(k\sigma_v\mu)^2/2]\approx 1$ for $k\ll k_\star$ so the effect of
redshift space distortions is primarily to enhance the 2-halo term --
this makes the power spectrum ``more linear'' in
redshift space than real space. However the second of our approximations,
$\exp[-(k\sigma_v\mu)^2/2]\approx 1$, is not as good as the first,
$y(\nu,k)\approx 1$, so there is enhanced $k$-dependence from the
individual terms. For the interesting range of $n$,
$k_\star\sigma_\star\sim 1$ so the exponential can only be neglected
when $\kappa^2\nu^{2/(n+3)}\ll 1$ for all values of $\nu$ that significantly
contribute to the integral; i.e. near the peak of the integrand.
For example, we see scale dependence at smaller $k$ in the 1-halo term in
redshift space than in real space. At $\kappa=1/2$,
the exponential term induces a 13-14\% change in the 1-halo terms
along the line of sight for
both the dark matter and a moderately biased sample of galaxies,
leading to a percent level correction in the ratio
of power spectra.
The error decreases rapidly as $|\mu|$ decreases.
The importance of the exponential factor depends somewhat on the HOD parameters.
The correction to the galaxy 1-halo term
is larger as $M_{\rm min}$ increases, but is smaller as
$M_{\rm sat}$ increases, due to the decreasing number of satellite-satellite pairs.
For completeness we write the scale-dependent bias
in redshift space in the approximation that $y_s(\nu,k)\simeq 1$.
\begin{eqnarray}
B^2(k,\mu)&=&\left(\frac{1}{\alpha_g^2}\right)
\frac{\left[\beta_g+\alpha_g f \mu^2\right]^2
+ \kappa^{-n} A_\star \gamma_g}{\left[1+{\rm f}\mu^2\right]^2
+ \kappa^{-n} A_\star \gamma_{\rm dm}} \\
&\simeq& \frac{\beta_g^2}{\alpha_g^2} \,\frac{\Xi_g^2}{\Xi_{\rm dm}^2}
\, \left( 1 + A_\star \kappa^{-n}
\left[ \frac{\gamma_g}{\Xi_g^2} -
\frac{\gamma_{\rm dm}}{\Xi_{\rm dm}^2} \right]
+\cdots \right)
\end{eqnarray}
where we have defined $\Xi_g=1+(\alpha_g{\rm f}/\beta_g)\mu^2$ and
$\Xi_{\rm dm}=1+{\rm f}\mu^2$ to simplify the equations.
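A minimal Python sketch of this expression (illustrative only; it
fixes $n=-1$, HOD slope $a=1$, and ${\rm f}=\Omega_m^{0.6}$ with an
assumed $\Omega_m=0.3$, with $A_\star$ and $\gamma_{\rm dm}$ from
Table \ref{tab:nquant}) is:
\begin{verbatim}
# Redshift-space B^2(kappa, mu) in the y_s = 1 limit, n = -1.
import numpy as np
from scipy.integrate import quad

delta_c = 1.686
A_star, gamma_dm = 0.1494, 1.60      # Table 1, n = -1
f_rsd = 0.3**0.6                     # f ~ Omega_m^0.6, Omega_m = 0.3 assumed

def hod_integrals(nu_min, m_sat):
    m = lambda nu: nu**1.5
    f = lambda nu: np.exp(-nu / 2) / np.sqrt(2 * np.pi * nu)
    b = lambda nu: 1 + (nu - 1) / delta_c
    I = lambda g: quad(g, nu_min, np.inf)[0]
    alpha = I(lambda nu: f(nu) / m(nu) * (1 + m(nu) / m_sat))
    beta  = I(lambda nu: f(nu) / m(nu) * (1 + m(nu) / m_sat) * b(nu))
    gam   = I(lambda nu: f(nu) / m(nu) * (2 * m(nu) / m_sat
              + (m(nu) / m_sat)**2))
    return alpha, beta, gam

def B2(kappa, mu, nu_min=1.0, m_sat=20.0):
    a_g, b_g, g_g = hod_integrals(nu_min, m_sat)
    num = (b_g + a_g * f_rsd * mu**2)**2 + kappa * A_star * g_g  # kappa^{-n}
    den = (1 + f_rsd * mu**2)**2 + kappa * A_star * gamma_dm
    return num / den / a_g**2

print(B2(0.1, 0.0), B2(0.1, 1.0))    # transverse vs. line of sight
\end{verbatim}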
\section{Conclusions}
Models of structure formation where $\Omega_{\rm b}\not\ll\Omega_{\rm m}$
predict a series of features in the linear theory matter power spectrum,
akin to the acoustic peaks seen in the angular power spectrum of the cosmic
microwave background. These peaks provide a calibrated standard ruler, and
a new route to constraining the expansion history of the universe.
In order to realize the potential of this new method, we need to understand
the conversion from what we measure -- the non-linear galaxy power spectrum
in redshift space -- to what the theory unambiguously provides -- the linear
theory matter power spectrum in real space.
The ability of N-body simulations to calibrate this mapping is improving
rapidly, but the complexity of the simulations can often mask the essential
physics. In this paper we have tried to investigate the issues using a
simplified model which can give qualitative insights into the processes
involved.
In our toy model we find that the distribution of galaxies within halos
and the complexities of scale-dependent halo bias are sub-dominant
contributions to the scale-dependence of galaxy bias. The dominant effect
is the relative shifts of the 1- and 2-halo terms of the galaxies compared
to the matter. The amplitude of the scale dependent bias on very large
scales is parameterized by a quantity, $\zeta$, which depends on the
galaxy HOD. For our two parameter HOD we find $\zeta$ increases with
increasing bias at fixed number density and is more rapidly increasing for
rarer objects. At fixed bias, $\zeta$ is larger the rarer the object.
The 1- and 2-halo decomposition leads us to a new parameterization of the
scale-dependent bias.
In the limit where halo profiles and scale-dependent halo bias can be
neglected the most natural description of the galaxy spectrum is
\begin{equation}
\Delta_g^2 = b^2 \Delta_{\rm lin}^2(k) + \left( \frac{k}{k_1} \right)^3
\end{equation}
which has two free parameters, $b$ and $k_1$.
This is very close to the phenomenologically motivated form proposed by
\cite{SeoEis05}. The extra $k$-dependence these authors allowed in their
multiplicative and additive terms can be understood here as the effect
of non-linear power, non-linear bias and halo exclusion and halo profiles.
The corrections appear first in the 2-halo term and then at smaller
scales
in the 1-halo term.
Our results also suggest that on very large scales, the bias in configuration space
has relatively little scale dependence because the effects of the 1-halo term
are strictly limited to scales smaller than the virial radius of the largest
halo.
We would like to thank D. Eisenstein and R. Sheth for conversations.
The simulations referred to in this work were performed on the IBM-SP
at NERSC. This work was supported in part by NASA and the NSF.
\section{Introduction}
Authentication services are required by many applications of ad hoc networks, both mobile (MANETs) and wired, such as peer-to-peer networks. As an example, consider chats, games, or data sharing in an ad-hoc network or a MANET. As more practical applications of MANETs are developed, the need for authentication services will grow. In addition, many forms of secure routing in MANETs or general ad-hoc networks cannot operate without a form of authentication.
At the same time, ad-hoc networks and their applications are more vulnerable to a number of well-known threats, such as identity theft (spoofing), violation of privacy, and the man-in-the-middle attack. All these threats are difficult to counter in an environment where membership and network structure are dynamic and the presence of central directories cannot be assumed.
Applications of ad-hoc networks can have ano\-ny\-mi\-ty requirements that cannot be easily reconciled with some forms of authentication known today. On the other hand, service providers that are bound by legal regulations have to be able to trace the actions of users of a MANET. Finding a reasonable trade-off between these two requirements is rather hard. In this paper, we use the term \emph{revocable anonymity} for a system in which a user cannot be identified by the outside world, but a trusted authority is provided with the possibility of identifying the actions performed by each user.
These considerations lead to the conclusion that mobile ad hoc networks can benefit from new, specialized methods of authentication. In this article, we combine two cryptographic techniques - Merkle's puzzles and zero-knowledge proofs - to develop a protocol for authentication in ad-hoc networks. This protocol is resistant to man-in-the-middle and eavesdropping attacks and prevents identity theft. However, the protocol allows for revocable anonymity of users and is adapted to the dynamic and decentralized nature of these networks. Finally, our protocol works with any MANET routing protocol and does not assume any properties of MANET routing.
We study the protocol for a model chat application in an ad-hoc network that needs to authenticate users in order to continue concurrent private conversations. Users of the chat prefer to remain anonymous, but they must have identities for the duration of the conversation. The range of applications of an authentication protocol in ad-hoc networks is broad, however, and our protocol can be adapted to many of them. In this paper, we aim to demonstrate the principle that lies behind the new authentication method, to compare the new method to existing techniques, and to analyze its security and performance.
\paragraph{Organization of paper.} In the next section, we consider how existing techniques such as Public Key Infrastructure or their modifications can be used for authentication in MANET applications. We present a simple case study of a chat application. We demonstrate how the use of PKI is difficult if users have no previous knowledge of the receiver of messages. Other disadvantages of PKI are the lack of global availability and the lack of anonymity. In section~\ref{crypto}, we present and explain the cryptographic primitives used in our protocol. Section~\ref{prot} presents the protocol, and concludes with an analysis of the protocol's security and efficiency. Section~\ref{concl} concludes and discusses further work.
\section{Related work}
General security architectures for MANETs almost exclusively use public key cryptography (PKI or the Web of trust) \cite{confidant, adhoc-PGP, adhoc-book}. These systems provide authentication without anonymity, and will be discussed in more detail below.
Most systems that provide anonymity are not designed to allow tracing of the user under any circumstances. \emph{Chaum mixing networks} and proxy servers have not been designed to provide accountability. For mobile ad hoc networks, approaches exist that provide unconditional anonymity, again without any accountability \cite{adhoc-onion}.
A Chaum mixing network, mentioned earlier, is a collection of special hosts (mixing nodes) that route user messages. Each node simply forwards incoming messages to other nodes in the mixing network. The path (sequence of nodes) is chosen by the sender, and the message is put into envelopes (based on PKI infrastructure), one for each node on the path.
One area that requires both anonymity and accountability is agent systems \cite{NIST}. Most security architectures for those systems do not provide any anonymity, e.g.,~\cite{C},~\cite{G},~\cite{F}.
However, there exists work devoted to various aspects of anonymity, e.g.,~\cite{KK2}.
A different scheme that preserves anonymity is proposed in~\cite{D}. The scheme is based on a credential system and offers an optional anonymity revocation. Its main idea is based on oblivious protocols, encryption circuits and the RSA assumption.
\section{Authentication in MANET applications}
In this section, we discuss a model chat application in a MANET that will be used to guide our discussion of authentication. However, the conclusions of the discussion apply to any MANET application that requires authentication.
\subsection{Chat of users in a MANET}
Consider a chat application in a mobile ad-hoc network. The system makes it possible for users to execute two operations: $SALL(m)$ and $SPRIV(u, m)$. The first operation sends a message $m$ to all users in the network. The second operation sends a private message $m$ to a selected user, $u$. Note that a user that executes the $SALL$ operation need not know who receives the message.
\begin{figure}[h]
\centering
\includegraphics[width=8cm]{fig1-1.eps}
\caption{Chat of users in a MANET}
\label{fig1}
\end{figure}
The described system is visualized in figure~\ref{fig1}. The $SALL$ and $SPRIV$ messages are routed by the network using the $ROUTE$ operation (using any MANET routing protocol). In the figure, the $SALL$ message is routed from user $P_I$ to all other users, among them $P_U$, by the nodes $P_1$ and $P_3$.
After that, $P_U$ responds by sending an $SPRIV$ message to $P_I$. The exchange of private messages may continue concurrently with the sending of messages to all users by either $P_I$ or $P_U$.
Consider now that the application wishes to authenticate the private message senders. For example, after receiving the first $SPRIV$ message, the application creates a file that will contain all messages exchanged by the two users during a private conversation. Only the user that has sent the first $SPRIV$ message is authorized to continue the conversation. In order to enforce this, some form of authentication is required.
The first question is whether the user address in the MANET is sufficient for authentication. Is it possible for a malicious user, $P_M$, to assume the address of an innocent user, $P_I$? In MANETs, the possibility of successfully spoofing an IP address cannot be overlooked.
Let us assume that $P_I$ uses his own address as authentication information. The $SPRIV$ message then takes the form $SPRIV(receiver\_address, m, sender\_address)$. However, $P_M$ runs a DoS attack against $P_I$, forcing $P_I$ to leave the network. After $P_I$ has left, $P_M$ joins the network assuming the address of $P_I$. Next, $P_M$ can take over the private conversation of $P_I$.
What is needed to implement access permissions that allow private conversations? An authentication mechanism that
\begin{itemize}
\item allows a user to authenticate its conversation partners
\item does not use centralized control during authentication
\item is safe against playback attack
\item is safe against eavesdropping
\item is safe against man-in-the-middle attack
\item provides controlled anonymity
\end{itemize}
\subsection{Case Study: $PKI$}
Before we present a new method of authentication, let us first describe and analyze available means of providing authentication in MANETs. The most well known (and most frequently used) method is authentication using public key cryptography and Public Key Infrastructure ($PKI$) certificates. If such a method is used in a MANET, all users must obtain a $PKI$ certificate from a certificate authority ($CA$) in order to access certain system functions (perhaps some functions may be available without access control).
An alternative would be to use a trusted source of authentication information that is part of the MANET: a bootstrap server. This element (we shall refer to it as authenticating bootstrap, $AB$) issues certificates to users that join the system. A drawback of this approach is that the identity of users is not externally verified. A similar approach is to allow all users to issue certificates like in the PGP "Web of trust" model. In \cite{ESpeak}, this approach has been chosen along with the use of SPKI; however, this work does not significantly differ from the approach described in this case study.
Let us consider how the described authentication methods could be used to solve the problem posed above: implementing access permissions for a chat with private conversations in a MANET. A proposed solution is shown in fig.~\ref{fig2}.
\begin{figure}[h]
\centering
\includegraphics[width=8cm]{fig2-2.eps}
\caption{Using certificates for authentication in a MANET application}
\label{fig2}
\end{figure}
$P_I$ has a certificate, $C$, that contains its public key and a signature, $SIG$. The certificate and the $SPRIV$ message are routed through the MANET (the message contains a nonce to avoid playback attacks). For simplicity, let us assume that there is a single, malicious user on the path from $P_I$ to $P_U$. When $P_U$ receives the message, he can verify the validity of the signature and accept the public key of $P_I$ as authentication information. In the future, $P_U$ will only display private messages from $P_I$ if the message has been signed by $P_I$. Verification of the certificate may require communication with $CA$ or $AB$, if $P_U$ does not know the public key of the $CA$ or $AB$.
However, note that the presented scenario is insecure. $P_M$ is capable of a man-in-the-middle attack that replaces the certificate $C$ with a certificate of $P_M$, $C'$, and the address of $P_I$ with the address of $P_M$. Unless the receiver, $P_U$, is capable of verifying that the certificate belongs to the sender $P_I$, $P_M$ will be able to continue the conversation of $P_I$ afterwards (and $P_I$ will not!). To fix this problem, the proposed protocol has to be modified as presented in fig.~\ref{fig2}. The only way for $P_U$ to make sure that the certificate belongs to $P_I$ is to communicate with $P_I$ over a channel that is not controlled by $P_M$ and receive a proof that $P_I$ has a private key that matches the public key in the certificate.
After the private conversation has been accepted by $P_U$, $P_I$ may wish to send messages using another $SPRIV$ operation. The second time, authentication can be simpler. $P_I$ and $P_U$ now know each other's public keys. This information, or a secret value associated with the conversation during initiation, is enough to establish an encrypted channel between $P_I$ and $P_U$ and to authenticate $P_I$.
\subsection{Disadvantages of using $PKI$ for authentication in MANETs}
However, the proposed solution has several drawbacks:
\begin{enumerate}
\item It requires direct communication with $P_I$. This may not be possible if $P_I$ is outside the radio range of $P_U$.
\item The certificate must contain the address of $P_I$ (or the system must include a directory where this address may be found). This requires updates whenever $P_I$ changes its address.
\item Communication with the $CA$ or $AB$ must occur during every transaction if $P_U$ does not know the public key of the $CA$ or $AB$.
\item It requires a 3-way exchange of information.
\item If $PKI$ certificates are used, the users cannot be anonymous.
\item Note that we do not consider how to provide message integrity during communication from $P_I$ to $P_U$. We focus solely on authentication.
\end{enumerate}
As pointed out in~\cite{DIN}, the use of $PKI$ for authentication has other drawbacks. The use of $PKI$ is difficult because of the necessity of verifying the legal identities of all participants. This is a difficult task, and may limit the participation of users from countries or geographical areas where access to $PKI$ infrastructure is limited. Other users may have privacy concerns, depending on the type of application.
Finally, the security of $PKI$ has been questioned due to its hierarchical nature. In~\cite{BUR}, the authors observe that if a high-level certification authority is compromised, then the result is a failure of a large part of the system. For these reasons, it may be worthwhile to consider a more lightweight, scalable and robust authentication mechanism for MANET applications.
\section{Proposal}
In this paper, we describe a new authentication protocol for MANET applications. The protocol allows a user to securely send private messages to another user (as described in Section~3).
First, the cryptographic primitives used are briefly introduced: Merkle's puzzles and the concept of zero-knowledge proofs. Then, we present the authentication protocol.
\subsection{Cryptographic primitives}
\label{crypto}
Our scheme involves two cryptographic primitives: Merkle's puzzles and zero-know\-led\-ge proofs. We describe them briefly below.
\paragraph{Merkle's puzzles} Ralph Merkle introduced his concept of cryptographic puzzles in~\cite{M}. The goal of this method is to enable secure communication between two parties, A and B, over an insecure channel. The assumption is that the communication channel can be eavesdropped on by any third party, called E. Assume that A has selected an encryption function $F$, which A keeps secret. A and B agree on a second encryption function, called $G$:
\begin{center}
\emph{G(plaintext, some key) = some encrypted message}.
\end{center}
$G$ is publicly known. A now creates $N$ puzzles (denoted by $s_i$, $1 \leq i \leq N$) in the following fashion:
\begin{displaymath}
s_i = G((K,X_i,F(X_i)),R_i)
\end{displaymath}
$K$ is simply a publicly known constant term, which remains the same for all messages. The $X_i$ are selected by A at random. The $R_i$ are the ``puzzle'' part, and are also selected at random, from the range $(N \cdot (i-1), N \cdot i)$. B must guess $R_i$. For each puzzle, there are $N$ possible values of $R_i$; if B tries all of them, he is bound to chance upon the right key. This allows B to recover the message within the puzzle: the triple $(K,X_i,F(X_i))$. B knows that he has correctly decoded the message because the constant part, $K$, provides enough redundancy to ensure that all messages are not equally likely. Without this provision, B would have no way of knowing which decoded version was correct, for they would all be random bit strings. Once B has decoded the puzzle, he can transmit $X_i$ in the clear. $F(X_i)$ can then be used as the encryption key in further communications. B knows $F(X_i)$ because it is in the message. A knows $F(X_i)$ because A knows $X_i$, which B transmitted in the clear, and also knows $F$, and so can compute $F(X_i)$. E cannot determine $F(X_i)$ because E does not know $F$, and so the value of $X_i$ tells E nothing. E's only recourse is to solve all $N$ puzzles until he encounters the one puzzle that B solved. Thus it is easy for B to solve one chosen puzzle, but computationally hard for E to solve all $N$ puzzles.
\label{merkle}
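For concreteness, a toy Python sketch of this exchange is given below.
It is illustrative only: the parameters are tiny, $G$ is modeled as a
hash-based one-time pad keyed by $R_i$, and $F$ as an HMAC under A's
secret key; these concrete instantiations are ours and are not part of
Merkle's original construction.
\begin{verbatim}
# Toy Merkle's puzzles (insecure parameters, for illustration only).
import os, hmac, hashlib, random

N = 2**10                      # number of puzzles = size of each key range
K = b"PUZZLE"                  # public constant providing redundancy
F_key = os.urandom(16)         # A's secret: F(X) = HMAC(F_key, X)

def G(data: bytes, r: int) -> bytes:
    pad = hashlib.sha256(b"G" + r.to_bytes(8, "big")).digest() * 2
    return bytes(a ^ b for a, b in zip(data, pad))  # XOR pad, self-inverse

X = [os.urandom(4) for _ in range(N)]               # random X_i
def puzzle(i):                                      # i = 1..N
    fx = hmac.new(F_key, X[i - 1], hashlib.sha256).digest()[:16]
    r = random.randrange(N * (i - 1), N * i)        # R_i in its own range
    return G(K + X[i - 1] + fx, r)

puzzles = [puzzle(i) for i in range(1, N + 1)]

# B picks one puzzle at random and brute-forces its key range.
i = random.randrange(1, N + 1)
for r in range(N * (i - 1), N * i):
    plain = G(puzzles[i - 1], r)
    if plain.startswith(K):                         # redundancy check
        x, fx = plain[6:10], plain[10:26]
        break
# B sends x in the clear; A recomputes F(x), so both now share fx.
assert fx == hmac.new(F_key, x, hashlib.sha256).digest()[:16]
\end{verbatim}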
\paragraph{Zero-knowledge proofs}
A zero knowledge proof system (\cite{P}, \cite{ID}, \cite{FS}, \cite{OG2}, \cite{OG4}, \cite{BDLP}) is a protocol that enables one party to \emph{prove} the possession or knowledge of a ``secret'' to another party, without revealing anything about the secret, in the information theoretical sense. These protocols are also known as minimum disclosure proofs. Zero knowledge proofs involve two parties: the prover, who possesses a secret and wishes to convince the verifier that he indeed has the secret. As mentioned before, the proof is conducted via an interaction between the parties. At the end of the protocol, the verifier should be convinced only if the prover knows the secret. If, however, the prover does not know it, the verifier will detect this with overwhelming probability.
Zero-knowledge proof systems are ideal for constructing identification schemes. A direct use of a zero-knowledge proof system allows unilateral authentication of P (Peggy) by V (Victor) and requires a large number of iterations, so that the verifier knows with an initially assumed probability that the prover knows the secret (or has the claimed identity). This can be translated into the requirement that the probability of false acceptance be $2^{-t}$, where $t$ is the number of iterations; for example, $t=20$ iterations bring the probability of false acceptance down to $2^{-20} \approx 10^{-6}$. A zero-knowledge identification protocol reveals no information about the secret held by the prover under some reasonable computational assumptions.
\subsection{The authentication protocol}
\label{prot}
The proposed protocol offers an authentication method for the model MANET chat application. The node that wishes to send a private message is equipped with a zero-knowledge value. After the setup of a private conversation, this value enables only the right node to send new private messages. With the proposed protocol, a node that routes the message cannot reuse the authentication information for its own purposes. A short overview is presented in this section and a detailed description in the next.
The proposed protocol has three phases:
\begin{enumerate}
\item initial: when a bootstrap creates necessary values for authentication
\item initialization of private conversation: the first private message contains additional zero-knowledge values that will enable the sender (and no one else) to continue the private conversation.
\item exchange of private messages: the sender uses a zero-knowledge proof and Merkle's puzzles to authenticate itself and to safely send a private message.
\end{enumerate}
The node that initializes the private conversation is denoted as $P_I$, the receiving node as $P_S$ and nodes that route the message as $P_1, P_2, \ldots$, the message as $m'$ (first message) and $m'', m''', \ldots$ (next messages). $A$ is the authentication data.
In this basic scenario, we assume that routing nodes do not modify the data but forward it correctly. Attack scenarios, in which these nodes modify or eavesdrop on information, are described in section~\ref{sec}.
\paragraph{Phase 1 - initial}
This proposal is not directly based on a zero-knowledge protocol itself, but on an identification system built on a zero-knowledge proof. We choose the GQ scheme (\cite{GQ}) as the most convenient for our purposes. In this scheme, the bootstrap has a pair of RSA-like keys: a public key $K_P$ and a private one $k_p$. The bootstrap also computes the public modulus $N = p \cdot q$, where $p, q$ are RSA-like primes. The following congruence has to hold:
\begin{displaymath}
K_P \times k_p \equiv 1 (\textrm{mod }(p-1)\cdot(q-1)).
\end{displaymath}
The pair ($K_P$, $N$) is made public. The keys can be used for different purposes, not only for our system.
The bootstrap computes a set of so-called identities, denoted by $ID$, and their equivalents, denoted by $J$. It does not matter how $J$ is obtained, as long as it is clear to all participants how to derive $J$ from $ID$. The pairs $(ID, J)$ are generated for every node that requests them. The identity is used to authenticate $P_I$ during an attempt to continue the conversation. The bootstrap also computes a secret value for each $ID$:
\begin{displaymath}
\sigma \equiv J^{-k_p} (\textrm{mod }N).
\end{displaymath}
The secret $\sigma$ is used by $P_I$ to compute the correct values for the $GQ$ authentication scheme. In the initial phase, $P_I$ thus obtains the following information: $ID$ (public) and $\sigma$ (secret).
To preserve anonymity, node $P_I$ should request at least a few different pairs $(ID, \sigma)$ or, if possible, obtain a new pair for each private conversation (key).
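For illustration, a minimal Python sketch of the bootstrap's computations in this phase is given below. The toy primes, the choice of $K_P$, and the hash used as the public rule turning $ID$ into $J$ are our assumptions; real parameters would be of full RSA size.
\begin{verbatim}
import hashlib

p, q = 1000003, 1000033       # toy RSA-like primes (far too small)
N = p * q                     # public modulus
phi = (p - 1) * (q - 1)
K_P = 65537                   # public key
k_p = pow(K_P, -1, phi)       # private key: K_P * k_p = 1 (mod phi)

def J_from_ID(ID):
    # Stand-in for the public rule deriving J from an ID.
    return int.from_bytes(hashlib.sha256(ID).digest(), 'big') % N

ID = b'node-42'
J = J_from_ID(ID)             # assumed coprime to N
sigma = pow(J, -k_p, N)       # sigma = J^(-k_p) (mod N), Python >= 3.8
# P_I receives (ID, sigma); (K_P, N) and the rule ID -> J are public.
\end{verbatim}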
\paragraph{Phase 2 - initialization of the private conversation}
The purpose of this phase is to associate a proper $ID$ with the conversation. Different methods may be used for this, depending on the security and performance requirements of the system.
Here are some possibilities:
\begin{enumerate}
\item The node $P_I$ can simply send the $ID$ with the message $m'$ in plaintext.
\begin{figure}[h]
\centering
\includegraphics[width=8cm]{fig_p2-1a.eps}
\label{fig3}
\end{figure}
In that situation, the node $P_I$ has to trust all the other nodes to change neither the message nor the $ID$.
\item The node $P_I$ can ask the bootstrap to store an $ID$ value associated with the conversation. During conversation initialization, $P_S$ contacts the bootstrap and obtains the proper $ID$.
\begin{figure}[h]
\centering
\includegraphics[width=8cm]{fig_p2-2a.eps}
\label{fig4}
\end{figure}
\item A more secure way is to use the bootstrap's keys for a different purpose, not only for the zero-knowledge protocol. After creating an $ID$ for node $P_I$ in the initial phase, the bootstrap can sign the $ID$ with its private key. In this case, the $ID$ can be sent securely over multiple nodes. After receiving the first message, $P_S$ can check the validity of the bootstrap's signature and accept only a valid $ID$. To provide message integrity, the bootstrap would have to sign a hash of the message ($h(m)$) as well (a minimal sketch of this method is given after the list).
\begin{figure}[h]
\centering
\includegraphics[width=8cm]{fig_p2-3a.eps}
\label{fig5}
\end{figure}
\end{enumerate}
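A minimal sketch of the third method, with a textbook RSA-style signature (no padding) under the bootstrap's keys; toy parameters, for illustration only:
\begin{verbatim}
import hashlib

p, q = 1000003, 1000033
N, phi = p * q, (p - 1) * (q - 1)
K_P = 65537                   # bootstrap's public key
k_p = pow(K_P, -1, phi)       # bootstrap's private key

def h(data):
    return int.from_bytes(hashlib.sha256(data).digest(), 'big') % N

ID = b'node-42'
sig = pow(h(ID), k_p, N)      # bootstrap signs the ID
# ... (ID, sig) travels with the first message over untrusted nodes ...
assert pow(sig, K_P, N) == h(ID)   # P_S accepts only a valid ID
\end{verbatim}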
\paragraph{Phase 3 - exchange of private messages}
\begin{enumerate}
\item The node $P_I$ creates a set of puzzles: $S = \{s_1, \ldots, s_n\}$. Each puzzle contains a zero-knowledge challenge. This challenge is a number computed from a random value $r$, $r \in \{1, \ldots, N - 1\}$, as follows:
\begin{equation}
u = r^{K_P} \textrm{ (mod } N).
\label{eq1}
\end{equation}
\textbf{Creating a set of puzzles}\\
Each puzzle used in the proposed scheme has the following form: $G((K,X_i,F(X_i),u),R_i)$, where $K$, $X_i$, $R_i$, and $F$ are described in section~\ref{merkle}.
\begin{table}[h]
\caption{Possible puzzles}
\label{tab1}
\centering
\begin{tabular}{ll}
Puzzle no. & Puzzle \\
\hline
1 ($s_1$)& $G((K,X_1,F(X_1),u),R_1)$ \\
2 ($s_2$)& $G((K,X_2,F(X_2),u),R_2)$ \\
 & \ldots \\
n ($s_n$)& $G((K,X_n,F(X_n),u),R_n)$ \\
\end{tabular}
\end{table}
Each puzzle can contain a different $u$ value (each computed from its own random $r$), which gives additional security.
\item The node $P_I$ sends the whole set of puzzles to $P_S$.
\begin{figure}[ht]
\centering
\includegraphics[width=8cm]{fig_p3-1a.eps}
\label{fig6}
\end{figure}
\newpage
\item $P_S$ solves a chosen puzzle and chooses a random value $b \in \{1, \ldots, N\}$. $P_S$ sends the puzzle's identifier $X_i$ and $b$ to $P_I$.
\begin{figure}[h]
\centering
\includegraphics[width=7cm]{fig_p3-2.eps}
\label{fig7}
\end{figure}
\item The node $P_I$ computes the next value in the $GQ$ scheme, $v$. This value is based on the number $b$ received from $P_S$ and on the secret value $\sigma$ of $P_I$:
\begin{equation}
v \equiv r \times \sigma^b \textrm{ (mod } N).
\label{eq2}
\end{equation}
\item $P_I$ sends $v$ and a new message, encrypted using information from the puzzle. Some possible methods of securing the message are described below. The secured message has the form:
\begin{displaymath}
L(m', F(X_i)).
\end{displaymath}
\begin{figure}[h]
\centering
\includegraphics[width=8cm]{fig_p3-3.eps}
\label{fig8}
\end{figure}
\item $P_S$ uses the $ID$ associated with the conversation to obtain $J$ and verifies whether $v$ is the right value. To validate the response from $P_I$, $P_S$ checks whether
\begin{equation}
J^b \times v^{K_P} \equiv u \textrm{ (mod } N).
\label{eq3}
\end{equation}
If the equation is satisfied, then the new message is accepted (a sketch of the complete exchange is given after this list).
\end{enumerate}
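The following self-contained Python sketch runs one complete round of equations (\ref{eq1})--(\ref{eq2})--(\ref{eq3}). The toy primes, the fixed $J$ value, and the choice of $K_P$ are our illustrative assumptions:
\begin{verbatim}
import random

p, q = 1000003, 1000033        # toy primes; real use needs RSA sizes
N, phi = p * q, (p - 1) * (q - 1)
K_P = 65537
k_p = pow(K_P, -1, phi)
J = 123456789                  # public value derived from the ID
sigma = pow(J, -k_p, N)        # P_I's secret from the initial phase

r = random.randrange(1, N)     # P_I's random value
u = pow(r, K_P, N)             # eq. (1), placed inside the puzzles
b = random.randrange(1, N)     # P_S's random challenge
v = (r * pow(sigma, b, N)) % N # eq. (2), P_I's response

# eq. (3): P_S accepts the new message only if this check holds.
assert (pow(J, b, N) * pow(v, K_P, N)) % N == u
\end{verbatim}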
\paragraph{Securing the new message}
The value $F(X_i)$ is a secret known only to $P_S$ and $P_I$. It can thus be used to establish a secure channel for the new message, providing:
\begin{enumerate}
\item encryption: the message could be encrypted using $F(X_i)$ as a key for a symmetric cipher.
\item integrity: the hash of the message could be encrypted using a symmetric cipher with key $F(X_i)$.
\end{enumerate}
\label{secfile}
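Both uses can be sketched as follows, with a toy XOR stream cipher and a keyed hash standing in for the symmetric primitives (our illustration; any real cipher and MAC would do):
\begin{verbatim}
import hashlib, hmac

F_Xi = b'secret-recovered-from-the-puzzle'   # F(X_i)
msg = b"the next private message"

# 1) encryption: toy stream cipher keyed by F(X_i)
stream = hashlib.sha256(F_Xi).digest() * (len(msg) // 32 + 1)
ciphertext = bytes(a ^ b for a, b in zip(msg, stream))

# 2) integrity: a keyed hash of the message under F(X_i)
tag = hmac.new(F_Xi, msg, hashlib.sha256).digest()
\end{verbatim}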
\subsection{Security of proposed scheme}
\label{sec}
In this section, we discuss only the security of phase 3 of the protocol, since the protocol offers several distinct possibilities in phase 2, each with a different level of security.
\paragraph{Continuing the conversation by an unauthorized user}
Assume that one of the routing nodes ($P_M$) wishes to send a private message and impersonate $P_I$. $P_M$ cannot impersonate $P_I$, because $P_M$ does not have the $\sigma$ value needed to obtain the correct $v$ in the authentication phase of the protocol. The values $u$, $b$, and $ID$ do not contain any information that would be useful in cheating $P_S$; this property is assured by the zero-knowledge protocol.
The message itself is secured by the methods described in section~\ref{secfile}, using the $F(X_i)$ value. For any eavesdropper, it is computationally infeasible to solve all the puzzles to find the one with the proper $X_i$ (the one used by $P_S$), provided the number of puzzles is large enough. For example, if the function $G$ were DES and $R_i$ were a key with 24 bits fixed (so effectively 32 bits long), then the number of computations required to solve one puzzle would be about $2^{31}$ on average. From this it is easy to estimate how many puzzles $P_I$ should create.
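The estimate can also be turned around to choose the number of puzzles; the target work factor of $2^{60}$ below is our illustrative choice:
\begin{verbatim}
import math

ops_per_puzzle = 2 ** 31   # average trials, 32 effective key bits
target_work = 2 ** 60      # total work we want to impose on E
n_puzzles = math.ceil(target_work / ops_per_puzzle)
print(n_puzzles == 2 ** 29)   # True: about half a billion puzzles
\end{verbatim}
Such counts illustrate why the size of the puzzle set must be balanced against the assumed attacker power and the message size, as discussed in the performance analysis below.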
\paragraph{Eavesdropping}
An eavesdropping node, $P_M$, can observe all the transmitted values of the zero-knowledge protocol: $u$, $b$, and $v$. This knowledge reveals nothing about the secret $\sigma$, and since $u$ and $b$ are random and change in every iteration of the protocol, $P_M$ cannot interfere or gain any important information. Also, if the number of puzzles is sufficiently large, solving all of them is infeasible in reasonable time, so finding the puzzle that was used to secure the message is hard.
\paragraph{Play-back attack}
With our protocol, $P_S$ chooses a random value $b$, and then $P_I$ has to compute the value $v$, which is later used by $P_S$ to check whether the authentication is successful. Therefore, previously used values of $v$, $u$ (and the underlying $r$), and $b$ are useless to an attacker. Only $P_I$ is able to create the proper $v$ value for a fresh random $b$.
\paragraph{Man-in-the-middle attack} The goal of this attack is either to change the new message or to gain some information about $\sigma$ at one of the intermediate nodes ($P_M$). A property of the zero-knowledge proof used in our protocol is that knowledge of the $u$, $b$, and $v$ values reveals nothing about $\sigma$. Changing the message is also not possible, since it is protected with the secret value $F(X_i)$, known only to $P_I$ and $P_S$.
\subsection{Performance analysis}
In the proposed system, there are three points at which computational or communication overhead could be significant:
\begin{enumerate}
\item computing the values for the zero-knowledge protocol (equations~\ref{eq1},~\ref{eq2}, and~\ref{eq3}). The computational cost of these equations is similar to that of public-key cryptography. The $u$ value (eq.~\ref{eq1}) can be calculated offline.
\item computing the set of puzzles: this can also be done offline by $P_I$. We assume that the $G$ function is DES or another symmetric cipher, so a single puzzle is quite fast to compute. The amount of computation depends mainly on the number of puzzles, $n$, and is similar to encrypting a message of size $n \cdot |N|$, where $|N|$ is the size of $N$ in bits (because $u < N$).
\item Sending the set of puzzles is the only significant communication overhead. The size of the set of puzzles depends on the required security level and is difficult to estimate without additional assumptions about the computational power available to malicious nodes. Moreover, since breaking all the puzzles should take more time than the transmission of the entire message, the size of the set of puzzles could perhaps depend on the message size (be bounded above by a fraction of the message size, for instance $1\%$).
\end{enumerate}
\subsection{Comparison with PKI}
Let us compare our protocol against the same criticisms as were raised for \emph{PKI} in section 4:
\begin{enumerate}
\item direct communication of $P_I$ and $P_S$ is no longer required
\item a directory or method to obtain the address of $P_I$ by $P_S$ is not necessary
\item communication with the bootstrap may be required during the initialization of a private conversation, depending on the chosen method of communicating the $ID$
\item 3-way exchange of information is not required during conversation initialization, but only for subsequent messages
\item the proposed protocol provides revocable a\-no\-ny\-mi\-ty.
\end{enumerate}
\section{Conclusions}
\label{concl}
Authentication in P2P/ad-hoc systems is surprisingly difficult because nodes often do not know each other's identities before they communicate. In a client-server system, at least the identity of the server is known to the client, which simplifies the use of \emph{PKI} for authentication. In a P2P/ad-hoc system, the use of \emph{PKI} may require direct communication between two nodes to prevent a man-in-the-middle attack, which is difficult to realize in such a system.
We have developed an authentication method that is secured against eavesdropping, man-in-the-middle, and playback attacks in a P2P/ad-hoc system, but does not require direct communication. The proposed method does not introduce significant computational or communication overheads. Also, the proposed method provides revocable anonymity that is not available when \emph{PKI} is used.
The proposed system of authentication with revocable anonymity opens up quite new possibilities for security solutions in P2P/ad-hoc networks. First, it provides anonymity of the operating node with respect to other nodes and any external users, except for the bootstrap. Additionally, the system makes it possible to identify a node's actions when cooperating with the bootstrap. If practically implemented, the system can be used to police malicious nodes trying to violate the rules of a MANET application. The applications of this control range from games in a MANET to the prevention of indecent or malicious messages in MANET chats.
\paragraph{Future work}
The form of anonymous authentication and revocable anonymous authentication should perhaps depend on the particular MANET application. Thus, the first possible extension of the results presented here is a precise analysis of the requirements of selected MANET applications. This problem will be the subject of future research.
Another extension of the presented results is to offer new security services. The first natural proposition is mutual authentication of nodes, then non-repudiation of operations, and finally combinations of all common security services applied to nodes and the routed messages.
\section{Introduction}
Interest in the evolution of extremely metal--poor stars
has been stimulated recently by at least two types
of observing programmes.
First, the detection of very far galaxies at redshifts well beyond 6 (see e.g. Pell\'o et al.~\cite{Pe05})
opens the way to detection of galaxies whose colours
will be dominated by extremely metal--poor stars (Schaerer~\cite{Sc02}; \cite{Sc03}).
Second, as a complement to the observation of the deep Universe,
the detection
of nearby, very metal--poor halo stars provides very interesting clues
to the early chemical evolution of our Galaxy
(Beers et al.~\cite{Be92}; Beers~\cite{Be99}; Christlieb et al. \cite{christ04}; Bessell et al.
\cite{bessel04}; Cayrel et al. \cite{cayr04}; Spite et al.~\cite{Sp05}).
These works have shown
the following
very interesting results:
\begin{itemize}
\item {\it The measured abundances of many elements at very low metallicity present
a very small scatter (Cayrel et al.~\cite{cayr04}).}
At first sight this appears difficult to understand. Indeed,
in the early Universe, stars are
believed to form from the ejecta of
a small number of supernovae (may be only one).
For instance the Argast et al. models (\cite{Ar00}; \cite{Ar02}) predict that for
[Fe/H] $< -3$, type II supernovae enrich the interstellar medium only locally.
Since the chemical composition of supernova
ejecta may differ a lot from case to case, large scatter
of the abundances is expected
at very low metallicity.
For most of the elements, however, this strong scatter is not observed,
at least down to a metallicity of [Fe/H]$\sim$ -4.0
(Cayrel et al. \cite{cayr04}).
This might indicate that, already at this low metallicity, stars are formed from
a well--mixed reservoir composed of ejecta from stars of different initial masses.
\item {\it These observations also show that there is no sign of enrichments
by pair--instability supernovae,
at least down to a metallicity of [Fe/H] equal to -4.} Let us recall that these supernovae have
very massive stars as progenitors, with initial masses between approximately 140 and 260 M$_\odot$
(Barkat et al. \cite{Ba67}; Bond et al. \cite{Bo84}; Heger \& Woosley~\cite{HW02}).
Such massive stars are believed to form only in very metal--poor environments.
At the end of their lifetime,
they are completely destroyed when they explode as a pair--instability supernova. In this way
they may strongly contribute to the enrichment of the primordial interstellar medium
in heavy elements.
The composition of the ejecta of pair--instability supernovae is characterised by
a well marked odd-even effect and a strong zinc deficiency. These two features are not observed
in the abundance pattern of very metal--poor halo stars.
Does this mean that at [Fe/H] equal to -4,
pair--instability supernovae
no longer dominate the chemical evolution of galaxies or that such stars are not formed~?
If formed, could these stars skip the pair instability or have different nucleosynthetic outputs
from those currently predicted by theoretical models~?
\item {\it The N/O ratios observed at the surface of halo stars by Israelian et al.~(\cite{Is04}) and
Spite et al. (\cite{Sp05})
indicate that important amounts of primary nitrogen should be produced by
very metal--poor massive stars
(Chiappini et al.~\cite{Ch05}).} The physical conditions for such important
productions of primary nitrogen by very metal--poor massive stars remain to be found.
\item {\it While most stars at a given [Fe/H] present a great homogeneity in composition, a small group, comprising about 20--25\% of the stars with [Fe/H] below -2.5, shows very large enrichments in carbon.}
These stars are known as C-rich
extremely metal--poor (CEMP) stars. The observed [C/Fe] ratios
are between
$\sim$2 and 4, showing a large scatter.
Other elements, such as nitrogen and oxygen (at least in the few cases
where the abundance of this element could be measured), are also highly
enhanced. Interestingly,
the two most metal--poor stars known up to now, the Christlieb star
or HE 0107-5240, a halo giant with [Fe/H]=-5.3,
and the subgiant or main-sequence star HE 1327-2326 with [Fe/H]=-5.4 (Frebel et al.~\cite{Fr05})
belong in this category.
To explain such high and scattered CNO abundances,
obviously a special process has to be invoked
(see Sect.~6 below).
\end{itemize}
The results outlined above
clearly indicate that new scenarios for the
formation and evolution of massive stars at very low $Z$
need to be explored.
Among the physical ingredients that could open new evolutionary paths
in very metal--poor environments,
rotation certainly appears a very interesting possibility.
First, for metallicities $Z$ between 0.004 and 0.040,
the inclusion of rotation improves the agreement between the models and
observations in many respects by allowing us to reproduce the observed
surface abundances (Heger \& Langer \cite{He00}; Meynet \& Maeder \cite{MMV}),
the ratio of blue--to--red supergiants at low metallicity (Maeder \& Meynet \cite{MMVII}),
the variation with the metallicity of the WR/O ratios
and of the numbers of type Ibc to type II supernovae
(Meynet \& Maeder \cite{MMX}; \cite{MMXI}). Most
likely, stars are also rotating at very low metallicity, and one can
hope that the same physical model assumptions that
improve the physical description of stars at $Z \ge$ 0.004
would also apply to the very low
metallicity domain.
Second, if the effects of rotation are already
quite significant at high metallicity,
one expects that they are even more important at lower metallicity.
For instance, it was shown in previous
works that
the chemical mixing becomes more efficient for lower
metallicity for a given initial mass and velocity
(Maeder \& Meynet \cite{MMVII}; Meynet \& Maeder \cite{MMVIII}).
This comes from the fact that
the gradients of $\Omega$ are much steeper in the lower metallicity
models, so they trigger more efficient shear mixing.
The gradients are steeper because
less angular momentum is transported outwards by the
meridional currents, whose velocity
scales as the inverse of the density in the outer layers
(see the Gratton-\"Opik term in the expression for the meridional velocity in Maeder \& Zahn~\cite{mz98}).
Third, rotation can induce mass loss in two ways.
The first way, paradoxically, is linked to the fact
that very metal--poor
stars are believed to lose little mass by radiatively driven stellar winds.
Indeed, in the radiatively driven wind theory,
the mass loss scales with the metallicity of the outer
stellar layers as $(Z/{\rm Z}_\odot)^{\alpha}$ with $\alpha$ between 0.5 and 0.8
(Kudritzki et al.~\cite{Kud87}; Vink et al.~\cite{vink01}). Thus lowering the metallicity
by a factor 200 000 (as would be the
case for obtaining the metallicity of the Christlieb star)
would thus lower the mass loss rates by a factor 450,
or even by a greater factor if the metallicity dependence becomes stronger
at lower $Z$, as suggested by Kudritzki~(\cite{Ku02}).
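For illustration, the quoted suppression factor follows directly from this scaling (a minimal check, assuming $\alpha = 0.5$):
\begin{verbatim}
Z_ratio = 1.0 / 200000     # metallicity of the Christlieb star vs. solar
alpha = 0.5                # lower end of the quoted exponent range
print(f"{1.0 / Z_ratio**alpha:.0f}")   # 447, i.e. the factor ~450 above
\end{verbatim}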
Now since metal--poor stars lose little mass,
they also lose little angular momentum (if rotating),
so they have a greater chance of
reaching the break-up limit during the Main Sequence phase
(see for instance Fig.~9 in Meynet \& Maeder~\cite{MMVIII}).
At break-up, the outer stellar layers become unbound and
are ejected whatever their metallicity.
The break-up is reached more easily when we take into
account that
massive rotating stars have polar winds as shown by Owocki et al.~(\cite{Ow96})
and Maeder~(\cite{MaIV}). When most of the mass is lost
along the rotational axis, little angular momentum is lost.
Another way for rotation to trigger enhancements of the mass loss
comes from the mixing induced by rotation. In general, rotational mixing
favours the evolution into the red supergiant stage (see Maeder \& Meynet \cite{MMVII}), where mass loss is higher. It also
enhances the metallicity of the surface of the star and, in this way, boosts the radiatively driven
stellar winds (see below).
Thus there are very good reasons for exploring the effects of rotation at very
low metallicity, which we have attempted in this work.
This was also the aim of the recent work by
Marigo et al. (\cite{Ma03}), who compute Pop III massive stellar models with
rotation, assuming solid-body rotation. In this very interesting piece of work,
they mainly study the effects of reaching the break-up limit. However,
since they did not include the rotational mixing of the chemical elements, they
could not explore the effects of rotation on the internal chemical composition of the stars.
Also, solid-body rotation is just the extreme case of internal rotational coupling, one which ignores the physics and timescales of the internal transport.
In the present models, the transport of both the angular momentum and the chemical species are treated
in a consistent way, and, as we shall see, rotational mixing has very important consequences on both
the stellar yields and the mass loss rates.
In Sect.~2, we briefly recall the main physical ingredients of the models.
The evolutions of fast--rotating
massive star models at very low $Z$ are described in Sect. 3.
The evolution of the internal chemical composition is the subject of Sect.~4,
while the ejected masses of various isotopes are presented in Sect.~5.
The case of the CEMP stars is discussed in Sect.~6.
Section~7 summarises the main results and raises a few
questions to be explored in future works.
\section{Physical ingredients}
The computation of our models was done with the Geneva evolution code.
The opacities were taken from Iglesias \& Rogers
(\cite{igl96})
and complemented at low temperatures by the molecular opacities of
Alexander (\url{http://web.physics.twsu.edu/alex/wwwdra.htm}).
The nuclear reaction
rates were based on the NACRE database (Angulo et al.~\cite{ang99}). The treatment of
rotation included the hydrostatic effects described in Meynet \& Maeder (\cite{MMI}) and
the effects of rotation on mass loss rates according to Maeder \& Meynet (\cite{MMVI}).
In particular, we accounted for the wind anisotropies induced by rotation as in
Maeder (\cite{MaIV}). Meridional circulation was implemented according to Maeder \& Zahn (\cite{mz98}), but
including the new $D_{\rm h}$ coefficient as described in Maeder (\cite{M03}).
Roughly speaking, compared to the old $D_{\rm h}$, the new one tends to reduce the size of
the convective core and to allow larger enrichment of the surface in CNO--processed
elements. The reader is referred to these papers for a detailed description of
the effects. The convective instability was treated according to the
Schwarzschild criterion without overshooting. The radiative mass loss rates are from
Kudritzki \& Puls (\cite{kudpul00}) when $\log T_{\rm eff} > 3.95$ and from
de Jager et al.~(\cite{Ja88}) otherwise. The mass loss rates depend
on metallicity as $\dot{M} \sim (Z/Z_{\odot})^{0.5}$, where
$Z$ is the mass fraction of heavy elements at the surface
of the star. As we shall see, this quantity may change during the evolution of the star.
A specific treatment for mass loss was applied at break-up.
At break-up, the mass loss rate adjusts itself in such a way that an
equilibrium is reached between the two following opposite effects. 1) The radial
inflation due to evolution, combined with the growth of the surface velocity due to the
internal coupling by meridional circulation, brings the star to break-up, and thus some
amount of mass at the surface is no longer bound to the star. 2) By removing
the most external layers,
mass loss brings the stellar surface down to a level in the star that
is no longer critical. Thus, at break-up, we should adapt the mass loss rates, in order
to maintain the surface layers at the break-up limit.
In practice, however, since the critical limit contains mathematical
singularities, we considered that during the break-up phase, the mass loss rates should be such
that the model stays near a constant fraction (for example, 0.98) of the limit.
At the end of the MS
phase, the stellar radius inflates so rapidly that meridional circulation is unable to
continue to ensure the internal coupling, and the break-up phase ceases naturally.
In this first exploratory work, we focused our attention on stars with initial masses
of 60 M$_\odot$ and 7 M$_\odot$ in order to gain insight into the
properties of both massive and AGB stars at low $Z$.
The evolution was computed until the end of the core helium burning phase (core carbon burning phase
in the case of the 60 M$_\odot$ rotating model at $Z=10^{-8}$).
Two metallicities were considered for the 60 M$_\odot$ models: $Z=10^{-8}$
and $Z=10^{-5}$. Only this last metallicity was considered for the 7 M$_\odot$ model.
Of course we do not know if stars
with $Z=10^{-8}$ have ever formed; however,
it is not possible at the present time to exclude such a possibility.
Indeed, it might be that the first stellar generations produced only very small amounts of heavy elements, due to the strong fallback of ejected material onto black holes at the end of their lifetimes.
Moreover, as we shall see, the behaviours of the $Z=10^{-5}$ and $10^{-8}$ massive star models are qualitatively similar, indicating that the evolutionary scenarios explored here might apply to a
broad range of initial metallicities.
The initial mixture of heavy elements was taken to be the same as the one used to compute the opacity tables (Iglesias \& Rogers
\cite{igl96}, Weiss alpha-enhanced element mixture).
The initial
composition for models at $Z=10^{-8}$ is given in Table~\ref{tbl-0}.
The models at $Z=10^{-5}$ have the same initial mixture of heavy elements.
More precisely, the mass fractions for all the isotopes heavier than $^{4}$He
were multiplied by $10^3$ (=$10^{-5}/10^{-8}$).
\begin{table}
\caption{Initial abundances in mass fraction for models at
$Z=10^{-8}$.} \label{tbl-0}
\begin{center}\scriptsize
\begin{tabular}{cc}
\hline
& \\
Element & Initial abundance \\
& \\
\hline
& \\
H & 0.75999996 \\
$^3$He & 0.00002554 \\
$^4$He & 0.23997448 \\
$^{12}$C & 7.5500e-10 \\
$^{13}$C & 0.1000e-10 \\
$^{14}$N & 2.3358e-10 \\
$^{15}$N & 0.0092e-10 \\
$^{16}$O & 67.100e-10 \\
$^{17}$O & 0.0300e-10 \\
$^{18}$O & 0.1500e-10 \\
$^{19}$F & 0.0020e-10 \\
$^{20}$Ne & 7.8368e-10 \\
$^{21}$Ne & 0.0200e-10 \\
$^{22}$Ne & 0.6306e-10 \\
$^{23}$Na & 0.0882e-10 \\
$^{24}$Mg & 3.2474e-10 \\
$^{25}$Mg & 0.4268e-10 \\
$^{26}$Mg & 0.4897e-10 \\
$^{27}$Al & 0.1400e-10 \\
$^{28}$Si & 3.2800e-10 \\
$^{56}$Fe & 3.1675e-10 \\
& \\
\hline
& \\
\end{tabular}
\end{center}
\end{table}
Nothing is known about the rotational velocities of such stars.
However, there are
some indirect indications that stars at lower $Z$ could have
higher initial rotational velocities:
1) Realistic simulations of the formation of the first stars in the Universe
show that the problem of the dissipation of the angular momentum is
more severe at very low $Z$ than at the solar $Z$. Thus
these stars might begin their evolution
with a higher amount of angular momentum (Abel et al.~\cite{Ab02}).
2) There are some observational hints that
the distribution of initial rotation might
contain more fast rotators at lower $Z$ (Maeder et al.~\cite{MG99}).
3) Even if stars begin their life on the ZAMS with the same total amount of angular momentum at all metallicities, the stars at lower metallicity rotate faster as a consequence of their smaller radii.
The three arguments listed above would favour the choice
of a higher value for the initial rotational velocity
than those adopted for solar models. To choose this value we proceeded
in the following way. First we supposed that the stars begin their
evolution on the ZAMS with approximately the same angular momentum
content, whatever the metallicity. At solar metallicity, observation provides
values for the mean observed rotational velocity on the MS phase
(around 200 km s$^{-1}$ for OB stars). Stellar models allowed us to estimate
the initial angular momentum required to achieve such values
(around $2.2$--$2.5\times10^{53}$~g~cm$^2$~s$^{-1}$). Adopting
such an initial value of the angular momentum, we found that
a 60 M$_\odot$ stellar model at $Z = 10^{-8 }$ has
a velocity on the ZAMS of 800 km s$^{-1}$. This is the value
of the initial velocity we adopt in the present work.
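The order of magnitude of this value can be recovered with a crude moment-of-inertia estimate; the gyration factor and the compact low-$Z$ ZAMS radius below are illustrative assumptions of ours, not quantities taken from the models:
\begin{verbatim}
Msun, Rsun = 1.989e33, 6.96e10    # g, cm
J_tot = 2.4e53                    # g cm^2 s^-1, adopted angular momentum
M = 60 * Msun
R = 7 * Rsun                      # assumed ZAMS radius at very low Z
k2 = 0.07                         # assumed gyration-radius squared
v_eq = J_tot / (k2 * M * R) / 1e5 # km/s, from J ~ k2 * M * R * v_eq
print(round(v_eq))                # ~590: same order as the adopted 800
\end{verbatim}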
\section{Evolution of a massive rotating star at very low metallicity}
\subsection{Rotation and mass loss during the main sequence phase}
Figure~\ref{dhrm8} shows the evolutionary tracks during the main sequence (MS) phase
for the 60 M$_\odot$ stellar models at $Z=10^{-8}$. Table~\ref{tbl-1} presents some properties of the models
at the end of the core H- and He-burning phases.
From Fig.~\ref{dhrm8}, we see that
rotation produces a small shift of the tracks
toward lower luminosities and $T_{\rm eff}$. This effect is due to both
atmospheric distortions (note that surface--averaged effective temperatures
are plotted in Fig.~\ref{dhrm8} as explained in Meynet \& Maeder~\cite{MMV})
and to the lowering of the effective gravity
(see e.g. Kippenhahn and Thomas \cite{KippTh70}; Maeder and Peytremann \cite{MP70};
Collins and Sonneborn \cite{co77}). The MS lifetime of the rotating model is enhanced
by 11\%.
These results show that rotation does not
affect the UV outputs of very metal--poor
massive stars much (the UV outputs of the first
massive star generations might contribute a lot to the reionization of the early Universe,
see {\it e.g.} Madau \cite{Mad03}).
Only if a significant fraction of primordial stars would rotate
so fast that they follow the path of homogeneous evolution (Maeder~\cite{Ma87}), could rotation
increase the ionizing power. In that case, the star would remain in the blue part of the HR diagram
and would have a much longer lifetime.
Figure~\ref{ooc} shows the evolution of the ratio $\Omega/\Omega_{\rm crit}$
at the surface during the MS phase. At $Z=10^{-8}$, the model with $\upsilon_{\rm ini}=800$ km~s$^{-1}$
reaches the break-up limit when the mass fraction of hydrogen at the centre $X_{\rm c} \simeq$~0.40.
The star stays at
break-up for the remainder of its MS life with an enhanced mass loss rate.
As a consequence, the model ends its MS life with 57.6~M$_\odot$, having lost 4\% of
its initial mass.
Although the star stays in the vicinity of the break-up limit during an important part of its MS lifetime, it does not lose very large amounts of mass, because only the outermost layers of the star are above the break-up limit and are ejected. These layers have low density and thus contain little mass.
A model with the same initial velocity, but with a metallicity three orders
of magnitude higher, reaches the break--up limit very early in the MS phase, when $X_{\rm c} \simeq$~0.56.
This comes from the fact that when the metallicity increases, a given value of the initial
velocity corresponds to a higher initial value
of the $\upsilon_{\rm ini}/\upsilon_{\rm crit}$ ratio.
The model at $Z=10^{-5}$
ends its MS life with 53.8~M$_\odot$, having lost 10\% of
its initial mass.
\begin{table*}
\caption{Properties of the stellar models at the end of the H-
and He-burning phases.
$M_{\rm ini}$ is the initial mass, $Z$ the initial metallicity,
$v_{\rm ini}$ the initial velocity, $\overline{v}$ the
mean equatorial rotational velocity during the MS phase defined as in Meynet \& Maeder~(\cite{MMV}),
$t_{\rm H}$ the H--burning lifetimes, $M$ the actual mass of the star, $v$ the actual rotational
velocity at the stage considered, $Y_{\rm s}$
the helium surface abundance in mass fraction. N/C and N/O are the ratios of nitrogen to carbon,
respectively, of nitrogen to oxygen at the surface of stars in mass fraction; C, N, O are the abundances
of carbon, nitrogen, and oxygen at the surface in mass fractions.
The numbers in parentheses indicate the power of ten, {\it i.e.}, 7.54(-10)=7.54 $\times 10^{-10}$.} \label{tbl-1}
\begin{center}\scriptsize
\begin{tabular}{cccc|cccccc|ccccccc}
\hline
& & & & & & & & & & & & & & & & \\
\multicolumn{4}{c|}{ } & \multicolumn{6}{|c|}{End of H--burning} &\multicolumn{7}{|c}{End of He--burning} \\
& & & & & & & & & & & & & & & & \\
$M_{\rm ini}$ & $Z$ & $v_{\rm ini}$ & $\overline{v}$ & $t_{\rm H}$ & $M$ & $v$ & $Y_{\rm s}$ & N/C & N/O & $t_{\rm He}$ & $M$ & $v$ & $Y_{\rm s}$ & C & N & O \\
M$_\odot$ & & ${\rm km} \over {\rm s}$ & ${\rm km} \over {\rm s}$ & Myr & M$_\odot$ & ${\rm km} \over {\rm s}$ & & & &
Myr & M$_\odot$ & ${\rm km} \over {\rm s}$ & & & & \\
& & & & & & & & & & & & & & & & \\
\hline
\hline
\multicolumn{4}{c|}{ } & \multicolumn{6}{|c|}{ } &\multicolumn{7}{|c}{ } \\
60 & 10$^{-8}$ & 0 & 0 & 3.605 & 59.817 & 0 & 0.24 & 0.31 & 0.03 & 0.292 & 59.726 & 0 & 0.24 & 7.54(-10) & 2.34(-10) & 6.71(-9) \\
60 & 10$^{-8}$ & 800 & 719 & 4.004 & 57.624 & 591 & 0.27 & 103 & 9.77 & 0.522 & 23.988 & 0.02 & 0.76 & 1.97(-4) & 1.02(-2) & 2.85(-4) \\
60 & 10$^{-5}$ & 800 & 636 & 4.441 & 53.846 & 567 & 0.34 & 40 & 0.82 & 0.544 & 37.280 & 0.57 & 0.80 & 5.06(-5) & 2.07(-3) & 4.24(-5) \\
& & & & & & & & & & & & & & & & \\
\hline
\multicolumn{17}{ }{ } \\
\end{tabular}
\end{center}
\end{table*}
In
order to discuss the effects of a change of rotation, it is
interesting to compare this last result
with the one obtained in Meynet \& Maeder~(\cite{MMVIII}) for a 60~M$_\odot$ model at $Z=~10^{-5}$ with
$\upsilon_{\rm ini}$=~300 km s$^{-1}$.
This last model reaches the break-up velocity much later, only when $X_{\rm c}
\simeq$~0.01. At $Z=10^{-5}$, the velocity of 300 km s$^{-1}$ thus appears to be roughly the minimum initial rotation allowing a 60 M$_\odot$ star to reach the break-up limit during its MS phase. This model ends its MS life with 59.7~M$_\odot$, having lost only 0.5\% of its initial mass.
Note that the 2002 models were computed with
slightly different physical ingredients than those used to compute the models
discussed in this paper (different expression
for $D_{\rm h}$ and for the mass loss rates).
However, at these very low metallicities, radiatively driven winds remain quite modest, and
the transport of the angular
momentum, mainly driven by the meridional circulation, does not depend much
on the expression of $D_{\rm h}$.
During the MS phase, the surface of the rotating stars is
enriched in nitrogen and depleted in carbon as a result of rotational mixing.
Figure~\ref{nc} shows that
the N/C ratios are enhanced by more than two orders of magnitude
at the end of the H-burning phase.
More precisely, at the surface of the $Z=10^{-8}$ model,
nitrogen is enhanced by a factor 27 and carbon decreased
by a factor 12. All other physical ingredients
being the same, one also sees that the model with
the lowest metallicity is also the one with the greatest surface enrichments. This agrees well with the trend already found in our previous works (Maeder \& Meynet~\cite{MMVII}; Meynet \& Maeder \cite{MMVIII}), which results from the steep gradients of angular velocity that build up in very metal--poor stars and favour shear mixing.
Let us emphasise, however, that during the MS phase,
the arrival of CNO--processed material
at the surface
does not change the
metallicity of the outer layers.
Indeed, rotational mixing brings nitrogen to the surface
but depletes carbon and oxygen, keeping the sum of CNO elements constant,
and the metallicity as well. Thus during the MS phase,
the enhanced mass loss rates
undergone by the rotating models are entirely due
to the mechanical effect of the centrifugal force.
As we shall see, this is no longer the case
during the core He-burning phase.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[angle=-90]{3070fig1.eps}}
\caption{Evolutionary tracks in the HR diagram for 60 M$_\odot$ stellar models
at $Z=10^{-8}$.}
\label{dhrm8}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[angle=-90]{3070fig2.eps}}
\caption{Evolution of $\Omega/\Omega_{\rm crit}$ at the surface of 60~M$_\odot$ models at
$Z=10^{-8}$ (continuous line) and $Z=10^{-5}$ (upper dashed line) with $\upsilon_{\rm
ini}$=~800 km s$^{-1}$. The case of the 60 M$_\odot$ model
at $Z=10^{-5}$ with $\upsilon_{\rm ini}$=~300 km s$^{-1}$
from Meynet \& Maeder~(\cite{MMVIII}) is also shown (lower dashed line).}
\label{ooc}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[angle=-90]{3070fig3.eps}}
\caption{Evolution, as a function of $\log T_{\rm eff}$, of the surface N/C ratio (expressed in dex) relative to its initial value. N and C are the abundances of nitrogen and carbon at the surface of the star.}
\label{nc}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[angle=-90]{3070fig4.eps}}
\caption{Evolution of $\log T_{\rm eff}$ as a function of $Y_{\rm c}$, the mass fraction of
$^4$He at the centre, for a non-rotating (dashed line) and rotating (continuous line)
60~M$_\odot$ model at $Z=~10^{-8}$.}
\label{rgb}
\end{figure}
\subsection{Rotation and mass loss during the post main-sequence phases}
Rotational mixing that occurs during the MS phase and
still continues to be active during the core He-burning
phase deeply modifies the internal chemical composition
of the star. This has important consequences during the
post-MS phases and deeply changes the evolution
of the rotating models with respect to their non--rotating
counterparts. Among the most striking differences, one notes the following:
\begin{itemize}
\item 1) Rotation favours redwards evolution in the
HR diagram as was already shown by
Maeder \& Meynet (\cite{MMVII}), and as illustrated
in Fig.~\ref{rgb}. One sees that
the non-rotating model remains on the blue side during
the whole core He-burning phase, while
the 800 km s$^{-1}$ model at $Z$ = 10$^{-8}$
starts its journey toward the red side of the HR diagram early in the core helium burning
stage, when $Y_{\rm c} \simeq 0.67$
($Y_{\rm c}$ is the mass fraction of helium
at the centre of the star model). The same is true for the corresponding model at
$Z$ = 10$^{-5}$. Let us recall that this behaviour is linked to
the rapid disappearance of the intermediate convective
zone associated to the H-burning shell (see Fig.~\ref{travers} and
Maeder \& Meynet~\cite{MMVII}).
\item 2) Redwards evolution enhances the mass loss.
In the cases of our 60 M$_\odot$ stellar models,
it brings the stars near the Humphreys--Davidson limit,
{\it i.e.}, near $\log L/{\rm L}_\odot=6$ and $T_{\rm eff}$ in a broad range around $10^4$ K.
Near this limit, the mass loss rates (here from de Jager et al.~\cite{Ja88}) become very important.
For instance,
the model represented
in the left panel of Fig.~\ref{travers} ($\log L/{\rm L}_\odot= 6.129,\ \log T_{\rm eff}= 4.243$) is still far
to the left hand side of the Humphreys-Davidson limit. Its mass loss rate
is
$\log (-\dot M)=-5.467$, where $\dot M$ is expressed in M$_\odot$ per year.
The model in the right panel ($\log L/{\rm L}_\odot= 6.145,\ \log T_{\rm eff}= 3.853$) is in the vicinity of the
Humphreys-Davidson limit. Its mass loss rate is equal to -4.616, {\it i.e.}, more than seven times higher.
During this
transition, the overall metallicity at the surface
does not change and remains equal to the initial
one (here $Z_{\rm ini}=0.00001$). We observe a similar
transition in the case of the $Z=10^{-8}$ stellar model.
\item 3) During the core He-burning phase, primary nitrogen is synthesized in the H-burning shell, due
to the rotational diffusion of carbon and oxygen produced in the helium core into the H-burning shell
(Meynet \& Maeder \cite{MMVIII}). This is illustrated well in
Fig.~\ref{travers} for the $Z=10^{-5}$ rotating model and in Fig.~\ref{abond}
for the model at $Z=10^{-8}$.
\item 4) In contrast to what happens during the MS phase,
rotational mixing during the core He-burning phase
induces large changes in the surface metallicity.
These changes occur only at the end of the core He-burning phase,
although the conditions for their apparition result from the mixing
that occurs during the whole core He-burning phase. Indeed,
rotational mixing progressively enriches the outer radiative zone
in CNO elements, thus slowly enhancing its opacity.
When, in the $Z$ = 10$^{-8}$ stellar model, the abundance of nitrogen in the outer layers becomes approximately
$10^{-8}$ in mass fraction ({\it i.e.}, has increased
by two orders of magnitude with respect to the initial value),
these outer layers become convective. The outer convective zone
then rapidly deepens in mass and
dredges up newly synthesized elements to the surface.
From this stage onwards,
the surface metallicity increases in a spectacular way,
as can be seen in Fig.~\ref{abond}. For instance, the rotating
60 M$_\odot$ at $Z=10^{-8}$ has a surface metallicity of $10^{-2}$ at the end of its lifetime,
{\it i.e.}, similar
to that of the Large Magellanic Cloud !
\item 5) The consequence of such large surface enrichments
on the mass loss rates remains to be studied in detail
using models of stellar winds with the appropriate physical
characteristics (position in the HR diagram and chemical composition).
In the absence of such sophisticated models, we applied
the usual rule here, namely $\dot M(Z)=(Z/Z_\odot)^{1/2}\dot M(Z_\odot)$, where $Z$
is the metallicity of the outer layers.
With this prescription, the surface enhancement of the metallicity
is responsible for the large decrease in the stellar mass
that can be seen in Fig.~\ref{abond} (a numerical illustration of this boost is given after this list).
\item 6) During the late stages of the core helium--burning phase,
as a result of mass loss and mixing, the star
may evolve along a blue loop in the HR diagram (see Fig.~\ref{rgb}).
When the star evolves bluewards,
the global stellar contraction brings the outer convective zone,
which rotates like a solid-body shell, to break-up
(Heger \& Langer \cite{He98}).
At this stage of the evolution, the luminosity is not far from
the Eddington limit and the star may reach the $\Omega\Gamma$-limit
(Maeder \& Meynet \cite{MMVI}). This multiplies
the mass loss rates by very large factors.
\item 7) During the last 24 000 years of its lifetime, the model
presents abundance patterns characteristic of WNL stars at its surface.
\item 8) As a result of mixing and mass loss, the duration
of the core He-burning phase
is much longer in the rotating model. The present
60 M$_\odot$ model with $\upsilon_{\rm ini}=800$ km s$^{-1}$ at $Z=10^{-8}$ has
a helium--burning lifetime that is $\sim$80\% longer than the corresponding lifetime of the
non-rotating model (see Sect.~4 below for more explanations).
\end{itemize}
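As a numerical illustration of point 5 above, the wind enhancement implied by the surface enrichment follows directly from the adopted scaling (the solar normalisation cancels in the ratio):
\begin{verbatim}
Z_initial, Z_enriched = 1e-8, 1e-2   # surface Z before/after dredge-up
boost = (Z_enriched / Z_initial) ** 0.5
print(boost)   # 1000.0: the winds become about a thousand times stronger
\end{verbatim}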
The different effects described above are all due to the mixing induced by rotation and
they all tend to enhance the quantity of mass lost by stellar winds.
The rotating 60~M$_\odot$ at $Z=10^{-8}$ loses about 36 M$_\odot$ during
its lifetime. About 2 M$_\odot$ are lost due to break-up during the MS phase,
$\sim$ 3 M$_\odot$ are lost when the star is
in the red part of the HR diagram (with surface metallicity equal to the initial one), 27 M$_\odot$ are lost
due to the effect of the enhancement of the surface metallicity, the remaining 4 M$_\odot$ are lost
when the star evolves along the blue loop and reaches the $\Omega\Gamma$-limit.
One sees that, by far, the most important effect is due to the increase in the surface metallicity.
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics[angle=-90]{3070fig5.eps}}
\caption{Chemical composition of a 60 M$_\odot$ stellar model at $Z=10^{-5}$ with
$\upsilon_{\rm ini}$= 800 km s$^{-1}$ when it evolves from the blue to the red
part of the HR diagram. The model shown in the left panel has
$\log L/{\rm L}_\odot = 6.129$ and $\log T_{\rm eff}=4.243$; in the middle
panel, it has $\log L/{\rm L}_\odot = 6.130$ and $\log T_{\rm eff}=4.047$; in the
right panel, it has $\log L/{\rm L}_\odot = 6.145$ and $\log T_{\rm eff}=3.853$.}
\label{travers}
\end{figure*}
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics[angle=-90]{3070fig6.eps}}
\caption{Variations of the abundances (in mass fraction) as a function of the Lagrangian mass
within a 60~M$_\odot$ star with
$\upsilon_{\rm ini}$=~800~km~s$^{-1}$ and $Z=10^{-8}$. The four panels show the chemical
composition at four different stages at the end of the core He-burning phase:
in panel {\bf a)} the model has a mass fraction of helium at the centre, $Y_{\rm c}$=~0.11
and an actual mass $M$=~54.8~M$_\odot$ - {\bf b)} $Y_{\rm c}$=~0.06,
$M$=~48.3~M$_\odot$ - {\bf c)} $Y_{\rm c}$=~0.04, $M$=~31.5~M$_\odot$
- {\bf d)} End of the core C-burning phase, $M$=~23.8~M$_\odot$. The actual surface metallicity $Z_{\rm surf}$
is indicated in each panel.}
\label{abond}
\end{figure*}
Paradoxically, the corresponding model at higher metallicity ($Z=10^{-5}$) loses less mass (a little less than 40\% of its total mass). This can be understood as follows. First, less primary nitrogen is synthesized, due to the slightly less efficient chemical mixing when the metallicity increases; thus the surface does not become as metal rich as in the model at $Z=10^{-8}$.
Second and for the same reason as above, the outer convective zone does not deepen as far
as in the more metal--poor model.
These two factors imply that the maximum
surface metallicity reached in this model,
which is equal to 0.0025, is about a factor 4
below the one reached by the $Z=10^{-8}$ model.
Finally, the blue loop does not extend that far into the
blue side, and the surface velocity always remains well below
the break-up limit during the whole blueward excursion.
In order to investigate to what extent the behaviour described above depends
on the physical ingredients of the model, we
compare the present results with those of a rotating model ($\upsilon_{\rm ini}$= 800 km s$^{-1}$) of
a 60 M$_\odot$ star at $Z=10^{-5}$
with a different prescription for the mass loss rates
(Vink et al.~\cite{vink00}, \cite{vink01} instead of Kudritzki \& Puls~\cite{kudpul00}),
with the Ledoux criterion instead of the Schwarzschild one for determining the size of the convective core,
with a core overshoot of $\alpha=0.2~H_p$
and the old prescription for the horizontal diffusion coefficient $D_{\rm h}$.
This model is described in Meynet et al.~(\cite{Meynetal05}).
In this case, the outer convective zone
deepens farther into the stellar interior and thus produces a greater enhancement of the surface
metallicity (the same order as the one we obtained in the present $Z=10^{-8}$ 60 M$_\odot$ model).
Higher enhancements of the surface metallicity then induces greater mass loss by stellar winds.
More important than these differences, however, is the fact that the results are qualitatively similar to those obtained with our previous models.
In particular,
the mechanism of surface metallicity enhancement
occurs in both models and appears to be a robust process.
\subsection{Do very metal--poor, very massive stars end their lives as pair--instability supernovae~?}
Might the important mass loss undergone by the rotating models prevent
the most massive stars from going through pair instability~?
According to Heger \& Woosley (\cite{HW02}), progenitors of pair--instability supernovae
have helium core masses
between $\sim$64 and 133 M$_\odot$. This corresponds to initial masses between about 140 and 260 M$_\odot$.
Thus the question is whether
stars with initial masses above 140 M$_\odot$ can lose a sufficient amount of mass
to have a helium core that is less than about 64 M$_\odot$
at the end of the core He-burning phase.
From the values quoted above, it would imply the loss of more than
(140-64)=76 M$_\odot$, which represents about 54\%
of the initial stellar mass.
From the computations performed here, one can expect that such a scenario is possible, given that already a 60 M$_\odot$ star loses more than 60\% of its initial mass. However,
more extensive computations are needed to
check whether the rotational mass loss could indeed prevent the most massive stars
from going through this pair instability. Were this the case, it would explain why
the nucleosynthetic signature of pair--instability supernovae is not observed
in the abundance pattern of the most metal--poor halo stars observed up to now.
At least this mechanism could restrain the mass range for the progenitors
of pair--instability supernovae, pushing the minimum initial mass needed for such a scenario to occur to higher values.
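The mass budget behind this argument is simple arithmetic (assuming, as above, that the final mass must be brought below the maximum helium core mass):
\begin{verbatim}
M_ini = 140.0      # Msun, lower edge of the pair-instability range
M_He_max = 64.0    # Msun, largest He core avoiding pair instability
needed = M_ini - M_He_max
print(f"{needed:.0f} Msun, i.e. {100 * needed / M_ini:.0f}%")  # 76, 54%
\end{verbatim}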
Let us also note that the luminosity of the star comes
nearer to the Eddington limit
when the initial mass increases. When rotating, such stars will then encounter
the $\Omega\Gamma$-limit (Maeder \& Meynet~\cite{MMVI}) and very likely undergo strong mass losses.
\section{Evolution of the interior chemical composition}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{3070fig7.eps}}
\caption{Same as Fig.~\ref{abond} for a 60~M$_\odot$ star with
$\upsilon_{\rm ini}$=~0~km s$^{-1}$ and $Z=10^{-8}$. The four panels show the chemical
composition at four different stages at the end of the core He-burning phase:
Panel {\bf a)} $Y_{\rm c}$=~0.12, $M$=~59.74~M$_\odot$ - {\bf b)} $Y_{\rm c}$=~0.06,
$M$=~59.74~M$_\odot$ - {\bf c)} $Y_{\rm c}$=~0.03, $M$=~59.73~M$_\odot$
- {\bf d)} $Y_{\rm c}$=~0.0, $M$=~59.73~M$_\odot$. The surface
metallicity is equal to $10^{-8}$ at the four evolutionary stages.}
\label{abond2}
\end{figure}
As discussed above, rotational mixing changes the chemical
composition of stellar interiors in an important way.
This is illustrated well by Figs.~\ref{abond} and \ref{abond2}, which show
the internal chemical composition of our rotating and
non-rotating 60 M$_\odot$ stellar models at four different stages
at the end of the core He--burning phase (models at $Z=10^{-8}$).
Comparing panels {\bf a} of Figs.~\ref{abond} and \ref{abond2}, one sees that a large
convective shell is associated to the H-burning shell in the rotating
model, while such a shell is absent in the non-rotating model.
This contrasts with what happens at the
beginning of the core He-burning phase, where the
intermediate convective zone associated to the H-burning shell was absent in the rotating model (or at least much smaller),
while in the non-rotating model, the intermediate convective zone was well--developed (see above).
Why is there this difference between the beginning and the end of the core He-burning phase~?
At the beginning of the core He-burning phase, the disappearance of the intermediate
convective shell was a
consequence of the rotational mixing that operated during
the core H-burning phase and that brought some freshly synthesized helium
into this region.
More helium in this region means less hydrogen and also some decrease
in the opacity, both of which inhibit the development of convection
(cf. Maeder \& Meynet~\cite{MMVII}).
Now, at the end of the core He-burning phase, we have
He-burning products that are brought into the H-burning shell.
These products, mainly carbon and oxygen, act as catalysts
for the CNO cycle and make the H-burning shell more active, thus
favouring convection. This mechanism enriches
this zone not only in primary $^{14}$N, but also in primary $^{13}$C.
Looking at the H-rich envelope in the non-rotating model at the same stage,
one sees that all these elements have much lower abundances. Actually they fall
well below the minimum ordinate of the figure.
If one now compares the chemical composition of the
CO cores when $Y_c\sim 0.11$ (see panels {\bf a} of Figs.~\ref{abond} and \ref{abond2}),
one notes the following points.
First, the abundances in $^{12}$C, $^{16}$O and $^{20}$Ne are approximately equal
in both the rotating and non-rotating models. This comes from the fact
that the CO core masses are approximately the same in both models.
On the other hand in the rotating
model, the abundance of $^{22}$Ne is greatly enhanced, as are the abundances of $^{25}$Mg and $^{26}$Mg. The abundance of $^{22}$Ne results from the conversion of primary $^{14}$N that has diffused into the He-burning core; the resulting high abundance of $^{22}$Ne is thus also of primary origin.
The isotopes of magnesium are produced by the reactions $^{22}$Ne($\alpha$, $\gamma$)$^{26}$Mg
and $^{22}$Ne($\alpha$, n)$^{25}$Mg.
Their high abundances also result from primary nitrogen diffusion into the He-core.
The CO--core mass (cf. Table~\ref{tbl-2}) in the rotating model is slightly smaller than in the non-rotating one.
This contrasts with what
happens at higher metallicity, where rotation
tends to increase the CO--core mass (see Hirschi et al. \cite{Hi04}). Again, this
results from the mechanism of primary nitrogen production,
which induces a large convective zone associated to the H-burning shell, which then
prevents this shell from migrating outwards and, thus, the CO core from growing in mass.
Let us recall that in rotating models at solar metallicity, there is no primary nitrogen
production due to the less efficient mixing at higher metallicity (see
Meynet \& Maeder~\cite{MMVIII}); thus there is no increase in the H-burning shell
activity.
In panel {\bf b} of Fig.~\ref{abond}, as explained in the previous section,
one sees the outer convective zone extending inwards
and bringing CNO elements to the
surface. In panels {\bf c} and {\bf d}, mass loss efficiently removes these outer layers.
At the corresponding stages
in the non-rotating model, the outer envelope is not enriched in heavy elements and
keeps its mass.
At the end of the He-burning phase (see panels {\bf d}), the abundance of $^{12}$C
is significantly smaller in the rotating model
than in the non-rotating one. At the same time, the abundances of $^{20}$Ne and $^{24}$Mg
are significantly greater. This is a consequence of helium diffusion into the He-core
at the end of the He-burning phase. Let us recall that
$^{12}$C is destroyed by alpha capture (to produce $^{16}$O), while $^{20}$Ne and $^{24}$Mg are produced by
alpha captures on, respectively, $^{16}$O and $^{20}$Ne. Concerning the other isotopes of neon
and magnesium, one sees that in the rotating models, much higher
abundances of $^{25}$Mg and $^{26}$Mg are reached due to the transformation of the $^{22}$Ne at the end of the
core helium--burning phase. The neutrons liberated by the $^{22}$Ne($\alpha$,n)$^{25}$Mg reaction can be
captured by iron peak elements, producing some amount of s-process elements (see e.g. Baraffe et al.~\cite{Ba92}).
In view of the important changes to the interior chemical composition due to rotation, there is a good chance
that the s-process in the present rotating massive star models is quite
different from the one obtained in non-rotating models. This will be examined in later papers.
\section{Chemical composition of the winds and of the supernova ejecta}
\subsection{Wind composition}
Let us first discuss the chemical composition of the winds. The total mass lost, as well as the quantities
of various chemical elements ejected by stellar winds, are given in Tables~\ref{tbl-2} \& \ref{tbl-3}.
The models at $Z=10^{-5}$ were computed with an extended nuclear reaction network including
the Ne-Na and Mg-Al chains, which is why in Table~\ref{tbl-3} the wind--ejected masses of these elements can
be indicated. The stellar yields - {\it i.e.}, the mass of
an isotope newly synthesized and ejected by the star - can be obtained by subtracting
the mass of the isotope
initially present in that part of the star\footnote{This quantity may be obtained
by multiplying the initial abundance
of the isotope considered (given in Table~\ref{tbl-0}) by
$m_{\rm ej}$. }
from the ejected masses
given in Tables~\ref{tbl-2} \& \ref{tbl-3}.
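As a concrete illustration of this bookkeeping, the following minimal Python sketch computes such a net yield. The initial helium mass fraction used here is an assumed, near-primordial placeholder; the actual initial abundances are those of Table~\ref{tbl-0}.

```python
# Net stellar yield of an isotope: the newly synthesised mass ejected by the
# star, i.e. the total ejected mass m(X_i) minus the mass initially present
# in the ejected layers, X_ini * m_ej (cf. the footnote above).
def net_yield(m_x_ejected, x_initial, m_ejected):
    """All masses in solar masses; x_initial is a mass fraction."""
    return m_x_ejected - x_initial * m_ejected

# Example: 4He in the SN+WIND ejecta of the rotating Z=1e-8 model (Table 2).
# The initial helium mass fraction below is an assumed, near-primordial
# placeholder; the actual value is listed in Table tbl-0 of the paper.
m_ej = 54.44    # total ejected mass [Msun]
m_he4 = 23.85   # ejected 4He mass   [Msun]
y_ini = 0.248   # assumed initial 4He mass fraction

print(f"net 4He yield ~ {net_yield(m_he4, y_ini, m_ej):.1f} Msun")
# For the metals at Z=1e-8 the initial term is negligible, so the yields
# are essentially equal to the ejected masses themselves.
```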
According to Table~\ref{tbl-2}, the non-rotating model ejects
only half a percent of its total mass through stellar winds, which is completely negligible. Moreover,
this material has exactly the same chemical composition as the protostellar cloud from which
the star formed. If,
at the end of its lifetime, all the stellar material is swallowed by the
black hole resulting from the star's collapse, the nucleosynthetic contribution of such stars would be zero.
In contrast, the rotating models lose more than 60\% of their initial mass through stellar winds.
This material is strongly enriched in CNO elements. Even if all the final stellar mass is
engulfed into a black hole at the end of
the evolution, the nucleosynthetic contribution of such stars
remains quite significant.
As already noted above, the corresponding model at $Z=10^{-5}$ loses less mass by stellar winds (see Table~\ref{tbl-3}).
However, the amounts of mass lost remain large and they present strong
enrichments in CNO elements, as in the case of the $Z=10^{-8}$ rotating model. Also $^{23}$Na and $^{27}$Al are somewhat
enhanced in the wind material.
Other striking differences between the rotating
and non-rotating models concern the $^{12}$C/$^{13}$C, N/C, and N/O ratios (see Tables~\ref{tbl-2} \& \ref{tbl-3}).
The wind of the non-rotating model shows solar ratios ($^{12}$C/$^{13}$C=73, N/C=0.31, and N/O=0.03 in mass fractions).
The wind of rotating models is characterised by very low $^{12}$C/$^{13}$C ratios, around 4 - 5
(close to the equilibrium value of the CN cycle) and by very high
N/C (between about 3 and 40) and N/O ratios (between 1 and 36). Thus wind material presents the signature of heavily CNO--processed material.
\subsection{Total ejecta composition (wind and supernova ejecta)}
In order to estimate the quantity of mass lost at the time of the supernova explosion (if any), it is
necessary to know the mass of the remnant.
This quantity is estimated with the relation of Arnett (\cite{Ar91})
between the mass of the remnant and the mass of the carbon-oxygen core.
The masses of the different elements ejected are then simply obtained by integrating their
abundance in the final model between $m_{\rm rem}$ (see Tables~\ref{tbl-2} \& \ref{tbl-3}) and the surface.
Since the evolution of the present models was stopped before the presupernova stage was reached,
the masses of $^{12}$C and $^{16}$O obtained here
might still be somewhat modified by the more advanced nuclear phases.
How does the contribution of the two models (rotating and non-rotating) at $Z=10^{-8}$ compare when both the wind
and the supernova contribute to the ejection of the stellar material?
First, one sees that the total mass ejected (through winds and supernova explosion)
is very similar (on the order of 54--55 M$_\odot$),
due to the fact that the two models have similar CO core masses. Second,
one sees that
the amount of $^{4}$He ejected by the rotating model is slightly higher, whereas the amount of $^{12}$C is
lower due to the effect discussed above ($\alpha$-captures at the
end of the core He-burning phase). The quantity of $^{16}$O ejected is similar in both models.
Third,
the most important differences between the rotating and
non-rotating models
occur for $^{13}$C, $^{14}$N, $^{17}$O, and $^{18}$O. The abundances
of these isotopes are increased by factors between 10$^4$-10$^7$ in the ejecta of the rotating model.
The first three isotopes are
produced in the H-burning shell (CNO cycle) and are mainly ejected by the winds,
while the last one, produced at the interface between the CO-core and the He-burning shell, is ejected
at the time of the supernova explosion.
Fourth, rotation also deeply affects the ratios of light elements in the ejected material (see Tables~\ref{tbl-2} \& \ref{tbl-3}).
The effects of rotation are qualitatively similar to those obtained when comparing the composition of the wind
material of rotating and non-rotating stellar models. Rotation decreases the
$^{12}$C/$^{13}$C ratios from 3.5 $\times$ 10$^8$ in the non-rotating case to 311 in the rotating case,
while it increases the N/C and N/O ratios, which have values of $\sim$10$^{-7}$ and
$10^{-8}$ respectively, when $\upsilon_{\rm ini}$ = 0 km s$^{-1}$, and of 0.5 and 0.02 when $\upsilon_{\rm ini}$ = 800 km s$^{-1}$.
In the ejecta of rotating models,
composed of both wind and supernova material, the $^{12}$C/$^{13}$C ratio is higher than in pure wind material, and
the N/C and N/O ratios are smaller.
This comes from the fact that
the supernova ejecta
are rich in helium-burning products characterized by a very high $^{12}$C/$^{13}$C ratio
and by very low N/C and N/O ratios.
At this point we can ask the following question: if the rotating star
had lost no mass through stellar winds, and if all the stellar material had been ejected
at the time of the supernova explosion,
would the composition of the ejecta
differ from the case discussed above, where part of the material
is ejected by the winds and part by the supernova explosion?
Let us recall that stellar winds remove layers from the stars at an earlier evolutionary stage than do
supernova explosions. If some of these layers, instead of being ejected by the winds, had
remained locked inside the star, they would have been processed further by the nuclear reactions.
Thus their composition
at the end of the stellar evolution
would be different from the one obtained if they had been ejected at an earlier time by the winds.
Obviously for such differences to be important, mass loss must remove the layers
at a sufficiently early time. If it does so only at the very end of the evolution, there would be no
chance for the layers to be processed much by the nuclear reactions, and there would be no significant
difference whether the mass were ejected by the winds or by the supernova explosion.
Actually, this is what happens in our rotating models. As indicated above, the mass
is removed at the very end of the He-burning phase, and only material from
the H-rich envelope is ejected. Thus, if this material were ejected only
at the time of the supernova explosion, it would have kept the same chemical composition
as the one in Table~\ref{tbl-2}.
As a result, the chemical composition of the ejecta (wind and supernova) does
not depend much on the mass loss, but is deeply affected by rotation.
However,
the stellar winds may of course be of primary importance
if the whole final mass of the star is swallowed by a
black hole at the end of the evolution. In that case,
the star will contribute to the interstellar enrichment only by
its winds.
\begin{table}
\caption{Helium- and CO-core masses and mass of the remnant
(respectively $m_\alpha$, $m_{\rm CO}$, and $m_{\rm rem}$)
of 60 M$_\odot$ stellar models with
and without rotation at $Z=10^{-8}$.
The total mass ejected ($m_{\rm ej}$) and the mass ejected of various chemical species
($m(X_i)$)
are given in solar masses. The values of some isotope ratios (in mass fractions) are also indicated.
The case of matter ejected by stellar winds only is distinguished from the case
of matter ejected by both the
stellar winds and the supernova explosion.} \label{tbl-2}
\begin{center}\scriptsize
\begin{tabular}{|c|ll|ll|}
\hline
& & & & \\
& \multicolumn{2}{|c|}{$M_{\rm ini}$/M$_\odot$ $\ $ $Z$ $\ \ \ $ $\upsilon_{\rm ini}$}
& \multicolumn{2}{|c|}{$M_{\rm ini}$/M$_\odot$ $\ $ $Z$ $\ \ \ $ $\upsilon_{\rm ini}$}
\\
& \multicolumn{2}{|r|}{$\left[{{\rm km}\over {\rm s}}\right]$}
& \multicolumn{2}{|r|}{$\left[{{\rm km}\over {\rm s}}\right]$}
\\
& & & & \\
& \multicolumn{2}{|c|}{$\ \ \ \ \ $60$\ $ $\ \ $ 10$^{-8}$ $\ \ \ $ 0 $\ $}
& \multicolumn{2}{|c|}{$\ \ \ \ \ $60$\ $ $\ \ $ 10$^{-8}$ $\ $ 800 $\ $}
\\
& & & & \\
\hline
\hline
& & & & \\
$m_\alpha$ &\multicolumn{2}{|c|}{23.08} & \multicolumn{2}{|c|}{23.83} \\
$m_{\rm CO}$ &\multicolumn{2}{|c|}{21.61} & \multicolumn{2}{|c|}{18.04} \\
$m_{\rm rem}$ &\multicolumn{2}{|c|}{6.65} & \multicolumn{2}{|c|}{5.56} \\
& & & & \\
\hline
\hline
& & & & \\
&\multicolumn{2}{|c|}{Mass ejected} & \multicolumn{2}{|c|}{Mass ejected} \\
& & & & \\
& WIND & SN+WIND
& WIND & SN+WIND
\\
& & & & \\
$m_{\rm ej}$& 0.28 & 53.35 & 36.17 & 54.44 \\
$m(^4{\rm He})$ & 6.62e-02 & 19.58 & 21.46 & 23.85 \\
$m(^{12}{\rm C})$ & 2.08e-10 & 2.066 & 4.78e-03 & 4.26e-01 \\
$m(^{13}{\rm C})$ & 2.84e-12 & 5.84e-09 & 1.25e-03 & 1.37e-03 \\
$m(^{14}{\rm N})$ & 6.44e-11 & 1.90e-07 & 1.97e-01 & 2.20e-01 \\
$m(^{16}{\rm O})$ & 1.85e-09 & 12.61 & 6.08e-03 & 13.54 \\
$m(^{17}{\rm O})$ & 8.27e-13 & 6.39e-10 & 7.50e-06 & 8.60e-06 \\
$m(^{18}{\rm O})$ & 4.14e-12 & 6.53e-10 & 1.58e-08 & 4.44e-03 \\
& & & & \\
\hline
\hline
& & & & \\
&\multicolumn{2}{|c|}{Isotopic ratios} & \multicolumn{2}{|c|}{Isotopic ratios} \\
& & & & \\
$^{12}$C/$^{13}$C & 73.24 & 3.54e+08 & 3.82 & 311 \\
N/C & 0.31 & 9.20e-08 & 41.2 & 0.52 \\
N/O & 0.03 & 1.51e-08 & 32.4 & 0.02 \\
& & & & \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{Same as Table~\ref{tbl-2} for rotating stellar models at $Z=10^{-5}$.
The two 60 M$_\odot$ models were computed with different physical ingredients, see text.
} \label{tbl-3}
\begin{center}\scriptsize
\begin{tabular}{|c|ll|ll|}
\hline
& & & & \\
& \multicolumn{2}{|c|}{$M_{\rm ini}$/M$_\odot$ $\ $ $Z$ $\ \ \ $ $\upsilon_{\rm ini}$}
& \multicolumn{2}{|c|}{$M_{\rm ini}$/M$_\odot$ $\ $ $Z$ $\ \ \ $ $\upsilon_{\rm ini}$}
\\
& \multicolumn{2}{|r|}{$\left[{{\rm km}\over {\rm s}}\right]$}
& \multicolumn{2}{|r|}{$\left[{{\rm km}\over {\rm s}}\right]$}
\\
& & & & \\
& \multicolumn{2}{|c|}{$\ \ \ \ \ $60$\ $ $\ \ $ 10$^{-5}$ $\ \ \ $ 800 $\ $}
& \multicolumn{2}{|c|}{$\ \ \ \ \ $60$\ $ $\ \ $ 10$^{-5}$ $\ $ 800 $\ $}
\\
& & & & \\
\hline
\hline
& & & & \\
$m_\alpha$ &\multicolumn{2}{|c|}{36.90} & \multicolumn{2}{|c|}{30.69} \\
$m_{\rm CO}$ &\multicolumn{2}{|c|}{28.60} & \multicolumn{2}{|c|}{27.95} \\
$m_{\rm rem}$ &\multicolumn{2}{|c|}{8.69} & \multicolumn{2}{|c|}{8.50} \\
& & & & \\
\hline
\hline
& & & & \\
&\multicolumn{2}{|c|}{Mass ejected} & \multicolumn{2}{|c|}{Mass ejected} \\
& & & & \\
& WIND & SN+WIND
& WIND & SN+WIND
\\
& & & & \\
$m_{\rm ej}$& 23.10 & 51.31 & 29.31 & 51.50 \\
$m(^4{\rm He})$ & 12.20 & 18.99 & 12.60 & 14.58 \\
$m(^{12}{\rm C})$ & 3.34e-04 & 5.84e-01 & 1.45e-02 & 2.46 \\
$m(^{13}{\rm C})$ & 6.81e-05 & 1.47e-04 & 3.81e-03 & 2.58e-02 \\
$m(^{14}{\rm N})$ & 9.78e-03 & 2.51e-02 & 4.29e-02 & 1.87e-01 \\
$m(^{15}{\rm N})$ & 3.21e-07 & 2.15e-06 & 1.55e-06 & 1.68e-05 \\
$m(^{16}{\rm O})$ & 2.72e-04 & 18.12 & 3.29e-02 & 17.32 \\
$m(^{17}{\rm O})$ & 4.59e-07 & 2.40e-06 & 2.78e-05 & 1.32e-04 \\
$m(^{18}{\rm O})$ & 2.82e-08 & 1.34e-03 & 1.63e-08 & 2.07e-04 \\
$m(^{19}{\rm F})$ & 1.95e-09 & & 1.10e-08 & \\
$m(^{20}{\rm Ne})$ & 7.64e-06 & & 1.29e-05 & \\
$m(^{21}{\rm Ne})$ & 1.84e-08 & & 6.99e-08 & \\
$m(^{22}{\rm Ne})$ & 4.80e-07 & & 3.35e-05 & \\
$m(^{23}{\rm Na})$ & 1.22e-06 & & 5.61e-06 & \\
$m(^{24}{\rm Mg})$ & 4.41e-06 & & 6.21e-06 & \\
$m(^{25}{\rm Mg})$ & 3.95e-07 & & 6.96e-07 & \\
$m(^{26}{\rm Mg})$ & 5.68e-07 & & 3.07e-06 & \\
$m(^{27}{\rm Al})$ & 5.49e-06 & & 7.75e-06 & \\
& & & & \\
\hline
\hline
& & & & \\
&\multicolumn{2}{|c|}{Isotopic ratios} & \multicolumn{2}{|c|}{Isotopic ratios} \\
& & & & \\
$^{12}$C/$^{13}$C & 4.90 & 3970 & 3.81 & 95.3 \\
N/C & 29.3 & 0.04 & 2.96 & 0.08 \\
N/O & 36.0 & 0.001 & 1.30 & 0.01 \\
& & & & \\
\hline
\end{tabular}
\end{center}
\end{table}
Comparing the data given in the right part of Table~\ref{tbl-2} with the left part of Table~\ref{tbl-3},
one can see the effect produced by an increase in the initial metallicity by three orders
of magnitude (all other things being equal).
Interestingly, we see that the differences between the two models are in general
much smaller than those between the rotating
and non-rotating model at a given metallicity. The total ejected mass, and the masses of $^4$He, $^{12}$C,
$^{16}$O are similar within factors between 0.8 and 1.4. The quantities of $^{14}$N and $^{13}$C agree
within an order of magnitude, and the masses of $^{17}$O and $^{18}$O differ by a factor of 3.
We are thus far from the factors of 10$^4$-10$^7$ between the results of the rotating and non-rotating models
at $Z=10^{-8}$! The effects of rotation at extremely low metallicity are much larger than the effects of a change in the initial $Z$ content.
The results given on the right side of Table~\ref{tbl-3} correspond to the model
described in Meynet et al.~(\cite{Meynetal05}). It differs from
the present models by the mass loss and mixing prescriptions (see Sect.~3.2). As already emphasized above, the results are qualitatively very similar.
However, quantitatively, they present some differences. For instance, the quantity of $^{12}$C in the model
presented on the right side of Table~\ref{tbl-3} is larger by a factor 4
compared to the value given on the left side of the same Table.
The right model presents a smaller helium core,
an effect mainly due to higher mass loss rates. This favours larger ejections of carbon by the winds and also by the supernova, since
smaller helium cores lead to higher
C/O ratios at the end of the helium-burning phase.
In the right model,
the quantity
of $^{16}$O is decreased by about 4\%. The ejected masses of $^{13}$C and $^{17}$O are increased by factors of 176 and 55, respectively.
The masses of the other isotopes
differ by less than
an order of magnitude.
\section{Link with the extremely metal--poor C-rich stars}
\subsection{Observations and existing interpretations}
Spectroscopic surveys of very metal--poor stars (Beers et al.~\cite{Be92};
Beers~\cite{Be99}; Christlieb~\cite{Ch03})
have shown that carbon-enhanced metal-poor (CEMP) stars
account for up to about 25\% of stars with metallicities lower than
[Fe/H]$\sim -2.5$ (see e.g. Lucatello et al.~\cite{Lu04}).
A star is said to be C-rich if [C/Fe]$>1$.
A large proportion
of these CEMP stars also present enhancements in their neutron capture elements
(mainly $s$-process elements). A few of them also appear to exhibit large
enhancements in N and O. The most iron-deficient stars observed so far are CEMP stars.
These stars are
HE 0107-5240, a giant halo star, and HE 1327-2326, a dwarf or subgiant halo star.
The star HE 0107-5240 ([Fe/H]=-5.3) presents the following CNO surface abundances: [C/Fe]=4.0, [N/Fe]=2.3, and
[O/Fe]=2.4 (Christlieb et al.~\cite{christ04};
Bessell et al.~\cite{bessel04}). The ratio $^{12}$C/$^{13}$C
has also been tentatively estimated by
Christlieb et al.~(\cite{christ04}), who suggest a value of about
60, but with a large uncertainty. They can, however, rule out a value below 50
(let us recall that the solar ratio is $\sim$73).
The star HE 1327-2326 has [Fe/H]=-5.4 and CNO surface abundances: [C/Fe]=4.1, [N/Fe]=4.5,
[O/Fe]$< 4.0$ (Frebel et al.~\cite{Fr05}).
The origin of the high carbon abundance is still an open question and various scenarios have been
proposed:
\begin{enumerate}
\item {\bf The primordial scenarios}: in this case the abundances observed at the surface
of CEMP stars are the abundances of the cloud from which the star formed. The protostellar
cloud was enriched in carbon by one or a few stars from a previous generation.
For instance, Umeda and Nomoto (\cite{Um03}) propose that the cloud from which HE 0107-5240
formed was enriched by the ejecta of one Pop III 25 M$_\odot$ star, which had exploded with low
explosion energy (on the order of 3 $\times 10^{50}$ erg) and experienced
strong mixing and fallback at the time of the supernova explosion.
The mixing is necessary to create the observed
high--level enrichments in light elements, and the fallback is necessary
to retain a large part of the iron peak elements.
Limongi et al.~(\cite{Li03}) suggest that the cloud was enriched by the ejecta of two supernovae
from progenitors
with masses of about 15 and 35 M$_\odot$.
\item {\bf The accretion/mass transfer scenarios}: some authors have proposed that this particular
abundance pattern results from the accretion of
interstellar material and/or of matter from a companion (for instance an AGB star, as proposed by Suda et al.~\cite{Su04}).
As far as the nucleosynthetic origin is concerned, this scenario is not fundamentally different from the first one.
\item {\bf The in situ scenarios}: finally, some authors have explored the possibility that
the star itself could have produced the particular abundance pattern
seen at its surface
(Picardi et al.~\cite{Pi04}).
The overabundance of nitrogen might easily be explained in the framework of this scenario
if the star had begun its evolution with high carbon and oxygen overabundances. In fact, we performed a test calculation
of a non-rotating 0.8 M$_\odot$ stellar model at [Fe/H]=-5.3 with initial values of [C/Fe] and [O/Fe] equal to
4.0 and 2.4, respectively, {\it i.e.} equal to the abundances observed at the surface of HE 0107-5240. We found that, when
the star reaches the value of the effective temperature ($T_{\rm eff}$ = 5100$\pm$ 150 K) and
of gravity ($\log g= 2.2\pm 0.3$) of HE 0107-5240 (Christlieb et al.~\cite{Ch02}),
the surface nitrogen enrichment is well within the range of the observed values.
However, it appears difficult to invoke similar processes to explain the high carbon and oxygen
enhancements (see Picardi et al.~\cite{Pi04}).
\end{enumerate}
\subsection{No ``in situ'' CN production}
An abundance pattern typical of CEMP stars has been
observed at the surface of non-evolved stars (Norris et al.~\cite{norr97}; Plez \& Cohen \cite{Pl05};
Frebel et al.~\cite{Fr05}). Among the most recent observations, let us mention
the subgiant or dwarf star HE 1327-2326 (Frebel et al.~\cite{Fr05}, see above) and the dwarf
star G77-61 (Plez \& Cohen \cite{Pl05}).
The initial mass of G77-61 is estimated to be between 0.3 and 0.5 M$_\odot$, and it has [Fe/H]=-4.03,
[C/Fe]=2.6, [N/Fe]=2.6, and a $^{12}$C/$^{13}$C ratio of 5$\pm$1.
In this case, there is no way for the star, which burns
its hydrogen through the pp chains, to produce nitrogen.
There is even less possibility of producing surface enhancements of carbon and oxygen.
Therefore, the ``in situ'' scenario can be excluded, at least for this star.
In that case, only the first and second scenarios are possible.
The same is true for explaining the very high overabundances of carbon
and nitrogen at the surface of HE 1327-2326.
The values observed at the surface of non-evolved and evolved
stars are shown in Fig.~\ref{vent}.
In the case of the evolved stars,
the surface may have been depleted in carbon and oxygen and enriched in nitrogen due
to the dredge-up that occurs along the red giant branch. On the other hand, in the case of
the non-evolved stars, as explained above,
this mechanism cannot be invoked, and the measured abundances reflect
the abundances of the cloud that gave birth to the star. On the whole, the distribution of elements
is similar for evolved and non-evolved stars, which favours the
primordial scenario.
In the following, we explore the first two scenarios using our fast--rotating models.
The abundance pattern observed at the surface
of CEMP stars seems to be a mixture of hydrogen and
helium burning products. Since rotation
allows these products to
coexist in the outer layers of stars
(both in massive and intermediate mass stars), this seems a useful direction
for our research.
\subsection{Comparison with wind composition}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[angle=-90]{3070fig8.eps}}
\caption{Chemical composition of the wind
of rotating 60~M$_\odot$ models (solid circles and squares).
The hatched areas correspond to the range of values
measured at the surface of giant CEMP stars: HE 0107-5240, [Fe/H]$\simeq$~-5.3
(Christlieb et al. \cite{christ04});
CS 22949-037, [Fe/H]$\simeq$~-4.0 (Norris et al.
\cite{norr01}; Depagne et al. \cite{dep02}); CS 29498-043, [Fe/H]$\simeq$~-3.5 (Aoki
et al. \cite{aoki04}). The empty triangles (Plez \& Cohen~\cite{Pl05}, [Fe/H]$\simeq -4.0$)
and stars (Frebel et al.~\cite{Fr05}, [Fe/H]$\simeq -5.4$, only an upper limit is given for [O/Fe]) correspond to
non-evolved CEMP stars (see text).}
\label{vent}
\end{figure}
Let us first see if the CEMP stars could be formed from material made up of massive star winds,
or at least heavily enriched by winds of massive stars. At first sight, such
a model might appear quite unrealistic, since the period of strong stellar winds
is rapidly followed by the supernova ejection, which would add the ejecta of the supernova itself
to the winds. However,
at the end of the nuclear lifetime of a massive star, a black
hole that swallows
the whole final mass might be produced. In that case, the massive star
would contribute to the local chemical enrichment of the interstellar medium only through
its winds. Let us suppose that such a situation has occurred and that
the small halo star we observe today was
formed from the material swept up when the stellar winds shocked the interstellar medium.
What would its chemical composition be?
Its iron content would be the same as the iron abundance of
the massive star. Indeed,
the iron abundance of the interstellar medium would have no
time to change much in the brief massive star lifetime,
and the massive star wind ejecta
are neither depleted nor enriched in iron.
The abundances of the other elements in the stellar winds for our two rotating 60 M$_\odot$
at $Z=10^{-8}$ and 10$^{-5}$ are shown in Fig.~\ref{vent}.
The ordinate [X/Fe] is given by the following expression:
$$[{\rm X/Fe}]=\log\left({{\rm X} \over {\rm X}_\odot}\right)-\log\left({X({\rm Fe}) \over X({\rm Fe})_\odot}\right),$$
where X denotes the mass fraction of the element
in the wind ejecta, and X$_\odot$ the corresponding mass fraction in the Sun.
Similarly, the symbols $X$(Fe) and $X$(Fe)$_\odot$ refer to the mass fraction
of $^{56}$Fe
in the wind material or in the Sun.
Here we suppose that $\log(X({\rm Fe})/X({\rm Fe})_\odot)\sim[{\rm Fe/H}]$, since the mass fraction of hydrogen
remains approximately constant, whatever the metallicity between $Z=$10$^{-8}$ and 0.02.
The values of [Fe/H] are those corresponding to the initial metallicity of the models
(for $Z$ = 10$^{-8}$ one has $\log(X({\rm Fe})/X({\rm H}))=-9.38$, see Table~\ref{tbl-0}).
The solar abundances are those chosen by Christlieb et al (\cite{christ04}) and Bessell et al. (\cite{bessel04})
in their analysis of the star HE 0107-5240, and they correspond to the solar abundances obtained recently by Asplund et al.~(\cite{AS05}).
In particular, $\log(X({\rm Fe})/X({\rm H}))_\odot=-2.80$, thus [Fe/H]=-6.6 at $Z$ = 10$^{-8}$, and
[Fe/H]=-3.6 at $Z$ = 10$^{-5}$.
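A minimal numerical sketch of this conversion could read as follows; the solar nitrogen mass fraction used here is an assumed, Asplund-like value introduced only for illustration.

```python
import math

def x_over_fe(X_ej, X_sun, feh):
    """[X/Fe] = log10(X/X_sun) - log10(X(Fe)/X(Fe)_sun),
    using log10(X(Fe)/X(Fe)_sun) ~ [Fe/H] as argued in the text."""
    return math.log10(X_ej / X_sun) - feh

# Illustration: nitrogen in the wind of the rotating Z=1e-5 model (Table 3).
X_N_wind = 9.78e-3 / 23.10   # m(14N) / m_ej in the wind
X_N_sun = 6.9e-4             # assumed solar nitrogen mass fraction
print(f"[N/Fe] ~ {x_over_fe(X_N_wind, X_N_sun, -3.6):.1f}")
```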
From Fig.~\ref{vent}, we see that the winds are strongly enriched in CNO elements.
The model at $Z=10^{-5}$, computed with an extended nuclear
reaction network, allows us to look at the abundances in the winds of heavier elements.
The wind material is also somewhat enriched
in Na and Al. Before comparing with the observations, let us first note:
\begin{itemize}
\item 1) The more metal--poor model is shifted toward higher values compared
to the metal rich one. If, in both models, the mass fraction of element X were
the same in the wind ejecta, then one would expect a shift by 3 dex when the iron content of the
model goes from [Fe/H]=-3.6 to [Fe/H]=-6.6. The actual shift is
approximately 3.6 dex, slightly more than the
iron content difference of 3 dex between the two models.
The additional 0.6 dex comes from the more efficient mixing
in the metal--poorer models.
\item 2) In the framework of the hypotheses made here, {\it i.e.} a halo
star made from the wind ejecta that triggered its formation, we should compare
the wind composition from a massive star model with the same initial iron content as in the halo
star considered. The range of iron contents in the models, [Fe/H] from -6.6 to -3.6,
covers the range
of iron contents of the CEMP stars plotted in Fig.~\ref{vent}, whose [Fe/H] are between
-5.4 and -3.5. However, the [Fe/H] = -6.6 model is well below the lower bound
of the observed [Fe/H], making this model less interesting for comparisons
with the presently available observations. In that respect the [Fe/H] = -3.6 model, which has
an iron content that is comparable to the iron--richest stars observed, is more interesting.
\item 3) Any dilution with some amount
of interstellar material would lower the abundance of the element X without changing the mass
fraction of iron. In that case the values plotted in Fig.~\ref{vent} are shifted to lower values, but
the relative abundances of the elements will not change
(as long as the main source of the elements considered
is the wind ejecta).
\end{itemize}
Keeping these three comments in mind, it appears that the quantities to compare with
the observations are the relative abundances of the CNO elements rather than the actual
values of the [X/Fe] ratios, which depend on the initial metallicity of the model
considered, as well as on the dilution factor.
From Fig.~\ref{vent}, and Tables~\ref{tbl-2} \& \ref{tbl-3}, one sees that,
for the two metallicities considered here,
the wind material of rotating models is characterised by N/C and N/O ratios between
$\sim$ 1 and 40 and by $^{12}$C/$^{13}$C ratios around 4-5. These values are
compatible with the ratios observed at
the surface of CS 22949-037 (Depagne et al.~\cite{dep02}):
N/C $\sim$ 3 and $^{12}$C/$^{13}$C $\sim$ 4. The observed value
for N/O ($\sim$0.2) is smaller than the range of theoretical values, but greater
than the solar ratio ($\sim$ 0.03). Thus the observed N/O ratio
also bears the mark of some CNO processing, although slightly less
pronounced than in our stellar wind models.
On the whole,
a stellar wind origin for the material composing this star
does not appear unreasonable in view of these comparisons,
all the more so
since, in the present comparison, no parameters were
fine-tuned to obtain the best possible agreement.
The theoretical results are
directly compared to the observations. Moreover, only a small subset of possible initial conditions
has been explored.
Other CEMP stars present, however, lower values for the N/C and N/O ratios
and higher values for the $^{12}$C/$^{13}$C ratio.
For these cases, the winds of our rotating 60 M$_\odot$ models
appear to be too strongly CNO--processed (too
high N/C and N/O ratios and too low $^{12}$C/$^{13}$C ratios).
Better agreement would
be obtained if the observed abundances also stem from material coming from the CO-core and ejected
either by strong late stellar winds or in a supernova explosion.
\subsection{Expression of abundance ratios in total ejecta (winds and supernova)}
To find the initial chemical composition of stars that
would form from a mixture of wind and supernova ejecta
with interstellar medium material,
let us define ${X}_{\rm ej}$ as the mass fraction of element X in the ejecta
(wind and supernova).
This quantity can be obtained from the stellar models and computed according to the expression below:
$${\it X}_{\rm ej}={{\it X}_{\rm wind} m_{\rm wind}+{\it X}_{\rm SN} m_{\rm SN} \over m_{\rm wind}+m_{\rm SN}},$$
where $X_{\rm wind}$ and $X_{\rm SN}$ are the mass fractions of element X in the wind
and in the supernova ejecta, respectively. Here $m_{\rm wind}$ and $m_{\rm SN}$ are the masses ejected by the stellar winds
and at the time of the supernova explosion.
To obtain the mass of the remnants, we adopted the relation obtained by Arnett~(\cite{Ar91}) between the masses
of the remnant and the CO core. This method is the same as the
one adopted by Maeder (\cite{Ma92}).
The total mass ejected by the star, $m_{\rm ej}=m_{\rm wind}+m_{\rm SN}$, is mixed with some
amount of interstellar material
$m_{\rm ISM}$. The mass fraction of element X in the material composed from the ejecta mixed with the interstellar medium will
be
$$X={X_{\rm ej} m_{\rm ej} + X_{\rm ini} m_{\rm ISM} \over m_{\rm ej} +m_{\rm ISM}}={X_{\rm ej}{m_{\rm ej} \over m_{\rm ISM}}+X_{\rm ini}\over
{m_{\rm ej} \over m_{\rm ISM}} +1},$$
where $X_{\rm ini}$ is the mass fraction of element X in the interstellar medium.
In our case the interstellar medium is very metal poor
so that one can consider
$X_{\rm ini}\sim 0$ for the heavy elements synthesised in great quantities by the star (note that this cannot be done for nitrogen ejected by the non-rotating
60 M$_\odot$ stellar model).
We also suppose that $m_{\rm ej}\ll m_{\rm ISM}$ and thus $X= (X_{\rm ej}m_{\rm ej})/m_{\rm ISM}$.
Using these expressions, one can write
$$[{\rm Fe/H}]=\log\left({X({\rm Fe})_{\rm ej}\over X({\rm Fe})_\odot}\right)+\log\left({m_{\rm ej}\over m_{\rm ISM}}\right),$$
assuming, as we did above, that $X({\rm H})_\odot /X({\rm H})_{\rm ej}\approx 1$.
Values of [X/Fe] are obtained using the expression
$$[{\rm X/Fe}]=[{\rm X/H}]-[{\rm Fe/H}]=$$
$$\log\left({{X}_{\rm ej}\over {X}_\odot}\right)-\log\left({X({\rm Fe})_{\rm ej}\over X({\rm Fe})_\odot}\right).$$
One needs to have an estimate for both $m_{\rm ej}/m_{\rm ISM}$ (the dilution factor)
and for the mass fraction of iron in the ejecta $X$(Fe)$_{\rm ej}$.
A precise quantitative determination of $X$(Fe)$_{\rm ej}$ and $m_{\rm ej}/m_{\rm ISM}$ from
theory is quite difficult. For instance,
for a given initial mass, the quantity of iron ejected by the supernova
can vary by orders of magnitude depending on the mass cut, the energetics of the supernova, and the geometry
of the explosion (see e.g. Maeda \& Nomoto \cite{MN03}). On the other hand, the dilution factor will depend
on the energetics of the supernova, among other parameters.
In the absence of
any very precise guidelines, we determined the two unknown quantities,
the dilution factor and the mass of ejected iron, by requiring that the mixture have
[Fe/H]=~-5.4 and [O/Fe]=~+3.5. The first value
corresponds to the value observed at the surface
of the star HE 1327-2326 (Frebel et al.~\cite{Fr05}), and the second value
is below the upper limit of [O/Fe] ($< 4.0$) found for this star.
Doing so, one can write,
$$[{\rm X/Fe}]=[{\rm X/O}]+[{\rm O/Fe}]=$$
$$\log\left({{X}_{\rm ej}\over {\it X}_\odot}\right)-\log\left( {X({\rm O})_{\rm ej} \over X({\rm O})_\odot}\right) +3.5,$$
where $X$(O) is the mass fraction of oxygen.
The mass fraction of ejected iron can be estimated
from
$$[{\rm O/Fe}]=
\log\left({X({\rm O})_{\rm ej}\over X({\rm O})_\odot}\right)-\log\left({X({\rm Fe})_{\rm ej}\over X({\rm Fe})_\odot}\right)=3.5,$$
and the dilution factor can be obtained from
$$[{\rm Fe/H}]=\log\left({X({\rm Fe})_{\rm ej}\over X({\rm Fe})_\odot}\right)+\log\left({m_{\rm ej}\over m_{\rm ISM}}\right)=-5.4.$$
\subsection{Results from the ``wind plus supernova ejecta'' model}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[angle=-90]{3070fig9.eps}}
\caption{Chemical composition of the ejecta
(wind and supernova)
of 60~M$_\odot$ models: solid circles and triangles
correspond to models at $Z=10^{-8}$ ([Fe/H]=-6.6) with and without rotation.
The [N/Fe] ratio for the non-rotating model is equal to 0,
{\it i.e.}, no N-enrichment is expected.
The hatched areas (moving from top right down to the left) correspond to the range of values
measured at the surface of giant CEMP stars (same stars as in Fig.~\ref{vent}).
The empty triangles (Plez \& Cohen~\cite{Pl05})
and stars (Frebel et al.~\cite{Fr05}, only an upper limit is given for [O/Fe]) correspond to
non-evolved CEMP stars (see text).
The hatched areas (L to R from top) show the range of values
measured at the surface of normal halo giant stars by Cayrel et al.~(\cite{cayr04})
and Spite et al.~(\cite{Sp05}, unmixed sample only, see text).
}
\label{sn}
\end{figure}
Using the above formulae, let us now discuss what can be expected for the chemical composition
of a very metal--poor star formed from such a mixture.
We restrict the discussion to the CNO elements,
for which the present models can give consistent estimates, and to our models
at $Z=10^{-8}$ ([Fe/H]=-6.6), which are the only models compatible with the requirement
that the mixture of ejecta and ISM material have [Fe/H]=~-5.4. Obviously
our second series of models at $Z=10^{-5}$ ([Fe/H]=~-3.6) does not fit
this requirement. Imposing [O/Fe]=3.5 and [Fe/H]=~-5.4 implies
ejected iron masses on the order of 1 $\times$ 10$^{-3}$ M$_\odot$, mixed
with a mass of interstellar medium of about 2 $\times$ 10$^{5}$ M$_\odot$.
The mass of ejected iron (actually in the form of $^{56}$Ni) is very small
compared to the classical values of 0.07-0.10 M$_\odot$. On the other hand,
this quantity can be very small if a large part of the mass
falls back onto the
remnant (Umeda \& Nomoto~\cite{Um03}).
The mass of interstellar gas collected by the shock
wave can be related to the explosion energy $E_{\rm exp}$ through
(see Shigeyama \& Tsujimoto~\cite{Sh98})
$$M_{\rm ISM}=5.1 \times 10^4 {\rm M}_\odot \left({E_{\rm exp} \over 10^{51}{\rm erg}}\right).$$
A mass of 2 $\times$ 10$^{5}$ M$_\odot$ would correspond to
an energy equal to 4 $\times$ 10$^{51}$ erg, {\it i.e.}, a value well in the range
of energies released by supernova explosions. Thus imposing [O/Fe]=3.5 and [Fe/H]=-5.4
does not imply unrealistic values for the mass of iron that is ejected and for the mass of
interstellar medium swept up by the shock wave of the supernova explosion.
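As a rough consistency check of these numbers, the two constraints can be inverted numerically. The sketch below assumes Asplund-like solar oxygen and iron mass fractions (round placeholder values) together with the ejecta of the rotating $Z=10^{-8}$ model from Table~\ref{tbl-2}.

```python
import math

# Invert the two constraints [O/Fe] = 3.5 and [Fe/H] = -5.4 for the ejected
# iron mass fraction and the dilution factor. The solar O and Fe mass
# fractions are assumed, Asplund-like round numbers used for illustration.
X_O_sun, X_Fe_sun = 5.4e-3, 1.3e-3    # assumptions
m_ej = 54.44                          # rotating Z=1e-8 model, Table 2 [Msun]
X_O_ej = 13.54 / m_ej                 # ejected oxygen mass fraction

# [O/Fe] = log10(X_O_ej/X_O_sun) - log10(X_Fe_ej/X_Fe_sun) = 3.5
X_Fe_ej = X_Fe_sun * (X_O_ej / X_O_sun) / 10**3.5
m_Fe = X_Fe_ej * m_ej
# [Fe/H] = log10(X_Fe_ej/X_Fe_sun) + log10(m_ej/m_ISM) = -5.4
m_ISM = m_ej / 10**(-5.4 - math.log10(X_Fe_ej / X_Fe_sun))
# Shigeyama & Tsujimoto: M_ISM = 5.1e4 Msun x (E_exp / 1e51 erg)
E_exp = m_ISM / 5.1e4                 # in units of 1e51 erg

print(f"ejected iron mass ~ {m_Fe:.1e} Msun")    # text quotes ~1e-3 Msun
print(f"swept-up ISM mass ~ {m_ISM:.1e} Msun")   # text quotes ~2e5 Msun
print(f"explosion energy  ~ {E_exp:.1f}e51 erg") # text quotes ~4e51 erg
```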
The theoretical ratios for the CNO elements are shown in Fig.~\ref{sn} and compared
with the ratios observed at the surface of CEMP stars and of normal giant halo stars
by Cayrel et al.~(\cite{cayr04}) and Spite et al.~(\cite{Sp05}).
For oxygen, the value of 3.5
is obtained by construction, so it does not provide any constraint; however, see the previous
paragraph.
More interesting, of course, are the carbon and nitrogen abundances.
One sees that both the non-rotating and rotating models might account for some level of C-enrichment
that is compatible with the range of values observed at the surface of CEMP stars. However,
only the rotating models produce N-rich material at this level.
Figure~\ref{sn} also shows that the predicted values of the N/C and N/O ratios
from the rotating model appear to agree with the observed values
of these ratios at the surface of CEMP stars.
Thus as expected, the addition of
material from the CO core (here ejected at the time of the supernova explosion) to the wind material
(mainly enriched in CNO--processed material), reduces
the N/C and N/O ratios.
The theoretical values in the wind--plus--supernova ejecta model for the ratio $^{12}$C/$^{13}$C are between 100 and 4000
for the rotating models (see Tables~\ref{tbl-2}
\& \ref{tbl-3}).
The value predicted by the non-rotating model is much higher, on the order of
10$^8$. Compared to the observed values, which are between 4 and 60, the value of the non-rotating model
is in excess by at least seven orders of magnitude. The situation is much more favourable for the rotating models.
In this last case,
the predicted values are still somewhat too high, but by much lower factors.
Proportions between wind and supernova ejecta other than those considered here
probably exist
that would provide a better fit to the observed surface abundances of CEMP stars.
More models also need to be computed to
explore the set of initial parameters leading to good
agreement between theory and observations.
Given the large range of results obtained with different initial
conditions, there is little doubt
that such a set of parameters exists.
The $^{12}$C/$^{13}$C ratio appears extremely sensitive to input parameters, so it may be
a powerful tool for a closer identification of the exact nucleosynthetic site.
As can be seen from Fig.~\ref{sn}, the abundances observed at the surface of the normal giant stars by
Cayrel et al.~(\cite{cayr04}) and Spite et al.~(\cite{Sp05}) are not far from solar ratios, and
are well below the range of values
observed at the surface of CEMP stars. Only the subset of stars qualified as unmixed by
Spite et al.~(\cite{Sp05}), {\it i.e.}, presenting no evidence of C to N conversion, has been
plotted here.
Probably these stars are formed from a reservoir of matter made up of the ejecta of
different initial mass stars, convolved with a proper distribution of the initial rotation velocities,
while the C-rich stars require some special circumstances involving a few or maybe only
one nucleosynthetic event.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[angle=-90]{3070fi10.eps}}
\caption{Chemical composition of the envelopes of E-AGB stars compared to abundances
observed at the surface of CEMP stars (hatched areas). The continuous line shows the case
of a 7 M$_\odot$
at $Z=10^{-5}$ ([Fe/H]=-3.6) with $\upsilon_{\rm ini}=$ 800 km s$^{-1}$.
The vertical lines (shown as ``error bars'')
indicate the
ranges of values for CNO elements
in the stellar models of Meynet \& Maeder~(\cite{MMVIII})
(models with initial masses between 2 and 7 M$_\odot$ at $Z=10^{-5}$).
The thick and thin lines correspond to rotating ($\upsilon_{\rm ini}$ = 300 km s$^{-1}$)
and non-rotating models.
The empty triangles (Plez \& Cohen~\cite{Pl05})
and stars (Frebel et al.~\cite{Fr05}, only an upper limit is given for [O/Fe]) correspond to
non-evolved CEMP stars.}
\label{agb}
\end{figure}
\subsection{Chemical composition of the envelopes of E-AGB stars}
One can wonder whether intermediate mass stars could also play a role
in explaining the peculiar abundance pattern of CEMP stars.
For instance, Suda et al.~(\cite{Su04}) propose that the small halo star,
observed today as a CEMP star, was the secondary in a close binary system.
The secondary might have accreted matter from its evolved companion, an AGB star,
and might have thus acquired at least part of its peculiar surface abundance pattern.
The physical
conditions encountered in the advanced phases of an intermediate mass star
are not so different from those realised in massive stars. Thus the same
nuclear reaction chains can occur and lead to similar nucleosynthetic products.
Also the lifetimes of massive stars (on the order of a few million years) are
not very different from the lifetimes of the most massive intermediate mass stars;
typically, a 7 M$_\odot$ star has a lifetime on the order of 40 Myr, only an order of magnitude longer
than that of a 60 M$_\odot$ star.
Moreover, the observation of s-process element overabundances at the surface of some
CEMP stars also points toward a possible Asymptotic Giant Branch (AGB) star origin\footnote{Note that
massive stars also produce s-process elements. The massive star s-process elements
have low atomic mass numbers (A below about 90) and
are known as the weak component of the s-process.}
for the material composing the CEMP stars.
To explore this scenario, we computed a
7 M$_\odot$ model with $\upsilon_{\rm ini}=800$ km s$^{-1}$ at $Z=10^{-5}$, and
with the same physical ingredients as the 60 M$_\odot$ stellar models of the
present paper. In contrast to the 60 M$_\odot$ models, the 7 M$_\odot$ stellar model
loses little mass during the core H- and He--burning phase, so that
the star has still nearly its whole
original mass
at the
early asymptotic giant branch stage (the actual mass at this stage is 6.988 M$_\odot$). This is because
the star never reaches the break-up limit during the MS phase and because,
due to rotational mixing and dredge-up,
the metallicity
enhancement at the surface
only occurs very late, when the star
evolves toward the red part of the HR diagram
after the end
of the core He-burning phase. At this stage, the outer
envelope of the star is enriched in primary CNO elements, and the surface metallicity
reaches about 1000 times the initial metallicity.
If such a star is in a close
binary system, there is a good chance that mass transfer occurs during this
phase of expansion of the outer layers. In that case, the secondary may accrete
part of the envelope of the E-AGB star.
From the 7 M$_\odot$ stellar model, we can estimate the chemical composition
of the envelope
at the beginning of the thermal pulse AGB phase. Here we call
the envelope all the material above the CO core.
The result is shown in Fig.~\ref{agb} (continuous line with solid circles).
We also plotted the values obtained from the models
of Meynet \& Maeder~(\cite{MMVIII}) for
initial masses between 2 and 7 M$_\odot$ at $Z=10^{-5}$
and with $\upsilon_{\rm ini}=0$ and $300$ km s$^{-1}$.
Before discussing the comparisons with observations, let us make two remarks:
1) as was the case for the theoretical predictions of the
massive star winds, the values given here have not been adjusted to
fit the observed values but result from the basic physics of the models; 2) the initial metallicity of our AGB models
([Fe/H]=-3.6) is at the high end of the range of
metallicities observed for the CEMP stars. However,
based on the results from our massive star models at [Fe/H]=-6.6 and -3.6
(see Fig.~\ref{vent}), we see that
the overall pattern of the abundances will probably remain quite similar
for a lower initial metallicity. Only a shift toward higher values along the vertical axis
is expected when the initial metallicity of the model is decreased.
Looking at Fig.~\ref{agb}, one can note the three following points:
\begin{enumerate}
\item The envelope of rotating intermediate mass stars presents a chemical composition in carbon,
nitrogen, and oxygen that agrees well with the values observed at the surface of CEMP stars. In particular,
compared to the wind material of massive stars (see Fig.~\ref{vent}), the N/C ratios and N/O ratios
are in better agreement. The non-rotating models cannot account
for the high overabundances in nitrogen and oxygen.
\item The $^{12}$C/$^{13}$C ratios in our rotating models are between 19 and 2500, with the lowest values
corresponding to the most massive intermediate--mass star models. The non--rotating
models give values between 3 $\times$ 10$^5$ and 2 $\times$ 10$^6$.
Again here, rotating models agree much better with the observed values, although
very low $^{12}$C/$^{13}$C values (on the order of 4-5, as observed {\it e.g.} at the surface
of the dwarf halo star G77-61, see Plez and Cohen~\cite{Pl05})
seem to be reproduced only by massive star models (wind material only).
\item For sodium and aluminum, the ratios predicted by our 7 M$_\odot$ model
with $\upsilon_{\rm ini}=800$ km s$^{-1}$ fit the observed values well. In the case
of magnesium, good agreement is also obtained.
\end{enumerate}
Thus we see that the envelopes of AGB stellar models with rotation
show a very similar chemical composition to the one observed at the surface
of CEMP stars. It is, however, still difficult to say that rotating intermediate mass star models
are better than rotating massive star models in this respect. Probably,
some CEMP stars are formed from massive star ejecta and others
from AGB star envelopes. Interestingly at this stage,
some possible ways to distinguish between massive star wind material
and AGB envelopes do appear. Indeed, we just saw above that
massive star wind material is characterised by a very low $^{12}$C/$^{13}$C ratio,
while intermediate mass stars seem to present higher values for this ratio.
The AGB envelopes would also present very high overabundances
of $^{17}$O, $^{18}$O, $^{19}$F, and $^{22}$Ne, while the winds of massive rotating
stars present a weaker overabundance of $^{17}$O and depletions of
$^{18}$O, $^{19}$F, and $^{22}$Ne.
As discussed in Frebel et al.~(\cite{Fr05}), the ratio of heavy elements, such
as the strontium--to--barium ratio, can also give clues to the origin of the material
from which the star formed. In the case of HE 1327-2326, Frebel et al.~(\cite{Fr05})
give a lower limit of [Sr/Ba] $> -0.4$, which suggests that strontium was not
produced in the main s-process occurring in AGB stars, thus leaving
the massive star hypothesis as the best option, in agreement with the result
from $^{12}$C/$^{13}$C in G77-61 (Plez \& Cohen~\cite{Pl05}) and
CS 22949-037 (Depagne et al.~\cite{dep02}).
\section{Conclusion}
We have proposed a new scenario for the evolution of very metal--poor massive stars.
This scenario requires no new physical processes, as it is based on models that have been
extensively compared to observations of stars at solar composition and in the
LMC and SMC.
The changes with respect to classical scenarios are twofold and are both induced by fast rotation:
first, rotational mixing deeply affects the
chemical composition of the material ejected by the massive stars;
second, rotation significantly enhances the mass lost by stellar winds.
The mass loss rates are increased mainly because the mixing process is so strong that the surface metallicity is
enhanced by several orders of magnitude. This leads to strong radiative winds during the
evolution in the red part of the HR diagram. The strongest mass loss occurs at
the very end of the core He-burning phase.
The proposed scenario may
allow very massive stars
to avoid the pair instability.
We show that material ejected
by rotating models has chemical compositions that show
close similarities to the peculiar
abundance pattern observed at the surface of CEMP stars.
We explored the three possibilities of
CEMP stars made of: 1) massive star wind material, 2) total massive star ejecta
(wind plus supernova ejecta), and 3) material from E-AGB star envelopes.
Interestingly, from the models computed here, one can order these
three possibilities according to the degree of richness in CNO processed material.
From the richest to the poorest, one has the wind material, the E-AGB envelope, and
the total ejecta of massive stars. The imprints on the abundance pattern of CEMP stars
are thus not the same, depending on which source is involved. There is good hope that
in the future, it will be possible to distinguish them.
Other interesting questions will be explored in the
future with these rotating metal--poor models.
Among them let us briefly mention:
\begin{itemize}
\item {\it What is the enrichment in new synthesized helium by the first stellar generations?}
This is a fundamental question already asked long ago by
Hoyle \& Tayler (\cite{Ho64}).
A precise knowledge of the helium enrichment caused by the first massive stars
(Carr et al.~\cite{Ca84}; Marigo et al. \cite{Ma03})
is important in order to correctly deduce the value of the cosmological helium
from the observed abundance
of helium in low metallicity regions
(see e.g. Salvaterra \& Ferrara \cite{Sa03}).
Production of helium by the first massive stars may also affect the initial helium content of stars in
globular clusters. If the initial helium content of stars in globular
clusters is increased by 0.02 in mass fraction, the
stellar models will provide ages for the globular
clusters that are lower by roughly 15\%, {\it i.e.}, 2 Gyr starting from
an age of 13 Gyr (Shi~\cite{Sh95}; see also the interesting discussion in Marigo et al.~\cite{Ma03}).
In the case of our rotating 60 M$_\odot$ at $Z=10^{-8}$, 22\% of the initial mass is ejected in the form
of newly synthesised helium by stellar winds. Thus the models presented here will certainly lead to new views on the question of
the helium enrichment at very low metallicity, provided, of course, that they are representative of the evolution of
the majority of massive stars at very low $Z$.
\item{\it What are the sources of primary nitrogen in very metal--poor halo stars?}
Primary $^{14}$N
is produced in large quantities in our rotating models.
In our previous work on this subject (Meynet \& Maeder~\cite{MMVIII}), we discussed the yields from stellar models at $Z=10^{-5}$ with
$\upsilon_{\rm ini}=300$ km s$^{-1}$. Such an initial velocity corresponds to a ratio
$\upsilon_{\rm ini}/\upsilon_{\rm crit}$ of only 0.25. This value is lower than the value of $\sim$0.35
reached by solar metallicity models with $\upsilon_{\rm ini}=300$ km s$^{-1}$.
With such a low initial ratio of $\upsilon_{\rm ini}/\upsilon_{\rm crit}$, we found that
the main sources of primary nitrogen were intermediate mass stars with initial masses around about 3 M$_\odot$.
However, as already shown in Meynet \& Maeder (\cite{MMVIII}),
the yield in $^{14}$N increases rapidly when the initial velocity increases.
As a numerical example, the yield
in primary nitrogen for the $Z=10^{-5}$, 60 M$_\odot$ model with $\upsilon_{\rm ini}/\upsilon_{\rm crit}$ equal to 0.25
was 7 $\times 10^{-4}$ M$_\odot$, while
the corresponding model with $\upsilon_{\rm ini}/\upsilon_{\rm crit}$ equal to 0.65 produces a yield of $\sim$2 $\times 10^{-1}$ M$_\odot$;
{\it i.e.}, it increased by a factor of nearly 300!
Interestingly, these high yields of primary $^{14}$N from short-lived massive stars
seem to be required for explaining the high N/O ratio
observed in metal--poor halo stars (Chiappini et al.~\cite{Ch05}). Note
that massive intermediate mass stars, whose lifetimes would be only an order of magnitude higher
than those of the most massive stars, could also be invoked to explain the high N/O ratio observed in very
metal--poor stars. The age-metallicity relation is not precise enough to allow us to distinguish between the two.
\item{\it How does rotation affect the yields of extremely metal--poor stars?}
Other elements such as $^{13}$C, $^{17}$O, $^{18}$O, and $^{22}$Ne
are also produced in much greater quantities in the rotating models.
Computations are now in progress for extending the range of initial parameters explored and to study
the impact of such models on the production of these isotopes, as well as on
s-process elements.
\end{itemize}
We think that the fact that stars rotate, and may even rotate fast,
especially at low metallicity,
has to be taken into account to obtain more realistic models of
the extremely metal--poor stars that formed in the early life of the Universe.
\begin{acknowledgements}
The authors are grateful to Dr. Joli Adams for the careful
language editing of the manuscript.
\end{acknowledgements}
\section{Introduction}
General Relativity has been extremely successful in describing the large-scale
features of our universe. But the global shape of space-time is a quantity that
is not determined by the local equations of General Relativity. An intriguing
possibility is therefore that our universe is much smaller than the size of the
particle horizon today.
In the standard model, the universe is described by a
Friedmann-Lema\^\i tre-Robertson-Walker (FLRW) type metric which is
homogeneous and isotropic. If the topology
of the universe is not trivial, then we are dealing with a quotient space
$X/\Gamma$ where $X$ is one of the usual simply connected FLRW spaces (spherical,
Euclidean or hyperbolic) and $\Gamma$ is a discrete and fixed-point free symmetry
group that describes the topology. This construction does not affect local physics
but changes the boundary conditions (see eg.~\cite{rep1,rep2} and references
therein).
This could potentially explain some of the anomalies found in the first-year WMAP data.
For example, the perturbations of the cosmic fluids need to be invariant
under $\Gamma$. Therefore the largest
wavelength of the fluctuations in the CMB cannot exceed the size of the
universe, and so the suppression (and maybe the strange alignment) of the
lowest CMB multipoles might be due to a non-trivial topology
\cite{cl1,cl2,cl3,cl4,align,align2}. Additionally,
the last scattering surface can wrap around the universe. In this case
we receive CMB photons that originated at the same physical location on the
last scattering surface from different directions. Observationally
this would appear as matched
(correlated) circles in the CMB \cite{circle}.
An analysis by Cornish et al. of the first-year WMAP maps
based on a search for matching circles has not found any evidence for a
non-trivial topology \cite{cornish2}. However, it is difficult to quantify the
probability of missing matching circles, and other groups have claimed
a tentative detection of circles at scales not probed by Cornish et al
(see e.g. \cite{roukema,luminet}). In this paper we study a different approach which can
in principle yield both an optimal test as well as a rigorous assessment
of the fundamental detection power of the CMB for a cosmic topology.
Instead of working directly with the observed map of CMB temperature
fluctuations, we expand the map in terms of spherical harmonics,
\begin{equation}
T(x) = \sum_{\ell, m} a_{\ell m} Y_{\ell m}(x),
\end{equation}
where $x$ labels the pixels. Both the pixel temperatures and the expansion coefficients $a_{\ell m}$
are random variables. In the simplest models of the early universe, they are
to a good approximation Gaussian random variables, an assumption that we will
make throughout this paper. Their $n$-point correlation functions are then
completely determined by the two-point correlation function.
The homogeneity and isotropy of the simply-connected
FLRW universe additionally require the two-point correlation of the $a_{\ell m}$
to be diagonal,
\begin{equation}
\langle a_{\ell m} a_{\ell'm'}^* \rangle = C_\ell \delta_{\ell \ell'} \delta_{m m'} .
\end{equation}
The symmetry group $\Gamma$ will introduce preferred directions, which will
break global isotropy. This in turn induces correlations between off-diagonal
elements of the two-point correlation matrix. In this paper we study methods
to find such off-diagonal correlations. Such a test is complementary to the
matched-circle test of \cite{circle,cornish2}, and if the initial fluctuations are
Gaussian then it can use {\em all} the information present in the CMB
maps and so lead to optimal constraints on the size of the universe.
Investigating the amount of information introduced into the two-point
correlation matrix by a given topology allows us to decide from an
information theoretical standpoint whether the CMB will ever be able
to constrain that topology.
We will use the following notation: we
combine the $\ell$ and $m$ indices into a single index
$s\equiv\ell(\ell+1)+m$ and use
both notations interchangeably. The noisy correlation
matrix given by the data is ${\cal A}_{ss'} \equiv a_s a_{s'}^*$.
We will write the correlation matrix
which defines a given topology as ${\cal B}_{ss'}$. This is the expectation
value of the two-point correlation function for $a_s$ that describe
a universe with that topology.
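A minimal sketch of this index convention; the printed values reproduce the $s_{\rm max}$ quoted below for $\ell_{\rm max}=60$ and $16$.

```python
def s_index(ell, m):
    """Combined index s = l(l+1) + m, with -l <= m <= l."""
    assert -ell <= m <= ell
    return ell * (ell + 1) + m

# Largest index for a given l_max: s_max = l_max * (l_max + 2)
print(s_index(60, 60))   # 3720 (l_max = 60)
print(s_index(16, 16))   # 288  (l_max = 16)
```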
All the simulations in this paper are based on a flat $\Lambda$CDM model
with $\Omega_\Lambda=0.7$, a Hubble parameter of $h=0.67$, a Harrison-Zel'dovich
initial power spectrum ($n_S=1$) and a baryon density of $\Omega_b h^2=0.019$,
as described in \cite{riaz1,riaz2}. With this choice of cosmological parameters
we find a Hubble radius $1/H_0 \approx 4.8$Gpc while the radius of the particle
horizon is $R_h \approx 15.6$Gpc.
We will denote a toroidal topology as T[X,Y,Z] where X, Y and Z are the
sizes of the fundamental domains, in units of the Hubble radius.
As an example, T[4,4,4] is a cubic torus of size $(19.3\textrm{Gpc})^3$.
The volume of such a torus is nearly half that of the observable universe.
The diameter of the particle horizon is about $6.5/H_0$.
But we should note that there are non-zero off-diagonal terms in
${\cal B}_{ss'}$ even for universes that are slightly larger than the
particle horizon.
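These numbers are easy to verify; a short sketch using the rounded radii quoted above compares the T[4,4,4] volume with that of the observable universe.

```python
import math

H0_inv = 4.8   # Hubble radius 1/H0 in Gpc (h = 0.67, as quoted above)
R_h = 15.6     # particle-horizon radius in Gpc

L = 4 * H0_inv                         # side of the T[4,4,4] fundamental domain
V_torus = L**3
V_obs = 4.0 / 3.0 * math.pi * R_h**3   # volume of the observable universe

print(f"torus side   ~ {L:.1f} Gpc")            # ~19 Gpc, cf. (19.3 Gpc)^3
print(f"volume ratio ~ {V_torus / V_obs:.2f}")  # ~0.45, i.e. nearly half
```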
We have a range of correlation matrices at our disposal so far.
Two of them are cubic tori with sizes $2/H_0$ (T[2,2,2]) and $4/H_0$
(T[4,4,4]). For these two we have the correlation matrices up to
$\ell_{\rm max}=60$ (corresponding to $s_{\rm max}=3720$). We also have two families
of slab spaces. The first one, T[X,X,1], has one very small direction
of size $1/H_0$. The second one, T[15,15,X], has two large directions that
are effectively infinite. Both groups include all tori with $X=1,2,\ldots,15$,
and we know their correlation matrices up to $\ell_{\rm max}=16$ (or $s_{\rm max}=288$).
The correlation matrices analysed in this paper do not contain the
integrated Sachs-Wolfe contributions (cf.~the discussion in section \ref{sec:isw}).
This paper is organised as follows: We start out by matching the
measured correlations to a given correlation matrix.
We then show that a similar power to distinguish between
different correlation matrices can be achieved by using the
likelihood. In general we do not know the relative orientation
of the map and the correlation matrix, and we discuss how to
deal with this issue next. We then present a first set of
results from this analysis, before embarking on a simplified
analysis of the WMAP CMB data and toroidal topologies.
Up to this point the methods are all of a frequentist nature. Using the likelihood
we can also study the evidence for a given topology, which is the
Bayesian approach to model selection. We then talk about the
issues that we neglected in this paper, and finish with conclusions.
The appendices look in more detail at how the correlation and the
likelihood method differ, and how their underlying structure can be
used to define ``optimal'' estimators. We also discuss how selecting
an extremum over all orientations can be linked to extreme value distributions,
which allows us to derive probability distribution functions that can be
fitted to the data for
quantifying confidence levels. We finally consider a distance function
on covariance matrices, motivated by the Bayesian evidence discussion,
and study its application to the comparison between different topologies.
\section{Detecting Correlations}
A priori it is very simple to check whether there are significant off-diagonal
terms present in the two-point correlation matrix: One just looks at terms
with $\ell\neq\ell'$ and/or $m\neq m'$. But the variance of the
$a_{\ell m}$ is too large as we can observe only a single universe. When computing
the $C_\ell$ we average over all directions
$m$. This averaging then leads to a cosmic variance that behaves like $1/\sqrt{\ell}$.
But now we have to consider each element of the correlation matrix separately,
leading to a cosmic variance of order $1$ for each element. The matrix is
therefore very noisy and we need to ``dig out'' the topological signal from
the noise. Furthermore, if we detect the presence of significant off-diagonal
correlations, we still need to verify that they are due to a non-trivial
topology and not to some other mechanism that breaks isotropy.
A natural approach to the problem is then to use the expected correlation
matrix for a given topology as a kind of filter.
To this end we compute a correlation amplitude $\lambda$
which describes how close two matrices are. We do this by minimising
\begin{equation}
\chi^2[\lambda] = \sum_{s s'}
\left|{\cal A}_{s s'} - \lambda {\cal B}_{s s'}\right|^2 \label{eq:chi2}
\end{equation}
where ${\cal A}_{s s'}=a_s a_{s'}^*$ is the correlation matrix estimated from
the data and ${\cal B}_{s s'}$ the one which contains the topology that we want to test.
For a good fit we expect to find $\lambda\approx 1$
while for a bad fit $\lambda\approx0$.
We can easily solve $d\chi^2/d\lambda=0$ and find that
\begin{equation}
\lambda = \frac{\sum_{s s'} {\cal A}_{s s'} {\cal B}^*_{s s'}}{
\sum_{s s'} |{\cal B}_{s s'}|^2} \label{eq:corpar}
\end{equation}
minimises Eq.~(\ref{eq:chi2}).
As we know that we will have to compare our method against maps
from an infinite universe with the same power spectrum, we do not sum
over the diagonal $s=s'$ (which corresponds to $\ell=\ell'$ and
$m=m'$), which improves the signal to noise. This corresponds to
replacing the correlation matrix by
${\cal B}\rightarrow{\cal B}-{\cal D}$ where ${\cal D}$ is a diagonal matrix with
the power spectrum on the diagonal. If the power spectrum
is constant so that ${\cal D} = C\times1$ then
the eigenvectors of the new correlation matrix are the same as
those of the original one, and the eigenvalues are replaced
by $\epsilon^{(i)}\rightarrow\epsilon^{(i)} - C$. In this
case they will no longer be positive.
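For concreteness, the estimator can be written in a few lines of code.
The following is a minimal numerical sketch (in Python with \texttt{numpy};
the function name and array conventions are ours, not taken from any
existing package), assuming ${\cal A}$ and ${\cal B}$ are available as
complex square arrays:
\begin{verbatim}
import numpy as np

def correlation_amplitude(A, B):
    # lambda of Eq. (corpar), summing only over the
    # off-diagonal elements s != s' as described above
    mask = ~np.eye(A.shape[0], dtype=bool)
    num = np.sum((A * B.conj())[mask]).real  # sum_ss' A_ss' B*_ss'
    den = np.sum(np.abs(B[mask])**2)         # auto-correlation U
    return num / den
\end{verbatim}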
We could also introduce a covariance matrix in Eq.~(\ref{eq:chi2}). In
the presence of noise this may be useful. In this study we will assume
throughout an idealised noise-free and full-sky experiment for simplicity.
At any rate the WMAP data will be cosmic variance dominated at the low
$\ell$ that we consider here, see section \ref{sec:noise}.
Neglecting the noise contribution, the covariance matrix is
$C_{qq'}=\langle{\cal B}_q{\cal B}_{q'}\rangle$ where $q=\{s,s'\}$.
But as the correlation matrices
are already expectation values, we end up with a matrix that has a
single non-zero eigenvalue $\epsilon = \sum_q {\cal B}_q^2$. If we invert
this singular matrix with the singular value decomposition (SVD) method
(setting the inverse of the zero eigenvalues to zero) and minimise the
resulting expression for
the $\chi^2$, we find again Eq.~(\ref{eq:corpar}).
It is straightforward to compute the expectation value and variance of
the $\lambda$ function for two important cases. In the first case the
universe is infinite, so that the spherical harmonics $a_{\ell m}$ are
characterised by the usual two-point function,
\begin{equation}
\langle a_{\ell m} a_{\ell'm'}^*\rangle_\infty = C_\ell \delta_{\ell\ell'}\delta_{mm'} .
\label{eq:infcor}
\end{equation}
In the second case the universe has indeed the topology described
by the correlation matrix ${\cal B}$ against which we test the $a_{\ell m}$.
In this
case the two point function of the spherical harmonics is given by
\begin{equation}
\langle a_{\ell m} a_{\ell'm'}^*\rangle_{\cal B} = {\cal B}_{ss'}
\end{equation}
In both
cases the spherical harmonics obey Gaussian statistics and the higher
$n$-point functions are uniquely determined by the two-point function
via Wick's theorem.
Let us first define the auto-correlation $U=\sum_{ss'} |{\cal B}_{ss'}|^2$.
We remind the reader that such sums in this section exclude the diagonal
terms $s=s'$ except where specifically mentioned.
For an infinite universe, we notice that if we sum only over the
non-diagonal elements $s \ne s'$ then,
since $\langle a_s a_{s'}^* \rangle_\infty = C_s \delta_{ss'}$, the
expectation value of $\lambda$ vanishes, $\langle \lambda \rangle_\infty =
0$. Otherwise,
\begin{equation}
\langle \lambda \rangle_\infty = \frac{1}{U} \sum_s C_s {\cal B}_{ss} .
\end{equation}
If the map has been whitened (see below),
then $\langle \lambda \rangle_\infty = {\rm tr}({\cal B})/U$.
For a finite universe,
\begin{equation}
\langle\lambda\rangle_{\cal B} = 1
\end{equation}
independently of whether we sum over the diagonal elements or not, as we just recover
the auto-correlation in the numerator. Of course the auto-correlation value
depends on the summation convention.
For the variance, in the case of an infinite universe, we find
\begin{equation}
\sigma_\infty^2
\equiv\langle \lambda^2 \rangle_\infty - \langle \lambda \rangle_\infty^2
= \frac{2}{U^2} \sum_{ss'} C_s C_{s'} |{\cal B}_{ss'}|^2 .
\end{equation}
Again the sum depends on whether we keep the diagonal elements or
not. For a whitened map, the result simplifies to $\sigma_\infty^2 = 2/U$.
In a finite universe,
\begin{equation}
\sigma^2_{\cal B} = \frac{2}{U^2} {\rm tr}\left({\cal B}\BB^*{\cal B}\BB^*\right) ,
\end{equation}
however now we need to be more careful if we discard the diagonal elements,
as then
\begin{equation}
\sigma^2_{\cal B} = \frac{2}{U^2} \sum_{s_1\neq s_2,\,s_3\neq s_4}
{\cal B}_{s_1 s_2}{\cal B}^*_{s_2 s_3}{\cal B}_{s_3 s_4}{\cal B}^*_{s_4 s_1} .
\label{eq:corr_A_error}
\end{equation}
Table \ref{tab:lambda} shows the expectation values and variances
for a selection of topologies, computed with these formulas. It may
be surprising that the variance of $\lambda$ for an infinite universe
depends on the test-topology. However, Eq.~(\ref{eq:corpar}) depends
on ${\cal B}$ even if the $a_{\ell m}$ do not. The variance is a measure of how
different ${\cal B}$ is from the diagonal ``correlation matrix'' of an
infinite universe, Eq.~(\ref{eq:infcor}). The larger the difference,
the smaller the variance of $\lambda$, as the random off-diagonal
correlations present in the $a_{\ell m}$ are less likely to match those
of the test-matrix ${\cal B}$. The value of $\ell_{\rm max}$ in the table was
chosen essentially arbitrarily; we discuss later how it influences
the measurements. We have also introduced a ``signal to noise ratio'' S/N
which is the difference of the expectation values, divided by the
errors added in quadrature,
\begin{equation}
S/N({\cal B},X) =
\frac{|\langle X \rangle_\infty - \langle X\rangle_{\cal B}|}
{\sqrt{\sigma(X)^2_\infty+\sigma(X)^2_{\cal B}}} .
\label{eq:sn}
\end{equation}
Here $X$ is the estimator used. This gives only a rough indication
of the true statistical significance with which a universe with
the given topology can be distinguished from an infinite universe.
As the distribution of $\lambda$ and $\chi^2$ are not exactly Gaussian,
S/N is not exactly measured in units of standard deviations. However,
it is sufficient to compare the different methods and to
illustrate how well different topologies can be detected. For
precise statistical results we fit the full distribution, see
appendix \ref{app:rot}.
\begin{table}[ht]
\begin{tabular}{|l|cccc|}
\hline
topology & $\ell_{\rm max}$ & $\lambda_\infty$ & $\lambda_{\cal B}$ & S/N [$\sigma$] \\
\hline
T[2,2,2] & 60 & $0\pm0.017$ & $1\pm0.102$ & $9.7$ \\
T[4,4,4] & 60 & $0\pm0.046$ & $1\pm0.082$ & $10.6$ \\
T[2,2,2] & 16 & $0\pm0.03$ & $1\pm0.34$ & $2.9$ \\
T[4,4,4] & 16 & $0\pm0.09$ & $1\pm0.22$ & $4.2$ \\
T[6,6,1] & 16 & $0\pm0.08$ & $1\pm0.33$ & $2.9$ \\
T[15,15,6] & 16 & $0\pm0.51$ & $1\pm0.59$ & $1.3$ \\
\hline
\end{tabular}
\caption{Comparison of the mean and standard deviation of $\lambda$ for
different topologies and different $\ell_{\rm max}$, normalised with the
true power spectrum. The S/N value is given by Eq.~(\ref{eq:sn}).
\label{tab:lambda}}
\end{table}
The power spectrum $C_\ell$ depends of course on the cosmological
parameters. To minimise this potential
problem we normalise the correlation matrices either by the diagonal
$C_s\equiv \langle a_s a^*_s \rangle$ or by the usual
orientation-averaged power spectrum
\begin{equation}
C_\ell = \frac{1}{2\ell+1} \sum_m |a_{\ell m}|^2 ,
\end{equation}
via the prescription
\begin{equation}
{\cal B}_{ss'} \rightarrow \frac{{\cal B}_{ss'}}{\sqrt{C_s C_{s'}}} .
\end{equation}
This is often called ``whitening'', and it serves to enforce the
same (white noise) power spectrum in both the template and the
model being tested. {\em After} applying this normalisation the
power spectrum is just $C_s=1$. We apply the same normalisation
to the $a_{\ell m}$. As we will not in general know their ``true'' input
power spectrum,
we use the one recovered from the $a_{\ell m}$ themselves.
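As an illustration, the whitening prescription might be implemented as
follows (a minimal sketch, ours: we assume the $a_s$ are stored in a
1-d complex array indexed by $s-1$ with $s=\ell(\ell+1)+m$ and
$\ell\geq1$, and the helper name \texttt{whiten} is hypothetical):
\begin{verbatim}
import numpy as np

def whiten(alm, B, lmax):
    # estimate C_l from the a_lm themselves and divide
    # both the a_lm and B by sqrt(C_s C_s')
    Cs = np.empty(len(alm))
    for l in range(1, lmax + 1):
        sl = slice(l*l - 1, l*l + 2*l)        # all m for this l
        Cs[sl] = np.mean(np.abs(alm[sl])**2)  # estimated C_l
    norm = np.sqrt(Cs)
    return alm / norm, B / np.outer(norm, norm)
\end{verbatim}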
As can be seen in table \ref{tab:lambdadiv},
the division by the recovered power spectrum greatly reduces the
variance of $\lambda$ and so improves the detection power for the
different topologies. In contrast to table \ref{tab:lambda}, we could
not compute these numbers analytically; we estimated them from
$10^4$ random realisations each of maps with the trivial topology
and with the ${\cal B}$ topology.
\begin{table}[ht]
\begin{tabular}{|l|cccc|}
\hline
topology & $\ell_{\rm max}$ & $\lambda_\infty$ & $\lambda_{\cal B}$ & S/N [$\sigma$] \\
\hline
T[2,2,2] & 60 & $0\pm0.015$ & $0.973\pm0.030$ & $29.0$ \\
T[4,4,4] & 60 & $0\pm0.051$ & $0.976\pm0.044$ & $14.5$ \\
T[2,2,2] & 16 & $0\pm0.032$ & $0.924\pm0.100$ & $8.8$ \\
T[4,4,4] & 16 & $0\pm0.091$ & $0.948\pm0.100$ & $7.0$ \\
T[6,6,1] & 16 & $0\pm0.083$ & $0.894\pm0.200$ & $4.1$ \\
T[15,15,6] & 16 & $0\pm0.534$ & $0.971\pm0.553$ & $1.3$ \\
\hline
\end{tabular}
\caption{Comparison of the mean and standard deviation of $\lambda$ for
different topologies and different $\ell_{\rm max}$, normalised with the
power spectrum {\em estimated independently for each realisation}. As we
see, the signal to noise ratio is improved considerably.
\label{tab:lambdadiv}}
\end{table}
For an infinite universe $C_s$ is independent of $m$ and it does
not matter whether we divide by $C_s$ or $C_\ell$.
For non-trivial topologies this is not
the case as additional correlations are induced in different
$m$ modes. For this reason, the division by the $m$-averaged $C_\ell$
tends to lead to somewhat stronger constraints.
Of course we lose the information encoded in the power spectrum,
like the suppression of fluctuations with wavelengths larger than
the size of the universe. However, we feel that the improved
stability to mis-estimates of the power spectrum and the reduced
dependence on the cosmological parameters is worth the trade-off.
The numerical evaluation of Eq.~(\ref{eq:corpar}) requires a double
sum, with each of the two indices running over
$s_{\rm max}=\ell_{\rm max}(\ell_{\rm max}+2)$ matrix coefficients. It scales
therefore as $\ell_{\rm max}^4$. But the correlation matrix of an infinite
universe is diagonal, so that we only need to perform a single
sum. It should therefore be possible to reduce the work for matrices
that are close to being diagonal, i.e.~for universes with a very large
compactification scale. A possibility is to decompose the
correlation matrix into a sum over eigenvalues and
eigenvectors and to retain only the most important eigenvectors.
As the correlation
matrix is also a covariance matrix, this is somewhat analogous to
principal component analysis or the Karhunen-Loeve transform.
For a correlation matrix ${\cal B}$ we will write the decomposition as
\begin{equation}
{\cal B}_{ss'} = \sum_i \epsilon^{(i)} v^{(i)}_s v^{(i)*}_{s'}
= \sum_i b^{(i)}_s b^{(i)*}_{s'} .
\label{eq:evec}
\end{equation}
The $\epsilon^{(i)}$ are the eigenvalues
of the matrix ${\cal B}$; they are real and positive as the matrix is hermitian and positive definite. This
allows us to define effective spherical harmonics
$b^{(i)}_s\equiv\sqrt{\epsilon^{(i)}}v^{(i)}_s$, which have, for example,
the same properties under rotation as the usual $a_{\ell m}$.
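A sketch of this truncation (ours, assuming \texttt{numpy}; any small
negative eigenvalues that appear numerically, or after removing the
diagonal, are clipped to zero):
\begin{verbatim}
import numpy as np

def effective_harmonics(B, keep):
    # B ~ b @ b.conj().T with columns b^(i) = sqrt(eps_i) v^(i)
    eps, v = np.linalg.eigh(B)                 # ascending order
    eps, v = eps[::-1][:keep], v[:, ::-1][:, :keep]
    return v * np.sqrt(np.clip(eps, 0.0, None))
\end{verbatim}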
\section{Using the likelihood\label{sec:like}}
Instead of considering the correlation between the recovered and
the theoretical matrix, we can think of the two-point
correlation matrix as the covariance matrix of the $a_{\ell m}$. Then
we may ask the question, what is the probability of a
covariance matrix ${\cal C}$ given the measured $a_{\ell m}$. This can be
answered using Bayesian statistics.
In a first step we need to construct the likelihood function.
The probability distribution for a Gaussian random variable $x$
with variance $\sigma^2$ and zero expectation value is
\begin{equation}
p(x|\sigma) = \frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{x^2}{2\sigma^2}}
\end{equation}
If we assume that we measure $x$ and want to know $\sigma$, then
the likelihood function for finding a certain $x$ is given by
${\cal L}(\sigma) \equiv p(x|\sigma)$. We write the likelihood as a function
of the variance, as this is the model parameter that we are interested
in.
For many
independent variables, the probability distribution is the
product, which leads to a sum in the exponent. In the case of
the $a_{\ell m}$, the random variables are
not independent but are distributed according to a
multivariate Gaussian distribution with a covariance matrix
${\cal C}$. The likelihood function then is
\begin{equation}
p(a_{\ell m}|{\cal C}) =
{\cal L}({\cal C}) \propto \frac{1}{\sqrt{|{\cal C}|}} \exp\left\{-\frac{1}{2}
\sum_{s,s'} a^*_s {\cal C}^{-1}_{ss'} a_{s'} \right\} ,
\end{equation}
where $|{\cal C}|$ is the determinant of the matrix ${\cal C}$. The covariance
matrix is given by the two-point correlation matrix, and
$\langle a_s \rangle=0$. Any further model assumptions are
implicitly included in the choice of ${\cal C}$.
Using Bayes' law we can invert the probability to find
\begin{equation}
p({\cal C}|a_{\ell m}) = \frac{p(a_{\ell m}|{\cal C}) p({\cal C})}{p(a_{\ell m})} .
\end{equation}
The probability in the denominator is a normalisation constant, while
$p({\cal C})$ is the prior probability of a given topology encoded by
${\cal C}$. We will assume that we have no prior information
about the topology of the universe, so that this is a constant as
well. In this case $p({\cal C}|a_{\ell m})\propto{\cal L}({\cal C})$, i.e.~we can use the
likelihood function to estimate the probability of a topology
given a set of $a_{\ell m}$. For our purpose, the covariance matrix is
just given by the correlation matrix ${\cal B}$. In general, one may have
to add noise to it, and maybe introduce a sky cut.
Generally it is preferable to consider the logarithm of the likelihood,
$\log({\cal L}) = -1/2(\log(|{\cal B}|)+\chi^2)+{\rm const.}$ where we have
defined
\begin{equation}
\chi^2 = \sum_{s,s'} a^*_s {\cal B}^{-1}_{ss'} a_{s'} .
\end{equation}
We notice that there is a potential issue with the normalisation of
the input model: If $a_s \rightarrow 0$ then $\chi^2\rightarrow0$ --
generally any model whose $a_s$ lead to a bad fit (high $\chi^2$) could
be renormalised until a reasonable likelihood is obtained. It is therefore
required to fix the overall normalisation, and we will do this
by using the whitened $a_s$, in which case the normalisation
is fixed to $\sum_s |a_s|^2=s_{\rm max}$.
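In code, the log-likelihood for whitened $a_s$ and a positive-definite
${\cal B}$ could look like this (a sketch assuming \texttt{scipy}; a
Cholesky factorisation yields both $\chi^2$ and $\log|{\cal B}|$ in one go):
\begin{verbatim}
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def log_likelihood(alm, B):
    # log L = -(log|B| + a^dagger B^{-1} a)/2 + const.
    c, low = cho_factor(B)
    chi2 = np.real(alm.conj() @ cho_solve((c, low), alm))
    logdet = 2.0 * np.sum(np.log(np.real(np.diag(c))))
    return -0.5 * (logdet + chi2)
\end{verbatim}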
For the two special cases, the infinite universe and $a_{\ell m}$ distributed
according to ${\cal B}$, we can compute expectation value and variance.
For the general case we will write
$\langle a_s a_{s'}^*\rangle = {\cal A}_{ss'}$. Then
\begin{equation}
\langle \chi^2 \rangle = \sum_{ss'} \langle a_s^* a_{s'}\rangle {\cal B}_{ss'}^{-1}
= {\rm tr}({\cal A}{\cal B}^{-1}) ,
\end{equation}
where we have used the hermiticity of the correlation matrices. The
two special cases are
\begin{eqnarray}
\langle \chi^2 \rangle_\infty &=& \sum_s C_s {\cal B}_{ss}^{-1} \\
\langle \chi^2 \rangle_{\cal B} &=& {\rm tr}(1) = s_{\rm max}
\end{eqnarray}
As the $a_{\ell m}$ are Gaussian random variables, we expect to find
that $\chi^2$ is distributed with a $\chi^2$-like distribution.
The general expression is rather cumbersome, but for the two
special cases we find
\begin{equation}
\sigma^2_{\cal B} \equiv
\langle (\chi^2)^2 \rangle_{\cal B} - \langle \chi^2 \rangle^2_{\cal B}
= 2 s_{\rm max}
\end{equation}
and
\begin{equation}
\sigma^2_\infty = 2 \sum_{ss'} C_s C_{s'} |{\cal B}^{-1}_{ss'}|^2 .
\end{equation}
We list in table \ref{tab:chi2} some examples, together with the number of
standard deviations that the two expectation values lie apart.
\begin{table}[ht]
\begin{tabular}{|l|cccc|}
\hline
topology & $\ell_{\rm max}$ & $\chi^2_\infty$ & $\chi^2_{\cal B}$ & S/N [$\sigma$] \\
\hline
T[2,2,2] & 60 & $37168\pm2373$ & $3720\pm86$ & $14.1$ \\
T[4,4,4] & 60 & $14656\pm1517$ & $3720\pm86$ & $7.2$ \\
T[2,2,2] & 16 & $5608\pm738$ & $288\pm24$ & $7.2$ \\
T[4,4,4] & 16 & $1802\pm300$ & $288\pm24$ & $5.0$ \\
T[6,6,1] & 16 & $20781\pm7103$ & $288\pm24$ & $2.9$ \\
T[15,15,6] & 16 & $309\pm28$ & $288\pm24$ & $0.6$ \\
\hline
\end{tabular}
\caption{Same as table \ref{tab:lambda} for $\chi^2$.
\label{tab:chi2}}
\end{table}
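The entries of table \ref{tab:chi2} follow directly from these
formulas; a short sketch (ours, assuming \texttt{numpy}) that evaluates
the resulting S/N of Eq.~(\ref{eq:sn}) for a given ${\cal B}$ reads:
\begin{verbatim}
import numpy as np

def chi2_sn(B, Cs):
    # S/N of Eq. (sn) for the chi^2 estimator
    Binv = np.linalg.inv(B)
    smax = B.shape[0]
    mean_inf = np.sum(Cs * np.diag(Binv).real)
    var_inf = 2.0 * np.sum(np.outer(Cs, Cs) * np.abs(Binv)**2)
    mean_B, var_B = float(smax), 2.0 * smax
    return abs(mean_inf - mean_B) / np.sqrt(var_inf + var_B)
\end{verbatim}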
In these computations, as in the corresponding ones for the
correlation coefficient,
we have assumed that we normalise the observed $a_{\ell m}$ by the
``true'' power spectrum (or diagonal). However, we do not know what it is.
If we instead normalise them by the estimated one (which is different for
each realisation), we change the statistics. It is now no longer Gaussian.
Table \ref{tab:chi2div} reproduces the previous
one, but now for this scenario. We estimated the numbers from $10^4$
numerical realisations for each topology. Again the detection power
increases considerably.
\begin{table}[ht]
\begin{tabular}{|l|cccc|}
\hline
topology & $\ell_{\rm max}$ & $\chi^2_\infty$ & $\chi^2_{\cal B}$ & S/N [$\sigma$] \\
\hline
T[2,2,2] & 60 & $37366\pm1123$ & $4655\pm438$ & $27.1$ \\
T[4,4,4] & 60 & $14932\pm1157$ & $4027\pm162$ & $9.3$ \\
T[2,2,2] & 16 & $5690\pm477$ & $474\pm131$ & $10.5$ \\
T[4,4,4] & 16 & $1841\pm196$ & $335\pm48$ & $7.5$ \\
T[6,6,1] & 16 & $21093\pm5645$ & $786\pm557$ & $3.6$ \\
T[15,15,6] & 16 & $309\pm10$ & $289\pm5$ & $1.8$ \\
\hline
\end{tabular}
\caption{Same as table \ref{tab:lambdadiv} for $\chi^2$.
\label{tab:chi2div}}
\end{table}
In appendix \ref{app:opt} we compare the structure of the correlation
estimator to the likelihood $\chi^2$. We find that for many cases the
$\chi^2$ has minimal variance.
\section{Rotating the map into position}
The situation discussed so far is somewhat misleading:
Nature is rather unlikely
to align the topology of the universe with our coordinate system.
The correlation matrices are not invariant under rotations,
as rotations mix $a_{\ell m}$ with different $m$.
To parametrise the rotations we use the three Euler angles
$\alpha$, $\beta$ and $\gamma$ which describe three subsequent
rotations around the $z$, the $y$ and again the $z$ axis. The
first and last rotation just lead to a phase change.
The rotation around the y-axis couples different $m$ and is
given by Wigner rotation matrices $d^\ell_{mm'}$,
\begin{equation}
a_{\ell m} \rightarrow \sum_{m'} e^{-i(m\alpha+m'\gamma)}
d^\ell_{mm'}(\beta) a_{\ell m'} .
\end{equation}
Together, the three rotations can represent any element of the
rotation group in the spin-$\ell$ representation. We use the relations given
in \cite{choi} to compute the rotation matrices.
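A slow but explicit reference implementation of this rotation (ours; it
evaluates the Wigner-$d$ elements with \texttt{sympy} rather than the
recursions of \cite{choi}, and assumes the ordering $m=-\ell\ldots\ell$
within each $\ell$ used above):
\begin{verbatim}
import numpy as np
from sympy.physics.quantum.spin import Rotation

def rotate_alm(alm, lmax, alpha, beta, gamma):
    out = np.empty_like(alm)
    for l in range(1, lmax + 1):
        m = np.arange(-l, l + 1)
        # Wigner small-d matrix d^l_{m m'}(beta)
        d = np.array([[complex(Rotation.d(l, mi, mj, beta).doit())
                       for mj in range(-l, l + 1)]
                      for mi in range(-l, l + 1)])
        block = alm[l*l - 1 : l*l + 2*l]
        out[l*l - 1 : l*l + 2*l] = (np.exp(-1j*m*alpha)
            * (d @ (np.exp(-1j*m*gamma) * block)))
    return out
\end{verbatim}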
Figure \ref{fig:rot} shows an example
where we plot $\lambda$ while rotating the $a_{\ell m}$ azimuthally.
The figure represents the case $\ell_{\rm max}=60$; for lower values
of $\ell_{\rm max}$ the peaks are less sharp and there is less sub-structure.
The same is true for the $\chi^2$, while the peaks of the likelihood,
which is proportional to $\exp(-\chi^2/2)$, are narrower still.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=70mm]{figures/corr2.eps} \\
\caption{\label{fig:rot} Behaviour of the correlation coefficient
$\lambda$ under a rotation around the z-axis. The signal is maximal
only for very well-defined alignments. We used a T[2,2,2] correlation
matrix and $a_{\ell m}$ derived from a T[2,2,2] topology.}
\end{center}
\end{figure}
We can therefore not avoid probing all possible rotations, either
by computing the average or by taking
the maximum/minimum of our estimator over all orientations.
Possibly the most straightforward approach is to try many random rotations
\cite{graca}. This is simple to program and automatically exploits any
symmetries present in the
template. But due to the precision needed to find the best
alignment for some templates, we found that we need in excess of
$10^6$ rotations to get correct results for $\ell_{\rm max}=60$.
We can on the other hand probe
systematically all orientations, for example with the total convolution
method \cite{totconv}. In this approach, the rotations with the three
Euler angles are replaced by a three-dimensional FFT. This speeds the
procedure up by a large factor. However, we found that
we may nonetheless miss the best-fit peaks which can be very sharp
(see Figs.~\ref{fig:rot} and \ref{fig:map}).
\begin{figure}[ht]
\begin{center}
\includegraphics[width=45mm,angle=90]{figures/corrmap.ps} \\
\caption{\label{fig:map} The maximal correlation coefficient for the
case of a universe with T[2,2,2] topology. The sharp, high peaks
correspond to the correct orientation of the map with respect to the
template.}
\end{center}
\end{figure}
If we limit ourselves to finding the maximum/minimum efficiently,
then we can also start with a random rotation and search for a
local extremum nearby. We then repeat the procedure for different
random starting locations until we have found a stable global maximum
(for example, eight times the same global maximum). This is the safest
method, and can be relatively fast depending on the topology.
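A sketch of this restart strategy (ours, assuming \texttt{scipy};
\texttt{chi2\_of\_angles} is a hypothetical callable returning the
$\chi^2$ for Euler angles $(\alpha,\beta,\gamma)$):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def global_min(chi2_of_angles, n_hits=8, tol=1e-6, seed=0):
    # repeat local searches from random orientations until the
    # best minimum has been found n_hits times
    rng = np.random.default_rng(seed)
    best, hits = np.inf, 0
    while hits < n_hits:
        x0 = [rng.uniform(0, 2*np.pi),
              np.arccos(rng.uniform(-1, 1)),
              rng.uniform(0, 2*np.pi)]
        r = minimize(chi2_of_angles, x0, method="Nelder-Mead")
        if r.fun < best - tol:
            best, hits = r.fun, 1
        elif abs(r.fun - best) <= tol:
            hits += 1
    return best
\end{verbatim}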
Computing the average is therefore quite difficult and slow.
We also found that using the maximum or minimum results in a much
stronger detection than using the average, at least for the $\lambda$
and $\chi^2$ estimator. It is possible to improve the average by
using the likelihood which is proportional to $\exp(-\chi^2/2)$.
This decreases the weight of the ``wrong'' orientations exponentially.
However, it makes the average even harder to compute. Furthermore,
it lends itself readily to a Bayesian interpretation which is quite
different from the frequentist approach followed so far. For this
reason we will consider only the maximum/minimum approach here and
defer the discussion of the average likelihood to section \ref{sec:evidence}.
We also note that it makes no difference if we consider the
$\chi^2$ estimator or the likelihood when using the extremum
over orientations. The exponential function is monotonic and so
the maximum or minimum point will not change under it (except that
the minimum of the $\chi^2$ will turn into a maximum of the likelihood
and vice versa). For the
same reason, it does not change the statistical weight. If $99$
realisations of model $A$ have a lower $\chi^2$ than any of model
$B$, then those $99$ realisations will have a higher likelihood
as well.
A drawback of using the extremum over all rotations is that we do not
know the resulting distribution function. In general
we have to compute a large number of test-cases to obtain the distribution,
but this is very time-consuming and for high $\ell_{\rm max}$ computing more
than a few hundred realisations becomes prohibitive, at least on a
single processor.
Instead we can find a good approximation to the new distribution
by assuming
that each rotation leads to a new independent Gaussian distribution.
If there are $N$ independent rotations
then we
need to know the distribution of the maximal value of $N$ draws from
a Gaussian distribution. This leads to an extreme value distribution,
and exact results are known only for $N<6$. However, for very large
$N$, the distribution should converge to one of three limiting cases,
analogously to the central limit theorem (see e.g.~\cite{ext_val}).
If we fit these distributions
to the numerical results then we can obtain confidence limits with
a reasonable amount of CPU time. We discuss this in more detail in
appendix \ref{app:rot}.
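For the maximum over many (approximately Gaussian) draws the relevant
limiting family is the Gumbel distribution; fitting it requires only a
modest number of simulations (a sketch, assuming \texttt{scipy}):
\begin{verbatim}
import numpy as np
from scipy.stats import gumbel_r

def p_value_of_max(samples, observed):
    # fit a Gumbel law to simulated maximised estimator values
    # and return P(max > observed)
    loc, scale = gumbel_r.fit(samples)
    return gumbel_r.sf(observed, loc=loc, scale=scale)
\end{verbatim}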
\begin{table}[ht]
\begin{tabular}{|l|cccc|}
\hline
topology & $\ell_{\rm max}$ & $\chi^2_\infty$ & $\chi^2_{\cal B}$ & S/N [$\sigma$] \\
\hline
T[2,2,2] & 60 & $33237\pm586$ & $4588\pm382$ & $41$ \\
T[4,4,4] & 60 & $11146\pm438$ & $4057\pm204$ & $14$ \\
T[2,2,2] & 16 & $4062\pm172$ & $469\pm172$ & $17$ \\
T[4,4,4] & 16 & $1180\pm73$ & $350\pm47$ & $10$ \\
T[6,6,1] & 16 & $7719\pm1125$ & $675\pm370$ & $6$ \\
T[15,15,6] & 16 & $287\pm2.1$ & $285\pm2.5$ & $0.6$ \\
\hline
\end{tabular}
\caption{Comparison of the mean and standard deviation of the $\chi^2$ for
different topologies and different $\ell_{\rm max}$, normalised with the
power spectrum and minimised over rotations.
\label{tab:chi2rot}}
\end{table}
We compare in tables \ref{tab:chi2rot} and \ref{tab:lambdarot} the
minimal $\chi^2$ and maximal $\lambda$ values respectively, taken over
all possible orientations. We also quote the resulting S/N value. We
notice that especially the $\chi^2$ estimator gains in sensitivity.
This seems rather surprising, as the distance between the estimator
values of an infinite and a finite universe will in general decrease
when taking the extremum. However, we also notice that the variance
is dramatically decreased, which in turn leads to the even higher
detection power.
The reduction of the variance, especially for the infinite universe case,
is easy to understand. In tables \ref{tab:lambdadiv} and \ref{tab:chi2div}
we use the best-fit
alignment for the maps of a finite universe. But the maps with the trivial
topology are always randomly aligned (being statistically isotropic). The
variance for the infinite universe maps contains therefore an effective
``random orientation'' contribution. Taking the extremum over all orientations
eliminates this contribution. As the infinite universe variance dominates
strongly in the case of the $\chi^2$ estimator, we find that this estimator
benefits more from the reduction of the variance.
\begin{table}[ht]
\begin{tabular}{|l|cccc|}
\hline
topology & $\ell_{\rm max}$ & $\lambda_\infty$ & $\lambda_{\cal B}$ & S/N [$\sigma$] \\
\hline
T[2,2,2] & 60 & $0.08\pm0.01$ & $0.98\pm0.03$ & $28$ \\
T[4,4,4] & 60 & $0.21\pm0.02$ & $0.98\pm0.05$ & $14$ \\
T[2,2,2] & 16 & $0.16\pm0.02$ & $0.95\pm0.08$ & $10$ \\
T[4,4,4] & 16 & $0.38\pm0.05$ & $0.98\pm0.09$ & $6$ \\
T[6,6,1] & 16 & $0.35\pm0.05$ & $0.94\pm0.19$ & $3$ \\
T[15,15,6] & 16 & $1.84\pm0.25$ & $1.86\pm0.27$ & $0$ \\
\hline
\end{tabular}
\caption{Same as table \ref{tab:chi2rot} for $\lambda$.
\label{tab:lambdarot}}
\end{table}
As a final point, we notice that the maximised value of $\lambda$ for
the T[15,15,6] topology in table \ref{tab:lambdarot} is larger than $1$.
This is a sign that we cannot detect that topology. The fluctuations
are so large that they completely overwhelm the signal. After
maximising over orientations we end up with $\lambda > 1$.
\section{Discussion of general results\label{sec:res}}
\subsection{What angular resolution is required?}
Is it better to test the maps to arbitrarily high $\ell_{\rm max}$, or to
use a lower resolution? One important consideration is the
amount of work (and thus of time) needed to evaluate the
estimator. For both estimators we need to sum over $s$ and
$s'$. This means that the required number of operations scales
like $\ell_{\rm max}^4$. The matrix inversion required for the likelihood
evaluation scales like $\ell_{\rm max}^6$. However, for two
reasons it is normally not the limiting factor. Firstly, as discussed
in the previous section, we still need to average over directions.
To do that we only need to invert the matrix once at the start,
not for every evaluation. But we need to evaluate the likelihood
for each orientation, and the number of the required rotations scales
roughly like $\ell_{\rm max}^2$. We therefore end up with an $\ell_{\rm max}^6$ scaling
at any rate. Secondly, the most time consuming
procedure is the estimation of the variance using simulated
maps, and again we only need to invert the matrix once as it
stays the same. $\ell_{\rm max}^6$ is a rather steep growth,
and it is certainly preferable to use the smallest matrices
that guarantee a detection.
On the other hand, does the detection always improve with growing
$\ell_{\rm max}$? Let us have a look at the correlation estimator, in the
case of a whitened map. Clearly $\sigma_\infty^2 = 2/U$ can only
decrease as long as there are {\em any} off-diagonal elements
in the correlation matrix. But this is not the dominant error.
However, we expect that the main contribution to
Eq.~(\ref{eq:corr_A_error}) is due to the remaining diagonal
entries $s_2=s_3$ and $s_1=s_4$. This term of the sum
is equal to the auto-correlation $U$ and so contributes the same
error as $\sigma_\infty^2$. As the signature of the topology
becomes very weak, we expect that the two errors become
comparable, but are still decreasing functions of $\ell_{\rm max}$.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=70mm]{figures/lscal_norot.eps} \\
\caption{\label{fig:lscal_norot} Detection significance assuming that
we know the correct orientation. The topologies were T[2,2,2] (solid black
and dotted red line) and T[4,4,4] (dashed blue and dot-dashed magenta line).
The estimators were respectively the correlation amplitude $\lambda$
(dotted red and dot-dashed magenta line) and the likelihood $\chi^2$
(solid black and dashed blue line).}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=70mm]{figures/lscal_withrot.eps} \\
\caption{\label{fig:lscal_withrot} Detection significance when maximising
over all orientations. The topologies were T[2,2,2] (solid black
and dotted red line) and T[4,4,4] (dashed blue and dot-dashed magenta line).
The estimators were respectively the correlation amplitude $\lambda$
(dotted red and dot-dashed magenta line) and the likelihood $\chi^2$
(solid black and dashed blue line).}
\end{center}
\end{figure}
We compare in Figs.~\ref{fig:lscal_norot} and
\ref{fig:lscal_withrot} the scaling of $S/N(T[2,2,2])$ and
$S/N(T[4,4,4])$ respectively, for the correlation estimator
(red dotted / magenta dash-dotted) and
the likelihood method (black solid / blue dashed). In all cases we used 100
realisations to compute the average and standard deviation, which
explains the noisy curves. As discussed earlier, we find that
taking the extremum over rotations can increase the detection
power, especially for the $\chi^2$ estimator.
We also see that for the T[4,4,4] topology and the correct orientation,
the correlation method eventually overtakes the likelihood method.
This is most likely because the T[4,4,4] correlation matrix is closer
to being diagonal than the T[2,2,2] correlation matrix. At high $\ell$
the diagonal elements start to dominate the contributions to the
$\chi^2$. The correlator method is not sensitive to this contribution
as it does not sum over the diagonal elements. After maximising over
orientations, on the other hand, the likelihood is always superior
to the correlation method, except maybe for the highest $\ell_{\rm max}$.
We further notice that the detection power keeps increasing with
increasing $\ell_{\rm max}$, even though things tend to slow down beyond
$\ell\approx40$. This means that it is useful to consider the
largest $\ell$ for which we have the correlation matrix and
which we can analyse in a reasonable amount of time. Unfortunately, it is also
the case (and hardly surprising) that the smallest universes
profit the most from analysing smaller scales. The traces from
large but finite universes become rapidly weaker as $\ell_{\rm max}$
increases. As there is little practical difference between
a 20 $\sigma$ detection and a 50 $\sigma$ detection, it seems
in general quite sufficient to consider scales up to $\ell_{\rm max}=40$
to $60$. The higher $\ell$ may become more important when we
also consider the ISW effect.
\subsection{What size of the universe can be detected?}
From the suppression of the low-$\ell$ modes in the angular
power spectrum, the T[4,4,4] topology seems a good candidate for
the global shape of the universe. Can we constrain it with
one of our methods? Tables \ref{tab:lambdarot} and
\ref{tab:chi2rot} show that we can indeed distinguish a universe
with T[4,4,4] topology from an infinite one at over 10 $\sigma$.
As in the previous section we plot in Figs.~\ref{fig:Xscal_norot} and
\ref{fig:Xscal_withrot} the detection significance
both before and after maximising over directions. This time
we study two families of slab spaces. The first one, T[X,X,1], has one
very small direction of size $1/H_0$ and we vary the other two. We find that we
can clearly detect this kind of topology at $\ell_{\rm max}=16$ for any size
of the larger dimensions. For this example topology it is very
striking how the correlation estimator is better if we use the
``correct'' alignment, while the $\chi^2$ becomes more powerful
as we extremise over orientations.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=70mm]{figures/Xscal_norot.eps} \\
\caption{\label{fig:Xscal_norot}
Detection significance assuming that we know the correct
orientation. The topologies were T[X,X,1] (solid black
and dotted red line) and T[15,15,X] (dashed blue and dot-dashed magenta line).
The estimators were respectively the correlation amplitude $\lambda$
(dotted red and dot-dashed magenta line) and the likelihood $\chi^2$
(solid black and dashed blue line). We used $\ell_{\rm max}=16$.}
\end{center}
\end{figure}
The second family, T[15,15,X] is considerably harder to detect as
here two directions are very large and effectively infinite.
For large values of X we find no difference from an infinite
universe. As the third direction shrinks, we start to see
differences, but only for $X\leq3/H_0$ can we detect the non-trivial
topology at over 2 $\sigma$. In this case the correlation method
is always inferior to the $\chi^2$. In appendix
\ref{app:dkl} we consider a more fundamental distance measure
between correlation matrices, namely the Kullback-Leibler
divergence. We confirm that we will never be
able to distinguish T[15,15,X] with $X>6/H_0$ from an infinite universe,
see also Fig.~\ref{fig:dkl_T[15,15,X]}. This is not very surprising,
as in this case the universe is in all directions larger than the
particle horizon today.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=70mm]{figures/Xscal_withrot.eps} \\
\caption{\label{fig:Xscal_withrot} Detection significance when maximising
over all orientations. The topologies were T[X,X,1] (solid black
and dotted red line) and T[15,15,X] (dashed blue and dot-dashed magenta line).
The estimators were respectively the correlation amplitude $\lambda$
(dotted red and dot-dashed magenta line) and the likelihood $\chi^2$
(solid black and dashed blue line). Again $\ell_{\rm max}=16$.}
\end{center}
\end{figure}
\section{A simplified analysis of WMAP data\label{sec:wmap}}
To illustrate the application of these tests to real data, we
perform a simplified analysis of the WMAP \cite{wmap} data, simplified
in the sense that we do not deal with issues like map noise and sky cuts.
In general, one has to simulate a large number of maps where
both of these effects are included, and which are then analysed with the
same pipeline as the actual data map. However, as an illustration we
will analyse reconstructed full-sky maps. We use the
internal linear combination (ILC) map created by the WMAP team, which
we will call the WMAP map from now on. We also use two map reconstructions
by Tegmark {\em et al.}, a Wiener filtered map (TW) and
a foreground-cleaned map (TC) \cite{tegmap}. All of these maps are
publicly available in HEALPix format \cite{healpix} with a resolution
of $N_{\rm side}=512$. We use this software package to read the map files
and to convert them into $a_{\ell m}$.
To get some idea of the systematic errors in this analysis, we additionally
analyse the ILC map reconstructed by Eriksen {\em et al.}~(LILC). They also
produced a set of simulated LILC maps (for the trivial topology) with the
same pipeline \cite{lilc1,lilc2}. It is a necessary (but not sufficient) condition
to trust our simplified analysis that the results from these maps are
consistent with our results for an infinite universe. As an illustration we plot
in Fig.~\ref{fig:lilcdist} the distribution of $\chi^2$ for our simple infinite
universe maps (black solid histogram) and for the simulated ILC maps which contain noise
and foreground contributions (red dashed histogram). We see that the two
distributions agree quite well, to within their own variance. The variance
observed between the different reconstructed sky maps (WMAP, TC, TW and LILC)
is of the same order of magnitude. This example is for T[2,2,2] and $\ell_{\rm max}=16$, but
it is representative of the other cases.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=70mm]{figures/compare_lilc.eps} \\
\caption{\label{fig:lilcdist} The distribution of the
$\langle\chi^2\rangle_\infty$ estimator values when testing for
a T[2,2,2] universe with $\ell_{\rm max}=16$. The black solid histogram is computed from
10000 noiseless full-sky realisations used throughout this paper, while
the red dashed histogram used 1000 simulated LILC maps (see text). The vertical
lines show the $\chi^2$ values of the measured maps, from the left LILC, TW and WMAP
(coincident) and TC.}
\end{center}
\end{figure}
For our standard example, the T[4,4,4] template, we find a maximal value for the
1st year WMAP ILC map of $\lambda_{\rm max} = 0.20$. This is about
what we expect for an infinite universe. A universe exhibiting a genuine
T[4,4,4] topology should lead to roughly $\lambda_{\rm max} = 1$.
\begin{table}[ht]
\begin{tabular}{|lc|ccc|ccc|}
\hline
topology & $\ell_{\rm max}$ & $\chi^2$ & $P_\infty$ & $P_{\cal B}$ & $\lambda$ & $P_\infty$ & $P_{\cal B}$ \\
\hline
T[2,2,2] & 60 & 33130 & $0.39$ & $0$ & 0.087 & $0.20$ & $0$ \\
T[4,4,4] & 60 & 11020 & $0.40$ & $0$ & 0.20 & $0.64$ & $0$ \\
T[6,6,1] & 16 & 8805 & $0.85$ & $10^{-6}$ & 0.37 & $0.29$ & $10^{-5}$ \\
T[15,15,6] & 16 & 290 & $0.95$ & $0.01$ & 1.6 & $0.16$ & $0.84$ \\
\hline
\end{tabular}
\caption{The value of $\chi^2$ and $\lambda$ obtained for the WMAP map,
together with the probability of measuring such a value if the universe
is infinite ($P_\infty$) and if the universe has indeed the topology
that we test for ($P_{\cal B}$).
\label{tab:ilc}}
\end{table}
We give in table \ref{tab:ilc} the values of $\chi^2$ and $\lambda$
for the WMAP map. The values for the other maps are not very different.
We also give two probabilities for both estimators,
$P_\infty$ and $P_{\cal B}$. The first one is the probability of measuring
a larger value of $\lambda$ (or a smaller value of $\chi^2$) if the
universe is infinite. $P_{\cal B}$ on the other hand is the probability
of measuring a smaller value of $\lambda$ (or a larger value of
$\chi^2$) if the universe has indeed the topology that we tested for.
For a non-detection of any topology we require $P_\infty$ to be not
too small. A positive detection of a topology on the other hand
requires a larger $P_{\cal B}$. If both probabilities are large then
we cannot detect that topology (as exemplified e.g.~by the case
of T[15,15,6]).
We compute these probabilities with the best-fitting theoretical PDF,
as discussed in appendix \ref{app:rot}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=70mm]{figures/chi_TXX1_b.eps} \\
\caption{\label{fig:TXX1} Median and 95\% confidence limits as measured
with the $\chi^2$ estimator for infinite universes (upper green limits) and
universes with a T[X,X,1] topology (red lower limits), as a function of size
$X$ in units $1/H_0$. We also plot the $\chi^2$
values of the WMAP map (red crosses), the TW map
(cyan triangles), the TC map (blue circles) and the LILC map (magenta stars).
All sky maps are consistent with an infinite universe and not
consistent with a T[X,X,1] topology for any $X$. We also plot errorbars
for the LILC map simulations.}
\end{center}
\end{figure}
Fig.~\ref{fig:TXX1} shows 95\% confidence limits (estimated numerically
from $10^4$ samples) when testing for the presence (red, lower band) or absence
(green, upper band) of a T[X,X,1] topology. The WMAP data (points) are all compatible with
the infinite universe and rule out this kind of topology very strongly.
The bounds from the simulated LILC maps (black error bars) are consistent with
our simulated maps with a trivial topology, but systematically a bit lower.
We plot the same in Fig.~\ref{fig:T1515X} for a T[15,15,X] topology. Again,
WMAP is compatible with the infinite universe. But as discussed before,
we cannot detect these universes for $X>3/H_0$.
Overall, all results are consistent with an infinite universe.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=70mm]{figures/chi_T1515X.eps} \\
\caption{\label{fig:T1515X} The same as Fig.~\ref{fig:TXX1} for the
T[15,15,X] topology. Again all WMAP maps are consistent with an infinite
universe, but we can only rule out the universes with $X<3$ at more
than 95\% CL.}
\end{center}
\end{figure}
\section{Bayesian model selection\label{sec:evidence}}
The likelihood can also be used in a purely Bayesian approach.
We are interested in the probability of a model given the data,
$p({\cal M}|d)$.
If all topologies are taken to be equally probable, then through Bayes'
theorem the statistical evidence ${\cal E}({\cal M})$ for a model is proportional to the
probability of that model, given the data.
Using the three Euler angles as parameters $\Theta$,
defining the model ${\cal M}$ to be a given topology, and taking the data $d$
to be the measured $a_{\ell m}$, we can write the model evidence as
\begin{equation}
{\cal E}({\cal M}) \propto p(d|{\cal M}) = \int \mu(\Theta) \pi(\Theta) {\cal L}(\Theta) ,
\end{equation}
where $\pi(\Theta)$ is the prior on the orientation of the map, see
eg.~\cite{mckay}.
The ratio of the evidence for two topologies
is a Bayesian measure of the relative probability. We
can think of it as the relative odds of the two topologies.
A similar method to constrain the topology was applied previously to
the COBE data, see \cite{inoue}.
The measure $\mu(\Theta)$ on SO(3)
needs to be independent of the orientation\footnote{The prior and the measure
play a similar r\^{o}le and could be combined into a single quantity.
We prefer to keep them separate here to avoid confusion.}, which essentially singles
out the Haar measure (up to an irrelevant constant). In terms of the
Euler angles it is
$d\alpha d\beta d\gamma \sin(\beta)/(8\pi^2)$ with $\alpha$ and $\gamma$
going from $0$ to $2\pi$, and $\beta$ from $0$ to $\pi$. The volume
of SO(3) is then $\int \mu(\Theta)=1$. A simple way to generate random
orientations is to select $\alpha$ and $\gamma$ uniformly in $[0,2\pi]$
and $u$ in $[-1,1]$ and then set $\beta=\arccos(u)$.
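With these draws, a brute-force Monte Carlo estimate of the evidence
integral is straightforward (a sketch, ours; \texttt{log\_like} is a
hypothetical callable returning $\log{\cal L}$ for given Euler angles,
and the result is $\ln{\cal E}$ up to the topology-independent
constant). As discussed below, such plain sampling is adequate only
while the likelihood peaks are broad:
\begin{verbatim}
import numpy as np

def log_evidence_mc(log_like, n=10**5, seed=0):
    # draw orientations from the Haar measure on SO(3)
    rng = np.random.default_rng(seed)
    alpha = rng.uniform(0, 2*np.pi, n)
    gamma = rng.uniform(0, 2*np.pi, n)
    beta = np.arccos(rng.uniform(-1, 1, n))
    logL = np.array([log_like(a, b, g)
                     for a, b, g in zip(alpha, beta, gamma)])
    m = logL.max()   # stable log of the Haar-measure average
    return m + np.log(np.mean(np.exp(logL - m)))
\end{verbatim}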
The advantage of using Bayesian evidence is that it provides a
natural
probabilistic interpretation which depends only on the actually
observed data, but not on simulated data sets.
Because of this, there is no need to run large
comparison sets. This is a very different view point from
the frequentist approach followed so far.
For an infinite universe the correlation matrix is diagonal and
rotationally invariant (due to isotropy). The integral over the
alignment becomes trivial in this case. If we use whitening then
the correlation matrix is just the unit matrix and we have
\begin{equation}
\chi^2 = \sum_s |a_s|^2 = s_{\rm max}.
\end{equation}
The second equality is due to the whitening. The likelihood is then
\begin{equation}
{\cal L}_\infty = \frac{{\rm const}}{|1|^{1/2}} e^{-\chi^2/2}
= {\rm const}\, e^{-s_{\rm max}/2} ,
\end{equation}
where the constant normalisation is independent of the topology. We will
neglect it as it drops out when comparing the evidence for different
models. This ``infinite'' evidence gives us a reference point, with
our choice of measure on SO(3) and of normalisation it is
\begin{equation}
-\log({\cal E}_\infty) = \frac{s_{\rm max}}{2}.
\end{equation}
On the other hand, if the universe is infinite then
we know that the expected $\chi^2$ is the trace of the inverse of the
correlation matrix that we test for. It is again rotationally invariant
as $\langle a_s a_{s'}^*\rangle$ is rotationally invariant. The log-evidence
is on average
\begin{equation}
-\log({\cal E}) = \frac{1}{2}\left({\rm tr}({\cal B}^{-1})+\log|{\cal B}|\right).
\end{equation}
We notice that the expected log-evidence difference to the true
infinite universe is the Kullback-Leibler divergence,
\begin{equation}
\Delta \log({\cal E}) = D_{KL}(1||{\cal B})
= \frac{1}{2} \left(\log|{\cal B}|+{\rm tr}({\cal B}^{-1}-1)\right) .
\label{eq:infev}
\end{equation}
We should not forget though that this is a very crude approximation to
the evidence.
Nonetheless, Eq.~(\ref{eq:infev}) gives a useful indication of the odds
that we can detect a given topology, as it can be evaluated very rapidly,
without performing the integration over orientations. Fundamentally,
this is the amount of additional information about topology contained
in the correlation matrix ${\cal B}$. If the amount of information is not
sufficient to distinguish it from an infinite universe, no test will
ever be able to tell the two apart.
We discuss the Kullback-Leibler (KL) divergence and its possible
applications in more detail in appendix \ref{app:dkl}.
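Evaluating Eq.~(\ref{eq:infev}) is a one-liner once the (whitened)
correlation matrix is at hand (a sketch, ours, assuming \texttt{numpy}):
\begin{verbatim}
import numpy as np

def kl_to_infinite(B):
    # D_KL(1||B) = (log|B| + tr(B^-1 - 1))/2, Eq. (infev)
    _, logdet = np.linalg.slogdet(B)
    n = B.shape[0]
    return 0.5 * (logdet + np.trace(np.linalg.inv(B)).real - n)
\end{verbatim}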
Of course, faced with real data we have to evaluate the actual
evidence integral.
Unfortunately the likelihood is extremely strongly peaked around
the correct alignments (especially for a non-trivial topology),
and it is very difficult to sample from it. Already the $\lambda$
and $\chi^2$ estimators require a very precise alignment to reach
the true maximum or minimum. Exponentiating $-\chi^2$ leads to much
narrower peaks in the extrema, and makes the problem far worse.
In Fig.~\ref{fig:like} we plot the relative likelihood (normalised
to unity at the peak) for a universe with T[4,4,4] topology close to
a correct alignment (the vertical line), and for different $\ell_{\rm max}$.
The broadest peak corresponds to $\ell_{\rm max}=16$, and we added the
location of $10^4$ points evenly spaced between $0$ and $2\pi$ as
black crosses.
This corresponds to a total of roughly $10^{11}$ points to cover all of
SO(3). For $\ell_{\rm max}=16$ we could get away with using only every 10th
point (about $10^8$ points in total) and still detect the high-likelihood
region. But not so for $\ell_{\rm max}=32$ and $60$ (the narrower peaks), which
would easily be missed.
This renders methods like thermodynamic integration infeasible. On
the other hand, we are dealing only with three parameters. Direct
integration is therefore marginally possible by using an adaptive
algorithm. For $\ell_{\rm max}=16$ we need to start out with at least $10^6$ points
in order to detect the high-probability regions at all. This means that we
have to count on $10^7$ to $10^8$ likelihood evaluations. The situation
gets worse for higher resolution maps, as both the likelihood evaluations
require more time and the high-probability regions shrink. We therefore
only quote results for $\ell_{\rm max}=8$ in this section.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=70mm]{figures/like.eps} \\
\caption{\label{fig:like} Relative likelihood for a T[4,4,4] topology around
one of the symmetry points where a simulated T[4,4,4] map aligns correctly.
The broadest (black) curve is for $\ell_{\rm max}=16$, the intermediate (red) curve
$\ell_{\rm max}=32$ and the narrowest (blue) curve $\ell_{\rm max}=60$.
The vertical green line lies at $\phi=\pi/2$. The crosses show the location of
$10^4$ points between $\phi=0$ and $\phi=2\pi$. }
\end{center}
\end{figure}
\begin{table}[ht]
\begin{tabular}{|lc|cccc|c|}
\hline
topology & $\ell_{\rm max}$ & WMAP & TC & TW & LILC & $D_{KL}(1||{\cal B})$ \\
\hline
$\infty$ & 8 & $-17$ & $-17$ & $-17$ & $-17$ & 0 \\
T[2,2,2] & 8 & $-114$ & $-103$ & $-100$ & $-102$ & 172 \\
T[4,4,4] & 8 & $-46$ & $-41$ & $-47$ & $-44$ & 64 \\
T[6,6,1] & 8 & $-526$ & $-\infty$ & $-\infty$ & $-\infty$ & 1733 \\
T[15,15,6] & 8 & $-17$ & $-18$ & $-18$ & $-17$ & 1 \\
\hline
\end{tabular}
\caption{
The log-evidence $\log_{10}({\cal E})$ for a range of topologies and data maps
(see text). We also quote the KL divergence with respect to an
infinite universe for comparison.
\label{tab:evidence}}
\end{table}
As the data sets which define our likelihood we use the same four maps
as in the frequentist analysis: the ILC map by the WMAP team (WMAP), the two
maps by Tegmark {\em et al.}, the Wiener filtered map (TW) and the
foreground-cleaned map (TC), and the ILC map by Eriksen {\em et al.}~(LILC).
We quote the logarithm (to base 10) of the evidence in table \ref{tab:evidence} for
our usual range of example models. The relevant quantity for model comparison is
the difference of these values (corresponding to the ratio of the probability). If the
log evidence of a model $A$ is 3 higher than the log evidence of model $B$, we
conclude that the odds for model $A$ are $10^3$ times better. This can be seen
as fairly good odds in favour of model $A$. We plot in Fig.~\ref{fig:erf} the
correspondence between the logarithm of a probability ratio and the number
of standard deviations ($\sigma$) for a Gaussian random variable.
All topologies except T[15,15,6] are excluded at high confidence. The evidence values
for the different reconstructed CMB maps agree at least qualitatively. We plot
in Fig.~\ref{fig:wmap_ev} the evidence of the T[15,15,X] cases as a function of X.
The two smallest universes are strongly excluded, $X=2$ could
be excluded if we used a higher resolution, and the rest are too close to
the infinite universe to be constrained. We also plot the mean and
standard deviation of the simulated LILC maps as error bars.
The T[X,X,1] cases are all so completely excluded that the integral
is just barely feasible given the huge numbers involved.
We would like to remind the reader that the results in this section are always
relative to the observed map. It is therefore a bit worrying that the evidences
differ by several orders of magnitude when we consider the different full-sky
reconstructions. We also checked the stability of the results for 1000 simulations
of the LILC map with known (trivial) topology. We found it to be rather poor
(cf.~the large error bars in Fig.~\ref{fig:wmap_ev}), although
this may be partially due to the smaller range of $\ell$. Another possible source
for this lack of stability is our simplistic likelihood. The Bayesian interpretation
of the results is only true if we are able to derive the correct likelihood. This
is an important difference to the frequentist results where we calibrate the
statistical interpretation with the comparison sets. In the frequentist scenario,
we may end up with a sub-optimal test, but we will not get wrong results if we use
the wrong likelihood function. Not so in the Bayesian case, which forces us to
be more careful. A possible way out is to reconstruct a likelihood from the
set of simulated LILC maps.
Normally, a difference
of $2$ to $3$ in $\log_{10}({\cal E})$ is taken to be sufficient to strongly disfavour
a model against another one. This may be reasonable for a full analysis that takes
into account all the issues discussed in the following section. For full-sky
reconstructed maps we feel that we should require at least a difference of $10$.
Overall it seems that the frequentist approach leads to results which are more
stable against the uncertainties introduced by the full-sky reconstruction and
foreground removal.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=70mm]{figures/ev_wmap_lmax8.eps} \\
\caption{\label{fig:wmap_ev} The evidence of a T[15,15,X] topology with $\ell_{\rm max}=8$
for four different full-sky reconstructions of the WMAP data
(WMAP red crosses, TW cyan triangle, TC blue circle and LILC magenta stars).
The black error bars are derived from simulated LILC maps. They are consistent
with the actual LILC data map. The green line shows the predicted evidence of an
infinite universe.
}
\end{center}
\end{figure}
\section{Cosmic Complications\label{sec:complic}}
This paper aims at introducing and discussing the different methods for
constraining the topology of the universe in harmonic space. In doing so
we study an idealised situation with perfect data, neglecting several
issues that are present in the real world. Here we give a quick overview
over the main complications that will have to be dealt with for a rigorous
analysis. Clearly they will change the quantitative results presented
here, but we do not expect that they will lead to qualitative changes in
the results.
\subsection{Noise\label{sec:noise}}
If we assume constant and independent per-pixel noise $\sigma_N$ then
the covariance matrix of the $a_{\ell m}$ acquires an additional diagonal
term,
\begin{equation}
\langle a_s^* a_{s'} \rangle \rightarrow
\langle a_s^* a_{s'} \rangle + \sigma_N^2 \delta_{ss'} .
\end{equation}
This is fairly close to what many CMB experiments (like WMAP and Planck)
expect for their data. The
CMB power spectrum on large scales behaves roughly like $1/\ell^2$
(Harrison-Zel'dovich) with a power of about $C_{10}\approx 60 \mu{\rm K}^2$.
For any experiment that probes scales beyond the first peak, we can
conclude that the large scales ($\ell<100$ say) are completely
signal dominated. Taking WMAP as an example, we see that Fig.~1 of
\cite{wmap_hinshaw} gives a noise contribution
to the $C_\ell$ of $0.1$ to $0.6 \mu{\rm K}^2$ depending on the assembly. As the
noise additionally (to first order) does not enter in the off-diagonal
terms, we can safely neglect it for a first analysis.
More generally we expect a fixed noise variance per detector and
per observation. The resulting
per-pixel noise is $\sigma_N(x) = \sigma_0/\sqrt{N_{\rm obs}(x)}$.
Turning again to WMAP as an example, we find that
they cite a noise variance $\sigma_0 \approx 2 - 7$ mK.
Expressed in terms of the spherical harmonic coefficients, the
correlation matrix in this scenario becomes
\begin{equation}
\langle a_s^* a_{s'} \rangle +
\sigma_0^2 \int d^2\!x N_{\rm obs}^{-1}(x) Y^*_s(x) Y_{s'}(x)
\end{equation}
where the integration runs over all pixels $x$. Because of its spatial
variation, the noise is no
longer confined to the diagonal and should strictly speaking be taken
into account. But the off-diagonal terms will still be very small.
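For completeness, this noise term could be evaluated by brute force over
the pixels (a sketch, ours; note \texttt{scipy}'s convention that
\texttt{sph\_harm} takes the azimuthal angle before the polar one, and
\texttt{pix\_area} is the solid angle per pixel, e.g.\ constant for
HEALPix):
\begin{verbatim}
import numpy as np
from scipy.special import sph_harm

def noise_matrix(theta, phi, pix_area, n_obs, sigma0, lmax):
    # N_ss' = sigma0^2 sum_x dOmega Y*_s(x) Y_s'(x) / N_obs(x)
    smax = lmax * (lmax + 2)
    Y = np.empty((smax, len(theta)), dtype=complex)
    s = 0
    for l in range(1, lmax + 1):
        for m in range(-l, l + 1):
            Y[s] = sph_harm(m, l, phi, theta)  # (azimuth, polar)
            s += 1
    w = sigma0**2 * pix_area / n_obs
    return (Y.conj() * w) @ Y.T
\end{verbatim}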
The most straightforward way to include the noise is to simulate maps with
the correct power spectrum and noise properties and to co-add them.
This is especially the case when we deal with a complicated sky cut (see below).
The ILC maps that we used here have more complicated noise properties
due to the full-sky reconstruction. But the noise itself will still be
negligible on large scales, compared to the signal. More worrying are
potential foreground contaminations that were not completely subtracted.
We explore that problem partially in section \ref{sec:wmap} by using
simulated LILC maps.
\subsection{Uncertainties in the cosmological parameters}
So far we have used correlation matrices computed for a fixed cosmological
model. But there are still significant
uncertainties present in the true value of the cosmological parameters,
and even in the underlying cosmological model. An example was recently
discussed in \cite{params}.
In principle we have to take such uncertainties into account. For the
Bayesian model selection approach, we could do it straightforwardly by marginalising
over them. Of course this may mean computing a large number of correlation
matrices for different cosmological models, which would lead to a
computational challenge. Alternatively, one could consider a selection
of models and incorporate the variance of the correlations into a systematic
error on the correlation matrices.
In practice, we hope that the whitening, which eliminates differences in the
power spectrum, will also minimise the effects due to this parameter
uncertainty. At the very least it will do so for the ``infinite universe''
tests where no off-diagonal correlations are present. The result that
the full-sky WMAP maps are compatible with an infinite universe is thus
not affected by the parameter uncertainty.
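To fix ideas, whitening amounts to the following trivial operation (the
$a_{\ell m}$ storage convention and the Harrison-Zel'dovich-like toy
spectrum are our own illustrative choices):
\begin{verbatim}
import numpy as np

def whiten(alm, cl, lmax):
    # Divide each multipole by sqrt(C_l); the a_{lm} are stored as
    # contiguous blocks m = -l..l for l = 0..lmax.
    out = np.empty_like(alm, dtype=complex)
    s = 0
    for l in range(lmax + 1):
        n = 2 * l + 1
        out[s:s + n] = alm[s:s + n] / np.sqrt(cl[l])
        s += n
    return out

lmax = 8
# Monopole/dipole placeholders, then C_l ~ 1/(l(l+1)):
cl = np.array([1.0, 1.0] + [1.0 / (l * (l + 1.0))
                            for l in range(2, lmax + 1)])
nmodes = (lmax + 1) ** 2
alm = np.random.randn(nmodes) + 1j * np.random.randn(nmodes)
print(np.round(whiten(alm, cl, lmax)[:3], 3))
\end{verbatim}
After this rescaling, a change of cosmological parameters affects the
analysis only through the off-diagonal correlations.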
\subsection{The integrated Sachs-Wolfe effect \label{sec:isw}}
An issue somewhat related to the last point is that not all perturbations
are generated on the last scattering surface. Some of them are due to the
integrated Sachs-Wolfe (ISW) effect. Especially perturbations due to the
late ISW effect that
are generated relatively close to us are then not affected by the global
topology and carry no information about it. They act as a kind of noise
for our purposes. This contribution is especially problematic when
searching for matching circles in pixel space. It is readily included
when working with the correlation matrices, even though it will also be
subject to the parameter uncertainties and it will lower our
detection power substantially.
The rapid decrease of the late ISW effect with increasing $\ell$
provides an additional incentive for probing smaller scales,
$\ell\approx40-60$.
\subsection{Sky cuts}
Here we have only considered full-sky maps. Unfortunately a large part
of the sky-sphere is covered by our galaxy which leads
to foregrounds that are not easy to subtract and obscure the true CMB
signal. The most conservative approach is therefore to remove a part
of the sky via a sky cut. This amounts to introducing a mask ${\cal M}(x)$ in
pixel space, with value $1$ on the pixels $x$ where the CMB signal is
clean, and $0$ in the contaminated parts of the sky. We then
consider the pseudo-$a_{\ell m}$
\begin{equation}
\hat{a}_{\ell m} = \int d^2\!x {\cal M}(x) \delta T(x) Y^*_{\ell m} (x)
\end{equation}
instead of the true $a_{\ell m}$. We can perform the masking operation directly
in harmonic space, using the spherical Fourier transform of the mask,
\begin{equation}
{\cal M}_{s s'} = \int d^2\!x {\cal M}(x) Y^*_s(x) Y_{s'}(x) .
\end{equation}
The relation between the true $a_{\ell m}$ and the observed pseudo-$a_{\ell m}$ is
then given by $\hat{a}_s = \sum_{s'} {\cal M}_{ss'} a_{s'}$. Unfortunately
the mask matrix ${\cal M}$ corresponds to a loss of information and can in
general not be inverted. We could of course construct a pseudo-inverse via
SVD, eliminating the small singular values. However, this would be quite
similar to a full-sky reconstruction. Instead, it may be preferable
to apply the sky cut to the correlation matrix as well. The resulting
pseudo-correlation matrix is then
\begin{equation}
\hat{{\cal B}} = {\cal M}^{\dag} {\cal B} {\cal M} .
\end{equation}
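This construction is easy to illustrate numerically; in the sketch below
the $20^\circ$ cut and the white toy spectrum $C_\ell=1$ are illustrative
choices only:
\begin{verbatim}
import numpy as np
from scipy.special import sph_harm

lmax, b_cut = 8, np.radians(20.0)     # cut |b| < 20 deg (illustrative)

nth, nph = 2 * lmax + 2, 4 * lmax + 4
nodes, wts = np.polynomial.legendre.leggauss(nth)
theta = np.arccos(nodes)              # colatitude
phi = 2.0 * np.pi * np.arange(nph) / nph
TH, PH = np.meshgrid(theta, phi, indexing="ij")
W = np.outer(wts, np.full(nph, 2.0 * np.pi / nph))

mask = (np.abs(np.pi / 2.0 - TH) > b_cut).astype(float)

modes = [(l, m) for l in range(lmax + 1) for m in range(-l, l + 1)]
Y = np.array([sph_harm(m, l, PH, TH) for (l, m) in modes])
M = np.einsum("sij,ij,tij->st", Y.conj(), W * mask, Y)  # mask matrix

B = np.eye(len(modes))                # white toy spectrum, C_l = 1
Bhat = M.conj().T @ B @ M             # pseudo-correlation matrix
sv = np.linalg.svd(M, compute_uv=False)
print("singular values of M: max %.3f, min %.2e" % (sv.max(), sv.min()))
\end{verbatim}
The small singular values of ${\cal M}$ make the loss of information
explicit, and the off-diagonal structure of $\hat{{\cal B}}$ shows the
cut-induced correlations discussed below.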
This leads to two problems. The first one is purely computational:
The sky cut has a fixed orientation (with respect to the $a_{\ell m}$).
So far it did not matter if we rotated the correlation matrix or
the $a_{\ell m}$, as only the relative orientation counted. But since the
sky cut defines an absolute orientation we now need to apply the
rotation to the correlation matrix. Rotating
the correlation matrix is considerably more costly than rotating the
observed $a_{\ell m}$. The use of the eigenvector
decomposition (\ref{eq:evec}) and rotation of the effective spherical
harmonics $b_{\ell m}$ can somewhat alleviate the situation if only
a few eigenvalues dominate the sum.
The second problem is that a sky cut and its associated mask matrix
introduce just the kind of correlations between different $a_{\ell m}$ that
we are looking for. A sky cut will therefore significantly impair
our ability to constrain large universes. We will have to either
accept this limitation, or hope that better full-sky reconstruction
and component separation methods (for example \cite{comp_sep}) will
become available. However, one would have to demonstrate that such
methods do not in fact change the correlation properties of the $a_{\ell m}$
in a way that influences the detection of a topology-signature. At
the very least one has to consider such effects as systematic
errors and include them in the error budget of a full analysis.
\section{Conclusions and outlook}
In this paper we have studied three ways to constrain
the topology of our universe directly with the
correlation matrix of the $a_{\ell m}$.
If the primordial fluctuations are Gaussian then
these correlation matrices contain all the information about the
global shape of our universe that is carried by the CMB. By trying
to find their traces in the measured $a_{\ell m}$ we can construct the
most sensitive probes possible.
We studied two frequentist estimators, $\lambda$ which describes
the correlation amplitude between the theoretical correlation
matrix ${\cal B}$ and the measured $a_{\ell m}$, and $\chi^2=a^\dag {\cal B}^{-1} a$.
Although $\lambda$ has certain advantages at high $\ell$ by leaving
out the diagonal terms, we found the $\chi^2$ to be generally superior
after taking into account the random orientation of the observed map. We also computed
the Bayesian evidence, which we found to be a very sensitive probe.
But the angular integration is computationally very intensive, especially
at high resolutions. Additionally, much care is needed in constructing
the likelihood function. For these reasons,
the $\chi^2$ minimised over rotations seems the most useful of our tests.
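For a single trial orientation the statistic itself is trivial to compute;
the sketch below uses a real-valued toy parametrisation of the modes and an
invented, weakly correlated ${\cal B}$ (a real analysis would use the
whitened $a_{\ell m}$ and minimise over rotations):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 81                               # modes up to lmax = 8

# Invented positive-definite correlation matrix with weak
# off-diagonal (topological) correlations:
A = rng.standard_normal((n, n))
B = np.eye(n) + 0.1 * (A @ A.T) / n

a = rng.multivariate_normal(np.zeros(n), B)  # simulated observation
chi2_topo = a @ np.linalg.solve(B, a)        # chi^2 = a^dag B^{-1} a
chi2_iso = a @ a                             # whitened infinite universe
print("chi2(topology): %.1f   chi2(isotropic): %.1f"
      % (chi2_topo, chi2_iso))
\end{verbatim}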
For our scenario we find that even high multipoles, $\ell > 50$,
still carry important information about the topology. However,
the amount of work needed to extract the information scales as
a high power of $\ell$. For most cases $\ell \approx 30 - 40$ seems
a sufficient upper limit.
We finally apply our methods to a set of reconstructed full-sky maps based
on WMAP data. For all topologies considered (cubic and slab tori) we
find no hints of a non-trivial topology. Based on the exclusion of the
T[4,4,4] topology, we conclude that the fundamental domain is at least
$19.3$ Gpc long if it is cubic. We rule out (not very surprisingly)
any universe whose fundamental domain is smaller than $4.8$ Gpc in any
direction (based on the T[X,X,1] cases). If the universe is infinite
in two directions, then the third direction has to be larger than
$14.4$ Gpc. These limits still allow two copies of the universe inside
the current particle horizon. We prefer to understand this analysis
as a demonstration of our methods, as we neglected a range of important
issues such as the ISW effect.
The WMAP data are already cosmic-variance dominated on
the scales of interest. Future experiments will not be able to provide
significantly better CMB temperature data sets, although some improvement
may come from better foreground separation with more frequencies, and
from e.g. using polarisation maps in addition to the temperature
maps. Short of waiting a few
billion years for the universe to expand further, these tests and especially
the information-theoretic limits provided by the Kullback-Leibler divergence
give us an idea of what we can learn about the shape of our universe.
\begin{acknowledgments}
It is a pleasure to thank Andrew Liddle and Peter Wittwer
for helpful comments.
MK and LC thank the IAS Orsay for hospitality; this work
was partially supported by CNES and the Universit\'{e}
Paris-Sud Orsay.
MK acknowledges financial support from the Swiss National
Science Foundation. Part of the numerical analysis was
performed on the University of Geneva Myrinet cluster.
\end{acknowledgments}
\section{Project Summary}
Detailed quantitative spectroscopy of Type Ia supernovae (SNe~Ia)
provides crucial information needed to minimize systematic effects both
in ongoing SNe~Ia observational programs such as the Nearby
Supernova Factory, ESSENCE, and the SuperNova Legacy
Survey (SNLS) and in proposed JDEM missions such as SNAP, JEDI, and DESTINY.
Quantitative spectroscopy is mandatory to quantify and understand the
observational strategy of comparing ``like versus like''. It allows us
to explore evolutionary effects, from variations in progenitor
metallicity to variations in progenitor age, to variations in dust
with cosmological epoch. It also allows us to interpret and quantify
the effects of asphericity, as well as different amounts of mixing in the
thermonuclear explosion.
While all proposed cosmological measurements will be based on empirical
calibrations, these calibrations must be interpreted and evaluated in
terms of theoretical explosion models. Here quantitative spectroscopy
is required, since explosion models can only be tested in
detail by direct comparison of detailed NLTE synthetic spectra with
observed spectra.
Additionally, SNe IIP can be used as complementary cosmological probes
via the spectral fitting expanding atmosphere method (SEAM) that we
have developed. The SEAM method can in principle be used for distance
determinations to much higher $z$ than Type Ia supernovae.
We intend to model in detail the current, rapidly growing, database
of SNe Ia and SNe IIP. Much of the data is immediately available in
our public spectral and photometric database SUSPECT, which is widely used
throughout the astronomical community.
We bring to this effort a variety of complementary synthetic spectra
modeling capabilities: the fast parameterized 1-D code SYNOW; BRUTE, a
3-D Monte-Carlo with similar assumptions to SYNOW; a 3-D Monte-Carlo
spectropolarimetry code, SYNPOL; and the generalized full NLTE, fully
relativistic stellar atmosphere code PHOENIX (which is being
generalized to 3-D).
\section{Cosmology from Supernovae}
While indirect evidence for the cosmological acceleration can be
deduced from a combination of studies of the cosmic microwave
background and large scale structure
\citep{efstat02,map03,eisensteinbo05}, distance measurements to
supernovae provide a valuable direct and model independent tracer of
the evolution of the expansion scale factor necessary to constrain the
nature of the proposed dark energy. The mystery of dark energy lies
at the crossroads of astronomy and fundamental physics: the former is
tasked with measuring its properties and the latter with explaining
its origin.
Presently, supernova measurements of the cosmological parameters are
no longer limited by statistical uncertainties, but systematic
uncertainties are the dominant source of error \citep[see][for a
recent analysis]{knopetal03}. These include the effects of evolution (do
SNe~Ia behave in the same way in the early universe?), the effect of
intergalactic dust on the apparent brightness of the SNe~Ia, and
knowledge of the spectral energy distribution as a function of light
curve phase (especially in the UV where are current data sets are
quite limited).
Recently major ground-based observational programs have begun: the
Nearby SuperNova Factory \citep[see][]{aldering_nearby,
nugent_nearby}, the European Supernova Research Training Network
(RTN), the Carnegie Supernova Project (CSP), ESSENCE, and the
SuperNova Legacy Survey. Their goals are to improve our understanding
of the utility of Type Ia supernovae for cosmological measurements by
refining the nearby Hubble diagram, and to make the first definitive
measurement of the equation of state of the universe using $z < 1$
supernovae. Many new programs have recently been
undertaken to probe the rest-frame UV region at moderate $z$,
providing sensitivity to metallicity and untested progenitor
physics. SNLS has found striking diversity in the UV behavior that is not
correlated with the normal light curve stretch parameter. As precise
knowledge of the $K$-correction is needed to use SNe~Ia to trace the
deceleration expected beyond $z=$1 \citep{riessetal04a}, understanding
the nature of this diversity is crucial in the quest for measuring
dark energy. We plan to undertake an extensive theoretical program,
which leverages our participation with both SNLS and the Supernova
Factory, in order to refine our physical understanding of supernovae
(both Type Ia and II) and the potential systematics involved in their
use as cosmological probes for the Joint Dark Energy Mission (JDEM).
In addition to SNe~Ia, the Nearby Supernova Factory will
observe scores of Type IIP supernovae in the Hubble
Flow. These supernovae will provide us with a perfect laboratory to
probe the mechanisms behind these core-collapse events, the energetics
of the explosion, asymmetries in the explosion event and thereby
provide us with an independent tool for precision measurements of the
cosmological parameters.
The SEAM method has shown that accurate distances may be obtained to
SNe~IIP, even when the classical expanding photosphere method fails
\citep[see Fig.~\ref{fig:fits} and][]{bsn99em04}. Another part of the
SN~IIP study is based on a correlation between the absolute brightness of
SNe~IIP and the expansion velocities derived from the Fe~II 5169 \AA\
P-Cygni feature observed during their plateau phases
\citep{hp02}. We have refined this method in two ways (P. Nugent
{\it et al.}, 2005, in preparation) and have applied it to five
SNe~IIP at $z < 0.35$. Improving the accuracy of measuring distances
to SNe~IIP has potential benefits well beyond a systematically
independent measurement of the cosmological parameters based on SNe~Ia
or other methods. Several plausible models for the time evolution of
the dark energy require distance measures to $z \simeq 2$ and beyond. At
such large redshifts both weak lensing and SNe\,Ia may become
ineffective probes, the latter due to the drop-off in rates suggested
by recent work \citep{strolger04}. Current models
for the cosmic star-formation history predict an abundant source of
core-collapse at these epochs and future facilities, such as JDEM, in
concert with the James Webb Space Telescope (JWST) or the Thirty Meter
Telescope, could potentially use SNe~IIP to determine distances at
these epochs.
\emph{Spectrum synthesis computations provide the only
way to study this wealth of data and use it to quantify and correct
for potential systematics and improve the distance measurements to
both SNe~Ia and SNe~IIP.}
\section{Understanding the 3-D Nature of Supernovae}
While most SNe~Ia do not show signs of polarization, a subset of them
do. These supernovae will play a role in determining the underlying
progenitor systems/explosion mechanisms for SNe~Ia which is key to
ascertaining potential evolutionary effects with redshift. Flux and
polarization measurements of SN~2001el \citep{wangetal01el03} clearly
showed polarization across the high-velocity Ca~II IR triplet. A 3-D
spectropolarimetric model fit for this object assumes that there is a
blob of calcium at high-velocity over an ellipsoidal atmosphere with
an asphericity of $\approx$ 15\% \citep[see Fig~\ref{fig:sn01elclump}
and][]{kasen01el03}. \citet{KP05} have shown that a gravitationally
confined thermonuclear supernova model can also explain this
polarization signature. If this is in fact the correct hydrodynamical
explosion model for SNe~Ia, then the parameter space for potential
systematics becomes significantly smaller in their use as standard
candles. Currently there are a wide variety of possible mechanisms to
make a SN~Ia, each with its own set of potential evolutionary
systematics. \citet{thomas00cx04} showed that the observed spectral
homogeneity implies that arbitrary asymmetries in SNe~Ia are ruled
out. The only way to test detailed hydrodynamical models of the
explosion event is to confront observations such as those that will be
obtained via the Nearby Supernova Factory with the models via spectrum
synthesis. \emph{The importance of studying these events in 3-D is
clear from the observations, and therefore every effort must be made
to achieve this goal.}
\section{\tt PHOENIX}
\label{phoenix}
In order to model astrophysical plasmas under a variety of conditions,
including differential expansion at relativistic velocities found in
supernovae, we have developed a powerful set of working computational
tools which includes the fully relativistic, non-local thermodynamic
equilibrium (NLTE) general stellar atmosphere and spectral synthesis
code {\tt PHOENIX}
\citep{hbmathgesel04,hbjcam99,hbapara97,phhnovetal97,ahscsarev97}. {\tt
PHOENIX} is a state-of-the-art model atmosphere spectrum synthesis
code which has been developed and maintained by some of us to tackle
science problems ranging from the atmospheres of brown dwarfs, cool
stars, novae and supernovae to active galactic nuclei and extra-solar
planets. We solve the fully relativistic radiative transport equation
for a variety of spatial boundary conditions in both spherical and
plane-parallel geometries for both continuum and line radiation
simultaneously and self-consistently. We also solve the full
multi-level NLTE transfer and rate equations for a large number of
atomic species, including non-thermal processes.
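Schematically, and suppressing the relativistic and velocity-field terms
that {\tt PHOENIX} treats in full, the transfer problem solved at each
frequency $\nu$ is of the standard form
\begin{equation}
\mu \frac{\partial I_{\nu }(z,\mu )}{\partial z}=\eta _{\nu }(z)-\chi _{\nu
}(z)\,I_{\nu }(z,\mu ),
\end{equation}
with the emissivity $\eta _{\nu }$ and extinction $\chi _{\nu }$ coupled to
the NLTE rate equations through the level populations; it is this coupling
that must be iterated to consistency.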
To illustrate the nature of our future research, we now
describe some of the past SN~Ia work with \texttt{PHOENIX}.
\citet{nugseq95}, showed that the diversity in the peak of the light
curves of SNe~Ia was correlated with the effective temperature and
likely the nickel mass (see Fig.~\ref{fig:nugseq}). We also showed
that the spectroscopic features of Si~II and Ca~II near maximum light
correlate with the peak brightness of the SN~Ia and that the spectrum
synthesis models by {\tt PHOENIX} were nicely able to reproduce this
effect. We were able to define two spectroscopic indices $\Re_{Si}$\xspace and $\Re_{Ca}$\xspace
(see Figs~\ref{fig:RSiDef}--\ref{fig:RCaDef}), which correlate very
well with the light curve shape parameter \citep{garn99by04}. These
spectroscopic indices offer an independent (and since they are
intrinsic, they are also reddening independent) approach to
determining peak luminosities of SNe~Ia. S.~Bongard et al. (in
preparation) have shown that measuring these spectroscopic indicators
may be automated, and that they can be used with the spectral signal
to noise and binning planned for the JDEM missions SNAP and JEDI.
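To indicate how such an automated measurement can work, the sketch below
computes a flux-ratio index in the spirit of $\Re_{Si}$\xspace from a toy
spectrum; the wavelength windows, the linear pseudo-continuum and the
Gaussian test features are illustrative assumptions only, the authoritative
definitions being those of \citet{nugseq95}:
\begin{verbatim}
import numpy as np

def depth(wave, flux, win, cl, cr):
    # Fractional depth 1 - Fmin/Fcont, with the pseudo-continuum
    # interpolated linearly between two flanking windows.
    def mean_pt(lo, hi):
        s = (wave >= lo) & (wave <= hi)
        return wave[s].mean(), flux[s].mean()
    (wl, fl), (wr, fr) = mean_pt(*cl), mean_pt(*cr)
    s = (wave >= win[0]) & (wave <= win[1])
    wmin = wave[s][np.argmin(flux[s])]
    fcont = fl + (fr - fl) * (wmin - wl) / (wr - wl)
    return 1.0 - flux[s].min() / fcont

def R_Si(wave, flux):
    # Ratio of the fractional depths of the Si II 5972 and 6355
    # absorptions; window placement here is schematic.
    d_blue = depth(wave, flux, (5900, 6000), (5750, 5820), (6020, 6070))
    d_red = depth(wave, flux, (6150, 6350), (6020, 6070), (6450, 6550))
    return d_blue / d_red

# Toy spectrum: flat continuum with two Gaussian absorptions.
wave = np.linspace(5600.0, 6600.0, 2000)
flux = (1.0 - 0.15 * np.exp(-0.5 * ((wave - 5950) / 30) ** 2)
            - 0.45 * np.exp(-0.5 * ((wave - 6250) / 40) ** 2))
print("R(Si II) ~", round(R_Si(wave, flux), 3))
\end{verbatim}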
The relationship between the width (and hence risetime) of the
lightcurves of SNe~Ia and the brightness at maximum light is crucial
for precision cosmology. It is well known that the square of the time
difference between
explosion and peak brightness, $t_{\rm rise}^2$, is proportional to
the opacity, $\kappa$, \citep{arnett82,byb93}. In an effort to find a
more direct connection between SN~Ia models and the light-curve shape
relationships we examined the Rosseland mean opacity, $\kappa$, at the
center of each model. We found that in our hotter, more luminous
models $\kappa$ was a factor of 2 greater than in our cooler,
fainter models. This factor of 1.4 in $t_{\rm rise}$ is very near to
what one would expect, given the available photometric data, for the
ratio of the light-curve shapes between the extremes of SN~1991T (an
over-luminous SN~Ia with a broad light curve) and SN~1991bg (an
under-luminous SN~Ia with a narrow light curve).
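The quoted factor follows directly from the scaling $t_{\rm rise}^2 \propto
\kappa$:
\begin{equation}
\frac{t_{\rm rise}^{\rm bright}}{t_{\rm rise}^{\rm faint}}=\sqrt{\frac{%
\kappa _{\rm bright}}{\kappa _{\rm faint}}}=\sqrt{2}\approx 1.4\;.
\end{equation}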
We have been studying the effects of evolution on the spectra of
SNe~Ia, in particular the role the initial metallicity of the
progenitor plays in the peak brightness of the SN~Ia. Due to the
effects of metal line blanketing one expects that the metallicity of
the progenitor has a strong influence on the UV spectrum
\citet{hwt98,lentzmet00}. In \citet{lentzmet00} we quantified these
effects by varying the metallicity in the unburned layers and
computing their resultant spectra at maximum light.
Finally we note the work we have done on testing
detailed hydrodynamical models of SNe~Ia \citep{nughydro97}.
It is clear from these calculations that the
sub-Chandrasekhar ``helium-igniter'' models \citep[see for
example][]{wwsubc94} are too
blue in general and that very few spectroscopic features match the
observed spectrum. On the other hand, the Chandrasekhar-mass model W7 of
\citet{nomw7} is a fairly good match to the early spectra (which are most
important for cosmological applications) of the most typical
SNe~Ia. \citet{l94d01} calculated an extensive time series of W7 and
compared it with that of the well observed nearby SN~Ia SN~1994D. In
this work we showed that W7 fits the observations pretty well at
early times, but the quality of the fits degrades by about 15 days
past maximum light. We speculate that this implies that the outer
layers (seen earliest) of W7 reasonably well represent normal SNe~Ia,
whereas the
inner layers of SNe~Ia are affected by 3-D mixing effects. With the
work described here, we will be able to directly test this hypothesis
by calculating the spectra of full 3-D hydrodynamical calculations now
being performed by Gamezo and collaborators and by the Munich
group (Hillebrandt and collaborators). \citet{bbbh05}
have calculated very detailed NLTE models of W7
and delayed detonation models of \citet{HGFS99by02}. We find that W7
does not fit the observed Si~II feature very well, although it does a
good job in other parts of the spectrum. The delayed-detonation models
do a bit better, but a highly parameterized model is the best. We
will continue this work as well as extending it to 3-D
models. This will significantly impact our understanding of SNe~Ia
progenitor, something that is crucial for the success of JDEM.
We stress that the quantitative spectroscopic studies discussed here do
not just show that a proposed explosion model fits or doesn't fit
observed spectra, but provide important information about just how
the spectrum forms. One learns as much from spectra that don't fit as
from ones that do.
Our theoretical work provides important constraints on the science
definition of JDEM, helps to interpret the data coming in now from
both nearby and mid-redshift surveys and involves ongoing code
development to test 3-D hydrodynamic models, as well as both flux and
polarization spectra from nearby supernovae which may indicate
evidence of asphericity. Research such as this requires both manpower and
large-scale computational facilities: production runs can be
done to some extent at national facilities such as the National Energy
Research Scientific Computing Center at LBNL (NERSC), while code
development requires local, mid-sized computing facilities with the
ability to perform tests with immediate turn-around.
\clearpage
\section{Introduction}
The determination of the one-body distribution function, which gives the
probability of finding a particle at some given position, with a given
velocity at a given time, is one of the central problems in nonequilibrium
statistical mechanics. Its time-evolution is in many cases well described by
approximate kinetic equations such as the Boltzmann equation \cite{McLennan}
for low-density gases and the revised Enskog equation \cite{RET,KDEP}
for denser hard-sphere gases and solids. Only rarely are exact solutions of
these equations possible. Probably the most important technique for
generating approximate solutions to one-body kinetic equations is the
Chapman-Enskog method which, as originally formulated, consists of a
gradient expansion about a local-equilibrium state\cite{ChapmanCowling},\cite%
{McLennan}. The goal in this approach is to construct a particular type of
solution, called a ``normal solution'', in which all space and time
dependence of the one-body distribution occurs implicitly via its dependence
on the macroscopic hydrodynamic fields. The latter are, for a simple fluid,
the density, velocity and temperature fields corresponding to the conserved
variables of particle number, momentum and energy respectively. (In a
multicomponent system, the partial densities are also included.) The
Chapman-Enskog method proceeds to develop the solution perturbatively in the
gradients of the hydrodynamic fields: the distribution is developed as a
functional of the fields and their gradients and at the same time the
equations of motion of the fields, the hydrodynamic equations, are also
developed. The zeroth-order distribution is the local-equilibrium
distribution; at first order, this is corrected by terms involving linear
gradients of the hydrodynamic fields which in turn are governed by the Euler
equations (with an explicit prescription for the calculation of the pressure
from the kinetic theory). At second order, the hydrodynamic fields are
governed by the Navier-Stokes equations, at third order, by the Burnett
equations, etc. The calculations involved in extending the solution to each
successive higher order are increasingly difficult and since the
Navier-Stokes equations are usually considered an adequate description of
fluid dynamics, results above third order (Burnett order) for the Boltzmann
equation and above second (Navier-Stokes) order for the Enskog equation are
sparse. The extension of the Chapman-Enskog method beyond the Navier-Stokes
level is, however, not physically irrelevant since only by doing so is it
possible to understand non-Newtonian viscoelastic effects such as shear
thinning and normal stresses which occur even in simple fluids under extreme
conditions\cite{Lutsko_EnskogPRL},\cite{LutskoEnskog}.
Recently, interest in non-Newtonian effects has increased because of their
importance in fluidized granular materials. Granular systems are composed of
particles - grains - which lose energy when they collide. As such, there is
no equilibrium state - an undriven homogeneous collection of grains will
cool continuously. This has many interesting consequences such as the
spontaneous formation of clusters in the homogeneous gas and various
segregation phenomena in mixtures\cite{GranularPhysicsToday},\cite%
{GranularRMP},\cite{GranularGases},\cite{GranularGasDynamics}. The
collisional cooling also gives rise to a unique class of nonequilibrium
steady states due to the fact that the cooling can be balanced by the
viscous heating that occurs in inhomogeneous flows. One of the most widely
studied examples of such a system is a granular fluid undergoing planar
Couette flow where the velocity field takes the form $\mathbf{v}\left(
\mathbf{r}\right) =ay\widehat{\mathbf{x}}$, where $a$ is the shear rate. The
common presence of non-Newtonian effects, such as normal stresses, in these
systems has long been recognized as signalling the need to go beyond the
Navier-Stokes description\cite{SelGoldhirsch}. As emphasized by Santos et al,%
\cite{SantosInherentRheology}, the balance between the velocity gradients,
which determine the rate of viscous heating, and the cooling, arising from a
material property, means that such fluids are inherently non-Newtonian in
the sense that the sheared state cannot be viewed as a perturbation of the
unsheared, homogeneous fluid and so the usual Navier-Stokes equations cannot
be used to study either the rheology or the stability of the sheared
granular fluid. One of the goals of the present work is to show that a more
general hydrodynamic description can be derived for this, and other flow
states, which is able to accurately describe such far-from-equilibrium
states. The formalism developed here is general and not restricted to
granular fluids although they do provide the most obvious application.
Indeed,an application of this form of hydrodynamics has recently been
presented by Garz{\'{o}}\cite{garzo-2005-} who studied the stability of a
granular fluid under strong shear.
The extension of the Chapman-Enskog method to derive the hydrodynamics for
fluctuations about an arbitrary nonequilibrium state might at first appear
trivial but in fact it involves a careful application of the ideas
underlying the method. To illustrate, let $f\left( \mathbf{r},\mathbf{v}%
,t\right) $ be the probability to find a particle at position $\mathbf{r}$
with velocity $\mathbf{v}$ at time $t$. For a $D-$dimensional system in
equilibrium, this is just the (space and time-independent) Gaussian
distribution%
\begin{equation}
f\left( \mathbf{r},\mathbf{v},t\right) =\phi _{0}\left( \mathbf{v}%
;n,T,\mathbf{U}\right) =n\left( \frac{m}{2\pi k_{B}T}\right) ^{D/2}\exp \left(
-m\left( \mathbf{v}-\mathbf{U}\right) ^{2}/2k_{B}T\right) \label{le}
\end{equation}%
where $n$ is the number density, $k_{B}$ is Boltzmann's constant, $T$ is the
temperature, $m$ is the mass of the particles and $\mathbf{U}$ is the
center-of-mass velocity. The zeroth-order approximation in the
Chapman-Enskog method is the localized distribution $f^{(0)}\left( \mathbf{r}%
,\mathbf{v},t\right) =\phi _{0}\left( \mathbf{v};n\left( \mathbf{r},t\right)
,T\left( \mathbf{r},t\right) ,\mathbf{U}\left( \mathbf{r},t\right) \right) $
or, in other words, the local equilibrium distribution. In contrast, a
homogeneous non-equilibrium steady state might be characterized by some
time-independent distribution
\begin{equation}
f\left( \mathbf{r},\mathbf{v},t\right) =\Phi _{ss}\left( \mathbf{v};n,T,%
\mathbf{U}\right)
\end{equation}%
but the zeroth-order approximation in the Chapman-Enskog method will \emph{%
not} in general be the localized steady-state distribution, $f^{(0)}\left(
\mathbf{r},\mathbf{v},t\right) \neq \Phi _{ss}\left( \mathbf{v};n\left(
\mathbf{r},t\right) ,T\left( \mathbf{r},t\right) ,\mathbf{U}\left( \mathbf{r}%
,t\right) \right) $. The reason is that a steady state is the result of a
balance - in the example given above, it is a balance between viscous
heating and collisional cooling. Thus, any change in density must be
compensated by, say, a change in temperature or the system is no longer in a
steady state. This therefore gives a relation between density and
temperature in the steady state, say $n=n_{ss}(T)$, so that one has $\Phi
_{ss}\left( \mathbf{v};n,T,\mathbf{U}\right) =\Phi _{ss}\left( \mathbf{v}%
;n_{ss}(T),T,\mathbf{U}\right) $. Clearly, it makes no sense to simply
``localize'' the hydrodynamic variables as the starting point of the
Chapman-Enskog method since, in a steady state, the hydrodynamic variables
are not independent. Limited attempts have been made in the past to perform
the type of generalization suggested here. In particular, Lee and Dufty
considered this problem for the specific case of an ordinary fluid under
shear with an artificial thermostat present so as to make possible a steady
state\cite{MirimThesis},\cite{MirimThesisArticle}. However, the issues
discussed in this paper were circumvented through the use of a very
particular type of thermostat so that, while of theoretical interest, that
calculation cannot serve as a template for the more general problem.
In Section II, the abstract formulation of the Chapman-Enskog expansion for
fluctuations about a non-equilibrium state is proposed. It not only requires
care in understanding the zeroth order approximation, but also a generalization
of the concept of a normal solution. In Section III, the method is
illustrated by application to a simple kinetic theory for a sheared granular
gas. Explicit expressions are given for the full complement of transport
coefficients. One unique feature of the hydrodynamics obtained in this case
is that several transport coefficients depend linearly on fluctuations in
the velocity in the $y$-direction (i.e. in the direction of the velocity
gradient). The section concludes with a brief summary of the resulting
hydrodynamics and of the linearized form of the hydrodynamic equations which
leads to considerable simplification. The paper ends in Section IV with a
summary of the results, a comparison of the present results to the results
of the standard Chapman-Enskog analysis and a discussion of further
applications.
\section{The Chapman-Enskog expansion about an arbitrary state}
\subsection{Kinetic theory}
Consider a single-component fluid composed of particles of mass $m$ in $D$
dimensions. In general, the one-body distribution will obey a kinetic
equation of the form%
\begin{equation}
\left( \frac{\partial }{\partial t}+\mathbf{v}\cdot \nabla \right) f(\mathbf{%
r},\mathbf{v},t)=J[\mathbf{r},\mathbf{v},t|f] \label{x1}
\end{equation}%
where the collision operator $J[\mathbf{r},\mathbf{v},t|f]$ is a function of
position and velocity and a \emph{functional} of the distribution function.
No particular details of the form of the collision operator will be
important here but all results are formulated with the examples of BGK-type
relaxation models, the Boltzmann equation and the Enskog equation in mind.
The first five velocity moments of $f$ define the number density
\begin{equation}
n(\mathbf{r},t)=\int \;d\mathbf{v}f(\mathbf{r},\mathbf{v},t),
\label{density}
\end{equation}%
the flow velocity
\begin{equation}
\mathbf{u}(\mathbf{r},t)=\frac{1}{n(\mathbf{r},t)}\int \;d\mathbf{v}\mathbf{v%
}f(\mathbf{r},\mathbf{v},t), \label{velocity}
\end{equation}%
and the kinetic temperature
\begin{equation}
T(\mathbf{r},t)=\frac{m}{Dn(\mathbf{r},t)k_{B}}\int \;d\mathbf{v}C^{2}(%
\mathbf{r},t)f(\mathbf{r},\mathbf{v},t), \label{temperature}
\end{equation}%
where $\mathbf{C}(\mathbf{r},t)\equiv \mathbf{v}-\mathbf{u}(\mathbf{r},t)$
is the peculiar velocity. The macroscopic balance equations for density $n$,
momentum $m\mathbf{u}$, and energy $\frac{D}{2}nk_{B}T$ follow directly from
eq.\ (\ref{x1}) by multiplying with $1$, $m\mathbf{v}$, and $\frac{1}{2}%
mC^{2}$ and integrating over $\mathbf{v}$:
\begin{eqnarray}
D_{t}n+n\nabla \cdot \mathbf{u} &=&0\; \label{x2} \\
D_{t}u_{i}+(mn)^{-1}\nabla _{j}P_{ij} &=&0 \notag \\
D_{t}T+\frac{2}{Dnk_{B}}\left( \nabla \cdot \mathbf{q}+P_{ij}\nabla
_{j}u_{i}\right) &=&-\zeta T, \notag
\end{eqnarray}%
where $D_{t}=\partial _{t}+\mathbf{u}\cdot \nabla $ is the convective
derivative. The microscopic expressions for the pressure tensor $\mathsf{P=P}%
\left[ f\right] $ and the heat flux $\mathbf{q=q}\left[ f\right] $ depend on
the exact form of the collision operator (see refs. \cite{McLennan,LutskoJCP}
for a general discussion) but, as indicated, they are in
general functionals of the distribution, while the cooling rate $\zeta $ is
given by
\begin{equation}
\zeta (\mathbf{r},t)=-\frac{1}{Dn(\mathbf{r},t)k_{B}T(\mathbf{r},t)}\int \,d%
\mathbf{v}mC^{2}J[\mathbf{r},\mathbf{v},t|f]. \label{heating}
\end{equation}%
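As a concrete, if trivial, numerical illustration of the definitions (\ref%
{density})-(\ref{temperature}), the sketch below recovers the hydrodynamic
fields from velocity samples drawn from a Maxwellian, in illustrative units
with $m=k_{B}=1$ and $D=3$:
\begin{verbatim}
import numpy as np

def hydro_moments(V, m=1.0, kB=1.0):
    # Flow velocity and kinetic temperature from velocity samples
    # V (shape (N, D)): u = <v>, T = m <C^2> / (D kB), C = v - u.
    N, D = V.shape
    u = V.mean(axis=0)
    C2 = ((V - u) ** 2).sum(axis=1)
    return u, m * C2.mean() / (D * kB)

rng = np.random.default_rng(1)
T_true, u_true = 2.0, np.array([0.5, 0.0, 0.0])
V = u_true + rng.normal(scale=np.sqrt(T_true), size=(100000, 3))
u, T = hydro_moments(V)
print("u =", np.round(u, 3), " T =", round(T, 3))
\end{verbatim}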
\subsection{Formulation of the gradient expansion}
The goal of the Chapman-Enskog method is to construct a so-called \emph{%
normal }solution to the kinetic equation, eq.(\ref{x1}). In the standard
formulation of the method\cite{McLennan}, this is defined as a distribution $%
f(\mathbf{r},\mathbf{v},t)$ for which all of the space and time dependence
occurs through the hydrodynamic variables, denoted collectively as $\psi
\equiv \left\{ n,\mathbf{u},T\right\} $, and their derivatives so that%
\begin{equation}
f(\mathbf{r},\mathbf{v},t)=f\left( \mathbf{v};\psi \left( \mathbf{r}%
,t\right) ,\mathbf{\nabla }\psi \left( \mathbf{r},t\right) ,\mathbf{\nabla
\nabla }\psi \left( \mathbf{r},t\right) ,...\right) . \label{KE}
\end{equation}%
The distribution is therefore a \emph{functional} of the fields $\psi \left(
\mathbf{r},t\right) $ or, equivalently in this case, a \emph{function} of
the fields and their gradients to all orders. In the following, this
particular type of functional dependence will be denoted more compactly with
the notation $f\left( \mathbf{v};\left[ \mathbf{\nabla }^{(n)}\psi \left(
\mathbf{r},t\right) \right] \right) $ where the index, $n$, indicates the
maximum derivative that is used. When all derivatives are possible, as in
eq.(\ref{KE}) the notation $f(\mathbf{r},\mathbf{v},t)=f\left( \mathbf{v};%
\left[ \mathbf{\nabla }^{(\infty )}\psi \left( \mathbf{r},t\right) \right]
\right) $ will be used. The kinetic equation, eq.(\ref{x1}), the balance
equations, eqs.(\ref{x2}), and the definitions of the various fluxes and
sources then provide a closed set of equations from which to determine the
distribution. Note that since the fluxes and sources are functionals of the
distribution, their space and time dependence also occurs implicitly via
their dependence on the hydrodynamic fields and their derivatives.
Given that such a solution has been found for a particular set of boundary
conditions yielding the hydrodynamic state $\psi _{0}\left( \mathbf{r}%
,t\right) $ with distribution $f_{0}\left( \mathbf{v};\left[ \mathbf{\nabla }%
^{(\infty )}\psi _{0}\left( \mathbf{r},t\right) \right] \right) $, the aim
is to describe deviations about this state, denoted $\delta \psi $, so that
the total hydrodynamic fields are $\psi =\psi _{0}+\delta \psi $. In the
Chapman-Enskog method, it is assumed that the deviations are smooth in the
sense that
\begin{equation}
\delta \psi \gg l\mathbf{\nabla }\delta \psi \gg l^{2}\mathbf{\nabla \nabla }%
\delta \psi ...,
\end{equation}%
where $l$ is the mean free path, so that one can work perturbatively in
terms of the gradients of the perturbations to the hydrodynamic fields. To
develop this perturbation theory systematically, it is convenient to
introduce a fictitious small parameter, $\epsilon $, and to write the
gradient operator as $\mathbf{\nabla }=\mathbf{\nabla }^{\left( 0\right)
}+\epsilon \mathbf{\nabla }^{\left( 1\right) }$ where the two operators on
the right are defined by $\mathbf{\nabla }^{\left( 0\right) }\psi \equiv
\mathbf{\nabla }_{0}\psi =\mathbf{\nabla }\psi _{0}$ and $\mathbf{\nabla }%
^{\left( 1\right) }\psi \equiv \mathbf{\nabla }_{1}\psi =\mathbf{\nabla }%
\delta \psi $ (both notations are used below). This then
generates an expansion of the distribution that looks like%
\begin{eqnarray}
f\left( \mathbf{v};\left[ \mathbf{\nabla }^{(\infty )}\psi \left( \mathbf{r}%
,t\right) \right] \right) &=&f^{(0)}\left( \mathbf{v};\mathbf{\nabla }%
_{0}^{\left( \infty \right) }\psi \left( \mathbf{r},t\right) \right)
\label{dist-expansion} \\
&&+\epsilon f^{(1)}\left( \mathbf{v};\mathbf{\nabla }_{1}\mathbf{\delta }%
\psi ,\mathbf{\nabla }_{0}^{\left( \infty \right) }\psi \left( \mathbf{r}%
,t\right) \right) \notag \\
&&+\epsilon ^{2}f^{(2)}\left( \mathbf{v};\mathbf{\nabla }_{1}\mathbf{\nabla }%
_{1}\mathbf{\delta }\psi ,\left( \mathbf{\nabla }_{1}\mathbf{\delta }\psi
\right) ^{2},\mathbf{\nabla }_{0}^{(\infty )}\psi \left( \mathbf{r},t\right)
\right) \notag \\
&&+... \notag
\end{eqnarray}%
where $f^{(1)}$ will be linear in $\mathbf{\nabla }_{1}\mathbf{\delta }\psi $%
, $f^{(2)}$ will be linear in $\mathbf{\nabla }_{1}\mathbf{\nabla }_{1}%
\mathbf{\delta }\psi $ and $\left( \mathbf{\nabla }_{1}\mathbf{\delta }\psi
\right) ^{2}$, etc. This notation is meant to be taken literally:\ the
quantity $\mathbf{\nabla }_{0}^{(\infty )}\psi \left( \mathbf{r},t\right)
=\left\{ \psi \left( \mathbf{r},t\right) ,\mathbf{\nabla }_{0}\psi \left(
\mathbf{r},t\right) ,...\right\} =\left\{ \psi \left( \mathbf{r},t\right) ,%
\mathbf{\nabla }\psi _{0}\left( \mathbf{r},t\right) ,...\right\} $ so that
at each order in perturbation theory, the distribution is a function of the
exact field $\psi \left( \mathbf{r},t\right) $ as well as all gradients of
the reference field. This involves a departure from the usual formulation of
the Chapman-Enskog definition of a normal state. In the standard form, the
distribution is assumed to be a functional of the \emph{exact} fields $\psi
\left( \mathbf{r},t\right) $ whereas here it is proposed that the
distribution is a functional of the exact field $\psi \left( \mathbf{r}%
,t\right) $ \emph{and} of the reference state $\psi _{0}\left( \mathbf{r}%
,t\right) $. Of course, it is obvious that in order to study deviations
about a reference state within the Chapman-Enskog framework, the
distribution will have to be a functional of that reference state.
Nevertheless, this violates, or generalizes, the usual definition of a
normal solution since there are now two sources of space and time dependence
in the distribution:\ the exact hydrodynamics fields and the reference
hydrodynamic state. For deviations from an equilibrium state, this point is
moot since $\mathbf{\nabla }\psi _{0}\left( \mathbf{r},t\right) =0$, etc.
The perturbative expansion of the distribution will generate a similar
expansion of the fluxes and sources through their functional dependence on
the distribution, see e.g. eq.(\ref{heating}), so that one writes%
\begin{equation}
P_{ij}=P_{ij}^{(0)}+\epsilon P_{ij}^{(1)}+...
\end{equation}%
and so forth. Since the balance equations link space and time derivatives,
it is necessary to introduce a multiscale expansion of the time derivatives
in both the kinetic equation and the balance equations as
\begin{equation}
\frac{\partial }{\partial t}f=\partial _{t}^{(0)}f+\epsilon \partial
_{t}^{(1)}f+...
\end{equation}%
The precise meaning of the symbols $\partial _{t}^{(0)}$, $\partial
_{t}^{(1)}$ is that the balance equations define $\partial _{t}^{(i)}$ in
terms of the spatial gradients of the hydrodynamic fields and these
definitions, together with the normal form of the distribution, define the
action of these symbols on the distribution. Finally, to maintain
generality, note that sometimes (specifically in the Enskog theory) the
collision operator itself is non-local and must be expanded as well in
gradients in $\delta \psi $ so that we write%
\begin{equation}
J[\mathbf{r},\mathbf{v},t|f]=J_{0}[\mathbf{r},\mathbf{v},t|f]+\epsilon J_{1}[%
\mathbf{r},\mathbf{v},t|f]+...
\end{equation}%
and it is understood that $J_{0}[\mathbf{r},\mathbf{v},t|f]$ by definition
involves no gradients with respect to the perturbations $\delta \psi \left(
\mathbf{r},t\right) $ but will, in general, contain gradients of \emph{all}
orders in the reference fields $\psi _{0}\left( \mathbf{r},t\right) $. (Note
that the existence of a normal solution is plausible if the spatial and
temporal dependence of the collision operator is also normal which is, in
fact, generally the case. However, for simplicity, no effort is made here to
indicate this explicitly.) A final property of the perturbative expansion
concerns the relation between the various distributions and the hydrodynamic
variables. The zeroth order distribution is required to reproduce the exact
hydrodynamic variables via%
\begin{equation}
\left(
\begin{array}{c}
n(\mathbf{r},t) \\
n(\mathbf{r},t)\mathbf{u}(\mathbf{r},t) \\
Dn(\mathbf{r},t)k_{B}T%
\end{array}%
\right) =\int \left(
\begin{array}{c}
1 \\
\mathbf{v} \\
mC^{2}%
\end{array}%
\right) f^{(0)}\left( \mathbf{v};\mathbf{\nabla }_{0}^{\left( \infty \right)
}\psi \left( \mathbf{r},t\right) \right) d\mathbf{v} \label{f0-hydro}
\end{equation}%
while the higher order terms are orthogonal to the first three velocity
moments%
\begin{equation}
\int \left(
\begin{array}{c}
1 \\
\mathbf{v} \\
mC^{2}%
\end{array}%
\right) f^{(n)}\left( \mathbf{v};\mathbf{\nabla }_{0}^{\left( \infty \right)
}\psi \left( \mathbf{r},t\right) \right) d\mathbf{v}=0,\;n>0,
\label{fn-hydro}
\end{equation}%
so that the total distribution $f=f^{(0)}+f^{(1)}+...$ satisfies eqs.(\ref%
{density})-(\ref{temperature}).
\subsection{The reference state}
Recall that the goal is to describe deviations from the reference state $%
\psi _{0}\left( \mathbf{r},t\right) $ which corresponds to the distribution $%
f_{0}\left( \mathbf{r},\mathbf{v,}t;\left[ \psi _{0}\right] \right) $ and in
fact the distribution and fields are related by the definitions given in
eqs.(\ref{density})-(\ref{temperature}). The reference distribution is
itself assumed to be normal so that the dependence on $\mathbf{r}$ and $t$
occurs implicitly through the fields. In terms of the notation used here,
the reference distribution satisfies the kinetic equation, eq.(\ref{KE}),
and the full, nonlinear balance equations, eqs.(\ref{x2}). Using the
definitions given above, these translate into
\begin{equation}
\left( \partial _{t}^{\left( 0\right) }+\mathbf{v}\cdot \nabla ^{\left(
0\right) }\right) f_{0}\left( \mathbf{r},\mathbf{v,}t;\left[ \psi _{0}\right]
\right) =J_{0}[\mathbf{r},\mathbf{v},t|f_{0}] \label{ref-KE}
\end{equation}%
and the fields are solutions to the full, nonlinear balance equations%
\begin{eqnarray}
\partial _{t}^{\left( 0\right) }{n}_{0}{+\mathbf{u}\cdot }\mathbf{\nabla }%
^{\left( 0\right) }n_{0}+n_{0}\mathbf{\nabla }^{\left( 0\right) }\cdot
\mathbf{u}_{0} &=&0\; \label{ref-balance} \\
\partial _{t}^{\left( 0\right) }{u}_{0,i}{+\mathbf{u}}_{0}{\cdot }\mathbf{%
\nabla }^{\left( 0\right) }u_{0.i}+(mn_{0})^{-1}\partial
_{j}^{(0)}P_{ij}^{(00)} &=&0 \notag \\
{\partial _{t}^{(0)}T}_{0}{+\mathbf{u}}_{0}{\cdot }\mathbf{\nabla }^{\left(
0\right) }T_{0}+\frac{2}{Dn_{0}k_{B}}\left( \mathbf{\nabla }^{\left(
0\right) }\cdot \mathbf{q}^{(00)}+P_{ij}^{(00)}\partial
_{j}^{(0)}u_{0,i}\right) &=&-\zeta ^{(00)}T_{0}\;, \notag
\end{eqnarray}%
where, e.g., $P_{ij}^{(00)}$ is the pressure tensor evaluated in the
reference state, and
\begin{equation}
{\partial _{t}^{(n)}\psi }_{0}=0,\;n>0.
\end{equation}%
Thus, in the ordering scheme developed here, the reference state is an exact
solution to the zeroth order perturbative equations.
For the standard case describing deviations from the equilibrium state, the
hydrodynamic fields are constant in both space and time and $\zeta ^{(00)}=0$
so that the balance equations just reduce to ${\partial _{t}^{(0)}\psi }%
_{0}=0$. The left hand side of the kinetic equation therefore vanishes
leaving $0=J_{0}[\mathbf{r},\mathbf{v},t|f_{0}]$ which is indeed satisfied
by the equilibrium distribution. For a granular fluid, $\zeta ^{(00)}\neq 0$
and the simplest solution that can be constructed consists of spatially
homogeneous, but time-dependent fields giving%
\begin{equation}
\partial _{t}^{\left( 0\right) }f_{0}\left( \mathbf{r},\mathbf{v,}t;\left[
\psi _{0}\right] \right) =J_{0}[\mathbf{r},\mathbf{v},t|f_{0}] \label{e1}
\end{equation}%
and%
\begin{eqnarray}
\partial _{t}^{\left( 0\right) }{n}_{0} &=&0\; \\
\partial _{t}^{\left( 0\right) }{u}_{0,i} &=&0 \notag \\
{\partial _{t}^{(0)}T}_{0} &=&-\zeta ^{(00)}T_{0} \notag
\end{eqnarray}%
so that the distribution depends on time through its dependence on the
temperature. The balance equations, together with the assumption of
normality, serve to define the meaning of the left hand side of eq.(\ref{e1}%
) giving%
\begin{equation}
-\zeta ^{(00)}T_{0}\frac{\partial }{\partial T}f_{0}\left( \mathbf{r},%
\mathbf{v,}t;\left[ \psi _{0}\right] \right) =J_{0}[\mathbf{r},\mathbf{v}%
,t|f_{0}].
\end{equation}%
Typically, this is solved by assuming a scaling solution of the form $%
f_{0}\left( \mathbf{r},\mathbf{v,}t;\left[ \psi _{0}\right] \right) =\Phi
\left( \mathbf{v}\sqrt{\frac{m\sigma ^{2}}{k_{B}T\left( t\right) }}\right) $.
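For instance, for inelastic hard spheres dimensional analysis gives $\zeta
^{(00)}=\zeta _{0}\sqrt{T_{0}}$ for a constant $\zeta _{0}$, and the
temperature equation then integrates to Haff's law, a standard result
quoted here for orientation:
\begin{equation}
\partial _{t}T_{0}=-\zeta _{0}T_{0}^{3/2}\quad \Longrightarrow \quad
T_{0}(t)=\frac{T_{0}(0)}{\left( 1+\frac{1}{2}\zeta _{0}\sqrt{T_{0}(0)}%
\,t\right) ^{2}},
\end{equation}
which makes the time dependence implied by the scaling form explicit.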
\subsection{The zeroth order Chapman-Enskog solution}
As emphasized above, the Chapman-Enskog method is an expansion in gradients
of the deviations of the hydrodynamic fields from the reference state. Using
the ordering developed above, the zeroth order kinetic equation is%
\begin{equation}
\partial _{t}^{(0)}f^{(0)}\left( \mathbf{r},\mathbf{v};\delta \psi \left(
\mathbf{r},t\right) ,\left[ \psi _{0}\right] \right) +\mathbf{v}\cdot \nabla
^{(0)}f^{(0)}\left( \mathbf{r},\mathbf{v};\delta \psi \left( \mathbf{r}%
,t\right) ,\left[ \psi _{0}\right] \right) =J_{0}[\mathbf{r},\mathbf{v}%
,t|f^{(0)}]. \label{zero-KE}
\end{equation}%
and the zeroth order balance equations are%
\begin{eqnarray}
{\partial _{t}^{(0)}n+\mathbf{u}\cdot }\mathbf{\nabla }n_{0}+n\mathbf{\nabla
}\cdot \mathbf{u}_{0} &=&0\; \label{zero-KE-2} \\
{\partial _{t}^{(0)}u}_{i}{+\mathbf{u}\cdot }\mathbf{\nabla }%
u_{0,i}+(mn)^{-1}\mathbf{\nabla }_{j}^{\left( 0\right) }P_{ij}^{(0)} &=&0
\notag \\
{\partial _{t}^{(0)}T+\mathbf{u}\cdot \nabla }T_{0}+\frac{2}{Dnk_{B}}\left(
\mathbf{\nabla }^{\left( 0\right) }\cdot \mathbf{q}^{(0)}+P_{ij}^{(0)}%
\partial _{j}u_{0,i}\right) &=&-\zeta ^{(0)}T. \notag
\end{eqnarray}%
Making use of the balance equations satisfied by the reference fields, (\ref%
{ref-balance}), this can be written in terms of the deviations as
\begin{eqnarray}
{\partial _{t}^{(0)}\delta n+\delta \mathbf{u}\cdot \nabla }n_{0}+\delta
n\nabla \cdot \mathbf{u}_{0} &=&0\; \label{zero-balance} \\
{\partial _{t}^{(0)}\delta u}_{i}{+\delta \mathbf{u}\cdot \nabla }%
u_{0,i}+(mn)^{-1}\nabla _{j}^{(0)}P_{ij}^{(0)}-(mn_{0})^{-1}\nabla
_{j}P_{ij}^{(00)} &=&0 \notag \\
{\partial _{t}^{(0)}\delta T+\delta \mathbf{u}\cdot \nabla }T_{0}+\frac{2}{%
Dnk_{B}}\left( \nabla ^{(0)}\cdot \mathbf{q}^{(0)}+P_{ij}^{(0)}\nabla
_{j}u_{0,i}\right) -\frac{2}{Dn_{0}k_{B}}\left( \nabla \cdot \mathbf{q}%
^{(00)}+P_{ij}^{(00)}\nabla _{j}u_{0,i}\right) &=&-\zeta ^{(0)}T+\zeta
^{(00)}T_{0}. \notag
\end{eqnarray}%
Since the zeroth-order distribution is a \emph{function} of $\delta \psi $
but a \emph{functional} of the reference fields, the time derivative in eq.(%
\ref{zero-KE}) is evaluated using%
\begin{equation}
\partial _{t}^{(0)}f^{(0)}=\sum_{\alpha }\left( {\partial _{t}^{(0)}}\delta
\psi _{\alpha }\left( \mathbf{r},t\right) \right) \frac{\partial }{\partial
\delta \psi _{\alpha }\left( \mathbf{r},t\right) }f^{(0)}+\sum_{\alpha }\int
d\mathbf{r}^{\prime }\;\left( {\partial _{t}^{(0)}}\psi _{0,\alpha }\left(
\mathbf{r}^{\prime },t\right) \right) \frac{\delta }{\delta \psi _{0,\alpha
}\left( \mathbf{r}^{\prime },t\right) }f^{(0)}. \label{zero-t}
\end{equation}%
and these equations must be solved subject to the additional boundary
condition%
\begin{equation}
\lim_{\delta \psi \rightarrow 0}f^{(0)}\left( \mathbf{r},\mathbf{v},t;\delta
\psi \left( \mathbf{r},t\right) ,\left[ \psi _{0}\right] \right)
=f_{0}\left( \mathbf{r},\mathbf{v},t;\left[ \psi _{0}\right] \right) .
\label{bc0}
\end{equation}%
There are several important points to be made here. First, it must be
emphasized that the reference fields $\psi _{0}\left( \mathbf{r},t\right) $
and the deviations $\delta \psi \left( \mathbf{r},t\right) $ are playing
different roles in these equations. The former are fixed and assumed known
whereas the latter are independent variables. The result of a solution of
these equations will be the zeroth order distribution as a function of the
variables $\delta \psi $. For any given physical problem, the deviations
will be determined by solving the balance equations, eqs.(\ref{zero-balance}%
), subject to appropriate boundary conditions and only then is the
distribution completely specified. Second, nothing is said here about the
solution of eqs.(\ref{zero-KE})-(\ref{zero-t}) which, in general, constitute
a complicated functional equation in terms of the reference state variables $%
\psi _{0,\alpha }\left( \mathbf{r},t\right) $. The only obvious exceptions,
and perhaps the only practical cases, are when the reference state is either
time-independent, so that ${\partial _{t}^{(0)}}\psi _{0,\alpha }=0$, or
spatially homogeneous so that $f^{(0)}$ is a function, and not a functional,
of the reference fields. The equilibrium state is both; the homogeneous
cooling state is a spatially homogeneous state; and time-independent flow
states such as uniform shear flow or Poiseuille flow with thermalizing walls
are important examples of time-independent, spatially inhomogeneous states.
Third, since eqs.(\ref{zero-KE})-(\ref{zero-KE-2}) are the lowest order
equations in a gradient expansion, they are to be solved for \emph{%
arbitrarily large} deviations of the fields, $\delta \psi $. There is no
sense in which the deviations should be considered to be small. The fourth
observation, and perhaps the most important, is that there is no conceptual
connection between the zeroth order distribution $f^{(0)}\left( \mathbf{v}%
;\delta \psi \left( \mathbf{r},t\right) ,\mathbf{\nabla }_{0}^{(\infty
)}\psi _{0}\left( \mathbf{r},t\right) \right) $ and the reference
distribution $f_{0}\left( \mathbf{v};\mathbf{\nabla }_{0}^{(\infty )}\psi
_{0}\left( \mathbf{r},t\right) \right) $ except for the limit given in eq.(%
\ref{bc0}). In particular, it will almost always be the case that
\begin{equation}
f^{(0)}\left( \mathbf{v};\delta \psi \left( \mathbf{r},t\right) ,\mathbf{%
\nabla }_{0}^{(\infty )}\psi _{0}\left( \mathbf{r},t\right) \right) \neq
f_{0}\left( \mathbf{v};\mathbf{\nabla }_{0}^{(\infty )}\left( \psi
_{0}\left( \mathbf{r},t\right) +\delta \psi \left( \mathbf{r},t\right)
\right) \right) .
\end{equation}%
A rare exception, for which equality holds, is when the reference
state is the equilibrium state. In that case, the density, temperature and
velocity fields are uniform and the reference distribution is just a Gaussian%
\begin{equation}
f_{0}\left( \mathbf{r},\mathbf{v};\mathbf{\nabla }_{0}^{(\infty )}\psi
_{0}\right) =\phi _{0}\left( \mathbf{v};n_{0},T_{0},\mathbf{U}_{0}\right)
\end{equation}%
and the solution to the zeroth order equations is the local equilibrium
distribution
\begin{equation}
f^{(0)}\left( \mathbf{v};\delta \psi \left( \mathbf{r},t\right) ,\mathbf{%
\nabla }_{0}^{(\infty )}\psi _{0}\left( \mathbf{r},t\right) \right) =\phi
_{0}\left( \mathbf{v};n+\delta n\left( \mathbf{r},\mathbf{t}\right)
,T+\delta T\left( \mathbf{r},\mathbf{t}\right) ,\mathbf{U}+\delta \mathbf{U}%
\left( \mathbf{r},\mathbf{t}\right) \right) =f_{0}\left( \mathbf{v};\mathbf{%
\nabla }_{0}^{(\infty )}\left( \psi _{0}\left( \mathbf{r},t\right) +\delta
\psi \left( \mathbf{r},t\right) \right) \right) . \label{localize}
\end{equation}%
For steady states, as will be illustrated in the next Section, it is not the
case that $f^{(0)}$ is obtained from the steady-state distribution via a
``localization'' along the lines of that shown in eq.(\ref{localize}). On
the other hand, eqs.(\ref{zero-KE})-(\ref{zero-KE-2}) are the same whether
they are solved for the general field $\delta \psi \left( \mathbf{r}%
,t\right) \;$or for the spatially homogeneous field $\delta \psi \left(
t\right) $ with the subsequent localization $\delta \psi \left( t\right)
\rightarrow \delta \psi \left( \mathbf{r},t\right) $. Furthermore, these
equations are identical to those one would solve in order to obtain an exact
normal solution to the full kinetic equation, eq.(\ref{ref-KE}) and balance
equations, eq.(\ref{ref-balance}), for the fields $\psi _{0}\left( \mathbf{r}%
,t\right) +\delta \psi \left( t\right) $. In other words, the zeroth-order
Chapman-Enskog distribution is the localization of the exact distribution
for homogeneous deviations from the reference state. Again, only in the case
of the equilibrium reference state is it true that this corresponds to the
localization of the reference state itself.
\subsection{First order Chapman-Enskog}
In the following, the equations for the first-order terms will also be
needed. Collecting terms in eq.(\ref{ref-KE}), the first order distribution
function is found to satisfy%
\begin{eqnarray}
&&\partial _{t}^{(0)}f^{(1)}(\mathbf{v};\delta \psi \left( \mathbf{r}%
,t\right) ,\left[ \psi _{0}\right] )+\mathbf{v}\cdot \mathbf{\nabla }%
^{(0)}f^{(1)}(\mathbf{v};\delta \psi \left( \mathbf{r},t\right) ,\left[ \psi
_{0}\right] ) \label{f1} \\
&=&J_{0}[\mathbf{r},\mathbf{v},t|f^{(1)}]+J_{1}[\mathbf{r},\mathbf{v}%
,t|f^{(0)}]-\left( \partial _{t}^{(1)}f^{(0)}(\mathbf{v};\delta \psi \left(
\mathbf{r},t\right) ,\left[ \psi _{0}\right] )+\mathbf{v}\cdot \nabla
^{(1)}f^{(0)}(\mathbf{v};\delta \psi \left( \mathbf{r},t\right) ,\left[ \psi
_{0}\right] )\right) \notag
\end{eqnarray}%
and the first-order balance equations become%
\begin{eqnarray}
{\partial _{t}^{(1)}\delta n+\mathbf{u}\cdot \nabla }\delta n+n\nabla \cdot
\delta \mathbf{u} &=&0\; \label{P1} \\
{\partial _{t}^{(1)}\delta u_{i}+\mathbf{u}\cdot }\mathbf{\nabla }{\delta }%
u_{i}+(mn)^{-1}\nabla _{j}^{\left( 1\right) }P_{ij}^{\left( 0\right)
}+(mn)^{-1}\nabla _{j}^{\left( 0\right) }P_{ij}^{\left( 1\right) } &=&0
\notag \\
{\partial _{t}^{(1)}\delta T+\mathbf{u}\cdot \mathbf{\nabla }\delta }T+\frac{%
2}{Dnk_{B}}\left( \mathbf{\nabla }^{\left( 1\right) }\cdot \mathbf{q}%
^{(0)}+P_{ij}^{\left( 0\right) }\nabla _{j}\delta u_{i}\right) +\frac{2}{%
Dnk_{B}}\left( \mathbf{\nabla }^{\left( 0\right) }\cdot \mathbf{q}%
^{(1)}+P_{ij}^{\left( 1\right) }\nabla _{j}u_{0,i}\right) &=&-\zeta ^{(1)}T.
\notag
\end{eqnarray}%
}
\section{Application to Uniform Shear Flow of Granular Fluids}
Uniform shear flow (USF) is a macroscopic state that is characterized by a
constant density, a uniform temperature and a simple shear with the local
velocity field given by
\begin{equation}
u_{i}=a_{ij}r_{j},\quad a_{ij}=a\delta _{ix}\delta _{jy}, \label{profile}
\end{equation}%
where $a$ is the \emph{constant} shear rate. If one assumes that the
pressure tensor, heat flux vector and cooling rate are also spatially
uniform, the reference-state balance equations, eqs.(\ref{ref-balance}),
become{%
\begin{eqnarray}
\partial _{t}^{\left( 0\right) }{n}_{0} &=&0\; \label{ssx} \\
\partial _{t}^{\left( 0\right) }{u}_{0,i}+au_{0,y}\delta _{ix} &=&0 \notag
\\
{\partial _{t}^{(0)}T}_{0}+\frac{2}{Dn_{0}k_{B}}aP_{xy}^{(00)} &=&-\zeta
^{(00)}T_{0}\;. \notag
\end{eqnarray}%
}The question of whether or not these assumptions of spatial homogeneity
hold depends on the detailed form of the collision operator:\ in ref.\cite%
{LutskoPolydisperse} it is shown that this assumption is easily verified
for the Enskog kinetic theory (and hence for simpler approximations to it,
such as the Boltzmann and BGK theories), but only for the linear velocity
profile, eq.(\ref{profile}). This linear velocity profile is generated by
Lees-Edwards boundary conditions \cite{LeesEdwards}, which are simply
periodic boundary conditions in the local Lagrangian frame. For elastic
gases, $\zeta ^{(00)}=0$ and the temperature grows in time due to viscous
heating and so a steady state is not possible unless an external
(artificial) thermostat is introduced\cite{MirimThesisArticle}. However, for
inelastic gases, the temperature changes in time due to the competition
between two (opposite) mechanisms: on the one hand, viscous (shear) heating
and, on the other hand, energy dissipation in collisions. A steady state
occurs when the two mechanisms cancel each other, at which point the balance
equation for temperature becomes
\begin{equation}
\frac{2}{Dn_{0}k_{B}}aP_{xy}^{(00)}=-\zeta ^{(00)}T_{0}.
\end{equation}%
Note that both the pressure tensor and the cooling rate are in general
functions of the two control parameters, the shear rate and the coefficient
of restitution, and the hydrodynamic variables, the density and the
temperature, so that this relation fixes any one of these in terms of the
other three: for example, it could be viewed as giving the steady-state
temperature as a function of the other variables.
At a microscopic level, the one-body distribution for USF will clearly be
inhomogeneous since eq.(\ref{velocity}) and eq.(\ref{profile}) imply
that the steady-state distribution must give%
\begin{equation}
ay\widehat{\mathbf{x}}=\frac{1}{n_{0}}\int \;d\mathbf{v}\mathbf{v}f_{0}(%
\mathbf{r},\mathbf{v}).
\end{equation}%
However, it can be shown, at least up to the Enskog theory{\cite%
{LutskoPolydisperse}}, that for the Lees-Edwards boundary conditions, the
state of USF possesses a modified translational invariance whereby the
steady state distribution, when expressed in terms of the local rest-frame
velocities $V_{i}=v_{i}-a_{ij}r_{j}$, does not have any explicit dependence
on position. In terms of these variables, and assuming a steady state, the
kinetic equation becomes
\begin{equation}
-aV_{y}\frac{\partial }{\partial V_{x}}f(\mathbf{V})=J\left[ \mathbf{V}|f,f%
\right] \;. \label{2.15}
\end{equation}%
The solution of this equation has been considered in some detail for the
BGK-type models\cite{MirimThesis},\cite{MirimThesisArticle},\cite%
{Brey_EarlyKineticModels},\cite{Brey_KineticModels}, the Boltzmann equation%
\cite{SelGoldhirsch}, and the Enskog equation\cite{ChouRichman1},\cite%
{ChouRichman2},\cite{LutskoPolydisperse}.
\subsection{The model kinetic theory}
Here, for simplicity, attention will be restricted to a particularly simple
kinetic theory which nevertheless gives realistic results that can be
compared to experiment. The kinetic theory used is the kinetic model of
Brey, Dufty and Santos\cite{Brey_KineticModels}, which is a relaxation type
model where the operator $J[f,f]$ is approximated as
\begin{equation}
J[f,f]\rightarrow -\nu ^{\ast }\left( \alpha \right) \nu \left( \psi \right)
(f-\phi _{0})+\frac{1}{2}\zeta ^{\ast }\left( \alpha \right) \nu \left( \psi
\right) \frac{\partial }{\partial \mathbf{v}}\cdot \left( \mathbf{C}f\right)
. \label{BGK}
\end{equation}%
The right hand side involves the peculiar velocity $\mathbf{C}=\mathbf{v}-%
\mathbf{u}=\mathbf{V}-\delta \mathbf{u}$ and the local equilibrium
distribution, eq.(\ref{le}). The parameters in this relaxation approximation
are taken so as to give agreement with the results from the Boltzmann theory
of the homogeneous cooling state as discussed in ref.\cite%
{Brey_KineticModels}. Defining the collision rate for elastic hard spheres
in the Boltzmann approximation as
\begin{equation}
\nu \left( \psi \right) =\frac{8\pi ^{\left( D-2\right) /2}}{\left(
D+2\right) \Gamma \left( D/2\right) }n\sigma ^{D}\sqrt{\frac{\pi k_{B}T}{%
m\sigma ^{2}}},
\end{equation}%
the correction for the effect of the inelasticity is chosen to reproduce the
Navier-Stokes shear viscosity coefficient of an inelastic gas of hard
spheres in the Boltzmann approximation\cite{BreyCubero},\cite{LutskoCE}
giving
\begin{equation}
\nu ^{\ast }\left( \alpha \right) =\frac{1}{4D}\left( 1+\alpha \right)
\left( \left( D-1\right) \alpha +D+1\right) .
\end{equation}%
The second term in eq.(\ref{BGK}) accounts for the collisional cooling and
the coefficient is chosen so as to give the same cooling rate for the
homogeneous cooling state as the Boltzmann kinetic theory\cite{BreyCubero},%
\cite{LutskoCE},
\begin{equation}
\zeta ^{\ast }\left( \alpha \right) =\frac{D+2}{4D}\left( 1-\alpha
^{2}\right) .
\end{equation}%
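For later reference, these three model inputs are easily evaluated
numerically. The following is a minimal sketch (the function names and
default arguments are illustrative choices made here, not part of the
original development):

```python
# Sketch of the model parameters defined above; names and defaults are
# illustrative, not from the original text.
import math

def collision_rate(n, T, sigma=1.0, m=1.0, kB=1.0, D=3):
    """Boltzmann collision rate nu(psi) for elastic hard spheres."""
    return (8.0 * math.pi ** ((D - 2) / 2) / ((D + 2) * math.gamma(D / 2))
            * n * sigma ** D * math.sqrt(math.pi * kB * T / (m * sigma ** 2)))

def nu_star(alpha, D=3):
    """Relaxation correction nu*(alpha), fitted to the shear viscosity."""
    return (1.0 + alpha) * ((D - 1) * alpha + D + 1) / (4.0 * D)

def zeta_star(alpha, D=3):
    """Dimensionless cooling rate zeta*(alpha) of the homogeneous cooling state."""
    return (D + 2) * (1.0 - alpha ** 2) / (4.0 * D)
```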
In this case, the expressions for the pressure tensor, heat-flux vector and
cooling rate take particularly simple forms typical of the Boltzmann
description\cite{ChapmanCowling}%
\begin{eqnarray}
P_{ij} &=&m\int d\mathbf{C}\;C_{i}C_{j}f\left( \mathbf{r,C,}t\right) ,
\label{fluxBGK} \\
q_{i} &=&\frac{1}{2}m\int d\mathbf{C}\;C_{i}C^{2}f\left( \mathbf{r,C,}%
t\right) , \notag
\end{eqnarray}%
while the cooling rate can be calculated directly from eqs.(\ref{BGK}) and (%
\ref{heating}) with the result $\zeta (\psi )=\nu \left( \psi \right) \zeta
^{\ast }\left( \alpha \right) $.
\subsection{The steady state}
Before proceeding with the Chapman-Enskog solution of the kinetic equation,
it is useful to describe the steady state for which the distribution
satisfies eq.(\ref{2.15}) which now becomes%
\begin{equation}
-aV_{y}\frac{\partial }{\partial V_{x}}f(\mathbf{V})=-\nu ^{\ast }\left(
\alpha \right) \nu \left( \psi _{0}\right) (f-\phi _{0})+\frac{1}{2}\zeta
^{\ast }\left( \alpha \right) \nu \left( \psi _{0}\right) \frac{\partial }{%
\partial \mathbf{V}}\cdot \left( \mathbf{V}f\right) . \label{kss}
\end{equation}%
The balance equations reduce to
\begin{equation}
2aP_{xy}^{ss}=-\zeta ^{\ast }\left( \alpha \right) \nu \left( \psi _{0}\right)
Dn_{0}k_{B}T_{0}. \label{ss}
\end{equation}%
An equation for the pressure tensor is obtained by multiplying eq.(\ref{kss}%
) through by $mV_{i}V_{j}$ and integrating, giving%
\begin{equation*}
aP_{iy}^{ss}\delta _{jx}+aP_{jy}^{ss}\delta _{ix}=-\nu ^{\ast }\left( \alpha
\right) \nu \left( \psi _{0}\right) (P_{ij}^{ss}-n_{0}k_{B}T_{0}\delta
_{ij})-\zeta ^{\ast }\left( \alpha \right) \nu \left( \psi _{0}\right)
P_{ij}^{ss}.
\end{equation*}%
This set of algebraic equations is easily solved, giving the only non-zero
components of the pressure tensor as
\begin{eqnarray}
P_{ii}^{ss} &=&\frac{\nu ^{\ast }\left( \alpha \right) +\delta _{ix}D\zeta
^{\ast }\left( \alpha \right) }{\nu ^{\ast }\left( \alpha \right) +\zeta
^{\ast }\left( \alpha \right) }n_{0}k_{B}T_{0} \\
P_{xy}^{ss} &=&-\frac{a_{ss}^{\ast }}{\nu ^{\ast }\left( \alpha \right)
+\zeta ^{\ast }\left( \alpha \right) }P_{yy}^{ss}, \notag
\end{eqnarray}%
where $a_{ss}^{\ast }=a_{ss}/\nu \left( \psi _{0}\right) $ satisfies the
steady-state condition, eq.(\ref{ss})%
\begin{equation}
\frac{a_{ss}^{\ast 2}\nu ^{\ast }\left( \alpha \right) }{\left( \nu ^{\ast
}\left( \alpha \right) +\zeta ^{\ast }\left( \alpha \right) \right) ^{2}}=%
\frac{D}{2}\zeta ^{\ast }\left( \alpha \right) . \label{balance}
\end{equation}%
For fixed control parameters, $\alpha $ and $a$, this is a relation
constraining the state variables $n_{0}$ and $T_{0}$. The steady-state
distribution can be given explicitly, see e.g. \cite{SantosSolveBGK}.
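As a concrete check of this algebra, the steady state can be evaluated
numerically. The sketch below is illustrative (indices are ordered so that
$x\rightarrow 0$ and $y\rightarrow 1$, and the pressure tensor is in units
of $n_{0}k_{B}T_{0}$); it constructs $a_{ss}^{\ast }$ and the nonzero
components of $P_{ij}^{ss}$ and verifies the trace constraint and the
steady-state condition:

```python
# Sketch: steady-state solution for D = 3.  Index 0 is x, index 1 is y;
# P is in units of n0*kB*T0.  Illustrative code, not from the original text.
import numpy as np

D, alpha = 3, 0.9
nu_s = (1 + alpha) * ((D - 1) * alpha + D + 1) / (4 * D)   # nu*(alpha)
zeta_s = (D + 2) * (1 - alpha ** 2) / (4 * D)              # zeta*(alpha)

# reduced shear rate from the steady-state condition, eq. (balance)
a_ss = (nu_s + zeta_s) * np.sqrt(D * zeta_s / (2 * nu_s))

P = np.zeros((D, D))
for i in range(D):
    P[i, i] = (nu_s + (D * zeta_s if i == 0 else 0.0)) / (nu_s + zeta_s)
P[0, 1] = P[1, 0] = -a_ss / (nu_s + zeta_s) * P[1, 1]

assert abs(np.trace(P) - D) < 1e-12                  # Tr P* = D
assert abs(2 * a_ss * P[0, 1] + D * zeta_s) < 1e-12  # eq. (ss) in reduced form
```

Since $a^{\ast }=a/\nu \propto T^{-1/2}$ at fixed density, eq.(\ref{balance})
can equivalently be read as fixing the steady-state temperature,
$T_{ss}=\left[ a/\left( a_{ss}^{\ast }\nu \left( n_{0},T=1\right) \right)
\right] ^{2}$, for given $a$, $n_{0}$ and $\alpha $.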
\subsection{Zeroth order Chapman-Enskog}
Since the only spatially varying reference field is the velocity and since
it is linear in the spatial coordinate, the zeroth-order kinetic equation,
eq.(\ref{zero-KE})\ becomes%
\begin{equation}
\partial _{t}^{(0)}f^{(0)}+\mathbf{v}\cdot \left( \mathbf{\nabla }%
^{(0)}u_{0i}\right) \frac{\partial }{\partial u_{0i}}f^{(0)}=-\nu ^{\ast
}\left( \alpha \right) \nu \left( \psi \right) (f^{\left( 0\right) }-\phi
_{0})+\frac{1}{2}\zeta ^{\ast
}\left( \alpha \right) \nu \left( \psi \right) \frac{\partial }{\partial
\mathbf{v}}\cdot \left( \mathbf{C}f^{(0)}\right) , \label{f00}
\end{equation}%
or, writing this in terms of the peculiar velocity,
\begin{equation}
\partial _{t}^{(0)}f^{(0)}+v_{y}\partial _{y}^{0}f^{(0)}-av_{y}\frac{%
\partial }{\partial C_{x}}f^{(0)}=-\nu ^{\ast }\left( \alpha \right) \nu
\left( \psi \right) (f^{(0)}-\phi _{0})+%
\frac{1}{2}\zeta ^{\ast }\left( \alpha \right) \nu \left( \psi \right) \frac{%
\partial }{\partial \mathbf{v}}\cdot \left( \mathbf{C}f^{(0)}\right) .
\end{equation}%
Here, the second term on the left accounts for any explicit dependence of
the distribution on the coordinate $y$, aside from the implicit dependence
coming from $\mathbf{C}$. Since it is a zero-order derivative, it does not
act on the deviations $\delta \psi $. In terms of the peculiar velocity,
this becomes%
\begin{equation}
\partial _{t}^{(0)}f^{(0)}+\left( C_{y}+\delta u_{y}\right) \partial
_{y}^{0}f^{(0)}-aC_{y}\frac{\partial }{\partial C_{x}}f^{(0)}-a\delta u_{y}%
\frac{\partial }{\partial C_{x}}f^{(0)}=-\nu ^{\ast }\left( \alpha \right)
\nu \left( \psi \right) (f^{(0)}-\phi _{0})+\frac{1}{2}\zeta ^{\ast }\left(
\alpha \right) \nu \left( \psi \right)
\frac{\partial }{\partial \mathbf{C}}\cdot \left( \mathbf{C}f^{(0)}\right) .
\label{f001}
\end{equation}%
The first term on the left is evaluated using eq.(\ref{zero-t}) and the
zeroth order balance equations{%
\begin{eqnarray}
{\partial _{t}^{(0)}n} &=&0\; \label{T0} \\
{\partial _{t}^{(0)}u}_{i}+a\delta u_{y}\delta _{ix} &=&0 \notag \\
{\partial _{t}^{(0)}T}+\frac{2}{Dnk_{B}}aP_{xy}^{(0)} &=&-\zeta ^{\ast
}\left( \alpha \right) \nu \left( \psi \right) T\;, \notag
\end{eqnarray}%
and the assumption of normality}%
\begin{equation*}
\partial _{t}^{(0)}f^{(0)}=\left( \partial _{t}^{(0)}\delta n\right) \left(
\frac{\partial }{\partial \delta n}f^{(0)}\right) +\left( \partial
_{t}^{(0)}\delta T\right) \left( \frac{\partial }{\partial \delta T}%
f^{(0)}\right) +\left( \partial _{t}^{(0)}\delta u_{i}\right) \left( \frac{%
\partial }{\partial \delta u_{i}}f^{(0)}\right)
\end{equation*}%
{to give}%
\begin{eqnarray}
&&\left( -\zeta ^{\ast }\left( \alpha \right) \nu \left( \psi \right) T-%
\frac{2}{Dnk_{B}}aP_{xy}^{(0)}\right) \frac{\partial }{\partial T}%
f^{(0)}-aC_{y}\frac{\partial }{\partial C_{x}}f^{(0)}-a\delta u_{y}\left(
\frac{\partial }{\partial C_{x}}f^{(0)}+\frac{\partial }{\partial \delta
u_{x}}f^{(0)}\right) \label{f0} \\
&=&-\nu ^{\ast }\left( \alpha \right) \nu \left( \psi \right) (f^{(0)}-\phi
_{0})+\frac{1}{2}\zeta ^{\ast }\left( \alpha \right) \nu \left( \psi \right)
\frac{\partial }{\partial \mathbf{C}}\cdot \left( \mathbf{C}f^{(0)}\right) ,
\notag
\end{eqnarray}%
where the temperature derivative is understood to be evaluated at constant
density. Here, the second term on the left in eq.(\ref{f001}) has been
dropped since neither eq.(\ref{f001}) nor the balance equations contain any
explicit reference to the velocity field $u_{0}$, so that no explicit
dependence on the coordinate $y$ can arise, thus justifying the assumption
that no such dependence occurs in $f^{\left( 0\right) }$. One can also
assume that $f^{\left( 0\right) }$ depends on $\delta u_{i}$ only through
the peculiar velocity, since in that case the term proportional to $\delta
u_{y}$ vanishes as well and there is no other explicit dependence on
$\delta u_{y}$.
Equation (\ref{f0}) is closed once the pressure tensor is specified. Since
the primary goal here is to develop the transport equations for deviations
from the reference state, attention will be focused on the determination of
the pressure tensor and the heat flux vector. It is a feature of the simple
kinetic model used here that these can be calculated without determining the
explicit form of the distribution.
\subsubsection{The zeroth-order pressure tensor}
An equation for the pressure tensor can be obtained by multiplying this
equation through by $mC_{i}C_{j}$ and integrating over velocities. Using the
definition given in eq.(\ref{fluxBGK}),
\begin{equation}
\left( -\zeta ^{\ast }\left( \alpha \right) \nu \left( \psi \right) T-\frac{2%
}{Dnk_{B}}aP_{xy}^{(0)}\right) \frac{\partial }{\partial T}%
P_{ij}^{(0)}+a\delta _{ix}P_{jy}^{(0)}+a\delta _{jx}P_{iy}^{(0)}=-\nu ^{\ast
}\left( \alpha \right) \nu \left( \psi \right) (P_{ij}^{(0)}-\delta
_{ij}nk_{B}T)-\zeta ^{\ast }\left( \alpha \right) \nu \left( \psi \right)
P_{ij}^{(0)}, \label{p0}
\end{equation}%
and of course there is the constraint that by definition $\mathrm{Tr}\left( \mathsf{P}%
\right) =Dnk_{B}T$. It is interesting to observe that eqs.(\ref{T0})-(\ref%
{p0}) are identical with their steady-state counterparts when the
steady-state condition, $-\zeta ^{(0)}T=\frac{2}{Dnk_{B}}aP_{xy}^{(0)}$, is
fulfilled. However, here the solution of these equations is needed for
arbitrary values of $\delta T$, $\delta n$ and $\delta \mathbf{u}$. Another
point of interest is that these equations are local in the deviations $%
\delta \psi $ so that they are exactly the same equations as describe
spatially homogeneous deviations from the reference state. As mentioned
above, this is the meaning of the zeroth-order solution to the
Chapman-Enskog expansion: it is the exact solution to the problem of uniform
deviations from the reference state. It is this exact solution which is
``localized'' to give the zeroth-order Chapman-Enskog approximation and not
the reference distribution, $f_{0}$, except in the rare cases, such as
equilibrium, when they coincide.
To complete the specification of the distribution, eqs. (\ref{f0}) and (\ref%
{p0}) must be supplemented by boundary conditions. The relevant
dimensionless quantity characterizing the strength of the nonequilibrium
state is the dimensionless shear rate defined as
\begin{equation}
a^{\ast }\equiv a/\nu =a\frac{\left( D+2\right) \Gamma \left( D/2\right) }{%
8\pi ^{\left( D-1\right) /2}n\sigma ^{D}}\sqrt{\frac{m\sigma ^{2}}{k_{B}T}}.
\end{equation}%
It is clear that for a uniform system, the dimensionless shear rate becomes
smaller as the temperature rises so that we expect that in the limit of
infinite temperature, the system will behave as an inelastic gas without any
shear, i.e., it will be in the homogeneous cooling state, giving the boundary condition%
\begin{equation}
\lim_{T\rightarrow \infty }\frac{1}{nk_{B}T}P_{ij}=\delta _{ij},
\end{equation}%
and in this limit, the distribution must go to the homogeneous cooling state
distribution. These boundary conditions can be implemented equivalently by
rewriting eq.(\ref{p0}) in terms of the inverse temperature, or more
physically the variable $a^{\ast }$, and the dimensionless pressure tensor $%
P_{ij}^{(\ast )}=\frac{1}{nk_{B}T}P_{ij}^{(0)}$ giving%
\begin{equation}
\left( \frac{1}{2}\zeta ^{\ast }\left( \alpha \right) +\frac{1}{D}a^{\ast
}P_{xy}^{(\ast )}\right) a^{\ast }\frac{\partial }{\partial a^{\ast }}%
P_{ij}^{(\ast )}=\frac{2}{D}a^{\ast }P_{xy}^{(\ast )}P_{ij}^{(\ast
)}-a^{\ast }\delta _{ix}P_{jy}^{(\ast )}-a^{\ast }\delta _{jx}P_{iy}^{(\ast
)}-\nu ^{\ast }\left( \alpha \right) (P_{ij}^{(\ast )}-\delta _{ij})
\label{P0-a}
\end{equation}%
and writing $f^{(0)}\left( \mathbf{C};\psi \right) =n\left( \frac{m}{2\pi
k_{B}T}\right) ^{D/2}g\left( \sqrt{\frac{m}{k_{B}T}}\mathbf{C};a^{\ast
}\right) $
\begin{eqnarray}
&&\left( \frac{1}{2}\zeta ^{\ast }\left( \alpha \right) +\frac{1}{D}a^{\ast
}P_{xy}^{(\ast )}\right) a^{\ast }\frac{\partial }{\partial a^{\ast }}g+%
\frac{1}{D}a^{\ast }P_{xy}^{(\ast )}C_{i}\frac{\partial }{\partial C_{i}}%
g+a^{\ast }P_{xy}^{(\ast )}g-a^{\ast }C_{y}\frac{\partial }{\partial C_{x}}g
\label{P00} \\
&=&-\nu ^{\ast }\left( \alpha \right) \left( g-\exp \left(
-C^{2}/2\right) \right) , \notag
\end{eqnarray}%
with boundary condition $\lim_{a^{\ast }\rightarrow 0}P_{ij}^{\left( \ast
\right) }=\delta _{ij}$ and $\lim_{a^{\ast }\rightarrow 0}g=\exp \left(
-C^{2}/2\right) $, where $\mathbf{C}$ now denotes the dimensionless
velocity. For practical calculations, it is more convenient
to introduce a fictitious time variable, $s$, and to express these equations
as
\begin{eqnarray}
\frac{da^{\ast }}{ds} &=&\frac{1}{2}a^{\ast }\zeta ^{\ast }\left( \alpha
\right) +\frac{1}{D}a^{\ast 2}P_{xy}^{(\ast )} \label{ss-hi} \\
\frac{\partial }{\partial s}P_{ij}^{(\ast )} &=&\frac{2}{D}a^{\ast
}P_{xy}^{(\ast )}P_{ij}^{(\ast )}-a^{\ast }\delta _{ix}P_{jy}^{(\ast
)}-a^{\ast }\delta _{jx}P_{iy}^{(\ast )}-\nu ^{\ast }\left( \alpha \right)
(P_{ij}^{(\ast )}-\delta _{ij}) \notag
\end{eqnarray}%
where the boundary condition is then $P_{ij}^{\left( \ast \right) }\left(
s=0\right) =\delta _{ij}$, and $a^{\ast }\left( s=0\right) =0$. The
distribution then satisfies%
\begin{equation}
\frac{\partial }{\partial s}g=-\frac{1}{D}a^{\ast }P_{xy}^{(\ast )}C_{i}%
\frac{\partial }{\partial C_{i}}g-a^{\ast }P_{xy}^{(\ast )}g+a^{\ast }C_{y}%
\frac{\partial }{\partial C_{x}}g-\nu ^{\ast }\left( \alpha \right) \left(
g-\exp \left( -C^{2}/2\right) \right) \label{ss-hif}
\end{equation}%
with $\lim_{s\rightarrow 0}g=\exp \left( -C^{2}/2\right) $. These are
to be solved simultaneously to give $P_{ij}^{\left( \ast \right) }\left(
s\right) ,a^{\ast }\left( s\right) $ and $f^{\left( 0\right) }\left(
s\right) $ from which the desired curves $P_{ij}^{\left( \ast \right)
}\left( a^{\ast }\right) $ and $f^{\left( 0\right) }\left( a^{\ast }\right) $
are obtained.
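As an illustration of this procedure (not part of the original
development), eqs.(\ref{ss-hi}) can be integrated with a standard solver.
One practical point: the initial point $(a^{\ast }=0,P_{ij}^{(\ast
)}=\delta _{ij})$ is itself a fixed point of the flow, so the integration
must be seeded slightly off it:

```python
# Sketch: integration of eqs. (ss-hi) for D = 3 (index 0 = x, 1 = y).
# The seed a*(0) = 1e-6 replaces the exact boundary value a* = 0, which is
# itself a fixed point of the flow.  Illustrative code.
import numpy as np
from scipy.integrate import solve_ivp

D, alpha = 3, 0.9
nu_s = (1 + alpha) * ((D - 1) * alpha + D + 1) / (4 * D)
zeta_s = (D + 2) * (1 - alpha ** 2) / (4 * D)

def rhs(s, y):
    a, P = y[0], y[1:].reshape(D, D)
    dP = (2 / D) * a * P[0, 1] * P - nu_s * (P - np.eye(D))
    dP[0, :] -= a * P[1, :]          # -a* delta_ix P*_jy  (P* is symmetric)
    dP[:, 0] -= a * P[:, 1]          # -a* delta_jx P*_iy
    da = 0.5 * a * zeta_s + (a ** 2 / D) * P[0, 1]
    return np.concatenate(([da], dP.ravel()))

y0 = np.concatenate(([1e-6], np.eye(D).ravel()))
sol = solve_ivp(rhs, (0.0, 600.0), y0, rtol=1e-10, atol=1e-12)
a_ss = (nu_s + zeta_s) * np.sqrt(D * zeta_s / (2 * nu_s))
print(sol.y[0, -1], a_ss)     # a*(s) saturates at its steady-state value
```

The distribution equation, eq.(\ref{ss-hif}), can be appended to the same
system on a velocity grid, and the low-temperature branch discussed below
is handled in the same way by seeding the integration at a small positive
temperature.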
Physically, if the gas starts at a very high temperature, it would be
expected to cool until it reached the steady state. It is easy to see that
the right hand sides of eqs.(\ref{ss-hi}) do in fact vanish in the steady
state so that the steady state represents a critical point of this system of
differential equations\cite{Nicolis}. In order to fully specify the curve $%
P_{ij}\left( T\right) $ and the distribution $f^{(0)}$ it is necessary to
integrate as well from a temperature below the steady state temperature.
Clearly, in the case of \emph{zero} temperature, one expects that the
pressure tensor goes to zero since this corresponds to the physical
situation in which the atoms stream at exactly the velocities predicted by
their positions and the macroscopic flow field. (Note that if the atoms have
finite size, this could still lead to collisions. However, the BGK kinetic
theory used here is properly understood as an approximation to the Boltzmann
theory appropriate for a low density gas in which the finite size of the
grains is of no importance.) Thus, the expectation is that the
zero-temperature limit will give%
\begin{equation}
\lim_{T\rightarrow 0}P_{ij}^{(0)}=0.
\end{equation}%
Then, in terms of a fictitious time parameter, one has
\begin{eqnarray}
\frac{dT}{ds} &=&-\zeta ^{\ast }\left( \alpha \right) \nu \left( \psi
\right) T-\frac{2}{D}aTP_{xy}^{(\ast )} \label{s-low} \\
\frac{\partial }{\partial s}P_{ij}^{(\ast )} &=&a\frac{2}{D}P_{xy}^{(\ast
)}P_{ij}^{(\ast )}-a\delta _{ix}P_{jy}^{(\ast )}-a\delta _{jx}P_{iy}^{(\ast
)}-\nu ^{\ast }\left( \alpha \right) \nu \left( \psi \right) (P_{ij}^{(\ast
)}-\delta _{ij}) \notag
\end{eqnarray}%
and for the distribution%
\begin{equation}
\frac{\partial }{\partial s}f^{(0)}=aC_{y}\frac{\partial }{\partial C_{x}}%
f^{(0)}-\nu ^{\ast }\left( \alpha \right) \nu \left( \psi \right)
(f^{(0)}-\phi _{0})+\frac{1}{2}\zeta ^{\ast }\left( \alpha \right) \nu
\left( \psi \right) \frac{\partial }{\partial \mathbf{C}}\cdot \left(
\mathbf{C}f^{(0)}\right) . \label{s-low1}
\end{equation}%
A final point is that the solution of these equations requires more than the
boundary condition $P_{ij}^{(0)}\left( s=0\right) =0$ since evaluation of
the right hand side of eq.(\ref{s-low}) requires a statement about $%
P_{ij}^{(\ast )}\left( s=0\right) $ as well. A straightforward series
solution of eq.(\ref{p0}) in the vicinity of $T=0$ gives $P_{xy}^{\ast
}\sim a^{\ast -1/3}$ and $P_{ii}^{\ast }\sim a^{\ast -2/3}$ so that the
correct boundary condition here is $P_{ij}^{(\ast )}\left( s=0\right) =0$.
The solution of these equations can then be performed as discussed in ref. %
\cite{SantosInherentRheology} with the boundary conditions given here.
It will also prove useful below to know the behavior of the pressure tensor
near the steady state. This is obtained by making a series solution to eq.(%
\ref{P0-a}) in the variable $\left( a^{\ast }-a_{ss}^{\ast }\right) $, where $%
a_{ss}^{\ast }$ is the reduced shear rate in the steady state. Details are given
in Appendix \ref{AppP} and the result is that%
\begin{equation}
P_{ij}^{\left( 0\right) }=P_{ij}^{ss}\left( 1+A_{ij}^{\ast }\left( \alpha
\right) \left( \frac{a^{\ast }}{a_{ss}^{\ast }}-1\right) +...\right) ,
\label{Pss}
\end{equation}%
with the coefficients%
\begin{eqnarray}
A_{xy}^{\ast }\left( \alpha \right) &=&-2\frac{\Delta \left( \alpha \right)
+\zeta ^{\ast }\left( \alpha \right) }{\zeta ^{\ast }\left( \alpha \right) }
\label{Pss-A} \\
\left( 1-\delta _{ix}\right) A_{ii}^{\ast }\left( \alpha \right)
&=&-2\left( \frac{\nu ^{\ast }\left( \alpha \right) +\zeta ^{\ast }\left(
\alpha \right) }{\Delta \left( \alpha \right) +\nu ^{\ast }\left( \alpha
\right) +\frac{1}{2}\zeta ^{\ast }\left( \alpha \right) }\right) \left(
1-\delta _{ix}\right) \notag \\
A_{xx}^{\ast }\left( \alpha \right) &=&-2D\frac{\left( \Delta \left( \alpha
\right) +\frac{1}{D}\nu ^{\ast }\left( \alpha \right) +\frac{1}{2}\zeta
^{\ast }\left( \alpha \right) \right) \left( \nu ^{\ast }\left( \alpha
\right) +\zeta ^{\ast }\left( \alpha \right) \right) }{\left( \Delta \left(
\alpha \right) +\nu ^{\ast }\left( \alpha \right) +\frac{1}{2}\zeta ^{\ast
}\left( \alpha \right) \right) \left( \nu ^{\ast }\left( \alpha \right)
+D\zeta ^{\ast }\left( \alpha \right) \right) }, \notag
\end{eqnarray}%
where $\Delta \left( \alpha \right) $ is the real root of
\begin{equation}
4\Delta ^{3}+8\left( \nu ^{\ast }\left( \alpha \right) +\zeta ^{\ast }\left(
\alpha \right) \right) \Delta ^{2}+\left( 4\nu ^{\ast 2}\left( \alpha
\right) +14\nu ^{\ast }\left( \alpha \right) \zeta ^{\ast }\left( \alpha
\right) +7\zeta ^{\ast 2}\left( \alpha \right) \right) \Delta +\zeta ^{\ast
}\left( \alpha \right) \left( 2\nu ^{\ast 2}\left( \alpha \right) -\nu
^{\ast }\left( \alpha \right) \zeta ^{\ast }\left( \alpha \right) -2\zeta
^{\ast 2}\left( \alpha \right) \right) =0. \label{Pss-d1}
\end{equation}
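Since $\Delta \left( \alpha \right) $ enters only through these algebraic
expressions, it is convenient to evaluate it numerically. A sketch
(illustrative; the real root is selected from the three roots of the
cubic, and $x\rightarrow $ index 0):

```python
# Sketch: Delta(alpha) as the real root of eq. (Pss-d1) and the expansion
# coefficients of eq. (Pss-A).  Illustrative code, not from the original text.
import numpy as np

def expansion_coefficients(alpha, D=3):
    nu_s = (1 + alpha) * ((D - 1) * alpha + D + 1) / (4 * D)
    zeta_s = (D + 2) * (1 - alpha ** 2) / (4 * D)
    roots = np.roots([4.0,
                      8.0 * (nu_s + zeta_s),
                      4 * nu_s ** 2 + 14 * nu_s * zeta_s + 7 * zeta_s ** 2,
                      zeta_s * (2 * nu_s ** 2 - nu_s * zeta_s - 2 * zeta_s ** 2)])
    Delta = roots[np.argmin(np.abs(roots.imag))].real   # the real root
    A_xy = -2 * (Delta + zeta_s) / zeta_s
    A_yy = -2 * (nu_s + zeta_s) / (Delta + nu_s + 0.5 * zeta_s)  # any i != x
    A_xx = (-2 * D * (Delta + nu_s / D + 0.5 * zeta_s) * (nu_s + zeta_s)
            / ((Delta + nu_s + 0.5 * zeta_s) * (nu_s + D * zeta_s)))
    return Delta, A_xy, A_xx, A_yy
```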
\subsubsection{Higher order moments:\ the zeroth-order heat flux vector}
Determination of the heat flux vector requires consideration of the full
tensor of third order moments. Since fourth order moments will also be
needed later, it is easiest to consider the equations for the general $N$-th
order moment, defined as
\begin{equation}
M_{i_{1}...i_{N}}^{(0)}\left( \mathbf{r,}t\right) =m\int d\mathbf{v}%
\;C_{i_{1}}...C_{i_{N}}f^{\left( 0\right) }\left( \mathbf{r,C,}t\right) .
\end{equation}%
To simplify the equations, a more compact notation will be used for the
indices whereby a collection of numbered indices, such as $i_{1}...i_{N}$,
will be written more compactly as $I_{N}$ so that capital letters denote
collections of indices and the subscript on the capital indicates the number
of indices in the collection. Some examples of this are%
\begin{eqnarray}
M_{I_{N}}^{(0)} &=&M_{i_{1}...i_{N}}^{(0)} \\
M_{I_{2}}^{(0)} &=&M_{i_{1}i_{2}}^{(0)} \notag \\
M_{I_{2}y}^{(0)} &=&M_{i_{1}i_{2}y}^{(0)}. \notag
\end{eqnarray}
In terms of the general moments, the heat flux vector is
\begin{equation}
q_{i}^{\left( 0\right) }\left( \mathbf{r,}t\right) =\frac{1}{2}%
\sum_{j}M_{ijj}^{(0)}\left( \mathbf{r,}t\right) =\frac{1}{2}%
M_{ijj}^{(0)}\left( \mathbf{r,}t\right) ,
\end{equation}%
where the second equality introduces the Einstein summation convention
whereby repeated indices are summed. The pressure tensor is just the second
moment $P_{ij}^{\left( 0\right) }=M_{ij}^{\left( 0\right) }$. The local
equilibrium moments are easily shown to be zero for odd $N$ while the result
for even $N$ is%
\begin{equation}
M_{I_{N}}^{\left( le\right) }=mn\left( \frac{2k_{B}T}{m}\right) ^{\frac{N}{2}%
}2^{\frac{N}{2}}\frac{\Gamma \left( \frac{N+1}{2}\right) \Gamma \left( \frac{%
N+2}{2}\right) }{\sqrt{\pi }\Gamma \left( N+1\right) }\mathcal{P}%
_{I_{N}}\delta _{i_{1}i_{2}}\delta _{i_{3}i_{4}}...\delta _{i_{N-1}i_{N}}
\end{equation}%
where the operator $\mathcal{P}_{ijk...}$ indicates the sum over distinct
permutations of the indices $ijk...$ and has no effect on any other indices.
(e.g., $\mathcal{P}_{I_{4}}\delta _{i_{1}i_{2}}\delta _{i_{3}i_{4}}=\delta
_{i_{1}i_{2}}\delta _{i_{3}i_{4}}+\delta _{i_{1}i_{3}}\delta
_{i_{2}i_{4}}+\delta _{i_{1}i_{4}}\delta _{i_{2}i_{3}}$). An equation for
the general $N$-th order moment can be obtained from eq.(\ref{f00}) with the
result%
\begin{equation}
\left( -\zeta ^{\ast }\left( \alpha \right) -\frac{2}{D}a^{\ast
}P_{xy}^{(\ast )}\right) T\frac{\partial }{\partial T}M_{I_{N}}^{(0)}+\left(
\nu ^{\ast }\left( \alpha \right) +\frac{N}{2}\zeta ^{\ast }\left( \alpha
\right) \right) M_{I_{N}}^{(0)}+a^{\ast }\mathcal{P}_{I_{N}}\delta
_{xi_{N}}M_{I_{N-1}y}^{(0)}=\nu ^{\ast }\left( \alpha \right)
M_{I_{N}}^{(le)}.
\end{equation}%
Writing $M_{I_{N}}^{(0)}=mn\left( \frac{2k_{B}T}{m}\right) ^{\frac{N}{2}%
}M_{I_{N}}^{\ast }$ gives%
\begin{equation}
-\left( \zeta ^{\ast }\left( \alpha \right) +\frac{2}{D}a^{\ast
}P_{xy}^{(\ast )}\right) T\frac{\partial }{\partial T}M_{I_{N}}^{\ast
}+\left( \nu ^{\ast }\left( \alpha \right) -\frac{N}{D}a^{\ast
}P_{xy}^{(\ast )}\right) M_{I_{N}}^{\ast }+a^{\ast }\mathcal{P}%
_{I_{N}}\delta _{xi_{N}}M_{I_{N-1}y}^{\ast }=\nu ^{\ast }\left( \alpha
\right) M_{I_{N}}^{(le\ast )} \label{Moments1}
\end{equation}%
Notice that the moments are completely decoupled order by order in $N$.
Since the source on the right vanishes for odd $N$ it is natural to assume
that $M_{I_{N}}^{\ast }=0$ for odd $N$. This is certainly true for
temperatures above the steady state temperature since the appropriate
boundary condition in this case, based on the discussion above, is that $%
\lim_{T\rightarrow \infty }M_{I_{N}}^{\ast }=M_{I_{N}}^{(le\ast )}=0$. In
the opposite limit, $T\rightarrow 0$, as mentioned above, one has that $%
P_{xy}^{\ast }\sim a^{\ast -1/3}\sim T^{1/6}$ and there are two cases to
consider depending on whether or not the third term on the left contributes.
If it does, i.e. if one or more indices is equal to $x$, then a series
solution near $T=0$ gives $M_{I_{N}}^{\ast }\sim a^{\ast -1}\sim T^{1/2}$
while if no index is equal to $x$ then $M_{I_{N}}^{\ast }\sim a^{\ast
-2/3}\sim T^{1/3}$ giving in both cases the boundary condition $%
\lim_{T\rightarrow 0}M_{I_{N}}^{\ast }=0$. In particular, this shows that
the odd moments vanish for all temperatures. From this, it immediately
follows that
\begin{equation}
q_{i}^{\left( 0\right) }\left( \mathbf{r,}t\right) =0.
\end{equation}
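For later use, note that the even moments can be generated along the same
fictitious-time trajectory introduced for the pressure tensor: the same
change of variables applied to eq.(\ref{Moments1}) gives $dM_{I_{N}}^{\ast
}/ds=\nu ^{\ast }M_{I_{N}}^{(le\ast )}-\left( \nu ^{\ast }-\frac{N}{D}%
a^{\ast }P_{xy}^{(\ast )}\right) M_{I_{N}}^{\ast }-a^{\ast }\mathcal{P}%
_{I_{N}}\delta _{xi_{N}}M_{I_{N-1}y}^{\ast }$. The sketch below
(illustrative) integrates the $N=4$ moments, which are needed later for
the generalized heat conductivities; it assumes the local-equilibrium
value $M_{I_{4}}^{(le\ast )}=\frac{1}{4}\mathcal{P}_{I_{4}}\delta
_{i_{1}i_{2}}\delta _{i_{3}i_{4}}$ implied by the Gaussian-moment formula
above:

```python
# Sketch: N = 4 scaled moments carried along the trajectory of eqs. (ss-hi).
# Index 0 = x, 1 = y; M4 is totally symmetric.  Illustrative code.
import numpy as np
from scipy.integrate import solve_ivp

D, alpha = 3, 0.9
nu_s = (1 + alpha) * ((D - 1) * alpha + D + 1) / (4 * D)
zeta_s = (D + 2) * (1 - alpha ** 2) / (4 * D)
I = np.eye(D)
M4_le = 0.25 * (np.einsum('ij,kl->ijkl', I, I)
                + np.einsum('ik,jl->ijkl', I, I)
                + np.einsum('il,jk->ijkl', I, I))

def rhs(s, y):
    a, P = y[0], y[1:1 + D * D].reshape(D, D)
    M4 = y[1 + D * D:].reshape(D, D, D, D)
    da = 0.5 * a * zeta_s + (a ** 2 / D) * P[0, 1]
    dP = (2 / D) * a * P[0, 1] * P - nu_s * (P - I)
    dP[0, :] -= a * P[1, :]
    dP[:, 0] -= a * P[:, 1]
    M3y = M4[:, :, :, 1]              # M*_{(three indices) y}, by symmetry
    shear = np.zeros_like(M4)         # P_{I4} delta_{x i_p} M*_{(rest) y}
    shear[0, :, :, :] += M3y
    shear[:, 0, :, :] += M3y
    shear[:, :, 0, :] += M3y
    shear[:, :, :, 0] += M3y
    dM4 = nu_s * M4_le - (nu_s - (4 / D) * a * P[0, 1]) * M4 - a * shear
    return np.concatenate(([da], dP.ravel(), dM4.ravel()))

y0 = np.concatenate(([1e-6], I.ravel(), M4_le.ravel()))
sol = solve_ivp(rhs, (0.0, 600.0), y0, rtol=1e-10, atol=1e-12)
```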
\subsection{First-order Chapman-Enskog: General formalism}
The equation for the first-order distribution, eq.(\ref{f1}), becomes%
\begin{equation}
\partial _{t}^{(0)}f^{(1)}+av_{y}\frac{\partial }{\partial u_{0x}}%
f^{(1)}=-\nu ^{\ast }\left( \alpha \right) \nu \left( \psi \right)
f^{\left( 1\right) }+\frac{1}{2}\zeta
^{\ast }\left( \alpha \right) \nu \left( \psi \right) \frac{\partial }{%
\partial \mathbf{v}}\cdot \left( \mathbf{C}f^{(1)}\right) -\left( \partial
_{t}^{(1)}f^{(0)}+\mathbf{v}\cdot \mathbf{\nabla }_{1}f^{(0)}\right) ,
\end{equation}%
and the operator $\partial _{t}^{(1)}$ is defined via the corresponding
balance equations which are now {%
\begin{eqnarray}
{\partial _{t}^{(1)}\delta n+\mathbf{u}\cdot \mathbf{\nabla }}\delta n+n{%
\mathbf{\nabla }}\cdot \delta \mathbf{u} &=&0\; \\
{\partial _{t}^{(1)}\delta u_{i}+\mathbf{u}\cdot \mathbf{\nabla }\delta }%
u_{i}+(mn)^{-1}\partial _{j}^{\left( 1\right) }P_{ij}^{\left( 0\right)
}+(mn)^{-1}\partial _{y}^{\left( 0\right) }P_{iy}^{\left( 1\right) } &=&0
\notag \\
{\partial _{t}^{(1)}\delta T+\mathbf{u}\cdot \mathbf{\nabla }\delta }T+\frac{%
2}{Dnk_{B}}\left( P_{ij}^{\left( 0\right) }\nabla _{j}\delta u_{i}+{\mathbf{%
\nabla }}^{\left( 0\right) }\cdot \mathbf{q}^{(1)}+aP_{xy}^{\left( 1\right)
}\right) &=&0. \notag
\end{eqnarray}%
}\newline
Writing the kinetic equation in the form%
\begin{eqnarray}
&&\partial _{t}^{(0)}f^{(1)}+a\frac{\partial }{\partial u_{0x}}%
v_{y}f^{(1)}+\nu ^{\ast }\left( \alpha \right) \nu \left( \psi \right)
f^{\left( 1\right) }-\frac{1}{2}\zeta ^{\ast }\left( \alpha \right) \nu
\left( \psi \right) \frac{\partial }{\partial \mathbf{v}}\cdot \left(
\mathbf{C}f^{(1)}\right) \\
&=&-\left( \partial _{t}^{(1)}n+u_{l}\partial _{l}^{1}n\right) \frac{%
\partial }{\partial n}f^{(0)}-\left( \partial _{t}^{(1)}T+u_{l}\partial
_{l}^{1}T\right) \frac{\partial }{\partial T}f^{(0)}-\left( \partial
_{t}^{(1)}\delta u_{j}+u_{l}\partial _{l}^{1}\delta u_{j}\right) \frac{%
\partial }{\partial \delta u_{j}}f^{(0)} \notag \\
&&-\left( \partial _{l}^{1}u_{l}\right) f^{(0)}-\partial _{l}^{1}C_{l}f^{(0)}
\notag
\end{eqnarray}%
equations for the $N$-th moment can be obtained by multiplying through by $%
C_{i_{1}}...C_{i_{N}}$ and integrating over velocity. The first two terms on
the left contribute
\begin{eqnarray}
\int C_{i_{1}}...C_{i_{N}}\left( \partial _{t}^{(0)}f^{(1)}+a\frac{\partial
}{\partial u_{0x}}v_{y}f^{(1)}\right) d\mathbf{v} &=&\partial
_{t}^{(0)}M_{I_{N}}^{\left( 1\right) }\mathbf{+}\mathcal{P}_{I_{N}}\left(
\partial _{t}^{\left( 0\right) }\delta u_{i_{N}}\right) M_{I_{N-1}}^{\left(
1\right) } \\
&&+a\frac{\partial }{\partial u_{0x}}\left( M_{I_{N}y}^{\left( 1\right)
}+\delta u_{y}M_{I_{N}}^{\left( 1\right) }\right) \mathbf{+}a\mathcal{P}%
_{I_{N}}\delta _{xi_{N}}\left( M_{I_{N-1}y}^{\left( 1\right) }+\delta
u_{y}M_{I_{N-1}}^{\left( 1\right) }\right) \notag \\
&=&\partial _{t}^{(0)}M_{I_{N}}^{\left( 1\right) }+a\frac{\partial }{%
\partial u_{0x}}\left( M_{I_{N}y}^{\left( 1\right) }+\delta
u_{y}M_{I_{N}}^{\left( 1\right) }\right) \mathbf{+}a\mathcal{P}%
_{I_{N}}\delta _{xi_{N}}M_{I_{N-1}y}^{\left( 1\right) } \notag
\end{eqnarray}%
where the last line follows from using the zeroth order balance equation $%
\partial _{t}^{\left( 0\right) }\delta u_{i_{N}}=-a\delta _{i_{N}x}\delta
u_{y}$. The evaluation of the right hand side is straightforward with the
only difficult term being%
\begin{equation}
\int C_{i_{1}}...C_{i_{N}}\left( \frac{\partial }{\partial \delta u_{j}}%
f^{(0)}\right) d\mathbf{v=}\frac{\partial }{\partial \delta u_{j}}%
M_{I_{N}}^{\left( 0\right) }\mathbf{+}\mathcal{P}_{I_{N}}\delta
_{i_{N}j}M_{I_{N-1}}^{\left( 0\right) },
\end{equation}%
and from eq.(\ref{Moments1}) it is clear that $M_{I_{N}}^{\left( 0\right) }$
is independent of $\delta u_{j}$ so that the first term on the right
vanishes. Thus%
\begin{eqnarray}
&&\partial _{t}^{(0)}M_{I_{N}}^{\left( 1\right) }+a\frac{\partial }{\partial
u_{0x}}\left( M_{I_{N}y}^{\left( 1\right) }+\delta u_{y}M_{I_{N}}^{\left(
1\right) }\right) \mathbf{+}a\mathcal{P}_{I_{N}}\delta
_{xi_{N}}M_{I_{N-1}y}^{\left( 1\right) }+\left( \nu ^{\ast }\left( \alpha
\right) +\frac{N}{2}\zeta ^{\ast }\left( \alpha \right) \right) \nu \left(
\psi \right) M_{I_{N}}^{\left( 1\right) } \label{moments} \\
&=&-\left( \partial _{t}^{(1)}n+u_{l}\partial _{l}^{1}n\right) \frac{%
\partial }{\partial n}M_{I_{N}}^{\left( 0\right) }-\left( \partial
_{t}^{(1)}T+u_{l}\partial _{l}^{1}T\right) \frac{\partial }{\partial T}%
M_{I_{N}}^{\left( 0\right) }-\left( \partial _{t}^{(1)}\delta
u_{j}+u_{l}\partial _{l}^{1}\delta u_{j}\right) \mathcal{P}_{I_{N}}\delta
_{i_{N}j}M_{I_{N-1}}^{\left( 0\right) } \notag \\
&&-\left( \partial _{l}^{1}u_{l}\right) M_{I_{N}}^{\left( 0\right) }-%
\mathcal{P}_{I_{N}}\left( \partial _{l}^{1}u_{i_{N}}\right)
M_{I_{N-1}l}^{\left( 0\right) }-\partial _{l}^{1}M_{I_{N}l}^{\left( 0\right)
} \notag
\end{eqnarray}%
Superficially, it appears that the right hand side depends explicitly on the
reference field, since $u_{l}=u_{0,l}+\delta u_{l}$, which would in turn
generate an explicit dependence of the moments on the $y$-coordinate.
However, when the balance equations are used to eliminate $\partial
_{t}^{(1)}$ this becomes%
\begin{eqnarray}
&&\partial _{t}^{(0)}M_{I_{N}}^{\left( 1\right) }+a\frac{\partial }{\partial
u_{0x}}\left( M_{I_{N}y}^{\left( 1\right) }+\delta u_{y}M_{I_{N}}^{\left(
1\right) }\right) +a\mathcal{P}_{I_{N}}\delta _{xi_{N}}M_{I_{N-1}y}^{\left(
1\right) }+\left( \nu ^{\ast }\left( \alpha \right) +\frac{N}{2}\zeta ^{\ast
}\left( \alpha \right) \right) \nu \left( \psi \right) M_{I_{N}}^{\left(
1\right) } \\
&=&\left( \partial _{l}^{\left( 1\right) }\delta u_{l}\right) n\frac{%
\partial }{\partial n}M_{I_{N}}^{\left( 0\right) }+\frac{2}{Dnk_{B}}\left(
M_{lk}^{\left( 0\right) }\partial _{l}^{\left( 1\right) }\delta
u_{k}+aM_{xy}^{\left( 1\right) }\right) \frac{\partial }{\partial T}%
M_{I_{N}}^{\left( 0\right) } \notag \\
&&+\frac{1}{mn}\mathcal{P}_{I_{N}}\left( \partial _{l}^{\left( 1\right)
}P_{li_{N}}^{\left( 0\right) }+\partial _{y}^{\left( 0\right)
}P_{yi_{N}}^{\left( 1\right) }\right) M_{I_{N-1}}^{\left( 0\right) } \notag
\\
&&-\left( \partial _{l}^{1}\delta u_{l}\right) M_{I_{N}}^{\left( 0\right) }-%
\mathcal{P}_{I_{N}}\left( \partial _{l}^{1}\delta u_{i_{N}}\right)
M_{I_{N-1}l}^{\left( 0\right) }-\partial _{l}^{1}M_{I_{N}l}^{\left( 0\right)
} \notag
\end{eqnarray}%
Then, assuming that the first-order moments are independent of the reference
field, $\mathbf{u}_{0}$, gives
\begin{eqnarray}
&&\partial _{t}^{(0)}M_{I_{N}}^{\left( 1\right) }+a\mathcal{P}_{I_{N}}\delta
_{xi_{N}}M_{I_{N-1}y}^{\left( 1\right) }+\left( \nu ^{\ast }\left( \alpha
\right) +\frac{N}{2}\zeta ^{\ast }\left( \alpha \right) \right) \nu \left(
\psi \right) M_{I_{N}}^{\left( 1\right) }-\left( \frac{2a}{Dnk_{B}}\frac{%
\partial }{\partial T}M_{I_{N}}^{\left( 0\right) }\right) M_{xy}^{\left(
1\right) } \label{moments2} \\
&=&\left[ \delta _{ab}\left( n\frac{\partial }{\partial n}M_{I_{N}}^{\left(
0\right) }-M_{I_{N}}^{\left( 0\right) }\right) +\frac{2}{Dnk_{B}}%
P_{ab}^{\left( 0\right) }\frac{\partial }{\partial T}M_{I_{N}}^{\left(
0\right) }-\mathcal{P}_{I_{N}}\delta _{bi_{N}}M_{I_{N-1}a}^{\left( 0\right) }%
\right] \left( \partial _{a}^{\left( 1\right) }\delta u_{b}\right) \notag
\\
&&+\left[ \frac{1}{mn}\mathcal{P}_{I_{N}}\left( \frac{\partial }{\partial
\delta n}P_{li_{N}}^{\left( 0\right) }\right) M_{I_{N-1}}^{\left( 0\right) }-%
\frac{\partial }{\partial \delta n}M_{I_{N}l}^{\left( 0\right) }\right]
\left( \partial _{l}^{\left( 1\right) }\delta n\right) \notag \\
&&+\left[ \frac{1}{mn}\mathcal{P}_{I_{N}}\left( \frac{\partial }{\partial
\delta T}P_{li_{N}}^{\left( 0\right) }\right) M_{I_{N-1}}^{\left( 0\right) }-%
\frac{\partial }{\partial \delta T}M_{I_{N}l}^{\left( 0\right) }\right]
\left( \partial _{l}^{\left( 1\right) }\delta T\right) \notag
\end{eqnarray}%
which is consistent since no factors of $\mathbf{u}_{0}$ appear and since
the zeroth order moments are known to be independent of the reference
velocity field.
The moment equations are linear in the gradients of the deviation fields,
so generalized transport coefficients can be defined via%
\begin{equation}
M_{I_{N}}^{\left( 1\right) }=-\lambda _{I_{N}ab}\frac{\partial \delta \psi
_{b}}{\partial r_{a}}=-\mu _{I_{N}a}\frac{\partial \delta n}{\partial r_{a}}%
-\kappa _{I_{N}a}\frac{\partial \delta T}{\partial r_{a}}-\eta _{I_{N}ab}%
\frac{\partial \delta u_{a}}{\partial r_{b}}
\end{equation}%
where the transport coefficients for different values of $N$ have the same
name but can always be distinguished by the number of indices they carry.
The zeroth-order time derivative is evaluated using
\begin{eqnarray}
\partial _{t}^{(0)}\lambda _{I_{N}ab}\frac{\partial \delta \psi _{b}}{%
\partial r_{a}} &=&\left( \partial _{t}^{(0)}\lambda _{I_{N}ab}\right) \frac{%
\partial \delta \psi _{b}}{\partial r_{a}}+\lambda _{I_{N}ab}\partial
_{t}^{(0)}\frac{\partial \delta \psi _{b}}{\partial r_{a}} \\
&=&\left( \left( \partial _{t}^{(0)}T\right) \frac{\partial \lambda
_{I_{N}ab}}{\partial T}+\left( \partial _{t}^{(0)}\delta u_{j}\right) \frac{%
\partial \lambda _{I_{N}ab}}{\partial \delta u_{j}}\right) \frac{\partial
\delta \psi _{b}}{\partial r_{a}}+\lambda _{I_{N}ab}\frac{\partial }{%
\partial r_{a}}\left( \partial _{t}^{(0)}\delta \psi _{b}\right) \notag \\
&=&\left( \partial _{t}^{(0)}T\right) \frac{\partial \lambda _{I_{N}ab}}{%
\partial T}\frac{\partial \delta \psi _{b}}{\partial r_{a}}+\lambda
_{I_{N}ab}\frac{\partial \delta \psi _{c}}{\partial r_{a}}\frac{\partial
\left( \partial _{t}^{(0)}\delta \psi _{b}\right) }{\partial \delta \psi _{c}%
}
\end{eqnarray}%
where the third line follows from (a) the fact that the transport
coefficients will have no explicit dependence on the velocity field, as may
be verified from the structure of eq.(\ref{moments2}), and (b) the fact that
the gradient here is a first-order gradient $\nabla _{1}$ so that it only
contributes via gradients of the deviations of the fields, thus giving the
last term on the right. Since the fields are independent variables, the
coefficients of the various terms $\frac{\partial \delta \psi _{b}}{\partial
r_{a}}$ must vanish independently. For the coefficients of the velocity
gradients, this gives%
\begin{eqnarray}
&&\left( \partial _{t}^{(0)}T\right) \frac{\partial }{\partial T}\eta
_{I_{N}ab}+\eta _{I_{N}ac}\frac{\partial \left( \partial _{t}^{(0)}\delta
u_{c}\right) }{\partial \delta u_{b}}+a\mathcal{P}_{I_{N}}\delta
_{xi_{N}}\eta _{I_{N-1}yab}+\left( \nu ^{\ast }\left( \alpha \right) +\frac{N%
}{2}\zeta ^{\ast }\left( \alpha \right) \right) \nu \left( \psi \right) \eta
_{I_{N}ab}-\left( \frac{2a}{Dnk_{B}}\frac{\partial }{\partial T}%
M_{I_{N}}^{\left( 0\right) }\right) \eta _{xyab} \\
&=&-\delta _{ab}\left( n\frac{\partial }{\partial n}M_{I_{N}}^{\left(
0\right) }-M_{I_{N}}^{\left( 0\right) }\right) -\frac{2}{Dnk_{B}}%
M_{ab}^{\left( 0\right) }\frac{\partial }{\partial T}M_{I_{N}}^{\left(
0\right) }+\mathcal{P}_{I_{N}}\delta _{bi_{N}}M_{I_{N-1}a}^{\left( 0\right)
}. \notag
\end{eqnarray}%
The vanishing of the coefficients of the density gradients gives%
\begin{eqnarray}
&&\left( \partial _{t}^{(0)}T\right) \frac{\partial }{\partial T}\mu
_{I_{N}a}+\kappa _{I_{N}a}\frac{\partial \left( \partial _{t}^{(0)}T\right)
}{\partial n}+a\mathcal{P}_{I_{N}}\delta _{xi_{N}}\mu
_{I_{N-1}ya}+\left( \nu ^{\ast }\left( \alpha \right) +\frac{N}{2}\zeta
^{\ast }\left( \alpha \right) \right) \nu \left( \psi \right) \mu
_{I_{N}a}-\left( \frac{2a}{Dnk_{B}}\frac{\partial }{\partial T}%
M_{I_{N}}^{\left( 0\right) }\right) \mu _{xya} \\
&&=-\frac{1}{mn}\mathcal{P}_{I_{N}}\left( \frac{\partial }{\partial \delta n}%
P_{ai_{N}}^{\left( 0\right) }\right) M_{I_{N-1}}^{\left( 0\right) }+\frac{%
\partial }{\partial \delta n}M_{I_{N}a}^{\left( 0\right) }, \notag
\end{eqnarray}%
while the vanishing of the coefficient of the temperature gradient gives
\begin{eqnarray}
&&\left( \partial _{t}^{(0)}T\right) \frac{\partial }{\partial T}\kappa
_{I_{N}a}+\frac{\partial \left( \partial _{t}^{(0)}T\right) }{\partial T}%
\kappa _{I_{N}a}+a\mathcal{P}_{I_{N}}\delta _{xi_{N}}\kappa
_{I_{N-1}ya}+\left( \nu ^{\ast }\left( \alpha \right) +\frac{N}{2}\zeta
^{\ast }\left( \alpha \right) \right) \nu \left( \psi \right) \kappa
_{I_{N}a}-\left( \frac{2a}{Dnk_{B}}\frac{\partial }{\partial T}%
M_{I_{N}}^{\left( 0\right) }\right) \kappa _{xya} \\
&&=-\frac{1}{mn}\mathcal{P}_{I_{N}}\left( \frac{\partial }{\partial \delta T}%
P_{ai_{N}}^{\left( 0\right) }\right) M_{I_{N-1}}^{\left( 0\right) }+\frac{%
\partial }{\partial \delta T}M_{I_{N}a}^{\left( 0\right) }. \notag
\end{eqnarray}
Notice that for even moments, the source terms for the density and
temperature transport coefficients all vanish (as they involve odd
zeroth-order moments) and it is easy to verify that the boundary conditions
are consistent with $\mu _{I_{N}a}=\kappa _{I_{N}a}=0$ and only the velocity
gradients contribute. For odd values of $N$, the opposite is true and $\eta
_{I_{N}ab}=0$ while the others are in general nonzero.
\subsection{Navier-Stokes transport}
\subsubsection{The first order pressure tensor}
Specializing to the case $N=2$ gives the transport coefficients appearing in
the pressure tensor%
\begin{equation}
P_{ij}^{\left( 1\right) }=-\eta _{ijab}\frac{\partial \delta u_{a}}{%
\partial r_{b}}
\end{equation}%
where the generalized viscosity satisfies%
\begin{eqnarray}
&&\left( \partial _{t}^{(0)}T\right) \frac{\partial }{\partial T}\eta
_{ijab}-a\eta _{ijax}\delta _{by}+a\delta _{xi}\eta _{jyab}+a\delta
_{xj}\eta _{iyab}+\left( \nu ^{\ast }\left( \alpha \right) +\zeta ^{\ast
}\left( \alpha \right) \right) \nu \left( \psi \right) \eta _{ijab}-\left(
\frac{2a}{Dnk_{B}}\frac{\partial }{\partial T}P_{ij}^{\left( 0\right)
}\right) \eta _{xyab} \\
&=&-\delta _{ab}\left( n\frac{\partial }{\partial n}P_{ij}^{\left( 0\right)
}-P_{ij}^{\left( 0\right) }\right) -\frac{2}{Dnk_{B}}P_{ab}^{\left( 0\right)
}\frac{\partial }{\partial T}P_{ij}^{\left( 0\right) }+\delta
_{bi}P_{ja}^{\left( 0\right) }+\delta _{bj}P_{ia}^{\left( 0\right) }. \notag
\end{eqnarray}
\subsubsection{First order third moments and the heat flux vector}
For the third moments, the contribution of density gradients to the heat
flux is well-known in the theory of granular fluids and the transport
coefficient is here the solution of
\begin{eqnarray}
&&\left( \partial _{t}^{(0)}T\right) \frac{\partial }{\partial T}\mu _{ijka}+%
\frac{\partial \left( \partial _{t}^{(0)}T\right) }{\partial n}\kappa
_{ijka}+\left( \nu ^{\ast }\left( \alpha \right) +\frac{3}{2}\zeta ^{\ast
}\left( \alpha \right) \right) \nu \left( \psi \right) \mu _{ijka} \\
&&+a\delta _{xk}\mu _{ijya}+a\delta _{xi}\mu _{kjya}+a\delta _{xj}\mu _{ikya}
\notag \\
&=&-\frac{1}{mn}\left( \frac{\partial }{\partial n}P_{ak}^{\left( 0\right)
}\right) P_{ij}^{\left( 0\right) }-\frac{1}{mn}\left( \frac{\partial }{%
\partial n}P_{ai}^{\left( 0\right) }\right) P_{kj}^{\left( 0\right) }-\frac{1%
}{mn}\left( \frac{\partial }{\partial n}P_{aj}^{\left( 0\right) }\right)
P_{ik}^{\left( 0\right) }+\frac{\partial }{\partial n}M_{ijka}^{\left(
0\right) }, \notag
\end{eqnarray}%
and the generalized thermal conductivity is determined from
\begin{eqnarray}
&&\left( \partial _{t}^{(0)}T\right) \frac{\partial }{\partial T}\kappa
_{ijka}+\frac{\partial \left( \partial _{t}^{(0)}T\right) }{\partial T}%
\kappa _{ijka}+\left( \nu ^{\ast }\left( \alpha \right) +\frac{3}{2}\zeta
^{\ast }\left( \alpha \right) \right) \nu \left( \psi \right) \kappa _{ijka}
\\
&&+a\delta _{xk}\kappa _{ijya}+a\delta _{xi}\kappa _{kjya}+a\delta
_{xj}\kappa _{ikya} \notag \\
&=&-\frac{1}{mn}\left( \frac{\partial }{\partial T}P_{ak}^{\left( 0\right)
}\right) P_{ij}^{\left( 0\right) }-\frac{1}{mn}\left( \frac{\partial }{%
\partial T}P_{ai}^{\left( 0\right) }\right) P_{kj}^{\left( 0\right) }-\frac{1%
}{mn}\left( \frac{\partial }{\partial T}P_{aj}^{\left( 0\right) }\right)
P_{ik}^{\left( 0\right) }+\frac{\partial }{\partial T}M_{ijka}^{\left(
0\right) }. \notag
\end{eqnarray}%
Note that both of these require knowledge of the zeroth-order fourth
velocity moment $M_{ijka}^{\left( 0\right) }$. The heat flux vector is
\begin{equation}
q_{i}^{\left( 1\right) }=-\overline{\mu }_{ia}\frac{\partial \delta n}{%
\partial r_{a}}-\overline{\kappa }_{ia}\frac{\partial \delta T}{\partial
r_{a}}
\end{equation}%
where%
\begin{eqnarray}
\overline{\mu }_{ia} &=&\mu _{ijja} \\
\overline{\kappa }_{ia} &=&\kappa _{ijja}. \notag
\end{eqnarray}
\subsection{The second-order transport equations}
In this Section, the results obtained so far are put together so as to give
the Navier-Stokes equations for deviations from the steady state. The
Navier-Stokes equations result from summing the balance equations order by
order. To
first order, this takes the form{%
\begin{eqnarray}
{\partial _{t}n+\mathbf{u}\cdot \mathbf{\nabla }}\delta n+n{\mathbf{\nabla }}%
\cdot \delta \mathbf{u} &=&0\; \label{first} \\
{\partial _{t}u_{i}+\mathbf{u}\cdot \mathbf{\nabla }\delta }u_{i}+a\delta
_{ix}\delta u_{y}+(mn)^{-1}\partial _{j}^{\left( 1\right) }P_{ij}^{\left(
0\right) } &=&0 \notag \\
{\partial _{t}T+\mathbf{u}\cdot \mathbf{\nabla }\delta }T+\frac{2}{Dnk_{B}}%
\left( P_{ij}^{\left( 0\right) }\nabla _{j}\delta
u_{i}+aP_{xy}^{(0)}+aP_{xy}^{\left( 1\right) }\right) &=&-\zeta ^{\ast
}\left( \alpha \right) \nu \left( \psi \right) T. \notag
\end{eqnarray}%
where }${\partial _{t}=\partial _{t}^{\left( 0\right) }+\partial
_{t}^{\left( 1\right) }}$. By analogy with the analysis of an equilibrium
system, these will be termed the Euler approximation. Summing to second
order to get the Navier-Stokes approximation gives{%
\begin{gather}
{\partial _{t}n+\mathbf{u}\cdot \mathbf{\nabla }}\delta n+n{\mathbf{\nabla }}%
\cdot \delta \mathbf{u}=0\; \label{second} \\
{\partial _{t}u_{i}+\mathbf{u}\cdot \mathbf{\nabla }\delta }u_{i}+a\delta
_{ix}\delta u_{y}+(mn)^{-1}\partial _{j}^{\left( 1\right) }P_{ij}^{\left(
0\right) }+(mn)^{-1}\partial _{y}^{\left( 1\right) }P_{iy}^{\left( 1\right)
}+(mn)^{-1}\partial _{j}^{\left( 0\right) }P_{ij}^{\left( 2\right) }=0
\notag \\
{\partial _{t}T+\mathbf{u}\cdot \nabla \delta }T+\frac{2}{Dnk_{B}}\left( {%
\mathbf{\nabla }}^{\left( 1\right) }\cdot \mathbf{q}^{(1)}+{\mathbf{\nabla }}%
^{\left( 0\right) }\cdot \mathbf{q}^{(2)}+P_{ij}^{\left( 0\right) }\nabla
_{j}\delta u_{i}+P_{ij}^{\left( 1\right) }\nabla _{j}\delta
u_{i}+aP_{xy}^{(0)}+aP_{xy}^{\left( 1\right) }+aP_{xy}^{\left( 2\right)
}\right) =-\zeta ^{\ast }\left( \alpha \right) \nu \left( \psi \right) T.
\notag
\end{gather}%
where now }${\partial _{t}=\partial _{t}^{\left( 0\right) }+\partial
_{t}^{\left( 1\right) }+\partial _{t}^{\left( 2\right) }}$ but this
expression is problematic. Based on the results so far, it seems reasonable
to expect that $\partial _{j}^{\left( 0\right) }P_{ij}^{\left( 2\right) }={%
\mathbf{\nabla }}^{\left( 0\right) }\cdot \mathbf{q}^{(2)}=0$. However, to
consistently write the equations to third order requires knowledge of $%
P_{xy}^{\left( 2\right) }$, which is not available without extending the
solution of the kinetic equation to third order. The reason this problem
arises here, and not in the analysis about equilibrium, is that the shear
rate, $a$, arises from a gradient of the reference field. In the usual
analysis, such a term would be first order and $aP_{xy}^{\left( 2\right)
}=\left( \partial _{i}u_{0j}\right) P_{ij}^{\left( 2\right) }$ would be of
third order and therefore neglected here. This is unfortunate and shows that
this method of analysis does not completely supplant the need to go beyond
the second-order solution in order to study shear flow. However, this
problem is not unique. In fact, in calculations of the transport
coefficients for the homogeneous cooling state of a granular gas, a similar
problem occurs in the calculation of the cooling rate: the true
Navier-Stokes expression requires going to third order in the solution of
the kinetic equation\cite{DuftyGranularTransport},\cite{LutskoCE}. (This is
because the source does not appear under a gradient, as can be seen in the
equations above.) Thus, it is suggested that the same type of
approximation be accepted here, namely that the term $aP_{xy}^{\left(
2\right) }$ is neglected, so that the total pressure tensor and heat flux
vectors are%
\begin{eqnarray}
P_{ij} &=&P_{ij}^{\left( 0\right) }+P_{ij}^{\left( 1\right) } \\
q_{i} &=&q_{i}^{\left( 0\right) }+q_{i}^{\left( 1\right) } \notag
\end{eqnarray}%
and the transport equations can be written as{%
\begin{eqnarray}
{\partial _{t}n+\mathbf{\nabla }}\cdot \left( n\mathbf{u}\right) &=&0\;
\label{hydro-final} \\
{\partial _{t}u_{i}+\mathbf{u}\cdot \mathbf{\nabla }}u_{i}+(mn)^{-1}\partial
_{j}P_{ij} &=&0 \notag \\
{\partial _{t}T+\mathbf{u}\cdot \mathbf{\nabla }}T+\frac{2}{Dnk_{B}}\left( {%
\mathbf{\nabla }}\cdot \mathbf{q}+P_{ij}\nabla _{j}u_{i}\right) &=&-\zeta
^{\ast }\left( \alpha \right) \nu \left( \psi \right) T. \notag
\end{eqnarray}%
which is the expected form of the balance equations. The total fluxes are
given in terms of the generalized transport coefficients}%
\begin{eqnarray}
P_{ij} &=&P_{ij}^{\left( 0\right) }-\eta _{ijab}\frac{\partial \delta u_{a}}{%
\partial r_{b}} \\
q_{i} &=&-\mu _{ijja}\frac{\partial \delta n}{\partial r_{a}}-\kappa _{ijja}%
\frac{\partial \delta T}{\partial r_{a}}. \notag
\end{eqnarray}
\subsection{Linearized second-order transport}
Some simplification occurs if attention is restricted to the linearized form
of these equations. This is because, as noted in the previous Section,
several transport coefficients are proportional to $\delta u_{y}$ and
consequently do not contribute when the transport coefficients are
linearized. Taking this into account, the total fluxes are%
\begin{eqnarray}
P_{ij} &=&P_{ij}^{\left( ss\right) }+\left( \frac{\partial P_{ij}^{\left(
0\right) }}{\partial \delta n}\right) _{ss}\delta n+\left( \frac{\partial
P_{ij}^{\left( 0\right) }}{\partial \delta T}\right) _{ss}\delta T-\eta
_{ijab}^{ss}\frac{\partial \delta u_{a}}{\partial r_{b}} \\
q_{i} &=&-\overline{\mu }_{ia}^{ss}\frac{\partial \delta n}{\partial r_{a}}-%
\overline{\kappa }_{ia}^{ss}\frac{\partial \delta T}{\partial r_{a}}, \notag
\end{eqnarray}%
where the superscript on the transport coefficients, and subscript on the
derivatives, indicates that they are evaluated to zeroth order in the
deviations,%
\begin{equation*}
\left( \frac{\partial P_{ij}^{\left( 0\right) }}{\partial \delta n}\right)
_{ss}\equiv \lim_{\delta \psi \rightarrow 0}\frac{\partial P_{ij}^{\left(
0\right) }}{\partial \delta n},
\end{equation*}%
i.e. in the steady state. The defining expressions for the transport
coefficients simplify since the factor $\partial _{t}^{(0)}T$ is at least of
first order in the deviations from the steady state (since it vanishes in
the steady state), so that the temperature derivative can be neglected,
thus transforming the differential equations into coupled algebraic
equations.
Also, all remaining quantities are evaluated for $\delta \psi =0$, i.e. in
the steady state. Thus the viscosity becomes%
\begin{eqnarray}
&&-a_{ss}^{\ast }\eta _{ijax}^{ss}\delta _{by}+a_{ss}^{\ast }\delta
_{xi}\eta _{jyab}^{ss}+a_{ss}^{\ast }\delta _{xj}\eta _{iyab}^{ss}+\left(
\nu ^{\ast }\left( \alpha \right) +\zeta ^{\ast }\left( \alpha \right)
\right) \eta _{ijab}^{ss}-\frac{2a_{ss}^{\ast }}{Dn_{0}k_{B}}\left( \frac{%
\partial }{\partial T}P_{ij}^{\left( 0\right) }\right) _{ss}\eta _{xyab}^{ss}
\label{p-lin} \\
&=&-\nu ^{-1}\left( \psi _{0}\right) \delta _{ab}\left( n_{0}\left( \frac{%
\partial }{\partial n}P_{ij}^{\left( 0\right) }\right) _{ss}-P_{ij}^{\left(
ss\right) }\right) -\frac{2\nu ^{-1}\left( \psi _{0}\right) }{Dn_{0}k_{B}}%
P_{ab}^{\left( ss\right) }\left( \frac{\partial }{\partial T}P_{ij}^{\left(
0\right) }\right) _{ss}+\nu ^{-1}\left( \psi _{0}\right) \left( \delta
_{bi}P_{ja}^{\left( ss\right) }+\delta _{bj}P_{ia}^{\left( ss\right)
}\right) \notag
\end{eqnarray}%
where $a_{ss}^{\ast }$ was defined in eq.(\ref{balance}). The generalized
heat conductivities will be given by the simplified equations
\begin{eqnarray}
&&\nu ^{-1}\left( \psi _{0}\right) \left( \frac{\partial \left( \partial
_{t}^{(0)}T\right) }{\partial n}\right) _{ss}\kappa _{ijka}^{ss}+\left( \nu
^{\ast }\left( \alpha \right) +\frac{3}{2}\zeta ^{\ast }\left( \alpha
\right) \right) \mu _{ijka}^{ss}+a_{ss}^{\ast }\mathcal{P}_{ijk}\delta
_{xk}\mu _{ijya}^{ss} \\
&=&-\frac{\nu ^{-1}\left( \psi _{0}\right) }{mn_{0}}\mathcal{P}_{ijk}\left(
\frac{\partial }{\partial n}P_{ak}^{\left( 0\right) }\right)
_{ss}P_{ij}^{\left( ss\right) }+\nu ^{-1}\left( \psi _{0}\right) \left(
\frac{\partial }{\partial n}M_{ijka}^{\left( 0\right) }\right) _{ss} \notag
\end{eqnarray}%
and
\begin{eqnarray}
&&\nu ^{-1}\left( \psi _{0}\right) \left( \frac{\partial \left( \partial
_{t}^{(0)}T\right) }{\partial T}\right) _{ss}\kappa _{ijka}^{ss}+\left( \nu
^{\ast }\left( \alpha \right) +\frac{3}{2}\zeta ^{\ast }\left( \alpha
\right) \right) \kappa _{ijka}^{ss}+a_{ss}^{\ast }\mathcal{P}_{ijk}\delta
_{xk}\kappa _{ijya}^{ss} \\
&=&-\frac{\nu ^{-1}\left( \psi _{0}\right) }{mn_{0}}\mathcal{P}_{ijk}\left(
\frac{\partial }{\partial T}P_{ak}^{\left( 0\right) }\right)
_{ss}P_{ij}^{\left( ss\right) }+\nu ^{-1}\left( \psi _{0}\right) \left(
\frac{\partial }{\partial T}M_{ijka}^{\left( 0\right) }\right) _{ss}. \notag
\end{eqnarray}%
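Since the temperature derivatives have dropped out, each of these is a
closed linear system for the $D^{4}$ components and can be solved by
direct linear algebra. The sketch below (illustrative) assembles and
solves eq.(\ref{p-lin}) for $\eta _{ijab}^{ss}$ in reduced units ($%
n_{0}=k_{B}=T_{0}=1$, $\nu \left( \psi _{0}\right) =1$; $x\rightarrow 0$, $%
y\rightarrow 1$), taking the steady-state derivatives of $P_{ij}^{\left(
0\right) }$ from eq.(\ref{Pss}) as given explicitly below; the systems for
$\mu $ and $\kappa $ are handled in the same way once the fourth moments
are known:

```python
# Sketch: direct linear solve of eq. (p-lin) for eta^ss in reduced units
# (n0 = kB = T0 = 1, nu(psi_0) = 1); index 0 = x, 1 = y.  Illustrative code;
# (dP/dT)_ss and (n dP/dn - P)_ss are taken from the expansion, eq. (Pss).
import itertools
import numpy as np

D, alpha = 3, 0.9
nu_s = (1 + alpha) * ((D - 1) * alpha + D + 1) / (4 * D)
zeta_s = (D + 2) * (1 - alpha ** 2) / (4 * D)
a_ss = (nu_s + zeta_s) * np.sqrt(D * zeta_s / (2 * nu_s))
I = np.eye(D)

P = np.diag([(nu_s + (D * zeta_s if i == 0 else 0)) / (nu_s + zeta_s)
             for i in range(D)])
P[0, 1] = P[1, 0] = -a_ss / (nu_s + zeta_s) * P[1, 1]
roots = np.roots([4.0, 8.0 * (nu_s + zeta_s),
                  4 * nu_s ** 2 + 14 * nu_s * zeta_s + 7 * zeta_s ** 2,
                  zeta_s * (2 * nu_s ** 2 - nu_s * zeta_s - 2 * zeta_s ** 2)])
Delta = roots[np.argmin(np.abs(roots.imag))].real
A = np.zeros((D, D))
A[0, 1] = A[1, 0] = -2 * (Delta + zeta_s) / zeta_s
for i in range(1, D):
    A[i, i] = -2 * (nu_s + zeta_s) / (Delta + nu_s + 0.5 * zeta_s)
A[0, 0] = (-2 * D * (Delta + nu_s / D + 0.5 * zeta_s) * (nu_s + zeta_s)
           / ((Delta + nu_s + 0.5 * zeta_s) * (nu_s + D * zeta_s)))
dTP = P * (1 - A / 2)        # (dP^(0)/dT)_ss, componentwise from eq. (Pss)

def lhs(eta):                # left-hand side of eq. (p-lin) acting on eta
    out = (nu_s + zeta_s) * eta
    out[:, :, :, 1] -= a_ss * eta[:, :, :, 0]     # -a* eta_{ijax} delta_{by}
    out[0] += a_ss * eta[:, 1]                    # +a* delta_{xi} eta_{jyab}
    out[:, 0] += a_ss * eta[:, 1]                 # +a* delta_{xj} eta_{iyab}
    out -= (2 * a_ss / D) * np.einsum('ij,ab->ijab', dTP, eta[0, 1])
    return out

source = (np.einsum('ab,ij->ijab', I, P * A)      # -delta_ab (n dP/dn - P)
          - (2 / D) * np.einsum('ab,ij->ijab', P, dTP)
          + np.einsum('ib,ja->ijab', I, P)        # +delta_bi P_ja
          + np.einsum('jb,ia->ijab', I, P))       # +delta_bj P_ia
mat = np.zeros((D ** 4, D ** 4))
for col, idx in enumerate(itertools.product(range(D), repeat=4)):
    e = np.zeros((D, D, D, D))
    e[idx] = 1.0
    mat[:, col] = lhs(e).ravel()
eta_ss = np.linalg.solve(mat, source.ravel()).reshape(D, D, D, D)
```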
In these equations, the hydrodynamic variables $\psi _{0}$ must satisfy the
steady state balance condition, eq.(\ref{balance}). The various quantities
in these equations are known from the analysis of the zeroth order moments.
For example, from eq.(\ref{Pss}), one has that%
\begin{eqnarray}
\left( \frac{\partial P_{ij}^{\left( 0\right) }}{\partial T}\right) _{ss}
&=&T_{0}^{-1}P_{ij}^{ss}\left( 1-\frac{1}{2}A_{ij}^{\ast }\left( \alpha
\right) \right) \\
\left( \frac{\partial P_{ij}^{\left( 0\right) }}{\partial n}\right) _{ss}
&=&n_{0}^{-1}P_{ij}^{ss}\left( 1-A_{ij}^{\ast }\left( \alpha \right) \right)
\notag \\
\nu ^{-1}\left( \psi _{0}\right) \left( \frac{\partial \left( \partial
_{t}^{(0)}T\right) }{\partial T}\right) _{ss} &=&-\frac{1}{2}\zeta ^{\ast
}\left( \alpha \right) \left( 1+A_{xy}^{\ast }\left( \alpha \right) \right)
\notag
\end{eqnarray}%
where $A_{ij}^{\ast }\left( \alpha \right) $ was given in eq.(\ref{Pss-A})
and here, there is no summation over repeated indices. The derivatives of
higher order moments in the steady state can easily be given using the
results in Appendix \ref{AppP}. The linearized transport equations are{%
\begin{eqnarray}
{\partial _{t}\delta n+ay}\frac{\partial }{\partial x}\delta n+n_{0}\mathbf{%
\nabla }\cdot \delta \mathbf{u} &=&0\; \\
{\partial _{t}\delta u_{i}+{ay\frac{\partial }{\partial x}}\delta }%
u_{i}+a\delta u_{y}\delta _{ix}+(mn_{0})^{-1}\left( \left( \frac{\partial
P_{ij}^{\left( 0\right) }}{\partial n}\right) _{ss}\frac{\partial \delta n}{%
\partial r_{j}}+\left( \frac{\partial P_{ij}^{\left( 0\right) }}{\partial T}%
\right) _{ss}\frac{\partial \delta T}{\partial r_{j}}-\eta _{ijab}^{ss}\frac{%
\partial ^{2}\delta u_{a}}{\partial r_{j}\partial r_{b}}\right) &=&0 \notag
\end{eqnarray}%
}%
\begin{eqnarray*}
&&{\partial _{t}\delta T+ay}\frac{\partial }{\partial x}{\delta T}+\frac{2}{%
Dn_{0}k_{B}}\left( -\overline{\mu }_{ia}^{ss}\frac{\partial ^{2}\delta n}{%
\partial r_{i}\partial r_{a}}-\overline{\kappa }_{ia}^{ss}\frac{\partial
^{2}\delta T}{\partial r_{i}\partial r_{a}}+P_{ij}^{\left( ss\right) }\frac{%
\partial \delta u_{i}}{\partial r_{j}}-a\eta _{xyab}^{ss}\frac{\partial
\delta u_{a}}{\partial r_{b}}\right) \\
&&+\frac{2a}{Dn_{0}^{2}k_{B}}\left( n_{0}\left( \frac{\partial
P_{xy}^{\left( 0\right) }}{\partial \delta n}\right) _{ss}-P_{xy}^{\left(
ss\right) }\right) \delta n+\frac{2a}{Dn_{0}k_{B}}\left( \frac{\partial
P_{xy}^{\left( 0\right) }}{\partial \delta T}\right) _{ss}\delta T \\
&=&-\frac{3}{2}\zeta ^{\ast }\left( \alpha \right) \nu \left( \psi
_{0}\right) \delta T-\zeta ^{\ast }\left( \alpha \right) \nu \left( \psi
_{0}\right) T_{0}\frac{\delta n}{n_{0}}.
\end{eqnarray*}%
where the fact that $\nu \left( \psi \right) \sim nT^{1/2}$ has been used.
These equations have recently been used by Garz\'{o} to study the stability
of the granular fluid under uniform shear flow\cite{garzo-2005-}.
\section{Conclusions}
In this paper, the extension of the Chapman-Enskog method to arbitrary
reference states has been presented. One of the key ideas is the separation
of the gradient operator into ``zeroth'' and ``first'' order operators that
help to organize the expansion. It is also important that the zeroth order
distribution be recognized as corresponding to the exact distribution for
\emph{arbitrary} \emph{uniform} deviations of \emph{all} hydrodynamic fields
from the reference state. This distribution does not in general have
anything to do with the distribution in the reference state, except in the
very special case that the reference state itself is spatially uniform.
The method was illustrated by application to the paradigmatic non-uniform
system of a fluid undergoing uniform shear flow. In particular, the fluid
was chosen to be a granular fluid which therefore admits of a steady state.
The analysis was based on a particularly simple kinetic theory in order to
allow for illustration of the general concepts without the technical
complications involved in, e.g., using the Boltzmann equation. Nevertheless,
it should be emphasized that the difference between the present calculation
and that using the Boltzmann equation would be no greater than in the case
of an equilibrium fluid. The main difference is that with the simplified
kinetic theory, it is possible to obtain closed equations for the velocity
moments without having to explicitly solve for the distribution. When
solving the Boltzmann equation, the moment equations are not closed and it
is necessary to resort to expansions in orthogonal polynomials. In that
case, the calculation is usually organized somewhat differently: attention
is focused on solving directly for the distribution, but this is only a
technical point. (In fact, Chapman originally developed his version of the
Chapman-Enskog method using Maxwell's moment equations while Enskog based
his on the Boltzmann equation\cite{ChapmanCowling}. The methods are of
course equivalent.)
It is interesting to compare the hydrodynamic equations derived here to the
``standard'' equations for fluctuations about a uniform granular fluid. As
might be expected, the hydrodynamic equations describing fluctuations about
the state of uniform shear flow are more complex in some ways than are the
usual Navier-Stokes equations for a granular fluid, but the similarities
with the simpler case are perhaps more surprising. The complexity arises
from the fact that the transport coefficients do not have the simple spatial
symmetries present in the homogeneous fluid where, e.g., there is a single
thermal conductivity rather than the vector quantity that occurs here.
However, just as in the homogeneous system, the heat flux vector still only
couples to density and temperature gradients and the pressure tensor to
velocity gradients, so that the hydrodynamic equations, eq.~(\ref{hydro-final}%
), have the same structure as the Navier-Stokes equations for the
homogeneous system.
An additional complication in the general analysis presented here is that
the zeroth-order pressure tensor and the transport coefficients are obtained
as the solution to partial differential equations in the temperature rather
than as simple algebraic functions. This requires that appropriate boundary
conditions be supplied which will, in general, depend on the particular
problem being solved. Here, in the high-temperature limit, the
non-equilibrium effects are of no importance and the appropriate boundary
condition on all quantities is that they approach their equilibrium values.
Boundary conditions must also be given at low temperature, as the two domains
are separated by the steady state, which represents a critical point. At low
temperatures, there are no collisions and no deviations from the macroscopic
state, so that all velocity moments go to zero, thus giving the necessary
boundary conditions. A particularly simple case occurs when the hydrodynamic
equations are linearized about the reference state as would be appropriate
for a linear stability analysis. Then, the transport properties are obtained
as the solution to simple algebraic equations.
A particular simplifying feature of uniform shear flow is that the flow
field has a constant first gradient and, as a result, the moments do not
explicitly depend on the flow field. This will not be true for more complex,
nonlinear flow fields. However, the application of the methods discussed in
Section II should make possible an analysis similar to that given here for
the simple case of uniform shear flow.
\bigskip
\begin{acknowledgements}
I am grateful to Vicente Garz\'{o} and Jim Dufty for several useful discussions. This work was supported in part by the European Space Agency
under contract number C90105.
\end{acknowledgements}
\section*{A report}
The publications resulting from the Nordita Workdays on QPOs
are an interesting and original contribution to research on
accretion flows around compact objects.
They contain four observational papers, one theoretical
paper dealing with numerical simulations of accretion discs and
eleven contributions (some of them analyzing observations) totally
devoted to the epicyclic resonance model (ERM) of high frequency
QPOs (hfQPOs) of Abramowicz \& Klu\'zniak. Probably all that is to
be known about this model is included in these publications. This is
their strength but also their weakness. First the model is not
complete, it is rather kinematic than dynamic. It describes in great
detail the interactions between two oscillations but as Klu\'zniak
confesses: \textsl{It would be good to identify the two non-linear
oscillators.} Yes indeed. Not only \textsl{good} but crucial.
Second, concentrating on hfQPOs only is most probably not a wise
decision because there exist (admittedly complex) relations between
them and their lower frequency brethren and there is a clear link
between their presence and the state of the system. Although the
authors of the eleven papers sometimes pay lip-service to
observations not directly concerning the frequency values of hfQPOs,
in practice they seem to ignore the very important conclusion of
Remillard: \textsl{... models for explaining hfQPO frequencies
must also explain the geometry, energetics and radiation mechanisms
for the SPL state}. By the way, probably even this will not do: the
model will have to explain all the X-ray states. One can understand
the reluctance to leave the clean world of resonating orbits for the
dirty world of turbulent, magnetized, radiating discs with
unpleasant boundary conditions, but QPOs occur in such a world.
Abramowicz believes that QPOs are the Rosetta stone for
understanding black-hole accretion. Not so. If one had to
(over)use\footnote{The road to the theorist's hell is paved with
Rosetta stones} the Rosetta-stone analogy, QPOs would be just one of
the texts on this stone. Let's hope it is the Greek one. All in all,
these publications are not so bad: imagine a volume devoted to the
beat-frequency model. At least the epicyclic resonance model is
still alive.
The authors of the papers deal only with neutron star and
black-hole QPOs. The abundant QPOs observed in CVs are only
mentioned en passant and no special attention is paid to them.
Probably because, not being (sufficiently) relativist, they are
considered boring. In view of the recently published article on the
subject \cite{klab} such an attitude is rather surprising.
\subsection*{Observations}
The four contributions in this category have been written by some of
the top observers of X-ray binaries and they form a very good (too
good maybe) background for the theoretical papers. van der Klis, as usual, gives a clear and sober
review of the QPO phenomenon. One wishes theorists paid more
attention to what he has to say about black hole hfQPOs: \textsl{The
phenomenon is weak and transient so that observations are difficult,
and discrepant frequencies occur as well, so it can not be excluded
that these properties of approximately constant frequency and
small-integer ratios would be contradicted by work at better signal
to noise.} Being a loyal participant he adds: \textsl{In the
remainder I will assume these properties are robust.}
As usual in QPO research, it is difficult to get used to the
terminology and classification. It took some time to make sense of
\textsl{atolls}, \textsl{bananas} and \textsl{z}-\textsl{tracks}
(and sources!) and now we encounter the challenge of the X-ray
states of Black Hole Binaries. Not surprisingly Remillard is using
the classification defined in his monumental work with McClintock
\cite{mcrm}. We have therefore the \textsl{thermal}, \textsl{hard}
and \textsl{SPL} states. One might be slightly worried not seeing
the \textsl{thermal dominant (TD) state} \cite{mcrm} but fortunately
we are told that the thermal state is the \textsl{formerly
``high/soft" state"}, so \textsl{TD = thermal}. In any case the real
drama begins when one wishes to see what other specialists have to
say about the subject, e.g. Belloni (2005). There we find a
different classification into: an \textsl{LS} (Low/hard state), an
\textsl{HIMS} (Hard Intermediate State), a \textsl{SIMS} (Soft Intermediate
State) and an \textsl{HS} (High/Soft state). It seems that \textsl{HS=TD} and
\textsl{LS=hard} but in the two other cases relations are not clear.
This is not surprising because Belloni defines his states by the
transition properties and not by the state properties. In addition
Belloni (2005) classifies low frequency QPOs into A, B and C types,
whereas Remillard uses quantities $a$ and $r$, the rms amplitude
and power (note that it was Remillard who introduced type C QPOs).
Both approaches have their merits and one can understand why they
were introduced but they make life really difficult for people
trying to understand the physics of accretion flows. I am surprised
that Abramowicz complains only about the confusion introduced by
numerical simulations and not about the impenetrable jungle of X-ray
states and QPO terminology. I suspect he has given up on reading on
this subject.
However, Remillard convincingly shows that hfQPOs appear in the
SPL state and presents very interesting relations between the presence
of $2\nu_0$ and the $3\nu_0$ frequencies and the state of the system
as described by the disc flux and the power-law flux. As far as I
can tell this is ignored by the epicyclic theorists but this could
be the second text of the Rosetta stone. It is also a major
difficulty for the epicyclic resonance model. Since the SPL state is
characterized by a strong Comptonised component in the X-ray flux,
it is difficult to see how the flux modulation at the vertical
epicyclic frequency by gravitational-lensing could survive in such
an environment.
This brings me to the contribution by Barret and collaborators.
Recently Barret with a different (but intersecting) set of
collaborators \cite{barretal} made a fundamental discovery by
showing that the lower frequency kHzQPO in the neutron-star binary
4U 1608-52 is a highly coherent signal that can keep $Q\approx 200$
for $\sim 0.1$ s. They also found that the higher frequency kHzQPO
is fainter than its lower frequency counterpart and has lower $Q$.
Barret et al. (2005) showed very convincingly that no proposed QPO
model can account for such highly coherent oscillations. They can
all be rejected except for the ERM but only because the two resonant
oscillators have not yet been identified. In particular, they
rejected the modified beat-frequency model of Miller et al. (1998).
In Barret et al. another puzzling phenomenon is presented. They
found in three neutron-star binaries (including 4U 1608-52) showing
high-Q lower kHzQPOs that the coherence increases with frequency to
a maximum ($\sim 800$ Hz) after which it rapidly drops and QPOs
disappear. To me it looks like an effect related to the forcing
mechanism. Barret et al. link their observations to the ISCO basing
their claim on the Miller et al. (1998) model. There is half a
paragraph trying to explain (I think) how the model rejected in a
previous paper can be rejuvenated (or rather resuscitated) and used
to interpret the present observations. I read this part of the paper
several times and failed to understand its meaning. I had no problem
understanding the reasoning rejecting Miller et al. (1998).
In any case I also fail to understand why the Barret et al. (2005)
discovery of the high coherence of QPOs was not the central point of
the Nordita workdays. It is easy to miss a \textit{Mene, Mene,
Tekel, Uphar'sin} when looking for a Rosetta stone.
The main result of the excellent article on neutron-star boundary
layers by Gilfanov is that the kHzQPOs appear to have the same origin
as aperiodic and quasiperiodic variability at lower frequency. It
seems to be clear that the msec flux modulations originate on the
surface of the neutron star. Nota bene, I am surprised that the
remarkable and extremely relevant discovery of the universal
rms-flux correlation (Uttley 2004; Uttley et al. 2005) is not
mentioned in this context. Gilfanov
points out that the kHz clock could still be in the disc.
\subsection*{Disc simulations}
It is known that in stars some multimode pulsations may arise from
stochastic excitation by turbulent convection (see e.g. Dziembowski
2005). It is therefore legitimate to expect that in turbulent discs
similar effects could be found. Brandenburg presents very
interesting results obtained in the framework of the shearing-box
approximation of accretion disc structure. He obtains what he calls
stochastic excitation of epicycles. In his model the radial
epicyclic frequency is equal to the Keplerian frequency and the
vertical epicyclic frequency is not equal (or comparable) to the
p-mode frequency so it is not clear how close his results are to
what is happening in full-scale discs. But they are promising.
Another result concerning dissipation in discs requires more
investigation. According to Brandenburg, in MRI discs most of the
dissipation occurs in the corona, whereas in the forced hydrodynamic
case most of the dissipation occurs near the midplane. He claims
that his result, obtained in the isothermal case, has been shown
also for radiating discs. The disc model in question, however, was
radiation-pressure dominated while gas-pressure dominated models
\cite{millstone} do not seem to confirm the claim that MRI discs
release most of the energy in the corona.
\subsection*{The epicyclic resonance model}
The eleven contributions to the epicyclic resonance model contain two
general articles by the founders; the other papers on different
aspects of the model were written (except for the last contribution)
by younger members of the ERM team. All these
contributions are very well written, clear and to the point. I was
really impressed by their quality. They contain all one needs to
know about the ERM. As far as I know they were written by the
authors whose names appear explicitly on the paper and since they
are very careful in acknowledging other people's contributions I
recommend removing the ``et al.'s'' which give the impression that
the texts were written by a sect, or that they form a sort of
Norditan Creed. Fortunately this is not the impression one gets
reading the articles. They are professional, open to alternatives,
pointing out difficulties etc.
Of particular quality in this respect is the contribution by Paola
Rebusco. She presents the problem in a very clear way and
carefully chooses the (difficult) questions still to be answered.
Ji\v{r}\'{\i} Hor\'ak contributes two interesting articles. The
first discusses the 3:2 autoparametric resonance in the general
framework of conservative systems and shows that the amplitude and
frequency of the oscillations should be periodically modulated - a
result that might relate hfQPOs to lower frequency QPOs. The second
paper tries to explain the QPO modulations in neutron-star binaries
by a mechanism proposed by Paczy\'nski. It is not clear if such a
mechanism could achieve the high quality factors observed by Barret
et al. (2005) or how it relates to the oscillation forced by the
spinning neutron-star magnetic field. Three contributions deal with
various aspects of oscillating tori. Eva {\v S}r{\' a}mkov{\' a}
presents the preliminary results of her research on eigenvalues and
eigenfrequencies of slightly non-slender tori. She includes in her
paper a figure showing a transient torus appearing in a 3D
simulation of an accretion flow -- a rather touching testimony to the
ERM-team reliance on these elusive structures. William Lee uses SPH
simulations to study the response of a slender torus to external
periodic forcing. The results are a very interesting illustration of
the essentially nonlinear character of the coupling between the
radial and vertical modes (coupling through the sub-harmonic of the
perturbation: $1/2\nu_0$) and the rather fascinating phenomenon of
mode locking for a drifting torus. This can be relevant to the drift
of QPO frequencies observed in neutron-star binaries. Since his
contribution is devoted to these systems, mentioning ``stellar-mass
black holes" in the abstract is a bit misleading. Michal Bursa
expertly attacks the problem crucial for the ERM applied to black
holes: how to produce \textsl{two} modulations of the X-ray flux. By
using a toy model consisting of an optically thin, bremsstrahlung
emitting, oscillating slender torus he shows that strong-gravity
relativistic effects may produce the desired result. How would
things look in the case of an optically thick disc surrounded by a
comptonizing cloud is (probably) a different story. The last three
contributions deal with some aspects of hfQPO observations. Tomek
Bulik reanalyzes the somewhat controversial issue of the Sco X-1
kHzQPO clustering around the value corresponding to the frequency
ratio of 2/3. His skillful analysis shows that the clustering is
real.
Gabriel T{\"o}r{\"o}k has been entrusted with the somehow irksome
task of linking microquasar QPOs with those observed in Sgr A$^*$
and AGNs. Since the last category forms an empty set he could just
discuss why such observations would be important. Unfortunately the
prospect of detecting QPOs from AGNs is rather remote
\cite{vauguttl}. His valiant attempt to discuss determining
black-hole spin from hfQPOs was hindered by the uncertainties in
both data and models. But his is a very good short review of the
problem.
Because they are a general introduction to an unfinished
construction, the contributions by the founders are less
interesting. Abramowicz gives a general introduction to the
subject of accretion onto compact objects. In his (entirely
justified) efforts to rehabilitate his and collaborators' (to whom I
belong) fundamental contributions to the concept of ADAF, Abramowicz
went too far: he antedated the relevant Abramowicz et al. paper by
ten years and did not insert the Narayan \& Yi article into the
references. I think also that his claim that accretion theory today
experiences a period of confusion caused by supercomputer
simulations is exaggerated. The confusion is caused by (some)
astrophysicists hastily trying to apply to real objects whatever
comes out of the computer and not by the physicists making these
very impressive simulations. People who are confused should read the
excellent article by
Balbus (2005) -- a real guide for the
perplexed. However, Eq.~(2) of Abramowicz can create confusion since
it asserts that the radial epicyclic frequency is \textsl{larger}
than the vertical one. Luckily there is his Fig.~2 to sober us up.
Klu\'zniak, with his usual charming intellectual incisiveness,
describes his personal road to ERM. He is convinced that after
trying various roads which led nowhere, he finally chose the right
one. He knows it is uphill and very steep. But never send to know
for whom the disc tolls; it tolls for him. I wish him luck.
\acknowledgements I am grateful to Marek Abramowicz for inviting me
to write this report and to G\"unther R\"udiger for accepting this
risky idea.
\section{Introduction.}
The metal content of galaxies is an important diagnostic because it
relates directly to the integral history of star formation, galaxy
mass, and the inward and outward flows of gas
\citep[see reviews by][ or, for a review on chemical evolution models, see \citealp{cen_ost_99}]
{ostlin00, pagel97}. Local
studies reveal the existence of a luminosity-metallicity relation (LZR)
\citep{lequeux79,skill89_1,kindav81,rich_call95,camposa93} that
presumably arises from the higher retention rate of enriched gas
in the gravitational wells of galaxies with larger masses, where the
assumption is that more luminous galaxies are also more massive. The
luminosity-metallicity relation (LZR) is expected to evolve over the lifetime of galaxies, but any predicted changes
in the slope, offset, and dispersion of the LZR are subject to many uncertainties.
Observations of the metallicity of galaxies at intermediate
redshifts $z > 0.5$ have been few and include three studies of field
galaxies at $z \sim $ 0.5 to 1 by \citet[henceforth K03]{kob03}, \citet[henceforth L03]{cfrs_lilly},
and \citet[henceforth KK04]{koke04} and a few targets at very
high redshifts $z \sim 2.5$ by, e.g., \citet{pett01} and \citet[henceforth KK00]{kob_koo_2000}.
The intermediate-redshift studies suggest that, at a given metallicity, galaxies
were typically more luminous in the past, while
the high-redshift samples show metallicities that are sub-solar with
luminosities 5--40 times brighter than local galaxies of comparable metallicity.
The distant galaxy metallicities in these studies were all based on the
[O/H]\footnote{We will henceforth refer to 12+log(O/H) as ``[O/H]''.} of the
emission lines and estimated from the empirical $R_{23}$ method introduced
by \citet{pagel_r23}, and further developed by \citet{mcgaugh91} and
\citet{py00}, among others. No galaxies had less than 1/3 solar
abundances, but this was in part due to the \emph{assumption} of using the
metal-rich (upper) branch of the $R_{23}$-metallicity relation. This
letter presents a new sample of distant galaxies selected for the presence
of the [\ion{O}{3}]$\lambda$ 4363 \AA \ auroral line. This line is sensitive to
electron temperatures \citep{ost89} and can, together with
$H_{\beta}$ and other oxygen lines, provide reliable gas metallicities
without prior assumptions about the ionization or the metallicity branch.
This selection also strongly favors [O/H] abundances less than $\sim 1/3$
solar and has enabled us to discover a new distant sample of luminous metal-poor
field galaxies. We summarize our observations and
measurements in \S2; we present our data analysis in \S3 and compare our results
to the LZR derived from previous studies of field galaxies. The main conclusions of this
study are presented in \S4.
We adopt the concordance cosmology, i.e., a flat Universe
with $\Omega_{\Lambda} = 0.7$ and $h = 0.7$. Magnitudes are all on
the Vega system.
\section{Observations \& Measurements}
\label{samp_met}
Galaxies were selected by inspection of reduced spectra from two
redshift surveys of faint field galaxies, DEEP2 and TKRS, both using
the DEIMOS spectrograph
\citep{faber03} on the 10-meter Keck II Telescope. DEEP2
\citep{deep_deimos} spectra had a total exposure time of one hour, covered the
wavelength range $\sim$ 6400-9000 \AA \ with the 1200 mm$^{-1}$
grating, and yielded FWHM resolutions of around 60 km s$^{-1}$. The initial
DEEP2 sample consisted of 3900 galaxies, 1200 of which had redshifts that
allowed the [\ion{O}{3}]$\lambda 4363$ and
[\ion{O}{3}]$\lambda 4959$ lines in principle to be observed. This search
yielded 14 galaxies, or about 1\%, that display the weak auroral line
[\ion{O}{3}]$\lambda 4363$ along with prominent oxygen emission lines.
The TKRS \citep{wirth04} is a
similar one hour survey targeting the GOODS-North field \citep{giava04}.
It used the 600 mm$^{-1}$ grating, and covered a wider range of wavelengths
(4600--9800 \AA), but had a lower FWHM resolution of 170 km s$^{-1}$.
This survey yielded 1536 galaxies with reliable redshifts.
For 1090 galaxies, the redshifts
allowed the [\ion{O}{3}]$\lambda 4363$ and
[\ion{O}{3}]$\lambda 4959$ lines to be observable.
Of these, three galaxies, or 0.3\%, showed the auroral line and had
redshifts above $z \sim 0.5$.
Fig. \ref{spectrum_image} shows one example along with its HST
image. Table \ref{tabla} identifies all 17 targets, henceforth called
the O-4363 sample, and tabulates the measurements
described below.
\begin{figure}
\plotone{f1.eps}
\caption{
Spectrum of a low-metallicity (1/10 solar) galaxy at redshift $z = 0.68$ (TK-2 in Table 1)
showing the temperature-sensitive [\ion{O}{3}]$\lambda$4363 line used to identify the sample and the
other lines used to measure the gas phase abundance [O/H]. From top to bottom, (i) the [\ion{O}{2}]$\lambda$3727 line, (ii) the
H$\gamma$ and [\ion{O}{3}]$\lambda$4363 lines, and (iii) the $H_{\beta}$ and
[\ion{O}{3}]$\lambda, \lambda$4959,5007 lines. The $HST$ ACS image is taken in the $F814W$ filter
(close to rest frame $B$); North is up, and East is to the left. The image is
3 \arcsec $\times$ 3\arcsec (18 kpc $\times$ 18 kpc). The half-light radius of this galaxy is 0.7 kpc. The thick, dark
line shown represents 2 kpc.
\label{spectrum_image}
}
\end{figure}
The [O/H] metallicities are derived from emission lines,
including the temperature sensitive [\ion{O}{3}]$\lambda 4363$ line
along with [\ion{O}{2}]$\lambda3727$, $H_{\gamma}$, $H_{\beta}$ and
[\ion{O}{3}]$\lambda,\lambda 4959,5007$. For the DEEP2 sample, only
4/14 galaxies possessed the full set of lines, while 10 had
the [\ion{O}{2}]$\lambda 3727$ line outside the observable
wavelength range. All oxygen lines were detected for the 3 TKRS
galaxies. When [\ion{O}{2}]$\lambda 3727$ was unobservable, its line strength was
estimated using the following fit to local \ion{H}{2} galaxies, with errors
about 50\% larger than from using direct [\ion{O}{2}] measurements (A. D\'{\i}az, private communication):
\begin{equation}
\log \frac{\mathrm{[OIII]}}{\mathrm{[OII]}}= (0.877\pm0.042)\times \log EW(H\beta) -1.155 \pm 0.078
\end{equation}
\noindent
The electron temperature in the [\ion{O}{2}] zone was then derived
according to the method given in \cite{epm_diaz}, while the oxygen
abundances were all calculated using the formulae given in
\cite{pagel92.orig}. Objects showing [\ion{O}{2}]$\lambda$ 3727
have abundance uncertainties set to 0.1 dex, while the others
have uncertainties of 0.15 dex.
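As a worked illustration of eq.~(1), the following sketch (in Python; a
minimal illustration with made-up numbers, not the reduction pipeline actually
used) estimates the missing [\ion{O}{2}]$\lambda 3727$ strength from a measured
EW($H_{\beta}$) and the summed [\ion{O}{3}] strengths:
\begin{verbatim}
import numpy as np

def oii_from_ew_hbeta(ew_hbeta, f_oiii):
    """Estimate the [O II] 3727 strength from EW(H-beta) [Angstrom] and
    the summed [O III] 4959+5007 strength, via the fit of eq. (1)."""
    log_ratio = 0.877 * np.log10(ew_hbeta) - 1.155   # log([O III]/[O II])
    return f_oiii / 10.0**log_ratio

# Example with hypothetical values: EW(H-beta) = 60 A, [O III] flux = 1.0
print(oii_from_ew_hbeta(60.0, 1.0))   # ~0.39 in the same flux units
\end{verbatim}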
Blue absolute magnitudes ($M_{B}$) and rest-frame \ub \ colors were calculated
from the \textit{BRI} photometry \citep{coil04} in DEEP2 and the 4-band \emph{HST-ACS} photometry
in the GOODS-N field of the TKRS, with K-corrections following
those described by Willmer et al. (2005). Half-light
radii, $R_e$, were estimated from curve-of-growth profiles derived
from multi-aperture photometry of the \emph{HST ACS} image taken with the filter
that yielded the closest match to restframe $B$ at the target's redshift.
The star formation rates (SFR) were calculated from the $H_{\beta}$ luminosity
as in \citet{ken94}, valid for $T_{e}=10^{4}$~K and case B
recombination.
Since the DEIMOS spectra are not flux calibrated,
the $H_{\beta}$ line luminosity was estimated via $M_{B}$ and $EW(H_{\beta})$ following \cite{jmelnickxx},
with no extinction or color corrections. The derived luminosities and SFR are thus lower limits.
\input{tab1.tex}
\section{The Luminosity-Metallicity Relation (LZR)}
\label{sci1}
The key result is seen in the [O/H] \textsl{vs.} $M_{B}$ relation in
Fig. \ref{lz_diagram}, which shows that [O/H]
for the 17 galaxies in our O-4363 sample is 1/3 to 1/10 the solar
value of [O/H]$_{\odot}$ = 8.69 \citep{allende01}.
While the O-4363 galaxies have luminosities close to
$L^{*}$ ($M_B \sim -20.4$ locally), they are offset to lower metallicities by about 0.6 dex
in [O/H] when compared to 180 other $z \sim 0.7$ field galaxies studied by
K03, L03, and KK04. All these studies used empirical calibrations, such as $R_{23}$, and
adopted the upper, metal-rich branch\footnote{The K03 sample had 25 galaxies in the redshift
range $0.60 < z < 0.81$; L03 had 55 galaxies between 0.48 and 0.91; and KK04 had 102
between 0.55 and 0.85. }.
\begin{figure}
\plotone{f2.eps}
\caption{
(a)~LZR diagram showing the intermediate-redshift samples as marked in the inset.
The [O/H] values for K03 and L03 were rederived using the
\citet{py00} calibration, placing all surveys on the same system. We estimate the errors to be about
0.2 dex for these data. In some fraction of cases, the resulting metallicities fell below 8.35, which is approximately
the limit between the high-metallicity branch and the lower-metallicity branch. However, given
the huge scatter of the $12+\log \mathrm{O/H}$--$R_{23}$ relationship in this regime ($\sim$0.4 dex), they
might still be compatible with the use of the upper branch. For this reason, their oxygen abundance is fixed
at 8.35. The average position of these sources is given by the black solid square with error bars. The dashed dIrr line
is the average LZR found for local dIrr \citep{skill89_1, rich_call95} while the solid
XBCG line is the LZR for local, metal-poor, blue compact galaxies \citep{ostlin00}.~~~
(b)~LZ diagram showing possible local or high-redshift counterparts to the O-4363
sample. The three most metal-poor galaxies known are identified by
name. Besides keeping the dIrr and XBCG LZRs from panel~(a), we show
the LZR of two local, emission-line galaxy samples: one from KISS \citep{mel02} and the other from 2dF \citep{lamareille04}.
The big, open triangle is for the $z = 3.36$ lensed galaxy \citep{vi-mar04} and the big open circle is
the average position of LBGs at $z \sim 3$ (KK00). Five local XBCGs from \citet{bergvost02} are shown
as open squares.
\label{lz_diagram}
}
\end{figure}
For galaxies at $z\sim0.7$, the oxygen abundances derived here are the first using the direct method
based on the temperature-sensitive [\ion{O}{3}]$\lambda 4363$ line. Our discovery
of luminous galaxies with low [O/H] gas metallicities suggests that the
metal-rich branch should be adopted with caution when using, e.g., the $R_{23}$
method. Such an assumption precludes finding [O/H] below $\sim$8.4.
If the empirical $R_{23}$ method and upper branch assumption were to be
applied to the O-4363 sample, [O/H] would be greater by about 0.4 $\pm$ 0.2 dex, nearly enough
to place the O-4363 points atop the mean LZR of the $z \sim 0.7$ field galaxies (see Fig. 2).
\lastpagefootnotes
What fraction of the three other moderate-redshift samples is
actually metal-poor? One estimate adopts two criteria suggested by
the O-4363 sample to identify metal-poor galaxies. The first is based
on calculating $R_{23,0}$\footnote{Defined as the $R_{23}$ value that
an ionized \ion{H}{2} region would show if the reddening-corrected
ionization ratio [\ion{O}{3}]$\lambda, \lambda
4959,5007$/[\ion{O}{2}]$\lambda 3727$ were equal to one, leaving the
oxygen content unchanged. We used the \citet{py00} calibration for
the upper branch. In the case of the L03 objects, a uniform
extinction of $c(H\beta)=0.50$ was used. This value was adopted since this is the
average extinction found for emission-line galaxies in the Nearby Field Galaxy
Survey of similar luminosity \citep{cfrs_lilly}.
For both K03 and KK04 sources, EW's were used as surrogates for line strengths, but with no extinction
corrections for K03 and a uniform extinction of $c(H\beta)=0.40$ for
KK04. In this latter case, we have used the mean value of a very large sample
of bright local \ion{H}{2} galaxies from \citet{hoyos05}.} for all galaxies. The O-4363 galaxies
have $R_{23,0} > 5$. This places them near the turnaround region of the
$R_{23}$--[O/H] relation, where a small range in $R_{23}$ spans a wide
range in metallicity. Our discovery of distant, luminous, metal-poor
galaxies in this region implies that the other distant samples may
also have such metal-poor galaxies.
The second criterion is based on large EWs of $H_{\beta}$. The O-4363 sample
yields EWs greater than 40 \AA \ for all but one object
\footnote{TK-3 with TKRS catalogue ID-3653 has an unusually low
EW($H_{\beta}$) of about 20 \AA. Its ionization ratio
[\ion{O}{3}]$\lambda,\lambda 4959+5007$/[\ion{O}{2}]$\lambda 3727$ is
approximately 0.7, and the ratio EW([\ion{O}{2}]$\lambda
3727$)/EW($H_{\beta}$) is $5\pm1$. These values indicate that this
object is probably a Seyfert 2 galaxy, according to
\citet{rola97}. For all other objects for which the \citet{rola97}
diagnostics could be calculated, all tests indicate that they are
normal star-forming galaxies.}. This additional criterion selects
those galaxies in the turnaround region that were most likely to be
metal-poor. In the other distant galaxy surveys, we found 13 galaxies
(7\%) with high EW's of $H_{\beta}$ and $R_{23,0}$, that together
suggest low-metallicities. This 7\% fraction is a lower limit since
some metal-poor galaxies may be outside the turnaround region or may
have smaller EW of $H_{\beta}$. In any case, independent tests are
critical to assess the true fraction of intermediate redshift galaxies
that have abundances below the upper branch, e.g., by observing
[\ion{N}{2}]/$H_{\alpha}$ in the near-infrared at our redshifts as
suggested by \citet{kewdop02} and \citet{denic02}.
What is the nature of our O-4363 sample, and do such galaxies exist
locally or at higher redshifts? Fig. 2b shows several relevant LZRs
from local to distant samples. One sees that relatively common samples
of emission line galaxies, such as those of local dIrr, the $z \sim 0.7$
galaxies from K03, L03, and KK04, and the local emission line
galaxies from 2dF or KISS surveys all have LZRs that are offset to metallicities
higher than that of the O-4363 sample\footnote{The comparison with the latter
samples of local galaxies should not be taken beyond $\mathrm{[O/H]}=9.0$, because
the KISS and 2DF abundances at high luminosities are clearly
too high \citep{pett_pagel04}. It is then only below $\mathrm{[O/H]}=9.0$
($M_{B}\geq -20.5$) that valid comparisons can be made between our O-4363 sample
and the KISS or 2DF samples. Fortunately, most of the $z\sim 0.7$ O-4363 objects are less
luminous than this limit.}.
On the other hand, the O-4363 galaxies are far better matches to local XBCG's and even to the luminous Lyman Break
Galaxies (LBG) at redshifts $z \sim 2.5$ (KK00), or to a
gravitationally-lensed galaxy at redshift $z = 3.36$, which has a
metallicity of 1/10 solar, a blue absolute magnitude of -21.0
and a SFR of 6 $M_{\odot}$yr$^{-1}$ \citep{vi-mar04}\footnote{This object is rather
extreme, being 1.0 dex below L03, K03 and KK04 objects of similar luminosity.}. We do not find any
correlation of the residuals of
the O-4363 sample with respect to the LZR of local Extreme Blue Compact
Galaxies (XBCG) \citep{ostlin00} with \ub \ color, strength of
H$_{\beta}$, or internal
velocity dispersion (see Table \ref{tabla}).
Much like the XBCG and LBG, the O-4363 galaxies
may belong to the compact class of galaxies. But this suggestion is
based presently on the small 1--2~kpc sizes seen in the only three
galaxies with \emph{HST} images. Moreover, the emission-line velocity
widths (see Table 1.) are narrow and suggest that the O-4363 galaxies
are more likely to be galaxies with small dynamical masses; the very blue colors and
high star formation rates suggest a recent, strong burst of star
formation. Overall,
the trend suggested by Fig. 2b is that the most luminous, metal-poor galaxies are getting fainter
with time.
\section{Summary.}\label{discu}
Based on a search for the [\ion{O}{3}]$\lambda 4363$ emission line in
the TKRS and initial DEEP2 surveys of field galaxies, we have
discovered 17 galaxies at redshift $z \sim 0.7$ that are luminous,
very blue, compact, and metal poor, roughly 1/3 to 1/10 solar in
[O/H]. Though rare, such metal-poor galaxies highlight the diversity
among galaxies with similar luminosities and serve as important
laboratories to study galaxy evolution \citep{ostlin00}. This sample
is lower in [O/H] by 0.6 dex on average in the LZR when compared to
prior studies at these redshifts, which used empirical calibrations,
such as $R_{23}$.
The previous studies, however, assumed the metal-rich
branch of the calibration, while our results show that this assumption
may not apply, even for luminous galaxies, especially when high values
of EW(H$_{\beta}$) and $R_{23}$ are found (roughly 7\% of the other
samples). Based on comparisons to local and high redshift samples, we
speculate that our metal-poor, luminous galaxies at $z \sim 0.7$
provide an important bridge between local Extreme Blue
Compact Galaxies (XBCGs) and Lyman Break Galaxies (LBGs) at redshifts
$z \sim 3$. All three samples share the property of being overluminous for their
metallicities, when compared to local galaxy samples, and of having very high EWs of $H_{\beta}$, typically above
40 \AA \ and up to 150 \AA \ and, thus, similar to that found for some
local \ion{H}{2} galaxies \citep{T91,hoyos05}. The calculated star
formation rates of the O-4363 galaxies are mostly from 5 to 12
$M_{\odot}$yr$^{-1}$, indicating that the star-forming activity is
very strong (cf. the SFR of 30 Doradus is 0.1 $M_{\odot}$yr$^{-1}$)
and thus lying roughly between that of local metal-poor, blue compact
galaxies and distant Lyman Break Galaxies. When DEEP2 is complete, we
expect to have a sample nearly 10$\times$ larger, many with \emph{HST}
images. The resulting data set should thus provide vastly improved probes of
their nature, enable us to understand
their relationship to other classes of galaxies at different
epochs, and yield constraints on the physical processes involved in chemical and
galaxy evolution.
\acknowledgments
We thank A. I. D\'\i az and R. Guzm\'an for useful discussions.
We acknowledge support from
the Spanish DGICYT grant AYA-2000-0973, the MECD FPU grant AP2000-1389,
NSF grants AST 95-29028 and AST 00-71198, NASA grant AR-07532.01-96,
and the New Del Amo grant. We close with thanks to the Hawaiian people for
allowing us to use their sacred mountain.
\section{Introduction} \label{sec:introduction}
The field of extrasolar planet research has recently made a leap forward with the direct detection of extrasolar giant planets (EGPs). Using the Spitzer Space Telescope, \citet{Charbonneau05} and \citet{Deming05} have detected infrared photons from two transiting planets, TrES-1b and HD209458b, respectively. \citet{Chauvin04,Chauvin05} have reported the infrared imaging of an EGP orbiting the nearby young brown dwarf 2M1207 with VLT/NACO, whereas \citet{Neuhauser05} have collected evidence for an EGP companion to the T-Tauri star GQ Lup using VLT/NACO as well.
Although there are claims that the direct detection of terrestrial planets could be performed from the ground with -- yet to come -- extremely large telescopes \citep{Angel03,Chelli05}, it is widely believed that success will be more likely in space. Direct detection is the key to spectroscopy of planetary atmospheres and discovery of biomarkers, namely indirect evidence of life developed at the planetary scale \citep[e.g.][]{DesMarais02}.
Both NASA and ESA have space mission studies well underway to achieve this task. Darwin, the European mission to be launched in 2015, will be a thermal infrared nulling interferometer with three 3.5-m free-flying telescopes \citep{Karlsson04}. Terrestrial Planet Finder, the American counterpart, will feature two missions: a $8\!\times\!3.5$~m-monolithic visible telescope equipped with a coronagraph (TPF-C) to be launched in 2015, and an analog to Darwin (TPF-I) to be launched in the 2015--2019 range \citep{Coulter04}.
The direct detection of the photons emitted by a terrestrial planet is made very challenging by the angular proximity of the parent star, and by the very high contrast (i.e. luminosity ratio) between the planet and its star: about $10^6$ in the thermal infrared and about $ 10^{10}$ in the visible. Both wavelength ranges have their scientific merits and technical difficulties, and both of them are thought to be necessary for an unambiguous detection of habitability and signs of life \citep[e.g.][]{DesMarais02}. In this paper, we deal with the visible range only.
In the visible, planet detection faces two fundamental noise sources: (i) quantum noise of the diffracted star light, and (ii) speckle noise due to the scattering of the star light by optical defects. \citet{Labeyrie95} proposed a technique based on \emph{dark speckles} to overcome speckle noise: random fluctuations of the atmosphere cause the speckles to interfere destructively and disappear at certain locations in the image, thus creating localized dark spots suitable for planet detection. The statistical analysis of a large number of images then reveals the planet as a spot persistently brighter than the background.
\citet{Malbet95} proposed to use a deformable mirror (DM) instead of the atmosphere to make speckles interfere destructively in a targeted region of the image called \emph{search area} or \emph{dark hole} (DH or $\mathcal{H}$). Following the tracks of these authors, this paper discusses methods to reduce the speckle noise below the planet level by using a DM and an ideal coronagraph. However, unlike \citet{Malbet95}, we propose non-iterative algorithms, in order to limit the number of long exposures needed for terrestrial planet detection. We will refer to these methods as \emph{speckle nulling} techniques, as \cite{Trauger04} call them. Technical aspects of this work are inspired by the High Contrast Imaging Testbed \cite[HCIT;][]{Trauger04}, a speckle-nulling experiment hosted at the Jet Propulsion Laboratory, specifically designed to test TPF-C related technology.
After reviewing the process of speckle formation to establish our notations (\S\ref{sec:speckle_formation}), we derive two speckle nulling methods in the case of small aberrations (\S\ref{sec:speckle_nulling}). The speckle nulling phase is preceded by the measurement of the electric field in the image plane (\S\ref{sub:measurement}). The performance of both methods are then evaluated with one- and two-dimensional simulations (\S\ref{sec:simulations}), first with white speckle noise (\S\ref{sub:sim_white}), then with non-white speckle noise (\S\ref{sub:sim_real}). Various effects and instrumental noises are considered in \S\ref{sec:discussion}. Finally, we conclude and discuss some future work (\S\ref{sec:conclusion}).
\section{Speckle formation} \label{sec:speckle_formation}
This paper is written in the framework of Fourier optics considering a single wavelength, knowing that a more sophisticated theory (scalar or vectorial) in polychromatic light will eventually be needed. Fourier transforms (FTs) are signaled by a hat.
Let us consider a simple telescope with an entrance pupil $\mathcal{P}$. In the pupil plane, we use the reduced coordinates $(u,v) = (x/\lambda,y/\lambda)$, where $(x,y)$ are distances in meters and $\lambda$ is the wavelength. We define the pupil function by
\begin{equation} \label{eq:P}
P(u,v) \equiv
\left \{
\begin{array}{l}
1 \mbox{ if } (u,v) \in \mathcal{P}, \\
0 \mbox{ otherwise.}
\end{array}
\right.
\end{equation}
Even in space, i.e. when not observing through a turbulent medium like the atmosphere, the optical train of the telescope is affected by phase and amplitude aberrations. Phase aberrations are wavefront corrugations that typically originate in mirror roughness caused by imperfect polishing, while amplitude aberrations are typically the result of a heterogeneous transmission or reflectivity. Moreover, Fresnel propagation turns phase aberrations into amplitude aberrations, and the reverse \citep[e.g.][]{Guyon05b}. Regardless of where they originate physically, all phase and amplitude aberrations can be represented by a complex aberration function $\phi$ in a re-imaged pupil plane, so that the aberrated pupil function is now $P e^{i \phi}$.
The electric field associated with an incident plane wave of amplitude unity is then
\begin{equation} \label{eq:E_pu}
E(u,v) = P(u,v)\,e^{i \phi(u,v)}.
\end{equation}
Exoplanet detection requires that we work in a regime where aberrations are reduced to a small fraction of the wavelength. Once in this regime, we can replace $e^{i \phi}$ by its first order expansion $1 + i \phi$ (we will discuss in \S\ref{sub:discuss_linearity} the validity of this approximation). The electric field in the image plane being the FT of (\ref{eq:E_pu}), we get
\begin{equation} \label{eq:E_im}
\widehat{E}(\alpha,\beta) = \widehat{P}(\alpha,\beta) + i \, \widehat{P\phi}(\alpha,\beta),
\end{equation}
where $(\alpha,\beta)$ are angular coordinates in the image plane.
The physical picture is as follows. The first term ($\widehat{P}$) is the direct image of the star. The second term ($\widehat{P\phi}$) is the field of speckles surrounding the central star image, where each speckle is generated by the equivalent of first-order scattering from one of the sinusoidal components of the complex aberration $\phi$. Each speckle is essentially a ghost of the central PSF.
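To make this picture concrete, the following sketch (Python/NumPy; the pupil
size and phase screen are purely illustrative) propagates a small random phase
screen to the image plane and checks that the full field agrees with the
linear expansion of eq.~(\ref{eq:E_im}) up to terms of order $\phi^{2}$:
\begin{verbatim}
import numpy as np

n = 256
u = np.arange(n) - n // 2
P = ((np.abs(u)[:, None] < n // 8) &
     (np.abs(u)[None, :] < n // 8)).astype(float)   # square pupil
rng = np.random.default_rng(0)
phi = 1e-2 * rng.standard_normal((n, n))            # small phase screen [rad]

E_exact  = np.fft.fftshift(np.fft.fft2(P * np.exp(1j * phi)))
E_linear = np.fft.fftshift(np.fft.fft2(P * (1.0 + 1j * phi)))  # P-hat + i(P phi)-hat

# The two fields agree to second order in phi:
print(np.abs(E_exact - E_linear).max() / np.abs(E_exact).max())
\end{verbatim}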
In the remainder of this paper, we focus on means to measure and correct the speckles in a coronagraphic image. Following \cite{Malbet95}, we will leave out the unaberrated PSF term by assuming that it was canceled out by a coronagraph of some sort (see \citet{Quirrenbach05} for a review on coronagraphs). Thus we clearly separate the gain in contrast that can be obtained by reducing the diffracted light with the coronagraph on one hand, and by fighting the scattered light with the speckle nulling technique on the other hand.
\section{Speckle nulling theory} \label{sec:speckle_nulling}
The purpose of speckle nulling is to reduce the speckle noise in a central region of the image plane. This region, the dark hole, then becomes dark enough to enable the detection of companions much fainter than the original speckles. Speckle nulling is achieved by way of a servo system that has a deformable mirror as actuator. Because our sensing method requires DM actuation and is more easily understood once the command control has been described, we first model the deformable mirror (\S\ref{sub:deformable_mirror}), then present two algorithms for the command control (\S\ref{sub:field_nulling} \& \ref{sub:energy_min}), and conclude with the sensing method (\S\ref{sub:measurement}).
\subsection{Deformable mirror} \label{sub:deformable_mirror}
The deformable mirror (DM) in \cite{Trauger03} consists of a continuous facesheet supported by $N\!\times\!N$ actuators arranged in a square pattern of constant spacing. This DM format is well adapted to either square or circular pupils, the only pupil shapes that we consider in this paper\footnote{Two square DMs can be assembled to accommodate an elliptical pupil such as the one envisioned for TPF-C.}. We assume that the DM is physically located in a plane that is conjugate to the entrance pupil. However, what we call DM in the following is the projection of this real DM in the entrance pupil plane. The projected spacing between actuators is denoted by $d$. We assume that the optical magnification is such that the DM projected size is matched to the entrance pupil, i.e. $Nd = D$, where $D$ is either the pupil side length or its diameter. The DM surface deformation in response to the actuation of actuator ${(k,l) \in \{0 \ldots N\!-\!1 \}^2}$ is described by an \emph{influence function}, denoted by $f_{kl}$. The total phase change introduced by the DM (DM phase function) is
\begin{equation} \label{eq:psi}
\psi(u,v) \equiv \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} a_{kl}\,f_{kl}(u,v),
\end{equation}
where $a_{kl}$ are actuator strokes (measured in radians). Note that contrary to the complex aberration function $\phi$, the DM phase function is purely real.
With an ideal coronagraph and a DM, the image-plane electric field formerly given by (\ref{eq:E_im}) becomes
\begin{equation} \label{eq:E_im2}
\widehat{E}'(\alpha,\beta) = i\,\widehat{P\phi}(\alpha,\beta) + i\,\widehat{P\psi}(\alpha,\beta).
\end{equation}
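In code, eq.~(\ref{eq:E_im2}) can be sketched as follows (Python/NumPy; the
stroke array and sampled influence functions are assumed given, and the ideal
coronagraph is modeled as simply removing the $\widehat{P}$ term):
\begin{verbatim}
import numpy as np

def dm_phase(strokes, infl):
    """DM phase psi(u,v) = sum_kl a_kl f_kl(u,v) on an n x n pupil grid.
    strokes : (N, N) real actuator strokes a_kl [rad]
    infl    : (N, N, n, n) sampled influence functions f_kl(u, v)"""
    return np.tensordot(strokes, infl, axes=([0, 1], [0, 1]))

def corrected_field(P, phi, psi):
    """Image-plane field of eq. (E_im2): the ideal coronagraph removes
    P-hat, leaving i*(P phi)-hat + i*(P psi)-hat (first order)."""
    return 1j * np.fft.fftshift(np.fft.fft2(P * (phi + psi)))

# hypothetical example: N = 8 actuators on a 64 x 64 pupil grid
N, n = 8, 64
rng = np.random.default_rng(0)
infl = rng.random((N, N, n, n))
psi = dm_phase(np.zeros((N, N)), infl)   # flat DM: psi = 0
\end{verbatim}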
In the next two sections, we explore two approaches for speckle nulling. In \S\ref{sub:field_nulling}, we begin naively by trying to cancel $\widehat{E}'$. Because there is a maximum spatial frequency that the DM can correct for, the DH has necessarily a limited extension. Any energy at higher spatial frequencies will be aliased in the DH and limit its depth. Therefore, the DM cannot be driven to cancel $\widehat{E}'$, unless $\widehat{P\phi}$ is equal to zero outside the DH (i.e. unless there are already no speckles outside the DH). With this in mind, we start over in \S\ref{sub:energy_min} with the idea that speckle nulling is better approached by minimizing the field energy.
\subsection{Speckle field nulling} \label{sub:field_nulling}
The speckle field nulling approach consists in trying to null out $\widehat{E}'$ in the DH region ($\mathcal{H}$), meaning we seek a solution to the equation
\begin{equation} \label{eq:field1}
\forall (\alpha,\beta) \in \mathcal{H}, \quad \widehat{P\phi}(\alpha,\beta) + \widehat{P\psi}(\alpha,\beta) = 0,
\end{equation}
although, as we shall show, this equation has no exact solution unless $\widehat{P\phi}$ happens to be a band-limited function within the controllable band of the DM.
By replacing $\psi$ with its expression (\ref{eq:psi}), we obtain
\begin{equation} \label{eq:field2}
\forall (\alpha,\beta) \in \mathcal{H}, \quad \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} a_{kl}\,\widehat{Pf_{kl}}(\alpha,\beta)
= - \widehat{P\phi}(\alpha,\beta).
\end{equation}
We recognize in (\ref{eq:field2}) a linear system in the $a_{kl}$ that could be solved using various techniques such as singular value decomposition \citep[SVD;][\S2.6]{Press02}. Although general, this solution does not provide much insight into the problem of speckle nulling. For this reason, let us now examine a different solution, less general but with more explanatory power. We will comment on the use of SVD at the end of this section.
We consider a square pupil. In this case, all DM actuators receive light and the pupil function has no limiting effect on the DM phase function, i.e. $P\psi = \psi$. Moreover, we assume all influence functions to be identical in shape, and write $f_{kl}(u,v) = f(u-k\frac{d}{\lambda},v-l\frac{d}{\lambda})$. Under these hypotheses,
\begin{equation} \label{eq:psi2}
P\psi(u,v) = f(u,v) \ast \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} a_{kl} \, \delta \! \left( u - k \frac{d}{\lambda}, v - l \frac{d}{\lambda} \right),
\end{equation}
where $\delta$ is Dirac's bidimensional distribution, and $\ast$ denotes the convolution.
Substituting $\widehat{P\psi}$ by the FT of (\ref{eq:psi2}) in (\ref{eq:field1}) yields
\begin{equation} \label{eq:field3}
\forall (\alpha,\beta) \in \mathcal{H}, \quad \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} a_{kl}\,e^{-i \frac{2\pi d}{\lambda} (k \alpha + l \beta)}
= - \frac{\widehat{P\phi}(\alpha,\beta)}{\hat{f}(\alpha,\beta)}.
\end{equation}
We recognize in the left-hand side of (\ref{eq:field3}) a truncated Fourier series. If we choose the $a_{kl}$ to be the first $N^2$ Fourier coefficients of $-\widehat{P\phi}/\hat{f}$, i.e.
\begin{equation} \label{eq:coef1}
a_{kl} = \frac{2d^2}{\lambda^2} \int\!\!\!\int_{{[-\frac{\lambda}{2d}, \frac{\lambda}{2d}]}^2} -\frac{\widehat{P\phi}(\alpha,\beta)}{\hat{f}(\alpha,\beta)} \:
e^{i \frac{2\pi d}{\lambda} (k \alpha + l \beta)} \, \mathrm{d}\alpha \, \mathrm{d}\beta,
\end{equation}
then according to Fourier theory, we minimize the mean-square error between both sides of the equation \cite[see e.g.][\S1.5]{Hsu67}. This error cannot be reduced to zero unless the Fourier coefficients of $-\widehat{P\phi}/\hat{f}$ happen to vanish for $k,l < 0$ or $k,l \geq N$. At this point, we have reached the important conclusion that \emph{perfect speckle cancellation cannot be achieved with a finite-size DM unless the wavefront aberrations are band-limited}. Moreover, we can assert that the maximum DH extension is the square domain $\mathcal{H} \equiv {[-\frac{\lambda}{2d}, \frac{\lambda}{2d}]}^2 = {[-\frac{N}{2}\frac{\lambda}{D}, \frac{N}{2}\frac{\lambda}{D}]}^2$.
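Numerically, these Fourier coefficients can be read off an inverse FFT, as in
the following sketch (Python/NumPy; a minimal illustration with a hypothetical
flat influence-function transform, which glosses over the exact normalization
of eq.~(\ref{eq:coef1}) and the continuous-to-discrete sampling):
\begin{verbatim}
import numpy as np

def fourier_strokes(Pphi_hat, f_hat):
    """Approximate the first N^2 Fourier coefficients of -Pphi_hat/f_hat
    over the dark hole, sampled on an N x N grid of image-plane points.
    For phase-only aberrations Pphi_hat is Hermitian, so the imaginary
    part of the result vanishes up to roundoff."""
    a = np.fft.ifft2(np.fft.ifftshift(-Pphi_hat / f_hat))
    return a.real

# hypothetical inputs: N = 32 actuators, flat influence-function transform
N = 32
rng = np.random.default_rng(1)
Pphi_hat = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
strokes = fourier_strokes(Pphi_hat, np.ones((N, N)))
\end{verbatim}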
Solution (\ref{eq:coef1}) is physically acceptable only if the Fourier coefficients are real numbers, which means mathematically that $\widehat{P\phi}/\hat{f}$ should be Hermitian\footnote{A function $f$ is said to be Hermitian if $ \forall (x,y), \: f(x,y) = f^\ast(-x,-y)$. The FT of a real function is Hermitian and vice versa.}. If there are phase aberrations only, $P\phi$ is real, $\widehat{P\phi}/\hat{f}$ is Hermitian, and the $a_{kl}$ are real. This is no longer true if there are amplitude aberrations as well, reflecting the fact that the DM alone cannot correct both phase and amplitude aberrations in $\mathcal{H}$. However, by considering the Hermitian function that is equal to $\widehat{P\phi}/\hat{f}$ in one half of the DH, say $\mathcal{H}^+ \equiv [0,\frac{\lambda}{2d}] \times [-\frac{\lambda}{2d},\frac{\lambda}{2d}]$, we obtain the real coefficients
\begin{equation} \label{eq:coef2}
a_{kl} = 4d^2 \int\!\!\!\int_{\mathcal{H}^+} -\frac{\widehat{P\phi}(\alpha,\beta)}{\hat{f}(\alpha,\beta)} \:
\cos \! \left[ \frac{2\pi d}{\lambda} (k \alpha + l \beta) \right] \mathrm{d}\alpha \, \mathrm{d}\beta,
\end{equation}
that correct both amplitude and phase aberrations in $\mathcal{H}^+$. As we have $\frac{\lambda}{2d} = \frac{N}{2}\frac{\lambda}{D}$, the DH has a size of $N\!\times\!N$ resolution elements (resels) with phase aberrations only, and of $\frac{N}{2}\!\times\! N$ resels with phase and amplitude aberrations. Therefore, \emph{a DM can correct both amplitude and phase aberrations in the image plane, albeit in a region that is either the left, right, upper, or lower half of the phase-corrected region.}
As \citet{Malbet95} pointed out, $\frac{\lambda}{2d}$ is equal to the Nyquist frequency for a sampling interval $\frac{d}{\lambda}$. Therefore, we find that the maximum extension of the DH corresponds to the range where the sampling theorem applies to the wavefront at the DM actuator scale. Indeed, taking the inverse FT of (\ref{eq:field3}) leads to the wavefront reconstruction formula
\begin{equation} \label{eq:field4}
P\phi(u,v) = - \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} a_{kl} \, f \! \left( u - k \frac{d}{\lambda}, v - l \frac{d}{\lambda} \right).
\end{equation}
Again, this reconstruction cannot be perfect unless the spectrum of $P\phi$ is contained in $\mathcal{H}$. Note that because $\hat{f}$ is generally not a flat function (as it would be the case if influence functions were for instance 2D sinc functions), actuator strokes are not equal to the negative of wavefront values sampled at the actuator locations.
Our Fourier solution was derived by assuming that (a) all influence functions are identical in shape, and (b) that the pupil has a square shape. Hypothesis (a) appears to be reasonable at least for the DM in use on the HCIT (Joseph Green, personal communication), but this remains to be precisely measured. If hypothesis (b) is relaxed then (i) some actuators do not receive any light and play no role, so there are effectively fewer terms in the summation in (\ref{eq:field3}), and (ii) the fact that influence functions on the pupil boundary are only partly illuminated is ignored.
Now that we have two methods to solve (\ref{eq:field1}), Fourier expansion and SVD, let us compare their solutions. We deal here with functions belonging to the Hilbert space of square integrable functions $f\!: \mathcal{H} \rightarrow \mathbb{C}$. This space has $<f,g> \equiv \int\!\!\!\int_\mathcal{H} f \, g^\ast$ for dot product, and $||f|| \equiv \sqrt{\int\!\!\!\int_\mathcal{H} |f|^2}$ for norm. As mentioned earlier, Fourier expansion minimizes the mean-square error between both sides of (\ref{eq:field3}), i.e. $||(\widehat{P\phi}+\widehat{P\psi})/\hat{f}||^2$. By contrast, SVD has the built-in property of minimizing the norm of the residuals of (\ref{eq:field2}), i.e. $||\widehat{P\phi}+\widehat{P\psi}||$. In other words, SVD minimizes $||\widehat{E'}||^2$, the speckle field energy, which seems more satisfactory from a physical point of view. To find out what is best, we have performed one-dimensional numerical simulations. It turns out that SVD yields dark holes 50\,\% deeper (median value) than Fourier expansion. In addition, SVD does not require all influence functions to have the same shape.
However, considering four detector pixels per resel in two dimensions (critical sampling), SVD would require us to manipulate matrices as large as $N^2\!\times\!4N^2$ (or even $N^2\!\times\!8N^2$ when real and imaginary parts are separated). Such matrices would occupy 537~MB of memory space for $64\!\times\!64$ actuators and single-precision floating-point numbers. By contrast, Fourier expansion would be straightforwardly obtained with FFTs of $2N\!\times\!2N$ arrays at critical sampling, but again at the cost of a 50\,\% shallower dark hole and a strong hypothesis on the influence functions.
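For reference, a least-squares (SVD) solution of eq.~(\ref{eq:field2}) can be
sketched as follows (Python/NumPy; the sampled fields are assumed given). Each
column of the system matrix holds $\widehat{Pf_{kl}}$ on the dark-hole pixels;
stacking real and imaginary parts keeps the solved strokes real:
\begin{verbatim}
import numpy as np

def svd_strokes(Pf_hat_cols, Pphi_hat_pix, rcond=1e-6):
    """Least-squares actuator strokes for eq. (field2).
    Pf_hat_cols  : (n_pix, N*N) complex; column k*N+l = Pf_kl-hat on
                   the dark-hole pixels
    Pphi_hat_pix : (n_pix,) complex; Pphi-hat on the same pixels"""
    M = np.vstack([Pf_hat_cols.real, Pf_hat_cols.imag])  # (2 n_pix, N*N)
    b = -np.concatenate([Pphi_hat_pix.real, Pphi_hat_pix.imag])
    a, *_ = np.linalg.lstsq(M, b, rcond=rcond)
    return a   # real strokes minimizing the dark-hole energy ||E'||^2
\end{verbatim}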
In the next section, we seek to find a computationally less intensive solution that still minimizes the speckle energy in the dark hole, but does not require any hypothesis on the influence functions.
\subsection{Speckle energy minimization} \label{sub:energy_min}
Let us start with the idea that the best solution is defined as the one \emph{minimizing the total energy of the speckle field in the DH}. For the sake of simplicity, we assume once again a square pupil, but not necessarily a common shape for the influence functions. The total energy in the speckle field reads
\begin{equation} \label{eq:energy1}
\mathcal{E} \equiv \int\!\!\!\int_\mathcal{H} |\widehat{P\phi}(\alpha,\beta) + \widehat{\psi}(\alpha,\beta)|^2 \, \mathrm{d}\alpha \, \mathrm{d}\beta
\; = \; <\widehat{P\phi} + \widehat{\psi}, \widehat{P\phi} + \widehat{\psi}>,
\end{equation}
using the same notation as in \S\ref{sub:field_nulling}.
Given that $\partial \widehat{\psi}/\partial a_{kl} = \hat{f}_{kl}$, the energy is minimized when
\begin{equation} \label{eq:energy2}
\forall (k,l) \in {\{0 \ldots N\!-\!1\}}^2, \quad \frac{\partial \mathcal{E}}{\partial a_{kl}} = 0
\quad \Longleftrightarrow \quad
\Re \left( <\widehat{P\phi} + \widehat{\psi}, \hat{f}_{kl}> \right) = 0,
\end{equation}
where $\Re$ stands for the real part. Note that this is less demanding than (\ref{eq:field1}), as (\ref{eq:field1}) implies (\ref{eq:energy2}) but the reverse is not true.
Using the definition (\ref{eq:psi}) for $\psi$ and realizing that $<\hat{f}_{nm},\hat{f}_{kl}>$ is a real number\footnote{This property stems from the Hermitian character of $\hat{f}_{kl}$ together with the symmetry of $\mathcal{H}$.}, we get finally
\begin{equation} \label{eq:energy3}
\forall (k,l) \in {\{0 \ldots N\!-\!1\}}^2, \quad
\sum_{n=0}^{N-1} \sum_{m=0}^{N-1} a_{nm} <\hat{f}_{nm},\hat{f}_{kl}> \; = - \Re \left( <\widehat{P\phi},\hat{f}_{kl}> \right).
\end{equation}
As in (\ref{eq:field2}), we find a system that is linear in the actuator strokes. By replacing double indices with single ones, e.g. $(k,l)$ becomes $s = k \, N + l$, (\ref{eq:energy3}) can be solved in matrix format by inverting an $N^2\!\times\!N^2$ real matrix. This is already an improvement with respect to the $N^2\!\times\!4N^2$ complex matrix required by SVD in the previous section.
It appears that the same solution can be obtained with a much less demanding ${N\!\times\!N}$ matrix inversion, provided two-dimensional influence functions can be written as the tensor product of two one-dimensional functions (separation of variables), i.e. $f_{kl}(u,v) = g_k(u) \, g_l(v)$. This would be the case for box functions or two-dimensional Gaussians, and holds at the 5\,\% level for the DM in use on the HCIT. This property also holds in the image plane since the FT of the previous equation yields $\hat{f}_{kl}(\alpha,\beta) = \hat{g}_k(\alpha) \, \hat{g}_l(\beta)$.
By separating variables, (\ref{eq:energy3}) becomes
\begin{equation} \label{eq:energy4}
\forall (k,l) \in {\{0 \ldots N\!-\!1\}}^2, \quad
\sum_{n=0}^{N-1} <\hat{g}_n,\hat{g}_k> \sum_{m=0}^{N-1} a_{nm} <\hat{g}_m,\hat{g}_l> \; = - \Re \left( <\widehat{P\phi},\hat{f}_{kl}> \right).
\end{equation}
As the left-hand side happens to be the product of three $N\!\times\!N$ matrices, (\ref{eq:energy4}) can be rewritten as an equality between square matrices.
\begin{equation} \label{eq:energy5}
G \, A \, G = \Phi,
\quad \mbox{where} \quad
\left \{
\begin{array}{l}
G_{kl} = \; <\hat{g}_k,\hat{g}_l> \\
A_{kl} = a_{kl} \\
\Phi_{kl} = - \Re \left( <\widehat{P\phi},\hat{f}_{kl}> \right).
\end{array}
\right.
\end{equation}
For square-box and actual HCIT influence functions, numerical calculations show that $G$ is diagonally dominant\footnote{A matrix $A=[a_{ij}]$ is said to be diagonally dominant if $\forall i, \: |a_{ii}| > \sum_{j \neq i} |a_{ij}|$.} and therefore invertible by regular Gaussian elimination. The solution to (\ref{eq:energy5}) is then
\begin{equation} \label{eq:energy6}
A = G^{-1} \, \Phi \, G^{-1}.
\end{equation}
Note that $G^{-1}$ can be precomputed and stored, so that computing the strokes effectively requires only two matrix multiplications. As shown in appendix~\ref{app:global}, an equivalent result can be obtained by working with pupil plane quantities.
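As an illustration of how (\ref{eq:energy6}) can be implemented, the following minimal sketch treats 1-D box influence functions of unit pitch: it builds the Gram matrix $G$ numerically over the DH, verifies its diagonal dominance, and obtains the strokes with two matrix products. The right-hand side $\Phi$ is a random stand-in for the measured aberration term, and all sizes are arbitrary.
\begin{verbatim}
import numpy as np

N, M = 16, 2048
alpha = np.linspace(-0.5, 0.5, M)    # DH extent in units of lambda/d
dalpha = alpha[1] - alpha[0]

# Image-plane influence functions g_hat_k for 1-D box actuators of
# unit pitch: the FT of a unit box, shifted to actuator position k.
k = np.arange(N)[:, None]
g_hat = np.sinc(alpha)[None, :] * np.exp(-2j * np.pi * alpha[None, :] * k)

# Gram matrix G_kl = <g_hat_k, g_hat_l>, real over this symmetric domain
G = np.real(g_hat @ g_hat.conj().T) * dalpha
off_diag = np.abs(G).sum(axis=1) - np.abs(np.diag(G))
print("diagonally dominant:", np.all(np.abs(np.diag(G)) > off_diag))

# Strokes from eq. (energy6); Phi stands in for -Re(<Pphi_hat, f_hat_kl>)
rng = np.random.default_rng(1)
Phi = rng.normal(size=(N, N))
Ginv = np.linalg.inv(G)     # precomputed once, then reused
A = Ginv @ Phi @ Ginv       # A = G^-1 Phi G^-1
\end{verbatim}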
As for the field nulling approach, correcting amplitude errors as well implies restricting the dark hole to either $\mathcal{H}^+ = [0,\frac{N}{2}\frac{\lambda}{D}] \times [-\frac{N}{2}\frac{\lambda}{D},\frac{N}{2}\frac{\lambda}{D}]$ or $\mathcal{H}^- = [-\frac{N}{2}\frac{\lambda}{D},0] \times [-\frac{N}{2}\frac{\lambda}{D},\frac{N}{2}\frac{\lambda}{D}]$. To account for amplitude errors and keep the formalism we have presented so far, it is sufficient to replace $\widehat{P\phi}$ by a function equal to $\widehat{P\phi}(\alpha,\beta)$ in either $\mathcal{H}^+$ or $\mathcal{H}^-$ (depending on the half where one wishes to create the dark hole), and equal to $\widehat{P\phi}^\ast(-\alpha,-\beta)$ in the other half (Hermitian symmetry). Because its FT is Hermitian, the new aberration function in the pupil plane is real, and thus the algorithm processes amplitude and phase errors at the same time as if there were phase errors only.
Let us derive the residual total energy in the DH after the correction has been applied. Starting from definition (\ref{eq:energy1}) and rewriting condition (\ref{eq:energy2}) as ${\Re ( <\widehat{P\phi} + \widehat{\psi}, \widehat{\psi}> ) = 0}$, we find
\begin{equation}
\mathcal{E}_\mathrm{min} = \; <\widehat{P\phi},\widehat{P\phi}> - <\widehat{\psi},\widehat{\psi}>.
\end{equation}
The former term is the initial speckle energy in the DH, while the latter is the speckle energy decrease gained with the DM. Mathematically, $\sqrt{\mathcal{E}_\mathrm{min}}$ measures the distance (according to the norm we have defined) between the speckle field and its approximation with the DM inside the DH. Because there is no exact solution to ($\ref{eq:field1}$), the residual energy cannot be made equal to zero in $\mathcal{H}^+$ or $\mathcal{H}^-$. However, the energy approach offers an additional degree of freedom: by reducing concentrically the domain over which the energy is minimized, the speckle energy can be further decreased (see \S\ref{sec:simulations}).
\subsection{Speckle field measurement} \label{sub:measurement}
So far, our speckle nulling theory has presupposed the knowledge of the speckle field $\widehat{P\phi}$, or equivalently of the phase and amplitude aberrations across the pupil, embodied in the complex phase function $P\phi$. In this section, we show how the speckle field can be measured directly in the image plane. As the detector measures an intensity, a single image yields only the modulus of the speckle field. The phase of the speckle field can be retrieved by perturbing the phase function $P\phi$ in a controlled way, and by recording the corresponding images, a process analogous to \emph{phase diversity} \citep[e.g.][]{Lofdahl94}. In our system, the DM provides the natural means for creating this controlled perturbation.
As we will see, exactly three images obtained with well chosen DM settings provide enough information to measure $\widehat{P\phi}$. Let us call image 0 the original image recorded with a setting $\psi_0$, whereas images 1 and 2 are recorded with settings $\psi_0+\delta\psi_1$ and $\psi_0+\delta\psi_2$.
To be general, we consider in the field of view the presence of an exoplanet and an exozodiacal cloud (exozodi for short), in addition to the star itself. The electric fields of these objects are incoherent with that of the star, so their intensities should be added to the star's intensity. Because they are much fainter than the star, the speckles they produce are negligible with respect to the star speckles, and their intensities can be considered as independent of $\phi$ and $\psi$. The total intensity of every image pixel $(\alpha,\beta)$ then takes the successive values
\begin{equation} \label{eq:I_system1}
\left \{
\begin{array}{l}
I_0 = |\widehat{P\phi} + \widehat{\psi}_0|^2 + I_\mathrm{p} + I_\mathrm{z} \\
I_1 = |\widehat{P\phi} + \widehat{\psi}_0 + \widehat{\delta\psi_1}|^2 + I_\mathrm{p} + I_\mathrm{z} \\
I_2 = |\widehat{P\phi} + \widehat{\psi}_0 + \widehat{\delta\psi_2}|^2 + I_\mathrm{p} + I_\mathrm{z}, \\
\end{array}
\right.
\end{equation}
where $I_\mathrm{p}$ and $I_\mathrm{z}$ are the exoplanet and exozodi intensities, respectively.
System~(\ref{eq:I_system1}) can be reduced to the linear system
\begin{equation} \label{eq:I_system2}
\left \{
\begin{array}{l}
{(\widehat{\delta\psi_1})}^\ast \, (\widehat{P\phi}+\widehat{\psi}_0) + \widehat{\delta\psi_1} \, {(\widehat{P\phi}+\widehat{\psi}_0)}^\ast
= I_1 - I_0 - |\widehat{\delta\psi_1}|^2 \\
{(\widehat{\delta\psi_2})}^\ast \, (\widehat{P\phi}+\widehat{\psi}_0) + \widehat{\delta\psi_2} \, {(\widehat{P\phi}+\widehat{\psi}_0)}^\ast
= I_2 - I_0 - |\widehat{\delta\psi_2}|^2, \\
\end{array}
\right.
\end{equation}
where the superscript $\ast$ denotes the complex conjugate. Notice how the exoplanet and exozodi intensities have disappeared from the equations, demonstrating that faint objects do not affect the measurement process of the stellar speckles. Note, however, that because of quantum noise, planet detection can still be problematic if the exozodi is much brighter than the planet.
Now, system (\ref{eq:I_system2}) admits a unique solution if its determinant,
\begin{equation} \label{eq:delta}
\Delta \equiv {(\widehat{\delta\psi_1})}^\ast \, \widehat{\delta\psi_2} - \widehat{\delta\psi_1} \, {(\widehat{\delta\psi_2})}^\ast,
\end{equation}
is not zero, that is to say if
\begin{equation} \label{eq:I_condition}
|\widehat{\delta\psi_1}|\,|\widehat{\delta\psi_2}|\,\sin \! \left[ \arg(\widehat{\delta\psi_2}) - \arg(\widehat{\delta\psi_1}) \right] \neq 0.
\end{equation}
Condition (\ref{eq:I_condition}) tells us that the DM setting changes, $\delta\psi_1$ and $\delta\psi_2$, should modify the speckles differently in any given pixel, otherwise not enough information is secured to measure $\widehat{P\phi}$ unambiguously in this pixel. For this method to work in practice, the magnitude of the speckle modification should be greater than the photon noise level.
We have not yet found a rigorous derivation of the optimum values for the amplitudes $|\widehat{\delta\psi_1}|$ and $|\widehat{\delta\psi_2}|$, but a heuristic argument suggests that the optimum perturbations may be such that $I_1 \approx I_0$ and $I_2 \approx I_0$. That is to say, the DM-induced speckle intensity pattern, taken by itself, should be approximately the same as the original speckle intensity pattern. Thus at each pixel we choose $|\widehat{\delta\psi_1}| \approx |\widehat{\delta\psi_2}| \approx \sqrt{I_0}$, with the caveat that neither should be zero, so as to keep (\ref{eq:I_condition}) valid.
The phase of $\widehat{\delta\psi_1}$ does not matter, but the phase difference between $\widehat{\delta\psi_1}$ and $\widehat{\delta\psi_2}$ should be made as close to $\frac{\pi}{2}$ as possible to keep $\Delta$ away from zero. In practice, this can be realized as follows:
\begin{enumerate}
\item Compute $\delta\psi_1$ stroke changes from (\ref{eq:coef2}) or (\ref{eq:energy6}) by replacing $\widehat{P\phi}$ by $\sqrt{I_0}\,e^{i\theta}$, where $\theta$ is a random phase;
\item Compute $\delta\psi_2$ stroke changes from (\ref{eq:coef2}) or (\ref{eq:energy6}) by replacing $\widehat{P\phi}$ by $\widehat{\delta\psi_1}\,e^{i\frac{\pi}{2}}$.
\end{enumerate}
Now that we have made sure that $\Delta \neq 0$, we derive finally
\begin{equation} \label{eq:I_solution}
\widehat{P\phi} = \frac{\widehat{\delta\psi_2} \, (I_1 - I_0 - |\widehat{\delta\psi_1}|^2) -
\widehat{\delta\psi_1} \, (I_2 - I_0 - |\widehat{\delta\psi_2}|^2)}{\Delta} - \widehat{\psi}_0.
\end{equation}
Equation (\ref{eq:I_solution}) shows that the initially unknown speckle field ($\widehat{P\phi}$) can be experimentally measured in just three exposures taken under identical circumstances but with different shapes imposed on the DM.
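The full measurement recipe, probes included, fits in a short numerical sketch; the ``true'' speckle field below is a synthetic stand-in, the probe construction follows the heuristic recipe above, and the recovery is exact only because the sketch is noise-free.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
npix = 256
field = rng.normal(size=npix) + 1j * rng.normal(size=npix)  # "true" field
psi0 = np.zeros(npix, dtype=complex)                        # initial DM term

# Probes with |dpsi| ~ sqrt(I0) and phases pi/2 apart (recipe above)
I0 = np.abs(field + psi0)**2
dpsi1 = np.sqrt(I0) * np.exp(1j * rng.uniform(0, 2 * np.pi, npix))
dpsi2 = dpsi1 * np.exp(1j * np.pi / 2)

I1 = np.abs(field + psi0 + dpsi1)**2
I2 = np.abs(field + psi0 + dpsi2)**2

Delta = np.conj(dpsi1) * dpsi2 - dpsi1 * np.conj(dpsi2)     # eq. (delta)
estimate = (dpsi2 * (I1 - I0 - np.abs(dpsi1)**2)
            - dpsi1 * (I2 - I0 - np.abs(dpsi2)**2)) / Delta - psi0
print(np.allclose(estimate, field))   # True: exact in the noise-free case
\end{verbatim}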
\section{Speckle nulling simulations} \label{sec:simulations}
\subsection{White speckle noise} \label{sub:sim_white}
In this section, we perform one- and two-dimensional simulations for the theoretical case of white speckle noise caused by phase aberrations only. The DM has 64 actuators and top-hat influence functions. Smoother influence functions have been tested and do not lead to qualitatively different results. A simulation with actual HCIT influence functions will be presented in the next section. The simulated portion of the pupil plane is made twice as big as the pupil by zero padding, so that every element of resolution in the image plane is sampled by two detector pixels. This corresponds to the realistic case of a photon-starved exoplanet detection where read-out noise must be minimized.
\subsubsection{One-dimensional simulations}
Figure~\ref{fig:f1} shows a complete one-dimensional simulation including speckle field measurement (\S\ref{sub:measurement}) and speckle nulling with field nulling (\S\ref{sub:field_nulling}) and energy minimization (\S\ref{sub:energy_min}). The standard deviation of the phase aberrations is set to $\lambda/1000$. Intensities are scaled with respect to the maximum of the star PSF in the absence of a coronagraph. Ideal conditions are assumed: no photon noise, noiseless detector, and perfect precision in the control of DM actuators. Under these conditions, the speckle field is perfectly estimated, and the mean intensity in the DH is $5.8 \times 10^{-11}$ with field nulling and $1.4 \times 10^{-11}$ with energy minimization, i.e. about 1500 and 6500 times lower than the mean intensity outside the DH, respectively. Repeated simulations with different noise sequences show that energy minimization always performs better than field nulling by a factor of a few. Field nulling solved with SVD yields the same numerical solution as energy minimization (they differ only in the last digit), in agreement with the idea that they both minimize the speckle energy.
\subsubsection{Dark hole depth estimate in one dimension}
In the one-dimensional case, it is easy to predict roughly the shape and the depth of the DH. The function $\widehat{P\phi} + \widehat{\psi}$ is band-limited since the pupil has a finite size. As the pupil linear dimension is $D/\lambda$, the maximum spatial frequency of $\widehat{P\phi} + \widehat{\psi}$ is $D/2\lambda$. Let us apply the sampling theorem at the Nyquist sampling frequency $D/\lambda$, and write
\begin{equation} \label{eq:1d1}
(\widehat{P\phi} + \widehat{\psi})(\alpha) = \sum_{n=-\infty}^{+\infty} \left[ \widehat{P\phi}_n + \widehat{\psi}_n \right] \mbox{sinc} \! \left( \frac{\alpha D}{\lambda} - n \right),
\end{equation}
where the subscript $n$ denotes the function value for $\alpha = n \frac{\lambda}{D}$.
Substituting $\alpha$ by $n \frac{\lambda}{D}$ and $d$ by $\frac{D}{N}$ leads to
\begin{equation} \label{eq:1d2}
\widehat{P\phi}_n + \widehat{\psi}_n = \widehat{P\phi}_n + \hat{f}_n \sum_{k=0}^{N-1} a_k e^{-i \frac{2\pi k n }{N}}.
\end{equation}
The field nulling equation (\ref{eq:field1}) takes here the discrete form
\begin{equation} \label{eq:1d3}
\forall n \in \{ 0 \ldots N\!-\!1\}, \quad \widehat{P\phi}_n + \widehat{\psi}_n = 0
\quad \Longleftrightarrow \quad
a_k = \frac{1}{N} \sum_{n=0}^{N-1} \left( -\frac{\widehat{P\phi}_n}{\hat{f}_n} \right) e^{i \frac{2\pi k n }{N}},
\end{equation}
i.e.\ the actuator strokes are computed with an inverse FFT.
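A minimal numerical sketch of this inverse-FFT solution follows; the sampled speckle field is synthetic and $\hat{f}_n$ is taken flat for simplicity.
\begin{verbatim}
import numpy as np

N = 64
rng = np.random.default_rng(3)
Pphi_n = rng.normal(size=N) + 1j * rng.normal(size=N)  # sampled speckles
f_n = np.full(N, 0.8, dtype=complex)                   # stand-in for f_hat

# Eq. (1d3): numpy's ifft already carries the 1/N normalization
a = np.fft.ifft(-Pphi_n / f_n)

# Check that psi_n = f_n * sum_k a_k exp(-i 2 pi k n / N) cancels the
# speckle field at the sample points.
psi_n = f_n * np.fft.fft(a)
print(np.allclose(Pphi_n + psi_n, 0))                  # True
\end{verbatim}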
Let us now turn to the residual speckle field
\begin{equation} \label{eq:1d4}
(\widehat{P\phi} + \widehat{\psi})(\alpha) = \sum_{n=-\infty}^{-1} \widehat{P\phi}_n \: \mbox{sinc} \! \left( \frac{\alpha D}{\lambda} - n \right) + \sum_{n=N}^{+\infty} \widehat{P\phi}_n \: \mbox{sinc} \! \left( \frac{\alpha D}{\lambda} - n \right).
\end{equation}
Because the sinc function decreases rapidly with $\alpha$, the terms flanking the DH ($n=-1$ and $n=N$) should by themselves give the order of magnitude of the residual speckle field in the DH. In case of phase aberrations only and white noise, we have $|\widehat{P\phi}_{-1}|^2 \approx |\widehat{P\phi}_N|^2 \approx \overline{I_0}$, where $\overline{I_0}$ is the mean intensity in the image plane prior to the DH creation. Therefore, a crude estimate of the intensity profile in the DH should be
\begin{equation} \label{eq:1d5}
I_\mathrm{DH}(\alpha) \approx \overline{I_0} {\left[ \mbox{sinc} \! \left( \frac{\alpha D}{\lambda} +1 \right) + \mbox{sinc} \! \left( \frac{\alpha D}{\lambda} - N \right) \right]}^2.
\end{equation}
We have superimposed this approximation as a thick line in Fig.~\ref{fig:f1}. In this case the match is remarkable, but further simulations show that it is generally good only to within a factor of 10. Nevertheless, it demonstrates that the DH depth depends critically on the residual speckle field at its edges, hence on the rate at which the complex aberration spectrum decreases with spatial frequency. In that respect, a white spectrum is certainly the worst case. Equation (\ref{eq:1d5}) further indicates that the DH depth depends on the number of actuators: as $N$ is increased, the DH widens and gets deeper. With 8, 16, 32, and 64 actuators, (\ref{eq:1d5}) predicts $\overline{I_0}/\overline{I_\mathrm{DH}}$ to reach about 100, 300, 1000, and 4500.
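For reference, the crude estimate (\ref{eq:1d5}) is straightforward to evaluate numerically, e.g.\ for $N=64$ and with $\overline{I_0}$ set to unity so that the profile is directly in units of the initial speckle floor:
\begin{verbatim}
import numpy as np

N = 64
x = np.linspace(0.0, N - 1.0, 2048)            # alpha in lambda/D units
I_dh = (np.sinc(x + 1) + np.sinc(x - N))**2    # eq. (1d5), I0_mean = 1
edge = I_dh[x < 2].max()
center = I_dh[np.abs(x - N / 2) < 1].max()
print(edge, center)   # the estimated floor is deepest near the DH center
\end{verbatim}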
\subsubsection{Dark hole depth vs. search area}
As mentioned in \S\ref{sub:energy_min}, speckle nulling by energy minimization can be performed in a region narrower than the maximum DH. Figure~\ref{fig:f2} illustrates this point: by reducing the search area from 64 to 44 resels (31\,\% reduction), the DH floor was decreased from $1.4 \times 10^{-11}$ to $2.7 \times 10^{-15}$, i.e. a gain of about 5200 in contrast (further reducing the search area does not bring any significant gain). By giving up search space, one frees the degrees of freedom corresponding to the highest spatial frequency components on the DM pattern. These can be used to improve the DH depth at lower spatial frequency because of the PSF angular extension (this is essentially the same reason why high spatial frequency speckles limit the DH depth). As the search space is reduced, the leverage of these highest spatial frequency components decreases (PSF wings falling off). The energy minimization algorithm compensates by putting more energy at high frequency (see lower panel in Fig.~\ref{fig:f2}), which produces increasingly oscillatory DM patterns (see top panel in Fig.~\ref{fig:f2}) and increasingly brighter spots in the image (around $\pm 32 \frac{\lambda}{D}$ and $\pm 96 \frac{\lambda}{D}$ in bottom panel of Fig.~\ref{fig:f2}). Thus the trade-off range might be limited in practice by the maximum actuator stroke (currently $0.6\,\mu$m on the HCIT), and/or by the detector's dynamic range.
In two dimensions, the trade-off limits are well illustrated by the following example: considering a $64\!\times\!64$ DM and a random wavefront, we find that the DH floor can be decreased from $2.4 \times 10^{-12}$ to $1.4 \times 10^{-13}$ (a factor of 17) if the search area is reduced from $64\!\times\!64$ to $60\!\times\!60$ resels (12\,\% reduction in area). This implies a maximum actuator stroke of 10\,nm and a detector dynamic range of $10^6$. A further reduction to $58\!\times\!58$ resels does not yield a lower DH floor ($2.1 \times 10^{-13}$), and would imply a maximum actuator stroke of $10\,\mu$m and a detector dynamic range of $10^{10}$. In this case, the leverage of the additionally freed high-spatial-frequency components is so weak that the algorithm starts diverging.
\subsubsection{Two-dimensional simulations with phase and amplitude aberrations}
In Figs.~\ref{fig:f3}--\ref{fig:f4}, we show an example of two-dimensional speckle nulling with phase and amplitude aberrations for a square pupil. To reflect the fact that phase aberrations dominate amplitude aberrations in real experiments \cite[see][]{Trauger04}, the rms amplitude of amplitude aberrations is made ten times smaller than that of phase aberrations (the choice of a factor ten is arbitrary). The DH is split into two regions: in the right one ($\mathcal{H}^+$), amplitude and phase aberrations are corrected, whereas in the left one ($\mathcal{H}^-$), phase aberrations are corrected and amplitude aberrations are made worse by a factor of four in intensity.
\subsection{Realistic speckle noise} \label{sub:sim_real}
\subsubsection{Power spectral density of phase aberrations}
With the $3.5\!\times\!8$-m TPF-C primary mirror in mind, we have studied the phase aberration map of an actual 8-m mirror: the primary mirror of Antu, the first 8.2-m unit telescope of ESO's Very Large Telescope (VLT). This phase map\footnote{Available courtesy of ESO at http://www.eso.org/projects/vlt/unit-tel/m1unit.html.} was obtained with the active optics system on, and is characteristic of zonal errors (aberrations which cannot be fitted by low-order Zernike-type polynomials). It can be seen in Fig.~\ref{fig:f5} that the azimuthally averaged power spectral density (PSD) of such a map is well represented by
\begin{equation} \label{eq:psd}
\mbox{PSD}(\rho) = \frac{\mbox{PSD}_0}{1+{(\rho/\rho_c)}^x},
\end{equation}
where $\rho = \sqrt{\alpha^2+\beta^2}$. Values for PSD$_0$, $\rho_c$ and $x$ are listed in Table~\ref{tab:t1}. For comparison, the same treatment has been applied to the Hubble Space Telescope (HST) zonal error map from \citet{Krist95}.
We conclude from this study that \emph{a realistic phase aberration PSD for an 8-m mirror decreases as the third power of the spatial frequency}. The standard deviation of the VLT phase map is 20.9~nm (18.5~nm for HST). The square root of the power of phase aberrations in the 0.5--4~m$^{-1}$ spatial frequency range (4--32~$\lambda/D$ for an 8-m mirror) is 19.4~nm, i.e. about $\lambda/25$ at 500~nm, clearly not in the validity domain of our linear approximation.
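For readers wishing to reproduce such simulations, a random 1-D phase screen with the PSD of (\ref{eq:psd}) can be drawn as in the sketch below; PSD$_0$, $\rho_c$ and $x$ are placeholders, not the fitted values of Table~\ref{tab:t1}.
\begin{verbatim}
import numpy as np

n, L = 4096, 8.0                     # samples across the pupil, size [m]
rho = np.fft.rfftfreq(n, d=L / n)    # spatial frequency [1/m]
PSD0, rho_c, x = 1.0, 0.1, 3.0       # placeholder parameters
psd = PSD0 / (1 + (rho / rho_c)**x)

rng = np.random.default_rng(4)
spec = np.sqrt(psd) * (rng.normal(size=rho.size)
                       + 1j * rng.normal(size=rho.size))
phase = np.fft.irfft(spec, n)        # screen with the PSD of eq. (psd)
phase *= (2 * np.pi / 1000) / phase.std()  # lambda/1000 rms, in radians
\end{verbatim}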
\subsubsection{One-dimensional simulation}
Figure~\ref{fig:f6} shows a simulation performed in the same conditions as Fig.~\ref{fig:f1}, but with a VLT-like PSD. The PSD is scaled so that the standard deviation of phase aberrations is equal to $\lambda/1000$. The average DH floor is now $5.3 \times 10^{-12}$, six orders of magnitude below the intensity peak in the original image! In agreement with \S\ref{sub:sim_white}, we find that \emph{the DH's depth depends critically on the magnitude of the speckle field at the edge of the DH, hence on the decrease of the phase aberration PSD with spatial frequency}.
\subsubsection{Two-dimensional simulation}
For the two-dimensional simulation in Figs.~\ref{fig:f7}--\ref{fig:f8}, we have kept the original VLT phase map and circular pupil, but scaled the standard deviation of phase aberrations to $\lambda/1000$. In addition, we have used the actual HCIT influence functions from \citet{Trauger03}. The average DH floor is then $5.9 \times 10^{-12}$ with field nulling (case shown), and $7.1 \times 10^{-11}$ with energy minimization. The worse performance of the second method reflects the cost of the variable separation hypothesis, only accurate to within 5\,\% for the HCIT. Note that the DH retains its square shape with a circular pupil, as the DH shape is fixed by the actuator grid geometry on the DM (a square grid of constant spacing in our case).
\section{Discussion} \label{sec:discussion}
\subsection{Quantum and read-out noise} \label{sub:discuss_noise}
In \S\ref{sec:simulations}, we presented noise-free simulations. To give an idea of the effect of quantum and read-out noise, let us consider a sun-like star at 10~pc observed by a $3.5\!\times\!8$~m space telescope with a 5\,\% overall efficiency. In a 100~nm bandwidth centered at 600~nm, the telescope collects about $2 \times 10^{12}$ photo-electrons in one-hour exposures. Considering the quantum noise, a 1~e$^-$ read-noise and ignoring chromatic effects, simulations of sequences of four one-hour exposures show that the average DH floor in Fig.~\ref{fig:f1} would jump from $1.4 \times 10^{-11}$ to $2.7 \times 10^{-10}$, whereas the average DH floor in Fig.~\ref{fig:f6} would jump from $5.2 \times 10^{-12}$ to $3.2 \times 10^{-11}$.
\subsection{Validity of the linear approximation} \label{sub:discuss_linearity}
In practice, our speckle nulling process will work as stated provided Eq.~(\ref{eq:E_im}) holds, that is to say if ${|P\phi+\psi| \gg \frac{1}{2}|P\phi^2|}$. If $c$ is the improvement in contrast with respect to the speckle floor and $\sigma_\phi$ the standard deviation of wavefront aberrations in radians, this condition translates into ${\sigma_\phi/\sqrt{c} \gg \sigma_\phi^2/\sqrt{2}}$, or ${\sigma_\phi \ll \sqrt{2/c}}$. In terms of optical path difference, the standard deviation should then be much less than $\lambda/(\pi \sqrt{2c}) = \lambda/140$ for $c = 10^3$. This is why we considered $\lambda/1000$ rms wavefronts in our simulations. As the wavefront will probably not be of this quality at the start, the speckle nulling method presented here is intended to be used in the course of observations, after a first phase where the bulk of the aberrations have been taken out.
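As a worked evaluation of this bound:
\begin{verbatim}
import numpy as np

lam = 600e-9
for c in (1e3, 1e4):
    # sigma << lambda / (pi sqrt(2c)): ~4.3 nm (lambda/140) and ~1.4 nm
    print(c, lam / (np.pi * np.sqrt(2 * c)))
\end{verbatim}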
When the linear approximation breaks down, three images with different DM settings still provide enough information about the aberrations, so that a DH could be created thanks to a global non-linear analysis of these images \citep{Borde04}. \cite{Malbet95} also explored non-linear solutions, but with many more iterations ($\approx 20$).
\subsection{Real coronagraphs} \label{sub:discuss_coronagraphs}
Regarding the validity of Eq.~(\ref{eq:E_im}), real coronagraphs would not only remove the direct image of the star ($\widehat{P}$), they would also modify the speckle field ($\widehat{P\phi}$) and the DM phase function ($\widehat{P\psi}$). This can easily be incorporated into the theory. A more delicate point is that real coronagraphs are not translation-invariant systems. As a consequence, effective influence functions as seen from behind the coronagraph will vary over the pupil. For image-plane coronagraphs with band-limited sinc masks \cite[][\S4]{Kuchner02}, we estimate this variation to be of the order of 10\,\%, assuming $\epsilon = 0.1$ and 64 actuators. Only energy minimization, not field nulling (unless solved with SVD), can accommodate this effect.
\subsection{Actuator stroke precision} \label{sub:discuss_actuators}
What about the precision to which the actuators should be controlled? As a consequence of the linearity of (\ref{eq:energy3}), the DH depth depends quadratically on the precision of the actuator strokes. We deduce -- and this is confirmed by simulations -- that a DH four orders of magnitude deep can only be obtained if the strokes are controlled with a 1\,\% precision, i.e. 6\,pm rms with $\lambda/1000$ aberrations at 600\,nm. This precision corresponds to the current resolution of the actuator drivers on the HCIT.
\subsection{Instrumental stability} \label{sub:discuss_stability}
Regarding instrumental stability, we assumed that the instrument would remain perfectly stable during the four-step process. However, despite the foreseen thermal and mechanical controls of the spacecraft, very slow drifts during the few hours of single exposures should be expected. Therefore we intend to study in a subsequent paper how to incorporate a model of the drifts in our method. The exact parameters of this model would be derived from a learning phase after the launch of the spacecraft.
\subsection{Chromaticity} \label{sub:discuss_chromaticity}
We have not considered the effect of chromaticity. Let us point out that phase aberrations due to mirror surface errors scale with wavelength, so the correction derived from one wavelength would apply to all wavelengths. This is unfortunately not the case for amplitude aberrations. Although these are weaker than phase aberrations, a degradation of the correction should be expected in polychromatic light. Moreover, polychromatic wavefront sensing will require a revised theory as speckles will move out radially in proportion to the wavelength.
\section{Conclusion and future work} \label{sec:conclusion}
In this paper, we presented two techniques to optimally null out speckles in the central field of an image behind an ideal coronagraph in space. The measurement phase necessitates only three images, the fourth image being fully corrected. Depending on the number of actuators and the desired search area, the gain in contrast can reach several orders of magnitude.
These techniques are intended to work in a low-aberration regime, such as in the course of observations after an initial correction phase. They are primarily meant to be used in space but could be implemented in a second-stage AO system on ground-based telescopes. Of these two methods, the speckle energy minimization approach seems to be the more powerful and flexible: (i) it offers the possibility to trade off some search area against an improved contrast, and (ii) it can accommodate influence function variations over the pupil (necessary with real coronagraphs). If the influence functions feature the required symmetry (variable separation), it is computationally very efficient, and even without that symmetry it remains cheaper than SVD.
Since the principles underlying these speckle nulling techniques are general, it should be possible to use them in conjunction with most coronagraph designs, including those with band-limited masks \citep{Kuchner02}, pupil-mapping \citep{Guyon05a,Vanderbei05}, and shaped pupils \citep{Kasdin03}. It is our intent to complete our work by integrating in our simulations models of these coronagraphs, and to carry out experiments with the HCIT.
In addition, we will seek to incorporate in the measurement theory a linear model for the evolution of aberrations, and we will work toward a theory accommodating the spectral bandwidth needed for the detection and spectroscopy of terrestrial planets.
\acknowledgments
We wish to thank the anonymous referee for his insightful comments that helped to improve greatly the content of this paper. We acknowledge many helpful discussions with Chris Burrows, John Trauger, Joe Green, and Stuart Shaklan. This work was performed in part under contract 1256791 with the Jet Propulsion Laboratory (JPL), funded by NASA through the Michelson Fellowship Program, and in part under contract 1260535 from JPL. JPL is managed for NASA by the California Institute of Technology. This research has made use of NASA's Astrophysics Data System.
\section{Introduction}
In the recent past, optical interferometry has made the greatest
impact in the area of stellar astrophysics, in particular the study of
nearby single stars. Be stars are hot stars that exhibit, or have
exhibited, the so-called Be phenomenon, i.e.\ Balmer lines in emission
and infrared excess, interpreted as an equatorial disk around these
objects. Be stars are relatively frequent among the B-type objects, and
therefore, many bright and close Be stars are known.
These stars have long been preferred targets for long baseline
interferometry, and the Be community has followed the
new developments of optical long baseline interferometry to study
their circumstellar environments with great interest
(see also the recent review of Stee \& Gies 2005).
The first environment resolved was the one of $\gamma$~Cas. Thom et
al.\ (1986) used the I2T for this, and Mourard et al.\ (1989) saw
evidence for a rotating circumstellar environment by inspecting the
visibility across the line itself using the GI2T.
These results clearly demonstrated the potential of observations that
combine spectral and spatial resolution, but also that extensive
modeling is required to interpret measurements obtained with very
limited sampling of the $uv$-plane. The first model specialized for
this task was the one developed by Stee \& de Araujo (1994) and Stee
et al.\ (1995). Their model represents the environment of a Be star as
an axisymmetric structure, based on a latitude-dependent radiatively
driven wind. The model confirms that its free parameters can be
constrained by comparison of predicted line profiles and visibilities.
With a good range of baselines, the Mark~III instrument was able
to determine the geometrical aspect of seven Be stars, i.e.\ the
axial ratio of their elongated H$\alpha$ circumstellar emission
region (Quirrenbach et al.\ 1993, 1994, 1997). The axial ratios
$r$ span a wide range, with $r < 0.5$ for $\phi$\,Per, $\psi$\,Per
and $\zeta$\,Tau, an intermediate ellipticity ($r=0.7$) for
$\gamma$\,Cas, and $r\sim1$ for $\eta$\,Tau and 48\,Per. In the
disk model for Be stars, this can easily be understood as an
inclination effect. The strong correlation of the minimum
inclination derived in this way with polarimetric estimates
supports the geometrically thin-disk hypothesis (Quirrenbach et
al.\ 1997).
The Mark~III was specifically designed to perform wide-angle
astrometry, but a variable baseline that could be configured from 3m
to 31.5m provided the flexibility needed for a variety of astronomical
research programs. Mark~III was the first interferometer having a full
computer control of the siderostats and delay lines which allowed
almost autonomous acquisition of stars and data taking. This
capability was an important factor for the calibration of instrumental
effects and for the scientific productivity of the instrument.
Among the stars investigated with Mark~III were also $\gamma$\,Cas and
$\zeta$\,Tau. In their disks asymmetric H$\alpha$ emission with
respect to the central object was later uncovered with the GI2T
instrument. Using spectral Differential Interferometry (DI) it has
become possible to monitor such structures of Be disks during a long
time (several years) with great spatial resolution (Vakili et al.\
1998, B\'erio et al.\ 1999).
The field of optical and infrared (IR) interferometry has seen rapid
technical and scientific progress over the past decade and the
interferometric efficiency improves now dramatically in the era of the
Very Large Telescope Interferometer (VLTI). The design of ESO's VLTI,
which is the first large optical/IR telescope facility expressly
designed with aperture synthesis in mind, is of a hybrid type: There
are the four large 8.2 meter spatially fixed unit-telescopes, but
additionally there will be four auxiliary telescopes of smaller, i.e.\
1.8-meter aperture, which can be moved and set up at a large number of
locations. Three recombiners are attached to this structure: VINCI was
foreseen to be a K band test instrument but has provided such precise
visibility measurements that numerous outstanding science results have
been published from its observation. MIDI is the first direct
recombiner operating in N band in the world, which is described
extensively in the following. Finally, AMBER, currently being in
commissioning phase, is an impressive three-beam
spectro-interferometer operating in J, H and K bands with a spectral
resolution reaching 10\,000.
A challenge for the understanding of the Be star phenomenon is the
rapid increase of our knowledge of the central star itself. Be
stars are statistically rapid rotators and subject to a strong von
Zeipel (1924) effect. Van Belle et al.\ (2001) observed
Altair (HD 187642, A7V) using two baselines of the Palomar Testbed
Interferometer (PTI). They calculated the apparent stellar angular
diameters from measured squared visibility amplitudes using the
uniform-disk model and found that the angular diameters change
with position angle. This was the first measurement of stellar
oblateness owing to rapid rotation. In parallel, the observable
consequences of this effect on the interferometric observation
have been extensively studied by Domiciano de Souza et al. (2002)
under the Roche approximation (uniform rotation and centrally
condensed star). Two effects are competing affecting the
interferometric signal: the geometrical distortion of the stellar
photosphere and the latitudinal dependence of the flux related to
the local gravity resulting from the von Zeipel theorem. The
measurements from the PTI were not sufficient to disentangle
these two effects, but recent observations using closure
phases\footnote{Closure phases are measured when at least three
telescopes are operating simultaneously.} from the NPOI
interferometer reported in Ohishi et al. (2004) have confirmed the
oblateness of the star and have revealed a bright region
attributed to the pole. The observations of Altair from PTI, NPOI
and also from VLTI/VINCI now cover three spectral regions (visible,
H and K bands), and an extensive modeling of the star has been
undertaken by A. Domiciano de Souza.
Altair is still a relatively cool and small star compared to the
Be stars and its surface gravity remains large; therefore, larger
rotation effects are expected for Be stars. In 2003, the
large oblateness of the Be star Achernar ($\alpha$\,Eri) was
measured with VLTI/VINCI (Domiciano de Souza et al. 2003). The
measured oblateness of 1.56$\pm$0.05, based on an equivalent Uniform
Disk, apparently exceeds the maximum distortion allowed in the
Roche approximation and no models investigated by Domiciano de
Souza et al. (2002) could be satisfactorily fitted to the
observations. These observations open a new area for the study of
the Be phenomenon and VLTI/AMBER should take over this study and
rapidly expand the number of targets observed.
Recently, VLTI/MIDI observed two Be stars, $\alpha$\,Ara (B3\,Ve)
and $\delta$\,Cen (B2\,IVne) from 8 to 13\,$\mu$m with baselines
reaching 100\,m but their circumstellar environment could not be
resolved. These observations are also reported in this paper.
\section{Basic Principles of Stellar Interferometry}
This section will review the basic principles of stellar
interferometry. More detailed discussions of optical interferometry
issues can be found in the reviews by Monnier (2003) and by
Quirrenbach (2001). In order to introduce the principles, we restrict
ourselves to the case of a single interferometric baseline, i.e.\ with
two telescopes only. We adopt the formalism of Domiciano de Souza et
al.\ (2002) and reproduce here the equations necessary for an
introduction to natural light interferometry.
\subsection{Basic principles}
Let us consider an astrophysical target located at the center of a
Cartesian coordinate system $(x, y, z)$. The system is oriented such
that the $y$ axis is defined as the North-South celestial orientation
and the $x$ axis points towards the observer.
Next, we define the sky-projected monochromatic brightness
distribution $I_{\lambda}(y,z)$, hereafter called ``intensity map''.
Interferometers measure the complex visibility, which is proportional
to the Fourier transform of $I_{\lambda}(y,z)$, denoted
$\widetilde{I}_{\lambda}(f_y,f_z)$. The complex visibility in natural
light can then be written as:
\begin{eqnarray}\label{eq:V}
V(f_{y},f_{z},\lambda)& = & \left| V(f_{y},f_{z},\lambda)\right|
\mathrm{e}^{\mathrm{i}\phi(f_{y},f_{z},\lambda)}\\
& = & \frac{\widetilde{I}_{\lambda}(f_{y},f_{z})}{
\widetilde{I}_{\lambda }(0,0)}
\end{eqnarray}
where $f_{y}$ and $f_{z}$ are the Fourier spatial frequencies
associated with the coordinates $y$ and $z$. These spatial frequencies
in long-baseline interferometry are given by $\vec{B}_{\rm
proj}/\lambda_{\rm eff}$, where $\lambda_{\rm eff}$ is the effective
wavelength of the spectral band considered and $\vec{B}_{\mathrm{\rm
proj}}$ is a vector representing the baseline of the interferometer
projected onto the sky. The vector $\vec{B}_{\mathrm{proj}}$ defines
the direction $s$, which forms an angle $\xi $ with the $y$ axis so
that
\begin{equation}\label{eq:Bproj}
\vec{B_{\rm proj}}=\left( B_{\rm proj}\cos \xi \right) \widehat{y}+\left(
B_{\rm proj}\sin \xi \right) \widehat{z}
\end{equation}
where $\widehat{y}$ and $\widehat{z}$ are unit vectors. Note that
{\em large} structure in image-space results in {\em small} structure
in Fourier space, i.e.\ in visibility space.
\begin{figure*}
\fbox{\parbox{\textwidth}{
\centerline{\bf Visibility and phase as observational concepts}
\begin{center}
\parbox{0.4\textwidth}{Interference pattern of a monochromatic point-source}
\includegraphics[angle=270,width=0.4\textwidth]{f1.ps}%
\parbox[t]{0.48\textwidth}{\begin{center}
\centerline{Polychromatic point-source}
\includegraphics[angle=270,width=0.4\textwidth]{f2.ps}%
\includegraphics[angle=270,width=0.4\textwidth]{f3.ps}%
\vskip5mm
The interference patterns of the various colours add up to a spatial
modulation of the pattern. Note that the amplitude at OPD=0, the
``white-light fringe'' is still at maximum
\end{center}}\hspace*{0.04\textwidth}%
\parbox[t]{0.48\textwidth}{\begin{center}
\centerline{Monochromatic extended source}
\includegraphics[angle=270,width=0.4\textwidth]{f4.ps}%
\includegraphics[angle=270,width=0.4\textwidth]{f5.ps}%
\vskip5mm
The interference patterns from the local points add up to a pattern of
reduced amplitude, independent of the optical path difference (OPD),
however.
\end{center}}%
\parbox[t]{\textwidth}{Both principles combined give the fringe
patterns as seen by interferometry. The ``visibility'', which encodes the
spatial extent of the investigated source, quantifies the
amplitude, as in the right column. For a source observed at two
different wavelengths, one can obtain {\em relative} positional
information by measuring whether the OPD for maximal positive
interference has shifted. This concept is called ``interferometric
phase''}
\end{center}
}}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=0.83\textwidth]{Tutorial.eps}%
\end{center}
\caption{\label{fig1} In this figure typical examples of intensity
maps (left) are shown. From top to bottom these are a uniform disk, a
resolved binary, a ring and a Gaussian distribution. The models are
``observed'' with a horizontal and a vertical baseline. The 1-D flux
intensity along the baseline is shown in the middle and the
corresponding visibility is displayed on the right. All units are
arbitrary. A further description of visibility curves is given in the
text. }
\end{figure*}
We consider linear cuts along the Fourier plane corresponding to a
given baseline direction $\widehat{s}$. Then we can define the new
spatial frequency coordinates ($u,v$) for which
$\vec{B}_{\mathrm{proj}}$ is parallel to the unit vector
$\widehat{u}$. In that case the line integral (or strip intensity)
of $I_{\lambda }(s,p)$ over $p$ for a given $\xi$ can be written
as:
\begin{equation}\label{eq:FTline}
\widetilde{I}_{\lambda,\xi}(u)= \int I_{\lambda,\xi}(s)
\mathrm{e}^{-\mathrm{i}2\pi su} \mathrm{d}s
\end{equation}
The \textit{complex visibility} is given by:
\begin{equation}\label{eq:Vline}
V_\xi(u,\lambda)=\left| V_\xi(u,\lambda)\right|
\mathrm{e}^{\mathrm{i}\phi_\xi(u,\lambda)}=\frac{\widetilde{I}_{\lambda,\xi}(u)}{
\widetilde{I}_{\lambda,\xi}(0)}
\end{equation}
By varying the spatial frequency (meaning the baseline length and/or
wavelength), we obtain the so-called visibility curve. Eqs.\
\ref{eq:FTline} and \ref{eq:Vline} say that the interferometric
information along $\vec{B}_{\mathrm{proj}}$ is identical to the
one-dimensional Fourier transform of the curve resulting from the
integration of the brightness distribution in the direction
perpendicular ($\widehat{p}$) to this baseline. The
interferometric observable, called visibility, is directly related
to the fringe contrast.
The visibility can be observed either as the fringe contrast in {\bf
an image plane} (as with AMBER) or by modulating the internal delay
and detecting the resulting temporal variations of the intensity in a
pupil plane (as done with VINCI or MIDI).
It should be stressed that the signal-to-noise ratio (SNR) of
interferometric observables depends not only on the photon count $N$,
but on $NV^2$ for the photon-noise limited regime (optical) and on
$NV$ for the background-limited regime (MIDI). Indeed, the
interferometer is not sensitive to the total flux from the source but
to the {\it correlated} one.
In Fig.~1 we show several examples of intensity maps and their
corresponding visibility curves, depending on the baseline
orientation. All models, except the binary, are symmetrical, which
means that the interferometer will provide the same visibility for a
particular baseline length (projected onto the sky), whatever its
direction. The second model shows the visibility curves from a binary
system, consisting of two stars of the same diameter. In the case
where the baseline is perpendicular to the binary's position angle,
the interferometer is unable to distinguish the two components and
sees a visibility signal close to the one provided by a single uniform
disk. When the baseline is aligned with the binary's PA, the binary
signature is superimposed on the signature from the individual
components.
The uniform ring structure is an interesting intermediate
situation between the uniform disk and a binary. The
interferometer sees mainly a binary structure and owing to the
symmetry of the source, this signal is the same whatever the
baseline direction. This example could appear quite artificial, yet
it reflects a geometry which can be encountered frequently in the
inner rims of Young Stellar Object disks, and even in the disks of more evolved stars.
Finally, we show the example of an object exhibiting a Gaussian
distribution of light, approximating a circumstellar environment
with outwards-decreasing emission. The Fourier transform of a Gaussian
being a Gaussian too, the visibility curve will not show the
characteristic lobes which are seen in the other curves. It must be
stressed that these lobes are the consequence, in the Fourier
domain, of a discontinuity of the light distribution in the image
plane. For instance, a limb-darkened disk will exhibit a visibility
curve with the lobes attenuated compared to the uniform disk case. One
can, in fact, note a small lobe in the Gaussian visibility curve in
Fig.~1. This is the consequence of the numerical truncation of the
Gaussian, generating a discontinuity at the limit of the chosen
Field-of-View.
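The four configurations of Fig.~\ref{fig1} are easy to reproduce in one dimension with a discrete Fourier transform, as in the following minimal sketch; all profile shapes and sizes are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np

n = 4096
s = np.linspace(-8.0, 8.0, n)    # angular coordinate, arbitrary units
profiles = {
    "uniform disk": (np.abs(s) < 1.0).astype(float),
    "binary": ((np.abs(s - 2) < 0.3)
               | (np.abs(s + 2) < 0.3)).astype(float),
    "ring": ((np.abs(s) > 0.8) & (np.abs(s) < 1.0)).astype(float),
    "gaussian": np.exp(-(s**2) / 2.0),
}
for name, profile in profiles.items():
    spectrum = np.fft.fft(profile)               # discrete eq. (FTline)
    V = np.abs(spectrum) / np.abs(spectrum[0])   # modulus of eq. (Vline)
    print(name, V[1:6].round(3))                 # lowest frequencies
\end{verbatim}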
Of course, resolving typical main sequence stars implies long
baselines, and it must be stressed that at this stage the contrast of
the fringes is so low that this experiment is very demanding in terms
of signal-to-noise ratio, i.e.\ in terms of target flux.
\subsection{Differential Interferometry (DI)}
Differential Interferometry (DI) uses the high angular resolution
interferometric capabilities in a dispersed mode in order to compare
the spatial properties of an object at different wavelengths (Vakili
et al. 1994, 1997, 1998). This technique offers obvious advantages:
over few nanometers, the (differential) atmospherical turbulence
effects are negligible, and the differential sensitivity can be much
better than expected by classical techniques. In interferometry, an
unresolved source is required for calibration, and such an object can
be difficult to find. In particular cases, such as Be stars, the
continuum can be regarded as unresolved, whereas emission lines
emitted by an extended circumstellar environment are angularly
resolved. Moreover, the continuum can be considered as a {\em
polarization} reference in the Zeeman effect context.
The phase of the visibility is generally lost due to the blurring
effect of the atmosphere, but DI can retrieve a {\em relative}
spectral phase. Some algorithms can compare the properties of the
fringes studied in a (broad, i.e.\ continuum) reference spectral
channel at wavelength $\lambda_r$ with the ones in a (narrow, i.e.\
spectral line) science channel at wavelength $\lambda_s$. This can be
performed by means of cross-correlation of a broad continuum channel
with a series of narrow channels across the emission line as
encountered in Be stars. These steps are then repeated with small
wavelength shifts for both channels from the blue to the red wing of
the line, starting and finishing in the continuum next to the line on
both sides. Since the signal to noise ratio in the cross-correlations
depends on the geometric mean of the number of photons in channel $r$
and channel $s$, the interferometric signal can be safely estimated in
the narrow channel even if the flux or the fringe visibility are very
small in this channel.
The accuracy of the phase determination, which allows one to measure a
position, can be better than the actual resolution of the
interferometer, but again the super-resolution power of
cross-spectral density methods applies, as long as the star is partially
resolved (Chelli \& Petrov 1995). For instance, a positive (negative)
relative phase indicates the position at the north (south) of the
central star if the baseline is oriented North-South. The relative
phase shift between two spectral channels is related to the photocenter
of the object by the following first order equation:
$$ \phi(\vec{u}, \lambda_r, \lambda_s) = -2 \pi
\vec{u} \cdot \left[ \vec{\epsilon}(\lambda_s)-\vec{\epsilon}(\lambda_r) \right] $$
Using the continuum fringes as reference for the phase one can,
for instance, determine whether the light distribution in a
spectral line is centered on this continuum, as any departure from
symmetry, e.g.\ due to localized circumstellar emission or a
magnetic field, causes a spectral phase effect. While the
visibility is a quadratic estimator, the phase sensitivity depends
linearly on the photon count.
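As a worked example of this relation, the sketch below converts an assumed photocenter displacement into a differential phase for an illustrative baseline and wavelength; none of the numbers refer to an actual measurement.
\begin{verbatim}
import numpy as np

B = 40.0                    # projected baseline [m]
lam = 656e-9                # H-alpha wavelength [m]
u = B / lam                 # spatial frequency [rad^-1]
mas = np.pi / 180 / 3600e3  # 1 mas in radians
eps = 0.1 * mas             # assumed photocenter shift, line - continuum
phi = -2 * np.pi * u * eps
print(np.degrees(phi))      # differential phase, about -10.6 degrees
\end{verbatim}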
A growing number of theoretical predictions of photocenter positions
across emission or absorption lines are available, concerning both
the environment of stars (Stee 1996, Dessart \& Chesneau 2002)
and the underlying photosphere (Domiciano de Souza et al. 2002,
2004).
\section{MIDI Interferometer}
MIDI is the first 10~$\mu$m interferometric instrument worldwide using
the full available atmospheric transmission window. Due to the MIR
radiation of the environment and the optical setup itself, most of the
instruments optics is inside a dewar and is cooled to cryogenic
temperatures. The incoming afocal VLTI beams are combined on the
surface of a 50:50 beam splitter, which is the heart of the
instrument. Spectral information from 8~$\mu$m to 13.5~$\mu$m can be
obtained by spectrally dispersing the image using a prism for low
(R=30), or a grism for intermediate (R=270) spectral resolution. For
source acquisition a field of 3" is available. This small area on the
sky represents about 10 times the Airy disk of a single UT
telescope. This field is useful especially in case of extended objects
like the S\,Doradus variable (also called LBV) $\eta$ Car (Chesneau et
al. 2004a) or some Herbig stars (like HD\,100\,546, Leinert et
al. 2004).
MIDI measures the degree of coherence between the interfering
beams (i.e. the object visibility) by artificially stepping the
optical path difference between the two input beams rapidly, using
its internal delay lines. The result is an intensity signal
modulated with time from which the fringe amplitude can be
determined. The total (uncorrelated) flux is determined separately
by chopping between the object and an empty region of the sky, and
determining the source flux by subtraction. In this mode MIDI is
working like a conventional mid-infrared camera. An example of a
resulting image in case of the observation of $\eta$\,Carinae is
shown in Chesneau et al. (2004a), which demonstrates the excellent
imaging capabilities of the MIDI/VLTI infrastructure, even though
the light is relayed via 31 mirrors and 5 transmissive elements
before it reaches the detector.
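The principle of this temporal fringe estimation can be illustrated with a short sketch; the Gaussian coherence envelope, the scan range and the contrast below are arbitrary assumptions, and this is in no way MIDI's actual data-reduction pipeline.
\begin{verbatim}
import numpy as np

lam = 10e-6                              # wavelength [m]
opd = np.linspace(-20e-6, 20e-6, 512)    # scanned path difference [m]
V_true, I_tot = 0.3, 1.0                 # fringe contrast, total flux

# Detected signal for a scanned fringe packet (illustrative envelope)
envelope = np.exp(-(opd / 15e-6)**2)
signal = I_tot * (1 + V_true * envelope * np.cos(2 * np.pi * opd / lam))

# Fringe amplitude from the Fourier component at the fringe frequency;
# this recovers V_true times the mean envelope over the scan (~0.6 here)
carrier = np.exp(-2j * np.pi * opd / lam)
amp = 2 * np.abs(np.mean((signal - signal.mean()) * carrier))
print(amp / I_tot)
\end{verbatim}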
Observing with an interferometer requires accurate preparation.
Useful tools for that are simulation programs like ASPRO (by the
Jean-Mariotti Center, Fr.), SIMVLTI (by the MPIA Heidelberg, Ge.)
or ESO's VLTI visibility calculator. Those software packages make
it possible to get an idea of the expected visibility values for
given parameter setups. For further reference, the reader should
also consult the MIDI web page at
ESO\footnote{http://www.eso.org/instruments/midi/}.
When planning observations with MIDI, a few constraints have to be
kept in mind. Of course, the object should be bright enough in the
mid-IR to be measured with MIDI. However, for self-fringe tracking,
the source not only has to be bright enough in total, but there must
be sufficient flux concentrated in a very compact
($<0.1^{\prime\prime}$) central region, to which the
interferometric measurements will then refer. Also, the source should be
{\em visually} brighter than about 16\,mag, in order to allow the operation of the
MACAO tip-tilt and adaptive optics system.
In addition, one has to consider that interferometry with two
telescopes of the VLTI in a reasonable time of several hours will
provide only a few measured visibility points, i.e.\ only a few points
where the Fourier transform of the object image is determined. The
scientific programme has to be checked beforehand to see whether its main
questions can be answered on this basis (e.g.\ to determine the
diameter of a star one does not need to construct an image of its
surface).
\section{Be stars observed by MIDI}
\subsection{\boldmath$\alpha$ Ara\unboldmath}
The first VLTI/MIDI observations of the Be star $\alpha$~Ara show a
nearly unresolved circumstellar disk in the N band (Chesneau et
al. 2004b). $\alpha$~Ara (HD\,158\,427, B3\,Ve) is one of the closest Be
stars, with an estimated distance of 74\,pc$\pm$6\,pc, based on the
Hipparcos parallax, and color excesses E(V-L) and E(V-12\,$\mu$m)
among the highest of its class. The interferometric measurements made
use of the UT1-UT3 projected baselines of 102~m and 74~m, at two
position angles of 7$^\circ$ and 55$^\circ$, respectively. The object
is mostly unresolved, putting an upper limit to the disk size in the N
band of the order of $\phi_{\rm max}=4$\,mas, i.e.\ 14 $R_\star$ at
74~pc and assuming $R_\star=4.8{\rm R_\odot}$, based on the spectral
type.
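The conversion behind this upper limit is a one-liner, reproduced below from the distance and stellar radius quoted above:
\begin{verbatim}
import numpy as np

mas = np.pi / 180 / 3600e3   # 1 mas in radians
pc = 3.086e16                # parsec [m]
R_sun = 6.957e8              # solar radius [m]

phi_max = 4 * mas            # upper limit on the angular diameter
d = 74 * pc
size = phi_max * d           # linear size [m]
print(size / (4.8 * R_sun))  # ~13, of order the ~14 R_star quoted above
\end{verbatim}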
On the other hand, the density of the disk is large enough to produce
strong Balmer emission lines. The SIMECA code developed by Stee (1995)
and Stee \& Bittar (2001) has been used for the interpretation.
Optical spectra from the {\sc Heros} instrument, taken 1999, when the
instrument was attached to the ESO-50cm telescope, and infrared ones
from the 1.6m Brazilian telescope have been used together with the
MIDI visibilities to constrain the system parameters. In fact, these
two observations, spectroscopy vs.\ interferometry, put complementary
constraints on the density and geometry of the $\alpha$~Ara
circumstellar disk. It was not possible to find model parameters that
at the same time are able to reproduce the observed spectral
properties, both in the emission lines and in the continuum
flux-excess, and the interferometric null-result, meaning that the
disk must be smaller than 4\,mas.
However, the Hydrogen recombination line profiles of $\alpha$~Ara
exhibit (quasi?)-periodic short-term variations of the
violet-to-red peak heights of the emission profiles
($V/R$-variability, see Mennickent \& Vogt, 1991 and this study).
The radial velocity of the emission component of the Balmer lines
changes in a cyclic way as well in these lines. This may point to
a possible truncation of the disk by a putative companion, that
could explain the interferometric observations.
Using the NPOI interferometer, Tycner et al. (2004) have
recently studied the disk geometry of the Be star $\zeta$~Tau,
which is also a well-investigated spectroscopic binary
(P$\sim$133d, K$\sim$10\,km\,s$^{-1}$). They measured the disk
extension quite accurately to be well within the Roche radius.
This suggests also that this disk may be truncated.
\vspace{5mm}
\subsection{\boldmath$\delta$\,Cen\unboldmath}
$\delta$\,Cen (HD\,105\,435, B2\,IVe, F$_{12\mu m}$=15.85\,Jy),
situated at about 120\,pc, is one of the very few Be stars which
has been detected at centimeter wavelengths (Clark et al.\ 1998)
and also the only star for which significant flux has been
measured at 100$\mu$m (Dachs et al. 1988). These two measurements
suggest an extended disk, contrary to the case of $\alpha$~Ara.
However, recent VLTI/MIDI observations of $\delta$\,Cen during
Science Demonstration Time (programme conducted by D.\ Baade) with
a baseline of 91\,m have not been able to resolve the disk of this
object either. It must be stressed that these observations
have been conducted under much better atmospheric conditions than
those for $\alpha$\,Ara, leading to a well constrained upper limit
of 4$\pm$0.5\,mas for the equivalent Uniform Disk diameter.
This is roughly the same size as was determined for other Be stars
using interferometry in the wavelength region of H$\alpha$. Note that
for these H$\alpha$ observations the baseline was much less than the
one used here, about 40\,m vs.\ 100\,m. That both datasets still come
up with the same angular resolution is due to the scaling of the
spatial frequency with the effective wavelength introduced in
Sect.~2.1.
Whether or not a Be star disk should be resolved in the mid-IR
depends on the model one adopts for such a disk. Based on modelling of
the Balmer-line emission, at least one model does predict a resolved
disk, while others don't (see Chesneau et al., 2005, for a detailed
discussion). In this sense, even null-results provide important
constraints to our understanding of Be star disks.
\section{Conclusions}
Although they are null results, these observations, as others before, have
shown the potential discriminating power of interferometric
observations for the current open questions of Be star research.
Long baseline interferometry is now able to provide a complete set of
observations from the visible to the thermal infrared at high angular
and spectral resolution, opening a new era for the study of the Be
phenomenon. In particular, this technique is now able to study the
complex interplay between the distorted photospheres of fast rotators,
affected by the von Zeipel effect, and their immediate surroundings by
means of spectrally resolved NIR and MIR observations. The
Guaranteed Time Document of
VLTI/AMBER\footnote{available at http://www-laog.obs.ujf-grenoble.fr/amber/}
gives a good idea of the possibilities opened up by this new instrument.
The first VLTI/MIDI observations of Be stars have demonstrated the
need for long baselines at these wavelengths in order to resolve the
disk of even the few closest (and brightest) Be stars. The VLTI 1.8\,m
Auxiliary Telescopes (ATs) AT1 and AT2 are currently being
commissioned at Paranal observatory and should be able to observe
their first fringes in mid-2005. The ATs are movable telescopes which
can project baselines from 8\,m to 200\,m onto the sky. Such long
baselines should be perfectly suited for MIDI to study the inner disk
of Be stars, and for AMBER to observe the star itself, whereas the
shorter ones would allow AMBER to study the close environment from the
photosphere to several stellar radii.
\vspace{15mm}
\references
\ritem B\'{e}rio, P., Stee, Ph., Vakili, F., et al., 1999, A\&A, 345, 203
\ritem Chelli, A., Petrov, R.G., 1995, A\&A Suppl. Ser., 109, 389
\ritem Chesneau, O., Min, M., Herbst, T., et al., 2004a, A\&A, submitted
\ritem Chesneau, O., Meilland, A., Stee, Ph., et al., 2004b, A\&A, submitted
\ritem Clark, J.S., Steele, I.A., Fender, R.P., 1998, MNRAS, 299, 1119
\ritem Dachs, J., Engels, D., Kiehling, R., 1988, A\&A, 194, 167
\ritem Dessart, L., Chesneau, O., 2002, A\&A, 395, 209
\ritem Domiciano de Souza, A., Vakili, F., Jankov, S., 2002, A\&A, 393, 345
\ritem Domiciano de Souza, A., Kervella, P., Jankov, S., et al., 2003, A\&A, 407, L47
\ritem Domiciano de Souza, A., Zorec, J., Jankov, S., et al., 2004, A\&A, 418, 781
\ritem Leinert, Ch., van Boekel, R., Waters, L.B.F.M., 2004, A\&A, 423, 537
\ritem Monnier, J., 2003, Reports on Progress in Physics, 66, 789
\ritem Mourard, D., Bosc, I., Labeyrie, A., et al., 1989, Nature, 342, 520
\ritem Ohishi, N., Nordgren, T.E., Hutter, D.J., 2004, ApJ, 612, 463
\ritem Quirrenbach, A., Bjorkman, K.S., Bjorkman, J.E., et al., 1997, ApJ, 479, 477
\ritem Quirrenbach, A., 2001, Annual Review of Astronomy and Astrophysics, 39, 353
\ritem Quirrenbach, A., Buscher, D.F., Mozurkewich, D., 1994, A\&A, 283, L13
\ritem Quirrenbach, A., Hummel, C.A., Buscher, D.F., et al., 1993, ApJ, 416, L25
\ritem Stee, Ph., de Araujo, F.X., 1994, A\&A, 292, 221
\ritem Stee, Ph., de Araujo, F.X., Vakili, F., 1995, A\&A, 300, 219
\ritem Stee, Ph., 1996, A\&A, 311, 945
\ritem Stee, Ph., Gies, D., 2005, ASP Conf. Ser., in press
\ritem Thom, C., Granes, P., Vakili, F., 1986, A\&A, 165, L13
\ritem Tycner, Ch., Hajian, A.R., Armstrong, J.T., et al., 2004, AJ, 127, 1194
\ritem Vakili, F., Mourard, D., Stee, Ph., et al., 1998, A\&A, 335, 261
\ritem Vakili, F., Mourard, D., Bonneau, D., 1997, A\&A, 323, 183
\ritem Vakili, F., Bonneau, D., Lawson, P.R., 1994, SPIE, 2200, 216
\ritem van Belle, G.T., Ciardi, D.R., Thompson, R.R., et al., 2001, ApJ, 559, 1155
\ritem von Zeipel, H., 1924, MNRAS, 84, 665
\end{document}
\section{Introduction}
Image inpainting (a.k.a. image completion), which aims
to fill missing regions of an image, has been an active research topic in computer vision for decades.
Despite the great progress made in recent years~\cite{lahiri2020prior,suin2021distillation,zhou2021transfill,yi2020contextual,nazeri2019edgeconnect,iizuka2017globally,liu2018image,xiong2019foreground,ren2019structureflow,liao2021image,xiao2019cisi,yu2020region,yang2020learning,yang2017high,wangimage,pathak2016context,song2018contextual,ren2019structureflow}, image inpainting remains a challenging problem due to its inherent ambiguity and the complexity of natural images. Therefore, various guided inpainting methods have been proposed that exploit external guidance information such as exemplars~\cite{kwatra2005texture,zhao2019guided,zhou2021transfill}, sketches~\cite{liu2021deflocnet,yang2020deep,jo2019sc,portenier2018faceshop,yu2019free}, label maps~\cite{ardino2021semantic},~\etc. However, previous work on image inpainting mainly focuses on inpainting the background or partially missing objects. The problem of inpainting an entire missing object is still unexplored. In this paper, we study a new guided inpainting task,~\ie shape-guided object inpainting, where the guidance is implicitly given by the object shape. As shown in Fig.~\ref{teaser}, given an incomplete input image, the goal is to generate a new object to fill the hole. It can be used in various practical applications such as object re-generation, object insertion, and object/person anonymization.
This task has a similar input and output setup to the traditional image inpainting task; both take an incomplete/masked image and the hole mask as input to produce a complete image as output. However, previous methods are mainly designed for background inpainting and are not suitable for this object inpainting task.
Early patch-based synthesis methods borrow content from the remaining image to fill the hole. These methods are hardly fit for this task, as they cannot generate novel content.
Recent deep generative inpainting methods should be able to inpaint both background and objects, but in practice, they still have a strong bias towards background generation~\cite{katircioglu2020self}.
The reason lies in both the training strategy and the model architecture of previous deep learning based approaches.
First, previous methods synthesize the training data by simply masking images at random positions, with different regions masked at equal probability. Since the appearance of background patches is usually similar to their surroundings, it is easier to learn to extend the surrounding background to fill a hole than to generate objects.
Second, previous methods formulate image inpainting as a bottom-up context-based process that uses stacked convolution layers to propagate context information from the known region to the missing regions. However, object generation is essentially a top-down process: it starts from a high-level concept of the object and gradually hallucinates the concrete appearance centered around that concept. Without any top-down guidance, it is hard to generate a reasonable object with a consistent semantic meaning.
Therefore, in order to find a better solution, we design a new data preparation method and a new generative network architecture for the object inpainting task. On the data side, to overcome the bias towards the background, we incorporate object prior by using object instances as holes in training. For the network architecture, we consider three important goals of object inpainting:
(1) visual coherency between the appearance of generated and existing pixels;
(2) semantic consistency within the inpainted region,~\ie the generated pixels should constitute a reasonable object;
(3) high-level coherency between the generated objects and the context.
To achieve these goals, we propose a contextual object generator (CogNet) with a two-stream network architecture.
It consists of a bottom-up and a top-down stream, which model a bottom-up and a top-down generation process, respectively.
The bottom-up stream resembles a typical framework used by previous approaches to achieve appearance coherency. It takes the incomplete image as input and fills the missing region based on contextual information extracted from the existing pixels.
The bridge between the two streams is a predictive class embedding (PCE) module. It predicts the class of the missing object based on features from the bottom-up stream to encourage high-level coherency.
The top-down stream is inspired by semantic image synthesis~\cite{isola2017image,park2019semantic} and has a similar framework.
It aims to hallucinate class-related object features based on a semantic object map obtained by combining the predicted class and the hole mask. Since the features at all object pixels are generated from the same class label, their semantic consistency can be ensured.
In summary, our contributions are as follows:
\begin{itemize}
\item We explore a new guided image inpainting task,~\ie shape-guided object inpainting.
\item We propose a new data preparation method and a novel Contextual Object Generator (CogNet) model for object inpainting.
\item Experiments demonstrate that the proposed method is effective for the task and achieves superior performance against state-of-the-art inpainting models finetuned for the task.
\end{itemize}
\section{Related Work}
\subsection{Image Inpainting}
Conventional image inpainting methods fill the holes by borrowing existing content from the known region. Patch-based methods search for well-matched patches from the known part of the input image as replacement patches to fill in the missing region. Efros~\etal~\cite{efros1999texture} propose a non-parametric sampling method for texture synthesis that can synthesize images by sampling patches from an example texture; it can be applied to hole-filling through constrained texture synthesis. Drori~\etal~\cite{drori2003fragment} propose to iteratively fill missing regions from high to low confidence with similar patches. Barnes~\etal~\cite{barnes2009patchmatch} propose a randomized algorithm for quickly finding matched patches for filling missing regions in an image. Diffusion-based methods propagate local image appearance surrounding the missing region based on the isophote direction field. Bertalmio~\etal~\cite{10.1145/344779.344972} propose to smoothly propagate information from the surrounding areas in the isophote direction to fill the missing regions. Ballester~\etal~\cite{ballester2001filling} propose to jointly interpolate the image gray-levels and gradient/isophote directions to smoothly extend the isophote lines into the holes.
These methods cannot generate entirely new content that does not exist in the input image.
In recent years, driven by the success of deep generative models, extensive research efforts have been put into data-driven deep learning based approaches. This branch of work usually formulates image completion as an image generation problem conditioned on the existing pixels in known regions.
They can generate plausible new content and have shown significant improvements in filling holes in complex images.
The first batch of deep learning based approaches only works on square holes. Iizuka~\etal~\cite{iizuka2017globally} propose to use two discriminators to train a conditional GAN to make the inpainted content both locally and globally consistent. Yu~\etal~\cite{yu2018generative} propose contextual attention to explicitly utilize surrounding image features as references in the latent feature space. Zeng~\etal~\cite{zeng2019learning} propose to use region affinity from high-level features to guide the completion of missing regions in low-level features. Later on, the research effort shifted to image completion with irregular holes. Liu~\etal~\cite{liu2018image} collect estimated occlusion/dis-occlusion masks between consecutive video frames, use them to generate holes, and propose partial convolution to exploit information from the known region more efficiently. Yu~\etal~\cite{yu2019free} generate free-form masks by simulating random strokes. They generalize partial convolution to gated convolution, which learns to select features for each channel at each spatial location across all layers. Zeng~\etal~\cite{zeng2020high} use object-shaped holes to simulate real object removal cases and propose an iterative inpainting method with a confidence feedback mechanism.
The above deep learning based methods mainly focus on background inpainting. In training, images are masked at random positions, resulting in a bias towards background, as background is usually more predictable in most images. In addition, some methods use attention mechanisms to explicitly borrow patches/features from known regions~\cite{yu2018generative,yu2019free,zeng2019learning,zeng2020high,zhang2019residual,liu2019coherent}, as in the conventional methods, which can be seen as a background prior and further encourages the tendency to generate background. Some previous works on deep learning based inpainting have touched on topics related to object inpainting. Xiong~\etal~\cite{xiong2019foreground} propose a foreground-aware image inpainting system by predicting the contour of salient objects. Ke~\etal~\cite{ke2021occlusion} propose an occlusion-aware inpainting method to inpaint partially missing objects in videos. These methods mainly focus on inpainting partially missing objects.
\subsection{Guided Image Inpainting}
Some works attempt to allow users to provide more guidance to reduce the ambiguity of image inpainting and improve the results. Many types of guidance have been explored, such as exemplar images, sketches, label maps, and text.
Yu~\etal~\cite{yu2019free} propose DeepFillV2, which can perform sketch-guided image inpainting of general images as well as face images.
Jo and Park~\cite{jo2019sc} explore face inpainting with sketches and color strokes as guidance.
Zhang~\etal~\cite{zhang2020text} propose to inpaint the missing part of an image according to text guidance provided by users. Ardino~\etal~\cite{ardino2021semantic} propose to use label maps as guidance for image inpainting. Although the guided inpainting methods \cite{zhang2020text} and \cite{ardino2021semantic} might be able to generate an entirely new object if the text or label map for the object is given as guidance, they require the users to provide the external guidance explicitly. In comparison, our method only takes the incomplete image and hole mask as input.
\subsection{Semantic Image Synthesis}
Semantic image synthesis is a sub-class of conditional image generation which aims to generate photo-realistic images from user-specified semantic layouts. It was first introduced by Isola~\etal~\cite{isola2017image}, who proposed an image-to-image translation framework, called Pix2Pix, to generate images from label maps or edge maps.
Zhu~\etal~\cite{zhu2017unpaired} propose CycleGAN to allow training an image translation model on unpaired data with a cycle consistency constraint. Park~\etal~\cite{park2019semantic} propose spatially-adaptive normalization for semantic image synthesis, which modulates the activations using semantic layouts to propagate semantic information throughout the network. Chen~\etal~\cite{chen2017photographic} propose cascaded refinement networks and use perceptual losses for semantic image synthesis. Wang~\etal~\cite{wang2018high} propose Pix2PixHD which improves the quality of synthesized images using feature matching losses, multiscale discriminators and an improved generator. Our method takes inspiration from semantic image synthesis methods to design the top-down stream of the contextual object generator. Unlike semantic image synthesis, where the semantic layouts or label maps are known, our semantic object maps are derived by combining the predicted class and the hole mask.
\subsection{Background-based Object Recognition}
Object recognition is the task of categorizing an image according to its visual content. In recent years, the availability of large-scale datasets and powerful computers made it possible to train deep CNNs, which achieved breakthrough success in object recognition~\cite{krizhevsky2012imagenet}.
Normally, an object recognition model categorizes an object primarily by recognizing the visual patterns in the foreground region. However, recent research has shown that a deep network can produce reasonable object recognition results with only the background available. Zhu~\etal~\cite{zhu2016object} find that the AlexNet model~\cite{krizhevsky2012imagenet} trained on pure background without objects achieves highly reasonable recognition performance that beats human recognition in the same situations.
Xiao~\etal~\cite{xiao2020noise} analyze the performance of state-of-the-art architectures on object recognition with foreground removed in different ways. It is reported that the models can achieve over 70\% test accuracy in a no-foreground setting where the foreground objects are masked. These works aim to predict only the class of an object from background. In this paper, we show that the entire object can be generated based on the background.
\section{Method}
Given an input image with missing regions, our goal is to fill the missing region with generated objects. We take a data-driven approach based on generative adversarial networks (GANs)~\cite{goodfellow2014generative,radford2015unsupervised,karras2017progressive,brock2018large,karras2019style,karras2020analyzing}. A contextual object generator is designed to generate objects, based on the context, that not only fit the known region but also carry reasonable semantic meanings.
The generator is jointly trained with a discriminator on a synthetic dataset obtained by masking object regions in real images. We use the discriminator proposed in \cite{karras2019style,karras2020analyzing}. In what follows, we introduce our data acquisition approach in Sec.~\ref{sec:data} and the network architecture of the generator in Sec.~\ref{sec:gen}.
\subsection{Data Preparation}
\label{sec:data}
\begin{figure}
\begin{center}
\centering
\includegraphics[width=\textwidth]{fig0.pdf}\\
\scriptsize{\hfill\hfill {Previous approaches} \hfill\hfill {\textcolor{white}{ious app}Ours\textcolor{white}{roaches}} \hfill\hfill}
\caption{Top: input. Bottom: original image and ground-truth. Previous deep learning based inpainting methods generate training data by masking at random positions, which results in a bias towards background generation. We propose to incorporate object prior into training data by masking object instances. }
\label{fig0}
\vspace{-10pt}
\end{center}%
\end{figure}
Most deep learning based image inpainting methods prepare data by masking images at random positions using synthetic masks obtained by drawing random rectangles~\cite{zeng2019learning,yu2018generative,yang2017high}, brush strokes, or from a fixed set of irregular masks~\cite{liu2018image,zeng2020high,liu2020rethinking}. Paired training data $\{(x',m),x\}$ can be formed by taking the masked image $x'=x\odot m$ (where the binary mask $m$ is zero inside the hole and one elsewhere) and the mask $m$ as input, with the original image $x$ as ground truth.
This data synthesis pipeline can generate a very large dataset for training a powerful deep model capable of completing large holes and dealing with complex scenes.
Although this random masking process produces diverse data with masks on both background and object regions, the trained model often has a strong tendency to generate background as background is more common and easier to predict than objects~\cite{joung2012reliable,katircioglu2019self}.
In this work, since we aim to train an image completion model to generate objects, the random masking process is not suitable. Therefore, we design a new data synthesis method that incorporates the object prior into training data by using object instances as holes.
For an image $x$, its instance segmentation $\{m^i, y^i\}_{i=1}^c$ can be obtained by manual annotation or using segmentation models, where $m^i, y^i$ are the mask and class of each object instance (with $m^i$ equal to one on the instance pixels), and $c$ denotes the number of instances. Then $c$ training samples $\{(x'^i,m^i),x\}_{i=1}^c$ can be constructed by masking out each instance region: $x'^i=x\odot (1-m^i)$.
There exist datasets such as COCO~\cite{lin2014microsoft} with manually annotated segmentation masks, which can be used to construct high-quality training samples for object-based image completion. However, these datasets are limited in size and are not sufficient for representing the complexity of objects in natural images. To obtain larger and more diverse training data, we can use instance segmentation models to automatically label a larger dataset with instance masks complementary to the manually annotated segmentation datasets. Although the automatically annotated masks are less accurate, they still cover most object regions and thus can provide a reasonable object prior.
Fig.~\ref{fig0} compares our training samples with object instances as holes and the randomly generated training samples used in previous approaches.
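To make this construction concrete, the following minimal sketch builds training pairs from an image and its instance masks under the convention above, where $x\odot(1-m^i)$ removes the instance region. It is illustrative only; the function name and NumPy-based interface are ours, not released code.
\begin{verbatim}
import numpy as np

def make_training_samples(image, instance_masks):
    """Build object-inpainting training pairs for one image.

    image:          float array of shape (H, W, 3).
    instance_masks: list of binary (H, W) arrays, 1 on the instance.
    Returns a list of ((masked_image, hole_mask), target) pairs.
    """
    samples = []
    for m in instance_masks:
        hole = m.astype(image.dtype)              # 1 inside the hole
        masked = image * (1.0 - hole)[..., None]  # remove the instance
        samples.append(((masked, hole), image))
    return samples
\end{verbatim}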
\subsection{Network Architecture}
\label{sec:gen}
\begin{figure}
\begin{center}
\centering
\includegraphics[width=\textwidth]{fig1.pdf}
\caption{Illustration of the two-stream network architecture. It consists of a bottom-up stream and a top-down stream. The bottom-up stream models the standard image inpainting process, which takes an incomplete image as input to produce a complete image. The predictive class embedding (PCE) predicts the object class label based on features from the bottom-up stream and embeds it into an embedding vector. The top-down stream generates an image conditioned on the semantic object map. The two streams share the same generator. }
\label{fig1}
\vspace{-10pt}
\end{center}%
\end{figure}
In this section, we present the network architecture of the proposed contextual object generator (CogNet).
Unlike the traditional image completion task, which only focuses on the consistency between the inpainted region and the context, object-based image completion also requires the inpainted content to be an object with a coherent semantic meaning.
Previous network architectures for image completion are mainly designed as a bottom-up process to propagate information from known regions to missing regions. The generated content can blend naturally into the context but rarely resembles an object due to the lack of top-down guidance.
To solve this problem, we design a two-stream architecture that combines the traditional image inpainting framework with a top-down object generation process inspired by the semantic image synthesis task~\cite{isola2017image,park2019semantic}. The overall structure is shown in Fig.~\ref{fig1}. Each stream has an independent encoder that takes input from the corresponding domain; the two streams interact with each other through the shared generator.
\subsubsection{Bottom-up Process}
The bottom-up stream $g^b$ follows the standard design of an image inpainting model. It takes an incomplete RGB image $x' \in X$ and the hole mask $m$ as input and produces an inpainted RGB image $\hat{x} \in X$,~\ie $g^b:X\rightarrow X$.
Given an incomplete input image, the encoder extracts hierarchical features from the raw pixels of the known region. It consists of a sequence of $L$ convolutional blocks with a $2\times$ downsample operator between every two consecutive blocks. For an input of size $N\times N$, the encoder produces a series of feature maps $\{ f^{b,l} \}_{l=0}^{L-1}$ of various scales, where each feature map $f^{b,l}$ is of size $\frac{N}{2^l}$. Then the multi-scale feature maps $\{ f^{b,l} \}$ are used to modulate the generator features of the corresponding scale through the spatial-channel adaptive instance normalization (SC AdaIN) layers.
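A minimal PyTorch-style sketch of such an encoder is shown below. It is illustrative only: the channel widths, the number of blocks, and the use of average pooling are assumptions, as the exact configuration is deferred to the supplementary material.
\begin{verbatim}
import torch.nn as nn

class BottomUpEncoder(nn.Module):
    """L conv blocks with 2x downsampling in between; returns the
    multi-scale feature maps {f^{b,l}}, where f^{b,l} has spatial
    size N / 2^l for an N x N input."""

    def __init__(self, in_ch=4, width=64, num_blocks=5):
        super().__init__()
        self.blocks = nn.ModuleList()
        ch = in_ch                       # RGB image + hole mask
        for l in range(num_blocks):
            out = width * 2 ** min(l, 3)
            self.blocks.append(nn.Sequential(
                nn.Conv2d(ch, out, 3, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(out, out, 3, padding=1), nn.LeakyReLU(0.2)))
            ch = out
        self.down = nn.AvgPool2d(2)

    def forward(self, x):
        feats = []
        for l, block in enumerate(self.blocks):
            if l > 0:
                x = self.down(x)         # 2x downsample between blocks
            x = block(x)
            feats.append(x)              # f^{b,l}
        return feats
\end{verbatim}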
\subsubsection{Predictive Class Embedding}
The bottom-up stream can capture the environmental factors that affect the object's appearance, such as color, illumination, and style. However, the class-related information is still missing.
As recent studies~\cite{xiao2020noise,zhu2016object} have indicated, models can achieve reasonable object recognition performance by relying on the background alone. Based on this observation, we propose a predictive class embedding module to map the background features into object class embeddings by learning a background-based object recognition model.
First, the feature $f^{b,L-1}$ of the last block of the encoder is reshaped and transformed by a fully connected layer into a feature vector $h$. Then a linear classifier is trained to predict the object class given $h$ as input by minimizing $\mathcal{L}_c$:
\begin{equation}
\label{eq:loss_cls}
\mathcal{L}_c = -\sum_i t_i \log \hat{t}_i, \mbox{ where } \hat{t}_i = \sigma (W^c h)_i,
\end{equation}
where $t$ is the one-hot encoding of the true class label; $W^c$ is the weight of the linear classifier; $\sigma$ represents the softmax function; $\hat{t}$ represents the predicted class distribution. $h$ can be seen as an embedding of the predicted class and is also passed into the SC AdaIN layers.
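The following PyTorch-style sketch illustrates the PCE computation. It is illustrative: for brevity it average-pools the feature map, whereas the description above reshapes it before the fully connected layer.
\begin{verbatim}
import torch.nn as nn
import torch.nn.functional as F

class PredictiveClassEmbedding(nn.Module):
    """Map the deepest bottom-up feature f^{b,L-1} to an embedding h
    and train a linear classifier (weights W^c) on top of it."""

    def __init__(self, feat_ch, embed_dim, num_classes):
        super().__init__()
        self.fc = nn.Linear(feat_ch, embed_dim)
        self.classifier = nn.Linear(embed_dim, num_classes)  # W^c

    def forward(self, f_top, target=None):
        h = self.fc(f_top.mean(dim=(2, 3)))   # class embedding h
        logits = self.classifier(h)
        t_hat = logits.softmax(dim=-1)        # predicted class dist.
        loss = None
        if target is not None:                # L_c (cross-entropy)
            loss = F.cross_entropy(logits, target)
        return h, t_hat, loss
\end{verbatim}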
\subsubsection{Top-down Process}
In most images, the appearance of objects is less predictable from the context than that of the background. Hence the bottom-up process is less effective for object-based image completion.
Therefore, we design a top-down stream to allow the model to hallucinate appearance features from semantic concepts for object generation.
The top-down stream $g^t:Y\rightarrow X$ is inspired by semantic image synthesis methods,~\ie generating image content from a semantic layout.
Different from standard semantic image synthesis, where the label maps are known, the top-down stream generates an RGB image based on the semantic object maps derived from the predicted class.
More specifically, given the predicted class $\hat{t}$, a semantic object map $y \in Y$ can be derived by combining the predicted class and the hole mask $m$:
\begin{equation}
y_i = \hat{t}_i \cdot m,
\end{equation}
where $y_i$ represents the semantic object map corresponding to the $i$-th class. Then an $L$-layer encoder with a similar structure to the one in the bottom-up stream encodes the semantic object maps into multi-scale feature maps $\{f^{t,l} \}_{l=1}^{L}$. These feature maps will be used to modulate the generator feature maps through SC AdaIN layers to provide spatial aware class-related information to the generator.
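In code, constructing the semantic object map amounts to a single broadcast of the predicted class distribution over the hole region (a sketch; the tensor shapes are assumptions):
\begin{verbatim}
def semantic_object_map(t_hat, hole_mask):
    """y_i = t_hat_i * m for every class channel i.

    t_hat:     (B, C) predicted class probabilities (torch tensors).
    hole_mask: (B, 1, H, W) binary mask, 1 inside the hole.
    Returns the (B, C, H, W) semantic object map."""
    return t_hat[:, :, None, None] * hole_mask
\end{verbatim}
One could equally harden $\hat{t}$ to a one-hot vector of its argmax before broadcasting; the equation above uses the soft distribution directly.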
\subsubsection{SC AdaIN}
\begin{figure}
\begin{center}
\centering
\includegraphics[width=\textwidth]{fig2.pdf}
\caption{Illustration of the spatial-channel adaptive instance normalization module. It consists of two steps of normalization and modulation in the channel and spatial dimensions, respectively. }
\label{fig2}
\vspace{-10pt}
\end{center}%
\end{figure}
Given the environmental features and class inferred from the background, there can still be many possible object appearances.
To model the uncertainty in object generation while preserving the information propagated from the encoders, we design the spatial-channel adaptive instance normalization module (SC AdaIN). Fig.~\ref{fig2} illustrates the structure of a SC AdaIN module.
Given an input image, we obtain the multi-scale feature maps $\{f^{b,l}\}, \{f^{t,l}\}$ from the encoders and sample a random latent code $z \sim \mathcal{N}(0,1)$. Then the latent code is transformed by a fully connected network as in~\cite{karras2020analyzing,karras2019style} and concatenated with the class embedding $h$ into a style code $w$.
For each scale $l$, we normalize and modulate the generator feature map channel-wise using the style code $w$ and position-wise using the encoder features.
Let $X^l$ denote the generator feature map at scale $l$; the modulated feature map $\hat{X}^l$ is produced as follows:
\begin{equation}
\bar{X}^l_{c,x,y} = \frac{X^l_{c,x,y}-\mu^l_{c}}{\sigma^l_c}\cdot \gamma^l(w)_c + \beta^l(w)_c
\end{equation}
\begin{equation}
\hat{X}^l_{c,x,y} = \frac{\bar{X}^l_{c,x,y}-\bar{\mu}^l_{x,y}}{\bar{\sigma}^l_{x,y}}\cdot \bar{\gamma}^l(f^{b,l}+f^{t,l})_{c,x,y} + \bar{\beta}^l(f^{b,l}+f^{t,l})_{c,x,y}
\end{equation}
where $\mu^l_c, \sigma^l_c$ are the mean and standard deviation of $X^l$ in channel $c$; $\bar{\mu}_{x,y}, \bar{\sigma}_{x,y}$ are the mean and standard deviation of $\bar{X}^l$ at position $x,y$; $\gamma^l(w), \beta^l(w)$ and $\bar{\gamma}^l(f^l), \bar{\beta}^l(f^l)$ transform the style code $w$ and the encoder feature maps $f^l$ into the modulation parameters at scale $l$.
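The following PyTorch-style sketch implements the two normalization and modulation steps. It is illustrative; the layers used to predict the modulation parameters from $w$ and $f^{b,l}+f^{t,l}$ are assumptions.
\begin{verbatim}
import torch.nn as nn

class SCAdaIN(nn.Module):
    def __init__(self, num_ch, style_dim, feat_ch, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.style = nn.Linear(style_dim, 2 * num_ch)  # gamma(w), beta(w)
        self.spatial = nn.Conv2d(feat_ch, 2 * num_ch, 3, padding=1)

    def forward(self, x, w, f):
        # Step 1: channel-wise normalization, modulated by style code w.
        mu = x.mean(dim=(2, 3), keepdim=True)
        sd = (x - mu).pow(2).mean(dim=(2, 3), keepdim=True) \
                     .add(self.eps).sqrt()
        gamma, beta = self.style(w).chunk(2, dim=1)
        x = (x - mu) / sd * gamma[..., None, None] + beta[..., None, None]

        # Step 2: position-wise normalization, modulated by f = f^b + f^t.
        mu_p = x.mean(dim=1, keepdim=True)
        sd_p = (x - mu_p).pow(2).mean(dim=1, keepdim=True) \
                         .add(self.eps).sqrt()
        gamma_p, beta_p = self.spatial(f).chunk(2, dim=1)
        return (x - mu_p) / sd_p * gamma_p + beta_p
\end{verbatim}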
\section{Experiment}
\subsection{Implementation Details}
We implement our method and train the model using Python and Pytorch~\cite{NEURIPS2019_9015}. We use the perceptual loss~\cite{johnson2016perceptual}, GAN loss~\cite{gulrajani2017improved}, and the loss in Eqn.~\ref{eq:loss_cls} to train the contextual object generator. The detailed network architectures can be found in the supplementary material. The code will be made publicly available after the paper is published.
The model is trained on two A100 GPUs. It takes about a week for training. The inference speed at $256\times256$ resolution is 0.05 seconds per image.
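For concreteness, the generator objective combines the three terms as in the sketch below; the weights shown are placeholders, since the exact values are not reported here.
\begin{verbatim}
def generator_loss(loss_gan, loss_perceptual, loss_cls,
                   lambda_perc=10.0, lambda_cls=1.0):
    """Total generator loss; lambda_perc and lambda_cls are
    hypothetical weights shown for illustration only."""
    return loss_gan + lambda_perc * loss_perceptual \
                    + lambda_cls * loss_cls
\end{verbatim}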
We compare with three state-of-the-art image inpainting methods: DeepFillV2~\cite{yu2019free}, CoModGAN~\cite{zhao2021comodgan}, and RFR~\cite{li2020recurrent}. Since the original models of the compared methods are trained using random masks, directly applying the pretrained models to object inpainting is not appropriate. Therefore, to compare with these methods, we retrain each model on the corresponding dataset using the mask synthesis method described in Sec.~\ref{sec:data}. We evaluate the performance using the metrics FID~\cite{heusel2017gans} and LPIPS~\cite{zhang2018perceptual}, as they are the most commonly used metrics for assessing the quality of generative models~\cite{lucic2018gans} and image-conditional GANs~\cite{albahar2019guided,huang2018multimodal,shen2019towards}.
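For reference, FID compares the Gaussian statistics of Inception features extracted from real and generated images,
\begin{equation}
\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2 + \mathrm{Tr}\big(\Sigma_r + \Sigma_g - 2(\Sigma_r \Sigma_g)^{1/2}\big),
\end{equation}
where $(\mu_r,\Sigma_r)$ and $(\mu_g,\Sigma_g)$ are the feature means and covariances of the real and generated sets; lower values indicate that the generated distribution is closer to the real one.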
\subsection{Datasets}
We train and evaluate our model on three datasets: COCO~\cite{lin2014microsoft}, Cityscapes~\cite{Cordts2016Cityscapes} and Places2~\cite{zhou2017places}, which are commonly used in image inpainting, semantic segmentation, and semantic image synthesis. Note that the segmentation maps are only required during training; in the inference stage, only an input image and hole mask are needed.
We use the official training split to train the models and evaluate them on the official validation split. All images are cropped into $256\times 256$ patches during training and evaluation.
The Cityscapes dataset contains segmentation ground truth for objects in city scenes such as roads, lanes, vehicles, and objects on roads. This dataset contains 30 classes collected under different environmental and weather conditions in 50 cities. It provides dense pixel-level annotations for 5,000 images pre-split into training (2,975), validation (500) and test (1,525) sets. Since Cityscapes provides accurate segmentation ground truth, it can be directly used for training our model.
The COCO dataset is a large-scale dataset designed to represent a vast collection of common objects. This dataset is split into a training split of 82,783 images, a validation split of 40,504 images, and a test split of 40,775 images.
There are 883,331 segmented object instances in COCO dataset.
The object masks in COCO dataset are given by polygons. To obtain more accurate object masks, we preprocess the COCO object masks using a segmentation refinement method~\cite{cheng2020cascadepsp}.
The Places2 dataset is a large-scale dataset for scene recognition and contains about 10 million images covering more than 205 scene categories.
For Places2 dataset, since there is no segmentation ground-truth available, we annotate the object masks using a segmentation method~\cite{li2021fully}.
\subsection{Comparison with State-of-the-art Methods}
\subsubsection{Qualitative evaluation}
Fig.~\ref{fig_results} shows the object inpainting results of the proposed method and state-of-the-art methods. Fig.~\ref{fig_diverse} shows multiple diverse results produced by our method for the same input images.
Since the existing deep learning based inpainting methods mainly focus on the coherency of appearance between inpainted regions and known regions and only model the bottom-up generation process, they do not perform well for object inpainting. Even when trained on the object datasets, the object inpainting results of the previous approaches are still far from satisfactory. As we can see from the results, DeepFillV2 usually generates a colored shape hardly resembling an object. Benefiting from the powerful StyleGAN architecture, CoModGAN can produce relatively more object-like results, but often without a consistent semantic meaning,~\eg, the horse with giraffe patterns as shown in the right column of the third row.
In comparison, our method combines the bottom-up and the top-down generation processes to achieve both low-level and high-level coherency between the generated content and its surroundings.
Our method can generate objects that blend naturally into the context in terms of both appearance and semantic meaning. The object appearance is consistent with the environment,~\eg lighting, color, and style, and is also well aligned with the corresponding semantic class.
\begin{figure}[t]
\begin{center}
\centering
\includegraphics[width=\textwidth]{results.pdf}\\
\scriptsize{\hfill{Input} \hfill\hfill {DeepFillV2} \hfill\hfill {CoModGAN} \hfill\hfill {Ours} \hfill\hfill {Input} \hfill\hfill {DeepFillV2} \hfill\hfill {CoModGAN} \hfill\hfill {Ours} \hfill}
\caption{Object inpainting results of our method and state-of-the-art methods. Our method can generate objects coherent with the context in terms of both appearance and semantic meanings, while the generated contents of previous approaches seldom resemble reasonable objects.
}
\label{fig_results}
\vspace{-10pt}
\end{center}%
\end{figure}
\subsubsection{Quantitative Evaluation}
Table~\ref{table_lpips} reports quantitative evaluation results on the COCO, Places2, and Cityscapes datasets. The evaluation results show that our method outperforms the state-of-the-art methods on all metrics, with a significantly lower FID in particular.
Since FID measures the distance between the distributions of deep features of generated and real images, the lower FID scores imply that the distribution of objects generated by our model is closer to that of natural objects.
This further demonstrates the superiority of our method in terms of object inpainting.
\begin{table}[t]
\caption{\small Quantitative evaluation results. }
\vspace{-0pt}
\label{table_lpips}
\small
\begin{center}
\begin{tabular}{c||cc|cc|cc}
\hline
& \multicolumn{2}{c|}{COCO} &\multicolumn{2}{c|}{Places2} &\multicolumn{2}{c}{Cityscapes}\\
Method&FID &LPIPS &FID &LPIPS &FID &LPIPS\\
\hline
CoModGAN &7.693&0.1122 &7.471&0.1086 &8.161&0.0491\\
DeepFillV2 &10.56&0.1216 &8.751&0.1201 &10.56&0.0542\\
RFR &13.38&0.1141 &14.22&0.1125 &15.92&0.0497\\
Ours &\textbf{4.700}&\textbf{0.1049} &\textbf{3.801}&\textbf{0.0928} &\textbf{7.411}&\textbf{0.0458}\\
\hline
\end{tabular}
\end{center}
\vspace{-0pt}
\end{table}
\subsection{Ablation Study}
In this section, we discuss the effect of each component. First, different from previous work on image inpainting, which generates the training data using random masks, we construct specialized training data for object inpainting to incorporate the object prior. Without this prior, the trained inpainting model usually has a bias towards background generation and will not generate objects when filling a missing region, as shown in Fig.~\ref{fig_ablation} (b). The predictive class embedding (PCE) extracts class-related information from the context. Without this module, the model trained on object data might be able to produce object-like content. However, it is challenging to generate a semantically reasonable object without knowing the object's class. As shown in Fig.~\ref{fig_ablation} (c), the appearance of the generated objects is usually simply taken from the nearby regions. For instance, in the second row, the model without PCE generates an object of zebra shape but with the texture of a nearby giraffe.
The top-down stream takes the semantic object mask as input, which provides stronger spatial semantic guidance for object generation. Without this information, the model can only access class-related information from PCE, which is insufficient for hallucinating object appearance. Hence the model will still rely on the appearance of the surrounding area. As shown in Fig.~\ref{fig_ablation} (d), although the model without the top-down stream can produce some zebra stripes, the color of the zebra seems to be taken from the surrounding background area. Table~\ref{table_ablation_score} reports FID scores with and without each component. We can see that the predictive class embedding and the incorporation of the top-down stream significantly reduce the FID by providing class-related information.
\begin{figure}[t]
\begin{center}
\centering
\includegraphics[width=.9\textwidth]{ablation.pdf}
\caption{From left to right are: (a) input, (b) without object training data, (c) without predictive class embedding, (d) without top-down stream, (e) full model. }
\label{fig_ablation}
\vspace{-10pt}
\end{center}%
\end{figure}
\begin{figure}[t]
\begin{center}
\centering
\includegraphics[width=.9\textwidth]{d1.pdf}\\
\includegraphics[width=.9\textwidth]{d2.pdf}\\
\includegraphics[width=.9\textwidth]{d3.pdf}\\
\includegraphics[width=.9\textwidth]{d4.pdf}\\
\includegraphics[width=.9\textwidth]{d5.pdf}\\
\caption{Our method can produce multiple diverse object inpainting results for the same input image by using different random latent code $z$. }
\label{fig_diverse}
\vspace{-10pt}
\end{center}%
\end{figure}
\begin{table}[t]
\caption{\small Effect of each component in terms of FID and LPIPS. }
\label{table_ablation_score}
\small
\begin{center}
\begin{tabular}{cccc||cc}
\hline
& Object Data & PCE & Top-down &FID&LPIPS\\
\hline
& $\surd$ & & &6.144 &0.1066\\
& $\surd$ & $\surd$ & &5.434 &0.1081\\
& $\surd$ & $\surd$ & $\surd$ &4.700 &0.1049\\%769
\hline
\end{tabular}
\end{center}
\vspace{-0pt}
\end{table}
\section{Conclusion and Future Work}
We study a new image inpainting task,~\ie shape-guided object inpainting. We find that existing image inpainting methods are not suitable for object inpainting due to the bias towards background and a lack of top-down guidance. Therefore, we design a new data preparation method that incorporates object priors by using object instances as holes and propose a Contextual Object Generator (CogNet) with a two-stream network architecture that combines the bottom-up image completion process with a top-down object generation process.
Experiments demonstrate that the proposed method can generate realistic objects that fit the context in terms of both visual appearance and semantic meanings.
The proposed method can be easily extended to inpaint partially missing objects by using partial instance masks in training. This can be an interesting topic for future work.
\clearpage
\bibliographystyle{splncs04}
\section{Introduction}
A text is not a simple collection of isolated sentences. These sentences generally appear in a certain order and are connected with each other through logical or semantic means to form a coherent whole. In recent years, modelling beyond the sentence level has been attracting more attention, and different natural language processing (NLP) tasks use discourse-aware models to obtain better performance, such as sentiment analysis~\citep{bhatia-etal-2015-better}, automatic essay scoring~\citep{nadeem-etal-2019-automated}, machine translation~\citep{sim-smith-2017-integrating}, text summarization~\citep{xu-etal-2020-discourse} and so on.
As discourse information typically involves the interaction of different levels of linguistic phenomena, including syntax, semantics, pragmatics and information structure, it is difficult to represent and annotate. Different discourse theories and discourse annotation frameworks have been proposed. Accordingly, discourse corpora annotated under different frameworks show considerable variation, and a corpus can hardly be used together with another corpus for NLP tasks or for discourse analysis in linguistics. Discourse parsing is the task of uncovering the underlying structure of text organization, and deep-learning based approaches have been adopted for it in recent years. However, discourse annotation takes the whole document as the basic unit and is a laborious task. To boost the performance of neural models, we typically need a large amount of data.
Due to the above issues, the unification of discourse annotation frameworks has been a topic of discussion for a long time. Researchers have proposed varied methods to unify discourse relations and debated over whether trees are a good representation of discourse~\citep{egg-redeker-2010-complex, lee2008departures, wolf-gibson-2005-representing}. However, existing research either focuses on mapping or unifying discourse relations of different frameworks~\citep{bunt2016iso, benamara-taboada-2015-mapping, sanders2018unifying, demberg2019compatible}, or on finding a common discourse structure~\citep{yi-etal-2021-unifying}, without giving sufficient attention to the issue of relation mapping. There is still no comprehensive approach that considers unifying both discourse structure and discourse relations.
Another approach to tackling the task is to use multi-task learning so that information from a discourse corpus annotated under one framework can be used to solve a task in another framework, thus achieving synergy between different frameworks. However, existing studies adopting this method~\citep{liu2016implicit, braud-etal-2016-multi} do not show significant performance gain by incorporating a part of discourse information from a corpus annotated under a different framework. How to leverage discourse information from different frameworks remains a challenge.
Discourse information may be used in down-stream tasks.~\citet{huang-kurohashi-2021-extractive} and~\citet{xu-etal-2020-discourse} use both coreference relations and discourse relations for text summarization with graph neural networks (GNNs). The ablation study by~\citet{huang-kurohashi-2021-extractive} shows that using coreference relations only brings little performance improvement but incorporating discourse relations achieves the highest performance gain. While different kinds of discourse information can be used, how to encode different types of discourse information to improve discourse-awareness of neural models is a topic that merits further investigation.
The above challenges motivate our research on unifying different discourse annotation frameworks. We will focus on the following research questions:
\textbf{RQ1:} Which structure can be used to represent discourse in the unified framework?
\textbf{RQ2:} What properties of different frameworks should be kept and what properties should be ignored in the unification?
\textbf{RQ3:} How can entity-based models and lexical-based models be incorporated into the unified framework?
\textbf{RQ4:} How can the unified framework be evaluated?
The first three questions are closely related to each other. Automatic means will be used, although we do not preclude semi-automatic means, as exemplified by~\citet{yi-etal-2021-unifying}. We will start with the methods suggested by existing research and focus on the challenges of incorporating different kinds of discourse information in multi-task learning and graphical models.
The unified framework can be used for the following purposes:
\begin{enumerate}
\item A corpus annotated under one framework can be used jointly with another corpus annotated under a different framework to augment data, for developing discourse parsing models or for discourse analysis.
We can train a discourse parser on a corpus annotated under one framework and compare its performance with the case when it is trained on augmented data, similar to~\citet{yi-etal-2021-unifying}.
\item Each framework has its own theoretical foundation and focus. A unified framework may have the potential of combining the strengths of different frameworks. Experiments can be done with multi-task learning so that discourse parsing tasks of different frameworks can be solved jointly. We can also investigate how to enable GNNs to better capture different kinds of discourse information.
\item A unified framework may provide a common ground for exploring the relations of different frameworks and validating annotation consistency of a corpus. We can perform comparative corpus analysis and obtain new understanding of how information expressed in one framework is conveyed in another framework, thus validating corpus annotation consistency and finding some clues for solving problems in a framework with signals from another framework, similar to~\citet{polakova2017signalling} and~\citet{bourgonje-zolotarenko-2019-toward}.
\end{enumerate}
\iffalse
The study by~\citet{liu2016implicit} is one of the few efforts in this direction, but it is limited to implicit discourse relation classification using RST-DT, PDTB and raw texts annotated with discourse connectives.~\citet{braud-etal-2016-multi} propose a model to predict RST discourse trees in which the PDTB is used. Due to differences in the two frameworks, the PDTB is modified so that sentences are used as basic discourse units and intra-sentential explicit relations are ignored. Ablation studies show that modest performance is achieved using such information from the PDTB corpus. How to best leverage information from different frameworks remains a challenge. The unified framework may enable us to apply multi-task learning to discourse parsing across different frameworks. \fi
\section{Related Work}
\subsection{An Overview of Discourse Theories}
A number of discourse theories have been proposed. The theory by~\citet{grosz-sidner-1986-attention} is one of the few early theories whose linguistic claims about discourse are also computationally significant~\citep{mann1987rhetorical}. According to this theory, discourse structure is composed of three separate but interrelated components: linguistic structure, intentional structure and attentional structure. The linguistic structure focuses on cue phrases and discourse segmentation. The intentional structure mainly deals with why a discourse is performed (discourse purpose) and how a segment contributes to the overall discourse purpose (discourse segment purpose). The attentional structure is not related to the discourse participants, and it records the objects, properties and relations that are salient at each point in discourse. These three aspects capture discourse phenomena in a systematic way, and other discourse theories may be related to this theory in some way. For instance, the Centering Theory~\citep{grosz-etal-1995-centering} and the entity-grid model~\citep{barzilay-lapata-2008-modeling} focus on the attentional structure, and the Rhetorical Structure Theory (RST)~\citep{mann1988rhetorical} focuses on the intentional structure.
The theory proposed by~\citet{halliday2014cohesion} studies how various lexical means are used to achieve cohesion, these lexical means including reference, substitution, ellipsis, lexical cohesion and conjunction. Cohesion realized through the first four lexical means is in essence anaphoric dependency and conjunction is the only source of discourse relation under this theory~\citep{webber2006accounting}.
The other discourse theories can be divided into two broad types: relation-based discourse theories and entity-based discourse theories~\citep{jurafsky2018speech}. The former studies how coherence is achieved with discourse relations and the latter focuses on local coherence achieved through shift of focus, which abstracts a text into a set of entity transition sequences~\citep{barzilay-lapata-2008-modeling}.
RST is one of the most influential relation-based discourse theories. The RST Discourse Treebank (RST-DT)~\citep{carlson-etal-2001-building} is annotated based on this theory. In the RST framework, discourse can be represented by a tree structure whose leaves are Elementary Discourse Units (EDUs), typically clauses, and whose non-terminals are adjacent spans linked by discourse relations. The discourse relations can be symmetric or asymmetric, the former being characterized by equally important spans connected in parallel, and the latter typically having a nucleus and a satellite, which are assigned based on their importance in conveying the intended effects. An RST tree is built recursively by connecting the adjacent discourse units, forming a hierarchical structure covering the whole text. An example of RST discourse trees can be seen in Figure~\ref{rst-tree}.
\begin{figure*}[h!]
\vspace{-6\baselineskip}
\noindent\begin{minipage}{\linewidth}
\centering
\includegraphics[
width=0.9\textwidth,height=0.4\textheight, scale=1.2]{rst-example.pdf}
\vspace{-4\baselineskip}
\caption{An RST discourse tree, originally from~\citet{marcu-2000-rhetorical}.}
\label{rst-tree}
\end{minipage}
\end{figure*}
Another influential framework is the Penn Discourse Treebank (PDTB) framework, which is represented by the Penn Discourse Treebank~\citep{prasad-etal-2008-penn, prasad-etal-2018-discourse}. Unlike the RST framework, the PDTB framework does not aim at achieving complete annotation of the text but focuses on local discourse relations anchored by structural connectives or discourse adverbials. When there are no explicit connectives, the annotators will read adjacent sentences and decide if a connective can be inserted to express the relation. The annotation is not committed to a specific structure at the higher level. PDTB 3.0 adopts a three-layer sense hierarchy, including four general categories called classes at the highest level, the middle layer being more specific divisions, which are called types, and the lowest layer containing directionality of the arguments, called subtypes. An example of the PDTB-style annotation is shown as follows~\citep{ldcexample}:
\textit{The Soviet insisted that aircraft be brought into the talks,}(implicit=but)\{arg2-as-denier\}~\textbf{then argued for exempting some 4,000 Russian planes because they are solely defensive.}
The first argument is shown in italics and the second argument is shown in bold font for distinction. As the discourse relation is implicit, the annotator adds a connective that is considered to be suitable for the context.
The Segmented Discourse Representation Theory (SDRT)~\citep{asher2003logics} is based on the Discourse Representation Theory~\citep{Kamp1993-KAMFDT}, with discourse relations added, and discourse structure is represented with directed acyclic graphs (DAGs). Elementary discourse units may be combined recursively to form a complex discourse unit (CDU), which can be linked with another EDU or CDU~\citep{asher2017annodis}. The set of discourse relations developed in this framework overlaps partly with those in the RST framework, but some relations are motivated by pragmatic and semantic considerations. In~\citet{asher2003logics}, a precise dynamic semantic interpretation of the rhetorical relations is defined.
An example of discourse representation in the SDRT framework is shown in Figure~\ref{sdrt-graph}, which illustrates that the SDRT framework provides full annotation, similar to the RST framework, and it assumes a hierarchical structure of text organization. The vertical arrow-headed lines represent subordinate relations, and the horizontal lines represent coordinate relations. The textual units in solid-line boxes are EDUs and $\pi$\textquotesingle\ and $\pi$\textquotesingle\textquotesingle\ represent CDUs. The relations are shown in bold.
\begin{figure}
\begin{center}
\hbox{\hspace{-7.5 em}
\includegraphics[
width=0.9\textwidth,height=0.4\textheight, scale=1.2]
{sdrt-new-example}}
\vspace{-9\baselineskip}
\caption{SDRT representation of the text~\textit{a. Max had a great evening last night. b. He had a great meal. c. He ate salmon. d. He devoured lots of cheese. e. He then won a dancing competition.} The example is taken from~\citet{asher2003logics}.}
\label{sdrt-graph}
\end{center}
\end{figure}
\subsection{Research on Relations between Different Frameworks}
The correlation between different frameworks has been a topic of interest for a long time. Some studies explore how different frameworks are related, either in discourse structures or in relation sets. Some studies take a step further and try to map the relation sets of different frameworks.
\subsubsection{Comparison/unification of discourse structures of different frameworks}
~\citet{stede-etal-2016-parallel} investigate the relations between RST, SDRT and argumentation structure. For the purpose of comparing the three layers of annotation, the EDU segmentation in RST and SDRT is harmonized, and an ``argumentatively empty'' JOIN relation is introduced to address the issue that the basic unit of the argumentation structure is coarser than the other two layers. The annotations are converted to a common dependency graph format for calculating correlations. To transform RST trees to the dependency structure, the method introduced by~\citet{li-etal-2014-text} is used. The RST trees are binarized and the left-most EDU is treated as the head. In the transformation of the SDRT graphs to the dependency structure, the CDUs are simplified by a \textit{head replacement strategy}. The authors compare the dependency graphs in terms of common edges and common connected components. The relations of the argumentation structure are compared with those of RST and SDRT, respectively, through a co-occurrence matrix. Their research shows the systematic relations between the argumentation structure and the two discourse annotation frameworks. The purpose is to investigate if discourse parsing can contribute to automatic argumentation analysis. The authors exclude the PDTB framework because it does not provide full discourse annotation.
~\citet{yi-etal-2021-unifying} try to unify two Chinese discourse corpora annotated under the PDTB framework and the RST framework, respectively, with a corpus annotated under the dependency framework. They use semi-automatic means to transform the corpora to the discourse dependency structure which is presented in~\citet{li-etal-2014-text}. Their work shows that the major difficulty is the transformation from the PDTB framework to the discourse dependency structure, which requires re-segmenting texts and complementing some relations to construct complete dependency trees. They use the same method as~\citet{stede-etal-2016-parallel} to transform the RST trees to the dependency structure. Details about relation mapping across the frameworks are not given.
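To make the conversion used in the two studies above concrete, the sketch below implements one common reading of the head-based RST-to-dependency transformation of~\citet{li-etal-2014-text}. It is our illustration; corner cases such as non-binary nodes or \textsc{Same-Unit} spans are glossed over.
\begin{verbatim}
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    relation: Optional[str] = None           # relation to the parent
    is_nucleus: bool = True                  # nuclearity w.r.t. parent
    children: Optional[List["Node"]] = None  # None for an EDU leaf
    edu_id: Optional[int] = None             # set on leaves only

def head_edu(node):
    """Head of a subtree: the EDU itself for a leaf, otherwise the
    head of the (leftmost) nucleus child."""
    if node.children is None:
        return node
    nuclei = [c for c in node.children if c.is_nucleus]
    return head_edu(nuclei[0] if nuclei else node.children[0])

def rst_to_dependencies(root):
    """Collect (head EDU, dependent EDU, relation) edges from a
    binarized RST tree."""
    edges = []
    def walk(node):
        if node.children is None:
            return
        h = head_edu(node)
        for c in node.children:
            ch = head_edu(c)
            if ch is not h:
                edges.append((h.edu_id, ch.edu_id, c.relation))
            walk(c)
    walk(root)
    return edges
\end{verbatim}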
\subsubsection{Comparison/unification of discourse relations of different frameworks}
The methods of mapping discourse relations of different frameworks presented by~\citet{scheffler2016mapping}, ~\citet{demberg2019compatible} and~\citet{bourgonje-zolotarenko-2019-toward} are empirically grounded. The main approach is to make use of the same texts annotated under different frameworks.
~\citet{scheffler2016mapping} focus on mapping between explicit PDTB discourse connectives and RST rhetorical relations. The Potsdam Commentary Corpus~\citep{stede-neumann-2014-potsdam}, which contains annotations under both frameworks, is used. It is found that the majority of the PDTB connectives in the corpus match exactly one RST relation and mismatches are caused by different segment definitions and focuses, i.e., PDTB focuses on local/lexicalized relations and RST focuses on global structural relations.
As the Potsdam Commentary Corpus only contains explicit relations under the PDTB framework,~\citet{bourgonje-zolotarenko-2019-toward} try to induce implicit relations from the corresponding RST annotation. Since RST trees are hierarchical and the PDTB annotation is shallow, RST relations that connect complex spans are discarded. Moreover, because the arguments of explicit and implicit relations under the PDTB framework are determined based on different criteria, only RST relations that are signalled explicitly are considered in the experiment. It is shown that differences in segmentation and partially overlapping relations pose challenges for the task.
\citet{demberg2019compatible} propose a method for mapping RST and PDTB relations. Since the number of PDTB relations is much smaller than that of RST relations for the same text, the PDTB relations are used as the starting point for the mapping. They aim to map as many relations as possible while making sure that the relations connect the same segments. Six cases are identified:
\begin{enumerate}
\item direct mapping, which is the easiest case;
\item when PDTB arguments are non-adjacent, the Strong Compositionality hypothesis~\citep{marcu2000theory} (i.e., if a relation holds between two textual spans, that relation also holds between the most important units of the constituent spans) is used to check whether there is a match when the complex span of an RST relation is traced along the nucleus path to its nucleus EDU;
\item in the case of multi-nuclear relations, it is checked whether a PDTB argument can be traced to the nucleus of the RST relation along the nucleus path;
\item the mismatch caused by different segmentation granularity is considered innately unalignable and is discarded;
\item centrally embedded EDUs in RST-DT are treated as a whole and compared with an argument of the PDTB relation;
\item the PDTB E\textsc{nt}R\textsc{el} relation is included to test its correlation with some RST relations that tend to be associated with cohesion.
\end{enumerate}
Other studies are more theoretical. \citet{hovy-1990-parsimonious} is the first to attempt to unify discourse relations proposed by researchers from different areas and suggests adopting a hierarchy of relations, with the top level being more general (from the functional perspective: ideational, interpersonal and textual) and with no restrictions on adding fine-grained relations, as long as they can be subsumed under the existing taxonomy. The number of researchers who propose a specific relation is taken as a vote of confidence in that relation in the taxonomy. The study serves as a starting point for research in this direction. There are a few other proposals for unifying discourse relations of different frameworks to facilitate cross-framework discourse analysis, including:
\begin{enumerate}
\item introducing a hierarchy of discourse relations, similar to~\citet{hovy-1990-parsimonious}, where the top level is general and fixed, and the lowest level is more specific and allows variations based on genre and language~\citep{benamara-taboada-2015-mapping};
\item finding some dimensions based on cognitive evidence along which the relations can be compared with each other and re-grouped~\citep{sanders2018unifying};
\item formulating a set of core relations that are shared by existing frameworks but are open and extensible in use, with the outcome being ISO-DR-Core~\citep{bunt2016iso}.
\end{enumerate}
When the PDTB sense hierarchy is mapped to the ISO-DR-Core, it is found that the directionality of relations cannot be captured by the existing ISO-DR-Core relations, and it remains a question whether to extend the ISO-DR-Core relations or to redefine the PDTB relations so that the directionality of arguments can be captured~\citep{prasad-etal-2018-discourse}.
\section{Research Plan}
RST-DT is annotated on texts from the Penn Treebank~\citep{marcus-etal-1993-building} that have also been annotated in PDTB. The texts are formally written Wall Street Journal articles. The English corpora annotated under the SDRT framework, i.e., the STAC corpus~\citep{asher-etal-2016-discourse} and the Molweni corpus~\citep{li-etal-2020-molweni}, were created for analyzing multi-party dialogues, making them difficult to use together with the other two corpora. Therefore, in addition to RST-DT and PDTB 3.0, we will use the ANNODIS corpus~\citep{pery-woodley-etal-2009-annodis}, which consists of formally written French texts. We will first translate the texts into English with an MT system and then manually check the translated texts to reduce errors.
In the following, we discuss the research questions and the approach in our plan. These questions are closely related to each other, and research on one question is likely to influence how the other questions should be addressed; they are presented separately only for ease of exposition.
\textbf{RQ1:} Which structure can be used to represent discourse in the unified framework?
Although there is a lack of consensus on how to represent discourse structure, in a number of studies, the dependency structure is taken as a common structure that the other structures can be converted to~\citep{muller-etal-2012-constrained, hirao-etal-2013-single, venant-etal-2013-expressivity, li-etal-2014-text, yoshida-etal-2014-dependency, stede-etal-2016-parallel, morey-etal-2018-dependency, yi-etal-2021-unifying}.
This choice is mainly inspired by research in the field of syntax, where dependency grammar is well studied and its computational and representational properties are well understood\footnote{In communication with Bonnie Webber, January 2022.}.
The research by~\citet{venant-etal-2013-expressivity} provides a common language for comparing discourse structures of different formalisms, which is used in the transformation procedure presented by~\citet{stede-etal-2016-parallel}. Another possibility is the constrained directed acyclic graph introduced by~\citet{danlos-2004-discourse}. While~\citet{venant-etal-2013-expressivity} focus on the expressivity of different structures, the constrained DAG is motivated from the perspective of strong generative capacity~\citep{danlos2008strong}. Although neither of the studies deals with the PDTB framework, since they are both semantically driven, we believe it is possible to deal with the PDTB framework using either of the two structures. We will start with the investigation of the two structures.
Another issue is how to maintain a one-to-one correspondence when transforming back and forth between the original structure and the unified structure. As indicated by~\citet{stede-etal-2016-parallel}, the transformation from the RST or SDRT structures into dependency structures always produces the same structure, but going back to the initial RST or SDRT structures is ambiguous. \citet{morey-etal-2018-dependency} adapt head-ordered dependency trees from syntactic parsing~\citep{fernandez-gonzalez-martins-2015-parsing} to reduce the ambiguity. We may start with a similar method.
As is clear from Section 2, using the dependency structure as a common ground for studying the relations between different frameworks is not new in existing literature, but comparing the RST, PDTB and SDRT frameworks with this method has not yet been done. This approach will be our starting point, and the suitability of the dependency structure in representing discourse will be investigated empirically. The SciDTB corpus~\citep{yang-li-2018-scidtb}, which is annotated under the dependency framework, will be used for this purpose.
\textbf{RQ2:\footnote{In communication with Bonnie Webber, January, 2022. We thank her for pointing out this aspect.}} What properties of different frameworks should be kept and what properties should be ignored in the unification?
We present a non-exhaustive list of properties, which we consider to have considerable influence on the unified discourse structure.
\begin{enumerate}
\item Nuclearity:~\citet{marcu1996building} uses the nuclearity principle as the foundation for a formal treatment of compositionality in RST, which means that two adjacent spans can be joined into a larger span by a rhetorical relation if and only if the relation holds between the most salient units of those spans. This assumption is criticized by~\citet{stede2008disentangling}, whose remedy is to separate different levels of discourse information, in line with the suggestions in~\citet{Knott00beyondelaboration:} and~\citet{moore-pollack-1992-problem}. Our strategy is to keep this property in the initial stage of experimentation. The existing methods for transforming RST trees to the dependency structure~\citep{hirao-etal-2013-single, li-etal-2014-text} rely heavily on the nuclearity principle; we will use these methods in the transformation and examine what kinds of problems this procedure causes, particularly with respect to the PDTB framework, which does not enforce a hierarchical structure for complete coverage of the text.
\item Sentence-boundedness:
The RST framework does not enforce well-formed discourse sub-trees for each sentence. However, it is found that 95\% of the discourse parse trees in RST-DT have well-formed sub-trees at the sentence level~\citep{soricut-marcu-2003-sentence}. For the PDTB framework, there is no restriction on how far an argument can be from its corresponding connective: it can be in the same sentence as the connective, in the sentence immediately preceding that of the connective, or in some non-adjacent sentence~\citep{Prasad2006ThePD}. Moreover, the arguments are determined based on the \textit{Minimality Principle}, which means that only the clauses and/or sentences minimally required for the interpretation of the relation should be included in the argument; other spans that are relevant but not necessary can be annotated as supplementary information, which is labeled depending on which argument it is supplementary to~\citep{prasad-etal-2008-penn}. The SDRT framework developed in~\citet{asher2003logics} does not specify the basic discourse unit, but in the annotation of the ANNODIS corpus, EDU segmentation follows principles similar to those of RST-DT. The formation of CDUs and the attachment of relations are where SDRT differs significantly from RST. A segment can be attached to another segment from the same sentence, the same paragraph or a larger context, and by one or possibly more relations. A CDU can be of any size and can have segments that are far apart in the text, and relations may be annotated within the CDU\footnote{See section 3 of the ANNODIS annotation manual, available through \url{http://w3.erss.univ-tlse2.fr/textes/publications/CarnetsGrammaire/carnGram21.pdf}}.
The differences between the RST and PDTB frameworks in the criteria on the location and extent of basic discourse units and in relation labeling may be partly attributed to different annotation procedures. In RST, EDU segmentation is performed first, and EDU linking and relation labeling are performed later. The balance between consistency and granularity is the major concern behind the strategy for EDU segmentation~\citep{carlson-etal-2001-building}. In contrast, in PDTB, the connectives are identified first, and their arguments are determined afterwards. Semantic relatedness is given greater weight, and the location and extent of the arguments can be determined more flexibly. On the whole, neither SDRT nor PDTB shows any tendency toward sentence-boundedness. We will investigate to what extent the tendency toward sentence-boundedness complicates the unification and what the consequences are if entity-based models and lexical-based models are incorporated.
\item Multi-sense annotation: As shown above, SDRT and PDTB allow multi-sense annotation while RST only allows one relation to be labeled. The single-sense constraint actually gives rise to ambiguity because of the multi-faceted nature of local coherence~\citep{stede2008disentangling}. For the unification task, we assume that multi-sense annotation is useful. However, we agree with the view in~\citet{stede2008disentangling} that incrementally adding more relations as new phenomena are recognized is not a promising direction. There are two possible approaches: one is to separate different dimensions of discourse information~\citep{stede2008disentangling}, and the other is to represent different kinds of discourse information simultaneously, similar to the approach adopted in~\citet{Knott00beyondelaboration:}. While multi-level annotation may reveal the interaction between discourse and other linguistic phenomena, it is less helpful for developing a discourse parser and requires more annotation effort. The second approach may be conducive to computationally cheaper discourse processing when proper constraints are introduced.
\end{enumerate}
\textbf{RQ3:} How can entity-based models and lexical-based models be incorporated into the unified framework?
In the PDTB framework, lexical-based discourse relations are taken to be associated with anaphoric dependencies, which are anchored by discourse adverbials~\citep{anaphora-discourse} and annotated as a type of explicit relation. As for entity-based relations, PDTB uses the E\textsc{nt}R\textsc{el} label to annotate them when neither explicit nor implicit relations can be identified and only entity-based coherence is present. In the RST framework, the ELABORATION relation is actually a relation between entities. However, it is encoded in the same way as the other relations between propositions, which bedevils the framework~\citep{Knott00beyondelaboration:}. Further empirical studies may be needed to identify how different frameworks represent these different kinds of discourse information. The main challenge is to represent the different types of discourse information in a single structure while keeping its complexity relatively low.
\textbf{RQ4:} How can the unified framework be evaluated?
We will use intrinsic evaluation to assess the complexity of the discourse structure.
Extrinsic evaluation will be used to assess the effectiveness of the unified framework. The downstream tasks in the extrinsic evaluation include text summarization and document discrimination, which are two typical tasks for evaluating discourse models. The document discrimination task asks for a coherence score to be assigned to a document. The originally written document is considered the most coherent, and the more its sentences are permuted, the less coherent it becomes. For comparison with previous studies, we will use the CNN/DailyMail dataset~\citep{cnndailymaildataset15} for the text summarization task, and use the method and dataset\footnote{\url{https://github.com/AiliAili/Coherence_Modelling}} in~\citet{shen-etal-2021-evaluating} to control the degree of coherence for the document discrimination task.
Previous studies that use multi-task learning and GNNs to encode different types of discourse information will be re-investigated to test the effectiveness of the unified framework.
As we may have to ignore some properties, we will examine what might be lost with the unified framework.
\section{Conclusion}
We propose to unify the RST, PDTB and SDRT frameworks, which may enable discourse corpora annotated under different frameworks to be used jointly and realize the potential synergy of different frameworks. The major challenges include determining which structure to use in the unified framework, choosing what properties to keep and what to ignore, and incorporating entity-based models and lexical-based models into the unified framework. We will start from existing research and try to find a computationally less expensive approach to the task. Extensive experiments will be conducted to investigate how effective the unified framework is and how it can be used. An empirical evaluation of what might be lost through the unification will be performed.
\section{Acknowledgements}
We thank Bonnie Webber for valuable feedback that greatly shaped the work. We are grateful to the anonymous reviewers for detailed and insightful comments that improved the work considerably, and to Mark-Jan Nederhof for proof-reading the manuscript. The author is funded by a University of St Andrews--China Scholarship Council joint scholarship (No.\ 202008300012).
\section{Ethical Considerations and Limitations}
The corpora are used in compliance with their licence requirements:
\begin{itemize}
\item The ANNODIS corpus is available under Creative Commons BY-NC-SA 3.0.
\item RST-DT is distributed by the Linguistic Data Consortium: Carlson, Lynn, Daniel Marcu, and Mary Ellen Okurowski. RST Discourse Treebank LDC2002T07. Web Download. Philadelphia: Linguistic Data Consortium, 2002.
\item PDTB 3.0 is also distributed by the Linguistic Data Consortium: Prasad, Rashmi, et al. Penn Discourse Treebank Version 3.0 LDC2019T05. Web Download. Philadelphia: Linguistic Data Consortium, 2019.
\end{itemize}
\textbf{Bender Rule:} English is the language studied in this work.
\section{Introduction}
The Shuffle Conjecture~\cite{HHLRU.2005}, now a theorem due to Carlsson and Mellit~\cite{CM.2015},
provides an explicit combinatorial description of the bigraded Frobenius characteristic of the $S_n$-module of
diagonal harmonic polynomials. It is stated in terms of parking functions and involves two statistics, $\mathsf{area}$
and $\mathsf{dinv}$.
Recently, Haglund, Remmel and Wilson~\cite{HRW.2015} introduced a generalization of the Shuffle Theorem,
coined the Delta Conjecture. The Delta Conjecture involves two quasisymmetric functions
$\mathsf{Rise}_{n,k}(\mathbf{x};q,t)$ and $\mathsf{Val}_{n,k}(\mathbf{x};q,t)$, which have
combinatorial expressions in terms of labelled Dyck paths. In this paper, we are only concerned with the specializations
$q=0$ or $t=0$, in which case~\cite[Theorem 4.1]{HRW.2015} and~\cite[Theorem 1.3]{Rhoades.2016} show
\[
\mathsf{Rise}_{n,k}(\mathbf{x};0,t) = \mathsf{Rise}_{n,k}(\mathbf{x};t,0) =
\mathsf{Val}_{n,k}(\mathbf{x};0,t) = \mathsf{Val}_{n,k}(\mathbf{x};t,0).
\]
It was proven in~\cite[Proposition 4.1]{HRW.2015} that
\begin{equation}
\label{equation.val}
\mathsf{Val}_{n,k}(\mathbf{x};0,t) = \sum_{\pi\in \mathcal{OP}_{n,k+1}} t^{\mm(\pi)} \mathbf{x}^{\mathsf{wt}(\pi)},
\end{equation}
where $\mathcal{OP}_{n,k+1}$ is the set of ordered multiset partitions of the multiset $\{1^{\nu_1},2^{\nu_2},\ldots\}$
into $k+1$ nonempty blocks and $\nu=(\nu_1,\nu_2,\ldots)$ ranges over all weak compositions of $n$.
The weak composition $\nu$ is also called the weight of $\pi$, denoted $\mathsf{wt}(\pi)=\nu$.
In addition, $\mm(\pi)$ is the minimum value of the major index of the set partition $\pi$ over all possible ways to
order the elements in each block of $\pi$. The symmetric function $\mathsf{Val}_{n,k}(\mathbf{x};0,t)$ is
known~\cite{Wilson.2016,Rhoades.2016} to be Schur positive, meaning that the coefficients in its Schur
expansion are polynomials in $t$ with nonnegative coefficients.
In this paper, we provide a crystal structure on the set of ordered multiset partitions $\mathcal{OP}_{n,k}$.
Crystal bases are $q\to 0$ shadows of representations for quantum groups $U_q(\mathfrak g)$~\cite{Kashiwara.1990,
Kashiwara.1991}, though they can also be understood from a purely combinatorial
perspective~\cite{Stembridge.2003,Bump.Schilling.2017}. In type $A$, the character of a connected crystal
component with highest weight element of highest weight $\lambda$ is the Schur function $\mathsf{s}_\lambda$.
Hence, having a type $A$ crystal structure on a combinatorial set (in our case on $\mathcal{OP}_{n,k}$)
naturally yields the Schur expansion of the associated symmetric function. Furthermore, if the statistic
(in our case $\mm$) is constant on connected components, then the graded character can also be naturally
computed using the crystal.
Haglund, Rhoades and Shimozono~\cite{HRS.2016} introduced a generalization $R_{n,k}$ for $k\leqslant n$
of the coinvariant algebra $R_n$, with $R_{n,n}=R_n$. Just as the combinatorics of $R_n$ is governed by permutations
in $S_n$, the combinatorics of $R_{n,k}$ is controlled by ordered set partitions of $\{1,2,\ldots,n\}$ with $k$ blocks.
The graded Frobenius series of $R_{n,k}$ is (up to a minor twist) equal to $\mathsf{Val}_{n,k}(\mathbf{x};0,t)$.
It is still an open problem to find a bigraded $S_n$-module whose Frobenius image is
$\mathsf{Val}_{n,k}(\mathbf{x};q,t)$. Our crystal provides another representation-theoretic interpretation of
$\mathsf{Val}_{n,k}(\mathbf{x};0,t)$ as a crystal character.
Wilson~\cite{Wilson.2016} analyzed various statistics on ordered multiset partitions, including $\mathsf{inv}$,
$\mathsf{dinv}$, $\mathsf{maj}$, and $\mm$. In particular, he gave a Carlitz-type bijection, which
proves the equidistributivity of $\mathsf{inv}$, $\mathsf{dinv}$, and $\mathsf{maj}$ on $\mathcal{OP}_{n,k}$.
Rhoades~\cite{Rhoades.2016} provided a non-bijective proof that these statistics are also equidistributed with
$\mm$. Using our new crystal, we can give a bijective proof of the equidistributivity of the
$\mm$ statistic and the $\mathsf{maj}$ statistic on ordered multiset partitions.
The paper is organized as follows. In Section~\ref{section.minimaj} we define ordered multiset partitions and
the $\mm$ and $\mathsf{maj}$ statistics on them. In Section~\ref{section.bijection} we provide a bijection
$\varphi$ from ordered multiset partitions to tuples of semistandard Young tableaux that will be used in
Section~\ref{section.crystal} to define a crystal structure, which preserves $\mm$. We conclude in
Section~\ref{section.equi} with a proof that the $\mm$ and $\mathsf{maj}$ statistics are equidistributed using
the same bijection $\varphi$.
\subsection*{Acknowledgments}
Our work on this group project began at the workshop \emph{Algebraic Combinatorixx 2} at the Banff International
Research Station (BIRS) in May 2017. ``Team Schilling,'' as our group of authors is known, would like to extend thanks
to the organizers of ACxx2, to BIRS for hosting this workshop, and to the Mathematical Sciences Research Institute (MSRI)
for sponsoring a follow-up meeting of some of the group members at MSRI in July 2017 supported by the National Science
Foundation under Grant No. DMS-1440140.
We would like to thank Meesue Yoo for early collaboration and Jim Haglund, Brendon Rhoades and Andrew Wilson for
fruitful discussions. This work benefited from computations and experimentations in {\sc Sage}~\cite{combinat,sage}.
P. E. Harris was partially supported by NSF grant DMS--1620202.
R. Orellana was partially supported by NSF grant DMS--1700058.
G. Panova was partially supported by NSF grant DMS--1500834.
A. Schilling was partially supported by NSF grant DMS--1500050.
M. Yip was partially supported by Simons Collaboration grant 429920.
\section{Ordered multiset partitions and the minimaj and maj statistics}
\label{section.minimaj}
We consider \defn{ordered multiset partitions} of order $n$ with $k$ blocks.
Given a weak composition $\nu = (\nu_1, \nu_2, \ldots)$ of $n$ into nonnegative integer parts,
which we denote $\nu \models n$, let $\mathcal{OP}_{\nu,k}$ be the set of partitions of the multiset
$\{i^{\nu_i} \mid i \geqslant 1\}$ into $k$ nonempty ordered blocks, such that the elements within each block are distinct.
For each $i\geqslant 1$, the notation $i^{\nu_i}$ should be interpreted as saying that the integer $i$ occurs $\nu_i$ times
in such a partition. The weak composition $\nu$ is also called the \defn{weight} $\mathsf{wt}(\pi)$ of
$\pi \in \mathcal{OP}_{\nu,k}$. Let
\[
\mathcal{OP}_{n,k} = \bigcup_{\nu \models n} \mathcal{OP}_{\nu,k}.
\]
It should be noted that in the literature $\mathcal{OP}_{n,k}$ is sometimes used for ordered set partitions rather
than ordered multiset partitions (that is, without letter multiplicities).
We now specify a particular reading order for an ordered multiset partition $\pi = (\pi_1\mid \pi_2 \mid \ldots \mid \pi_k)
\in \mathcal{OP}_{n,k}$ with blocks $\pi_i$. Start by writing $\pi_k$ in increasing order. Assume $\pi_{i+1}$ has been ordered,
and let $r_i$ be the largest integer in $\pi_i$ that is less than or equal to the leftmost element of $\pi_{i+1}$. If no such $r_i$
exists, arrange $\pi_i$ in increasing order. When such an $r_i$ exists, arrange the elements of $\pi_i$ in increasing order,
and then cycle them so that $r_i$ is the rightmost number. Continue with $\pi_{i-1}, \dots, \pi_2, \pi_1$ until all blocks have
been ordered. This ordering of the numbers in $\pi$ is defined in \cite{HRW.2015} and is called the \defn{minimaj order}.
\begin{example}
\label{example.pi}
If $\pi = (157 \mid 24 \mid 56 \mid 468 \mid 13 \mid 123) \in \mathcal{OP}_{15,6}$, then the minimaj order of $\pi$ is
$\pi = (571 \mid 24 \mid 56 \mid 468 \mid 31 \mid 123)$.
\end{example}
For two sequences $\alpha,\beta$ of integers, we write $\alpha < \beta$ to mean that each element of $\alpha$ is less
than every element of $\beta$. Suppose $\pi \in \mathcal{OP}_{n,k}$ is in minimaj order. Then each block $\pi_i$ of $\pi$
is nonempty and
can be written in the form $\pi_i = b_i \alpha_i \beta_i$, where $b_i \in \ZZ_{>0}$, and $\alpha_i,\beta_i$ are sequences
(possibly empty) of distinct increasing integers such that either $\beta_i < b_i < \alpha_i$ or $\alpha_i=\emptyset$.
Inequalities with empty sets should be ignored.
\begin{lemma}
\label{lemma.minimaj order}
With the above notation, $\pi \in \mathcal{OP}_{n,k}$ is in minimaj order if the following hold:
\begin{itemize}
\item [{\rm (1)}] \ $\pi_k = b_k\alpha_k$ with $b_k<\alpha_k$ and $\beta_k = \emptyset$;
\item [{\rm (2)}] \ for $1 \leqslant i < k$, either
\begin{itemize} \item[{(a)}] \ $\alpha_i = \emptyset$, $\pi_i = b_i \beta_i$, and
$b_i < \beta_i \leqslant b_{i+1}$, or
\item[{(b)}] \ $\beta_i \leqslant b_{i+1} < b_i < \alpha_i$.
\end{itemize}
\end{itemize}
\end{lemma}
A sequence or word $w_1 w_2 \cdots w_n$ has a \defn{descent} in position $1\leqslant i<n$ if $w_i>w_{i+1}$.
Let $\pi \in \mathcal{OP}_{n,k}$ be in minimaj order. Observe that a descent occurs in $\pi_i$ only in Case~2\,(b)
of Lemma~\ref{lemma.minimaj order}, and such a descent is either between the largest and smallest elements of $\pi_i$
or between the last element of $\pi_i$ and the first element of $\pi_{i+1}$.
\begin{example}
Continuing Example~\ref{example.pi} with $\pi = (571 \mid 24 \mid 56 \mid 468 \mid 31 \mid 123)$, we have
\[
\begin{array}{lll}
b_1 = 5, \alpha_1 =7, \beta_1 = 1 &\qquad b_2 = 2, \alpha_2 = \emptyset, \beta_2 =4 &\qquad
b_3 = 5, \alpha_3 = 6, \beta_3 =\emptyset \\
b_4 = 4, \alpha_4 = 68, \beta_4 = \emptyset &\qquad
b_5 = 3, \alpha_5 = \emptyset, \beta_5 = 1 &\qquad b_6 = 1, \alpha_6 = 23, \beta_6 = \emptyset.
\end{array}
\]
\end{example}
Suppose that $\pi$ in minimaj order has descents in positions
\[
\mathsf{D}(\pi) = \{d_1, d_1+d_2, \ldots, d_1+d_2 + \cdots + d_\ell\}
\]
for some $\ell \in [0,k-1]$ ($\ell= 0$ indicates no descents). Furthermore, assume that these descents occur in the blocks
$\pi_{i_1}, \pi_{i_1+i_2}, \ldots,\pi_{i_1+i_2+\cdots + i_\ell}$, where $i_j >0$ for $1\leqslant j \leqslant \ell$ and
$i_1+i_2+\cdots+i_\ell <k$. Assume $d_{\ell+1}$ and $i_{\ell+1}$ are the distances to the end, that is,
$d_1+d_2 + \cdots+d_\ell + d_{\ell+1} = n$ and $i_1+i_2+\cdots+ i_\ell + i_{\ell+1} = k$.
The \defn{minimaj statistic} $\mm(\pi)$ of $\pi \in \mathcal{OP}_{n,k}$ as given by~\cite{HRW.2015} is
\begin{equation}
\label{equation.minimaj}
\mm(\pi) = \sum_{d \in \mathsf{D}(\pi)} d = \sum_{j=1}^\ell (\ell+1-j) d_j.
\end{equation}
\begin{example}
The descents for the multiset partition $\pi = (57.1 \mid 24 \mid 56. \mid 468. \mid 3.1 \mid 123)$ occur at positions
$\mathsf{D}(\pi)=\{2,7,10,11\}$ and are designated with periods. Hence $\ell=4$, $d_1 = 2$, $d_2 = 5$, $d_3 = 3$, $d_4 = 1$
and $d_5 = 4$, and $\mm(\pi) = 2 + 7 + 10 + 11 = 30$. The descents occur in blocks \ $\pi_1$, $\pi_3$, $\pi_4$, and
$\pi_5$, so that $i_1 = 1$, $i_2 = 2$, $i_3 = 1$, $i_4 = 1$, and $i_5 = 1$.
\end{example}
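Both the minimaj order and the statistic~\eqref{equation.minimaj} are mechanical to compute from the definitions. The following Python sketch is our own illustrative code (the function names are ours, and blocks are passed as lists of distinct positive integers), not part of any cited implementation.
\begin{verbatim}
def minimaj_order(blocks):
    # Arrange the blocks of an ordered multiset partition in
    # minimaj order (each block: distinct positive integers).
    pi = [sorted(b) for b in blocks]
    ordered = [pi[-1]]              # last block: increasing order
    for block in reversed(pi[:-1]):
        first_next = ordered[0][0] # leftmost entry of the block to the right
        smaller = [x for x in block if x <= first_next]
        if smaller:                 # cycle so that r = max(smaller) is last
            j = block.index(max(smaller))
            block = block[j+1:] + block[:j+1]
        ordered.insert(0, block)
    return ordered

def minimaj(blocks):
    # Sum of the descent positions of the word obtained by reading
    # the minimaj-ordered blocks from left to right.
    w = [x for block in minimaj_order(blocks) for x in block]
    return sum(p for p in range(1, len(w)) if w[p-1] > w[p])
\end{verbatim}
On the running example, \verb|minimaj([[1,5,7],[2,4],[5,6],[4,6,8],[1,3],[1,2,3]])| returns $30$, in agreement with the computation above.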
To define the \defn{major index} of $\pi \in \mathcal{OP}_{n,k}$, we consider the word $w$ obtained by ordering each
block $\pi_i$ in decreasing order, called the \defn{major index order}~\cite{Wilson.2016}.
Recursively construct a word $v$ by setting $v_0=0$ and $v_j = v_{j-1}+ \chi(j \text{ is the last position in its block})$
for each $1\leqslant j \leqslant n$. Here $\chi(\text{True})=1$ and $\chi(\text{False})=0$. Then
\begin{equation}
\label{equation.maj}
\mathsf{maj}(\pi) = \sum_{j \colon w_j>w_{j+1}} v_j.
\end{equation}
\begin{example}
Continuing Example~\ref{example.pi}, note that the major index order of $\pi = (157 \mid 24 \mid 56 \mid 468 \mid 13
\mid 123) \in \mathcal{OP}_{15,6}$ is $\pi = (751 \mid 42 \mid 65 \mid 864 \mid 31 \mid 321)$. Writing the word $v$
underneath $w$ (omitting $v_0=0$), we obtain
\begin{equation*}
\begin{split}
w &= 751 \mid 42 \mid 65 \mid 864\mid 31 \mid 321\\
v &= 001 \mid 12 \mid 23 \mid 334 \mid 45 \mid 556,
\end{split}
\end{equation*}
so that $\mathsf{maj}(\pi) = 0+0+1+2+3+3+4+4+5+5=27$.
\end{example}
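The major index~\eqref{equation.maj} admits an equally short sketch in the same style (again our own illustrative code); on the example above, it returns $27$.
\begin{verbatim}
def maj(blocks):
    # Major index of an ordered multiset partition: order each block
    # decreasingly; v[j] counts the block ends weakly left of position j.
    w, v, ends = [], [], 0
    for block in blocks:
        dec = sorted(block, reverse=True)
        for pos, x in enumerate(dec):
            w.append(x)
            if pos == len(dec) - 1:   # last position in its block
                ends += 1
            v.append(ends)
    return sum(v[j] for j in range(len(w) - 1) if w[j] > w[j+1])
\end{verbatim}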
Note that throughout this section, we could have also restricted ourselves to ordered multiset partitions with
letters in $\{1,2,\ldots, r\}$ instead of $\ZZ_{>0}$. That is, let $\nu=(\nu_1,\ldots,\nu_r)$ be a weak composition of $n$ and let
$\mathcal{OP}^{(r)}_{\nu,k}$ be the set of partitions of the multiset $\{i^{\nu_i} \mid 1\leqslant i \leqslant r\}$ into
$k$ nonempty ordered blocks, such that the elements within each block are distinct. Let
\[
\mathcal{OP}^{(r)}_{n,k} = \bigcup_{\nu \models n} \mathcal{OP}^{(r)}_{\nu,k}.
\]
This restriction will be important when we discuss the crystal structure on ordered multiset partitions.
\section{Bijection with tuples of semistandard Young tableaux}
\label{section.bijection}
In this section, we describe a bijection from ordered multiset partitions to tuples of semistandard Young tableaux
that allows us to impose a crystal structure on the set of ordered multiset partitions in Section~\ref{section.crystal}.
Recall that a \defn{semistandard Young tableau} $T$ is a filling of a (skew) Young diagram (also called the \defn{shape} of $T$)
with positive integers that weakly increase across rows and strictly increase down columns. The \defn{weight} of $T$ is
the tuple $\mathsf{wt}(T)=(a_1,a_2,\ldots)$, where $a_i$ records the number of letters $i$ in $T$. The set of semistandard Young
tableaux of shape $\lambda$, where $\lambda$ is a (skew) partition, is denoted by $\mathsf{SSYT}(\lambda)$. If we want to
restrict the entries in the semistandard Young tableau from $\ZZ_{>0}$ to a finite alphabet $\{1,2,\ldots,r\}$, we denote the set by $\mathsf{SSYT}^{(r)}(\lambda)$.
The tableaux relevant for us here are of two types: a single column of boxes with entries that increase from top to bottom,
or a skew ribbon tableau. If $\gamma =(\gamma_1,\gamma_2, \dots, \gamma_m)$ is a skew ribbon shape with
$\gamma_j$ boxes in the $j$-th row starting from the bottom, the ribbon condition requires that row $j+1$ starts in the
last column of row $j$. This condition is equivalent to saying that $\gamma$ is connected and contains no $2 \times 2$
block of squares. For example
\ytableausetup{boxsize=1.1em}
\[
\ytableaushort{\none\none {\mbox{}}{\mbox{}}{\mbox{}},\none\none {\mbox{}},\none
{\mbox{}}{\mbox{}}}
\]
corresponds to $\gamma = (2,1,3)$.
Let $\mathsf{SSYT}(1^c)$ be the set of semistandard Young tableaux obtained by filling a column of
length $c$ and $\mathsf{SSYT}(\gamma)$ be the set of semistandard Young tableaux obtained by filling the skew ribbon
shape $\gamma$.
To state our bijection, we need the following notation. For fixed positive integers $n$ and $k$, assume
$\mathrm{D}= \{d_1,d_1+d_2,\ldots,d_1+d_2+\cdots+d_\ell\} \subseteq \{1,2,\dots,n-1\}$ and
$\mathrm{I} = \{i_1,i_1+i_2,\ldots,i_1+i_2+\cdots+i_\ell\} \subseteq\{1,2,\dots, k-1\}$ are sets of $\ell$ distinct elements
each. Define $d_{\ell+1} := n-(d_1+\cdots +d_\ell)$ and $i_{\ell+1} := k - (i_1+\cdots +i_\ell)$.
\begin{proposition}
\label{P:biject}
For fixed positive integers $n$ and $k$ and sets $\mathrm{D}$ and $\mathrm{I}$ as above, let
\[
\mathrm{M(D,I)} = \{\pi \in \mathcal{OP}_{n,k} \mid \mathsf{D}(\pi) = \mathrm{D}, \
\text{and the descents occur in $\pi_i$ for $i \in \mathrm{I}$}\}.
\]
Then the following map is a weight-preserving bijection:
\begin{equation}
\begin{split}
\varphi \colon \mathrm{M(D,I)} &\rightarrow \mathsf{SSYT}(1^{c_1}) \times \cdots
\times \mathsf{SSYT}(1^{c_\ell}) \times \mathsf{SSYT}(\gamma)\\
\pi &\mapsto T_1 \times \cdots \times T_\ell \times T_{\ell+1}
\end{split}
\end{equation}
where
\begin{itemize}
\item [{\rm (i)}] \ $\gamma = (1^{d_1-i_1}, i_1, i_2, \dots, i_{\ell+1})$ and $c_j = d_{\ell+2-j} - i_{\ell+2-j}$ for
$1\leqslant j \leqslant \ell$.
\item [{\rm(ii)}] \ The skew ribbon tableau $T_{\ell+1}$ of shape $\gamma$ is constructed as follows:
\begin{itemize}
\item [$\bullet$] The entries in the first column of the skew ribbon tableau $T_{\ell+1}$ beneath the first box are the first
$d_1-i_1$ elements of $\pi$ in increasing order from top to bottom, excluding any $b_j$ in that range.
\item[$\bullet$] The remaining rows $d_1-i_1+j$ of $T_{\ell+1}$ for $1\leqslant j \leqslant \ell+1$ are filled with\newline
$b_{i_1+\dots + i_{j-1} + 1}, b_{i_1+\dots + i_{j-1} + 2}, \dots, b_{i_1+\dots + i_j}$.
\end{itemize}
\item [{\rm(iii)}] The tableau $T_j$ for $1\leqslant j \leqslant \ell$ is the column filled with the elements of $\pi$ from the positions
$d_1+d_2+\cdots + d_{\ell-j+1}+1$ through and including position $d_1+d_2+\cdots + d_{\ell-j+2}$, but excluding any
$b_i$ in that range.
\end{itemize}
\end{proposition}
Note that in item (ii), the rows of $\gamma$ are assumed to be numbered from bottom
to top and are filled starting with row $d_1-i_1+1$ and ending with row $d_1-i_1+\ell+1$
at the top.
Also observe that since the bijection stated in Proposition~\ref{P:biject} preserves the weight, it can be restricted to a
bijection
\[
\varphi \colon \mathrm{M(D,I)}^{(r)} \rightarrow \mathsf{SSYT}^{(r)}(1^{c_1}) \times \cdots
\times \mathsf{SSYT}^{(r)}(1^{c_\ell}) \times \mathsf{SSYT}^{(r)}(\gamma),
\]
where $\mathrm{M(D,I)}^{(r)} = \mathrm{M(D,I)} \cap \mathcal{OP}^{(r)}_{n,k}$.
Before giving the proof, it is helpful to consider two examples to illustrate the map $\varphi$.
\begin{example}\label{ex:ell=0}
When the entries of $\pi \in \mathcal{OP}_{n,k}$ in minimaj order are increasing, then $\ell = 0$.
In this case, $d_1 = n$ and $i_1 = k$. The mapping $\varphi$ takes $\pi$ to the semistandard tableau $T =T_1$
that is of ribbon-shape $\gamma = (1^{n-k},k)$. The entries of the boxes in the first column of the tableau $T$ are
$b_1$, followed by the $n-k$ numbers in the sequences $\beta_1,\beta_2,\dots, \beta_{k-1},\alpha_k$
from top to bottom. (The fact that $\pi$ has no descents means that all the $\alpha_i = \emptyset$ for $1\leqslant i <k$
and we are in Case 2\,(a) of Lemma~\ref{lemma.minimaj order} for $1\leqslant i<k$ and Case 1 for $i=k$.) Columns $2$
through $k$ of $T_1$ are filled with the numbers $b_2,\dots,b_k$ respectively, and $b_2 \leqslant b_3 \leqslant \cdots
\leqslant b_k$. The result is a semistandard tableau $T_1$ of hook shape.
For example, consider $\pi = (12 \mid 2 \mid 234) \in \mathcal{OP}_{6,3}$. Then $\gamma=(1^3,3)$ and
\[
T_1 = \ytableaushort{122,2,3,4}\;.
\]
Now suppose that $T$ is such a hook-shape tableau with entries $b_1,b_2,\dots,b_{k}$ from left to right in its top row,
and entries $b_1, t_1, \dots, t_{n-k}$ down its first column. The inverse $\varphi^{-1}$ maps $T$ to the set partition $\pi$
that has as its first block $\pi_1= b_1\beta_1$, where $\beta_1=t_1, \dots, t_{m_1}$, and $t_1 < \dots < t_{m_1} \leqslant b_2$,
but $t_{m_1+1} > b_2$ so that $\beta_1$ is in the interval $(b_1,b_2]$. The second block of $\pi$ is given by
$\pi_2 = b_2 \beta_2$, where $\beta_2 = t_{m_1+1},\dots,t_{m_2}$, and $t_{m_1+1}< t_{m_1+2}< \dots <t_{m_2} \leqslant
b_3$, but $t_{m_2+1} > b_3$ and $\beta_2 \subseteq (b_2,b_3]$.
Continuing in this fashion, we set $\pi_k = b_k \alpha_k$, where $\alpha_k = t_{m_{k-1}+1},\dots, t_{n-k}$ and
$\alpha_k \subseteq (b_k,+\infty)$. Then $\varphi^{-1}(T) = \pi = (\pi_1\mid \pi_2 \mid \cdots \mid \pi_k)$,
where the ordered multiset partition $\pi$ has no descents.
\end{example}
\begin{example}
\label{example.pi phi}
The ordered multiset partition $\pi = (124 \mid 45. \mid 3 \mid 46.1\mid 23.1\mid 1 \mid 25) \in \mathcal{OP}_{15,7}$
has the following data:
\[
\begin{array}{lll} b_1 = 1, \alpha_1 =\emptyset, \beta_1 = 24 & \qquad b_2 = 4,\alpha_2 = 5,
\beta_2 =\emptyset & \qquad b_3 = 3, \alpha_3 = \emptyset, \beta_3 =\emptyset \\
b_4 = 4, \alpha_4 = 6, \beta_4 = 1 & \qquad b_5 = 2, \alpha_5 = 3, \beta_5 = 1 & \qquad b_6 = 1,
\alpha_6 = \emptyset, \beta_6 = \emptyset \\
b_7=2, \alpha_7 = 5, \beta_7 = \emptyset & \\
\end{array}
\]
and $\ell=3$, $d_1=5, d_2=d_3=3, d_4=4$ and $i_1=i_2=2, i_3=1, i_4=2$. Then
\ytableausetup{boxsize=1.1em}
\[
\pi = ({\color{blue}1}{\color{red}2}{\color{red}4} \mid {\color{blue}4}{\color{red}5}. \mid {\color{blue}3} \mid
{\color{blue}4}{\color{darkgreen}6}.{\color{orange}1}\mid {\color{blue}2}{\color{orange}3}.{\color{violet}1}\mid {\color{blue}1}
\mid {\color{blue}2}{\color{violet}5}) \mapsto
\ytableaushort{{\color{violet}1},{\color{violet}5}} \times \ytableaushort{{\color{orange}1},{\color{orange}3}}
\times \ytableaushort{{\color{darkgreen}6}} \times
\ytableaushort{\none\none {\color{blue}1}{\color{blue}2},\none\none {\color{blue}2},\none
{\color{blue}3}{\color{blue}4},{\color{blue}1}{\color{blue}4},{\color{red}2},{\color{red}4},{\color{red}5}}\;.
\]
\end{example}
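The combinatorial data entering Proposition~\ref{P:biject} can be extracted mechanically from a partition given in minimaj order. The following Python sketch (our own helper, in the same style as the earlier snippets) computes the descent set, the blocks containing the descents, the ribbon shape $\gamma$ (listed bottom row first) and the column lengths $c_1,\ldots,c_\ell$; on Example~\ref{example.pi phi} it returns $\mathrm{D}=\{5,8,11\}$, $\mathrm{I}=\{2,4,5\}$, $\gamma=(1^3,2,2,1,2)$ and $(c_1,c_2,c_3)=(2,2,1)$.
\begin{verbatim}
def shape_data(minimaj_blocks):
    # Descent positions D, blocks I containing the descents, the ribbon
    # shape gamma (bottom row first), and column lengths c_1, ..., c_ell.
    w, blk = [], []
    for b, block in enumerate(minimaj_blocks, start=1):
        for x in block:
            w.append(x)
            blk.append(b)
    n, k = len(w), len(minimaj_blocks)
    D = [p for p in range(1, n) if w[p-1] > w[p]]
    if not D:                  # ell = 0: gamma is the hook (1^{n-k}, k)
        return [], [], [1]*(n - k) + [k], []
    I = [blk[p-1] for p in D]  # block of the left entry of each descent
    d = [D[0]] + [D[j] - D[j-1] for j in range(1, len(D))] + [n - D[-1]]
    i = [I[0]] + [I[j] - I[j-1] for j in range(1, len(I))] + [k - I[-1]]
    ell = len(D)
    gamma = [1]*(d[0] - i[0]) + i
    c = [d[ell+1-j] - i[ell+1-j] for j in range(1, ell+1)]
    return D, I, gamma, c
\end{verbatim}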
It is helpful to keep the following picture in mind during the proof of Proposition~\ref{P:biject}, where the map $\varphi$
is taking the ordered multiset partition $\pi$ to the collection of tableaux $T_i$ as illustrated below. We adopt the shorthand
notation $\eta_j := i_1+\cdots+i_j$ for $1\leqslant j \leqslant \ell$, where we also set $\eta_0=0$ and $\eta_{\ell+1}=k$:
\[
\pi = (b_1 \beta_1 | b_2 \beta_2 | \cdots |b_{\eta_1} \alpha_{\eta_1}. \beta_{\eta_1}|b_{\eta_1+1} \beta_{\eta_1+1}
|\cdots |b_{\eta_j} \alpha_{\eta_j}.\beta_{\eta_j}| b_{\eta_j+1} \beta_{\eta_j+1} | \cdots | b_k \alpha_k)
\]
\ytableausetup{boxsize=2.9em}
\begin{equation}
\label{equation.T picture}
T_{\ell+1-j} = \ytableaushort{{\beta_{\eta_j}},{\beta_{\eta_j+1}},{\vdots},{\beta_{\eta_{j+1}-1}},{\alpha_{\eta_{j+1}}}}
\; \text{ for }1\leqslant j\leqslant \ell, \quad
T_{\ell+1} = \ytableaushort{\none\none\none\none{ b_{\eta_\ell+1}}{\cdots}{b_{\eta_{\ell+1}}},
\none\none\none\none{\vdots},
\none\none{b_{\eta_{j-1}+1}}{\cdots}
{b_{\eta_j}}, \none\none{\vdots},{b_1}{\cdots}{b_{\eta_1}},{\beta_1},{\vdots},{\beta_{\eta_1-1}},{\alpha_{\eta_1}}}\;.
\end{equation}
\begin{proof}[Proof of Proposition \ref{P:biject}]
Since the entries of $\pi$ are mapped bijectively to the entries of $T_1 \times T_2 \times \cdots \times T_{\ell+1}$,
the map $\varphi$ preserves the total weight, that is, $\mathsf{wt}(\pi)=(p_1,p_2,\ldots)$ equals the weight of
$T_1 \times T_2 \times \cdots \times T_{\ell+1}$, where $p_i$ is the number of
entries $i$ in $\pi$ for $i \in \mathbb{Z}_{>0}$. We need to show that $\varphi$ is well defined and
exhibit its inverse. For this, we can assume that $\ell \geqslant 1$, as the case $\ell = 0$ was treated in
Example~\ref{ex:ell=0}.
Observe first that there are $d_j$ entries in $\pi$ which are between two consecutive descents,
and among these entries there are exactly $i_j$ entries that are first elements of a block, since descents happen $i_j$
blocks apart. This implies that the tableaux have the shapes claimed.
To see that the tableaux are semistandard, consider first $T_{\ell+1}$, and let $\eta_j = i_1+\cdots+i_j$ as above.
A row numbered $d_1-i_1+j$ for $1 \leqslant j \leqslant \ell+1$ is weakly increasing, because the lack of a descent in
a block $\pi_i$ means $b_i \leqslant b_{i+1}$, and this holds for $i$ in the interval $\eta_{j-1} +1, \ldots, \eta_{j}$ between
two consecutive descents. The leftmost column is strictly increasing because it consists of the elements $b_1 < \beta_1
< \beta_2 < \cdots <\beta_{\eta_1-1} <\alpha_{\eta_1}$ (the lack of a descent before $\pi_{\eta_1}$ implies that
$\alpha_i=\emptyset$ for $i<\eta_1$ and $b_i <\beta_i \leqslant b_{i+1}< \beta_{i+1}$ by Case 2\,(a)
of Lemma~\ref{lemma.minimaj order}).
The rest of the columns of $T_{\ell+1}$ contain elements $b_i$, where $b_{\eta_{j-1}+1}$ is the first element in row
$d_{1}-i_1+j$ and $b_{\eta_j}$ is the last, and $b_{\eta_{j}+1}$ is the first element in the row immediately above it. We have
$b_{\eta_j} > b_{\eta_j+1}$, since there is a descent in block $\pi_{i_j}$ which implies this inequality by the ordering
condition in Case 2\,(b) of Lemma~\ref{lemma.minimaj order}.
The strict inequalities for the column tableaux $T_1,\ldots,T_{\ell}$ hold for the same reason that they hold for the first
column in $T_{\ell+1}$. That is, the columns consist of the elements $\beta_{\eta_j} < \beta_{\eta_j+1} <\cdots <
\beta_{\eta_{j+1}-1} <\alpha_{\eta_{j+1}}$, where all the $\alpha_i$ for $\eta_j< i<\eta_{j+1}$ are in fact
$\emptyset$, since those blocks fall under Case~2\,(a) of Lemma~\ref{lemma.minimaj order}.
Next, to show that $\varphi$ is a bijection, we describe the inverse map of $\varphi$.
For $\mathrm{D} = \{d_1,d_1+d_2, \ldots, d_1+d_2+\cdots+d_\ell\} \subseteq \{1,2,\dots,n-1\}$ and $\mathrm{I}
= \{i_1,i_1+i_2,\dots,i_1+i_2+\cdots+i_\ell\} = \{\eta_1,\eta_2,\ldots,\eta_\ell \} \subseteq \{1,2,\dots,k-1\}$ with $\ell$ distinct
elements each, suppose $d_{\ell+1}$ and $i_{\ell+1}$ are such that $d_1+d_2+\cdots+d_{\ell+1} = n$ and
$\eta_{\ell+1}=i_1+i_2+\cdots+i_{\ell+1}=k$. Assume
$T_1 \times \cdots \times T_\ell \times T_{\ell+1} \in \mathsf{SSYT}(1^{c_1}) \times \cdots \times \mathsf{SSYT}(1^{c_\ell})
\times \mathsf{SSYT}(\gamma)$, where $\gamma = (1^{d_1-i_1},i_1,i_2,\dots,i_{\ell+1})$ and $c_j = d_{\ell+2-j} - i_{\ell+2-j}$
for $1\leqslant j \leqslant \ell$. We construct $\pi$ by applying the following algorithm.
Read off the bottom $d_1-i_1$ entries of the first column of $T_{\ell+1}$. Let $b_1$ be the element immediately above
these entries in the first column of $T_{\ell+1}$, and note that $b_1$ is less than all of them. Let $b_2,\dots, b_{i_1}$ be the
elements in the same row of $T_{\ell+1}$ as $b_1$, reading from left to right. Assign $b_{\eta_1+1},\ldots, b_{\eta_2}$ to the
elements in the next higher row, and so forth, until reaching row $d_1-i_1+\ell+1$ (the top row) of $T_{\ell+1}$
and assigning $b_{\eta_\ell+1},\dots, b_{\eta_{\ell+1}}=b_k$ to its entries.
The elements in $\beta_1,\ldots,\beta_{\eta_1-1},\alpha_{\eta_1}$ are obtained by cutting the entries in
the first column of $T_{\ell+1}$ below $b_1$, so that $\beta_i$ lies in the interval $(b_i, b_{i+1}]$, and $\alpha_{\eta_1}$
lies in the interval $(b_{\eta_1},\infty)$.
Now for $1\leqslant j\leqslant \ell$, we obtain $\beta_{\eta_j},\beta_{\eta_j+1},\ldots,\beta_{\eta_{j+1}-1},\alpha_{\eta_{j+1}}$ by
cutting the elements in $T_{\ell+1-j}$ into sequences as follows: $\beta_{\eta_j} = T_{\ell+1-j} \cap ( -\infty, b_{\eta_j+1} ]$,
$\beta_{\eta_j+m} = T_{\ell+1-j} \cap (b_{\eta_j+m}, b_{\eta_j+m+1}]$ for $1\leqslant m \leqslant \eta_{j+1}-\eta_j-1$, and $\alpha_{\eta_{j+1}} = T_{\ell+1-j} \cap
(b_{\eta_{j+1}},+\infty)$.
The inequalities are naturally forced from the inequalities in the semistandard tableaux, and the descents at the
given positions are also forced, because by construction $\alpha_{\eta_j} > b_{\eta_j} > b_{\eta_j+1} \geqslant \beta_{\eta_j}$.
This process constructs the $b_i$, $\alpha_i$, and $\beta_i$ for each $i=1,\dots,k$, where we assume that sequences that
have not been defined by the process are empty. Then $\varphi^{-1}(T_1 \times T_2 \times \cdots \times T_{\ell+1})
= \pi = (\pi_1 \mid \pi_2 \mid \cdots \mid \pi_{k})$, where $\pi_i = b_i \alpha_i \beta_i$.
\end{proof}
For a (skew) partition $\lambda$, the \defn{Schur function} $\mathsf{s}_\lambda(\mathbf{x})$ is defined as
\begin{equation}
\label{equation.schur}
\mathsf{s}_\lambda(\mathbf{x}) = \sum_{T \in \mathsf{SSYT}(\lambda)} \mathbf{x}^{\mathsf{wt}(T)}.
\end{equation}
Similarly for $m \geqslant 1$, the \defn{$m$-th elementary symmetric function} $\mathsf{e}_m(\mathbf{x})$ is given by
\[
\mathsf{e}_m(\mathbf{x}) = \sum_{1 \leqslant j_1 < j_2 < \cdots < j_m} x_{j_1} x_{j_2} \cdots x_{j_m}.
\]
As an immediate consequence of Proposition~\ref{P:biject}, we have the following symmetric function identity.
\begin{corollary}
\label{C:symexpand}
Assume $\mathrm{D} \subseteq \{1,2,\dots,n-1\}$ and $\mathrm{I} \subseteq\{1,2,\dots, k-1\}$ are sets of $\ell$
distinct elements each and let $\mathrm{M(D,I)}$, $\gamma$ and $c_j$ for $1\leqslant j \leqslant \ell$ be as in
Proposition~\ref{P:biject}. Then
\[
\sum_{\pi \in \mathrm{M(D,I)}} \mathbf{x}^{\mathsf{wt}(\pi)}
= \mathsf{s}_\gamma(\mathbf{x}) \ \prod_{j=1}^\ell \mathsf{e}_{c_j}(\mathbf{x}).
\]
\end{corollary}
\section{Crystal on ordered multiset partitions}
\label{section.crystal}
\subsection{Crystal structure}
Denote the set of words of length $n$ over the alphabet $\{1,2,\ldots,r\}$ by $\mathcal{W}^{(r)}_n$.
The set $\mathcal{W}_n^{(r)}$ can be endowed with an $\mathfrak{sl}_r$-crystal structure as follows.
The weight $\mathsf{wt}(w)$ of $w\in \mathcal{W}_n^{(r)}$ is the tuple $(a_1,\ldots,a_r)$, where $a_i$ is the number
of letters $i$ in $w$. The \defn{Kashiwara raising} and \defn{lowering operators}
\[
e_i, f_i \colon \mathcal{W}_n^{(r)} \to \mathcal{W}_n^{(r)} \cup \{0\} \qquad \qquad \text{for $1\leqslant i <r$}
\]
are defined as follows. Associate to each letter $i$ in $w$ a closing bracket ``$)$'' and to each letter $i+1$ in $w$
an opening bracket ``$($'', and match brackets in the usual way, each ``$($'' pairing with the first available ``$)$''
to its right. Then $e_i$ changes the $i+1$ associated to the leftmost unmatched ``$($'' to an $i$; if there
is no such letter, $e_i(w)=0$. Similarly, $f_i$ changes the $i$ associated to the rightmost unmatched ``$)$'' to an $i+1$;
if there is no such letter, $f_i(w)=0$.
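The bracketing rule translates directly into code. The sketch below is our own illustrative implementation on words (lists of positive integers), with \texttt{None} playing the role of $0$.
\begin{verbatim}
def f(w, i):
    # f_i: each i+1 is '(' and each i is ')'; change the i of the
    # rightmost unmatched ')' to i+1, or return None.
    opens, unmatched = 0, []
    for pos, x in enumerate(w):
        if x == i + 1:
            opens += 1
        elif x == i:
            if opens > 0:
                opens -= 1             # matched with a '(' to its left
            else:
                unmatched.append(pos)  # unmatched ')'
    if not unmatched:
        return None
    v = list(w)
    v[unmatched[-1]] = i + 1
    return v

def e(w, i):
    # e_i: change the i+1 of the leftmost unmatched '(' to i.
    stack = []                         # positions of currently open '('
    for pos, x in enumerate(w):
        if x == i + 1:
            stack.append(pos)
        elif x == i and stack:
            stack.pop()                # this ')' closes the nearest '('
    if not stack:
        return None
    v = list(w)
    v[stack[0]] = i
    return v
\end{verbatim}
For instance, on $w=[3,2,1,1]$ one gets \texttt{f(w, 1) = [3, 2, 1, 2]} and \texttt{e(w, 1) = None}; here $w$ is the reading word of $\varphi((1\mid 123))$, and this computation matches the edge $(1\mid 123) \to (12 \mid 23)$ in Figure~\ref{figure.crystal} below.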
For $\lambda$ a (skew) partition, the $\mathfrak{sl}_r$-crystal action on $\mathsf{SSYT}^{(r)}(\lambda)$ is induced
by the crystal on $\mathcal{W}_{|\lambda|}^{(r)}$, where $|\lambda|$ is the number of boxes in $\lambda$. Consider the
row-reading word $\mathsf{row}(T)$ of $T\in \mathsf{SSYT}^{(r)}(\lambda)$, which is the word obtained from $T$ by
reading the rows from bottom to top, left to right. Then $f_i(T)$ (resp. $e_i(T)$) is the RSK insertion tableau of
$f_i(\mathsf{row}(T))$ (resp. $e_i(\mathsf{row}(T))$). It is well known that {$f_i(T)$ is a tableau in
$\mathsf{SSYT}^{(r)}(\lambda)$ with weight equal to} $\mathsf{wt}(T)-\epsilon_i+\epsilon_{i+1}$, where $\epsilon_i$ is $i$-th
standard vector in $\ZZ^r$.
Similarly, $e_i(T) \in \mathsf{SSYT}^{(r)}(\lambda)$, and $e_i(T)$ has weight $\mathsf{wt}(T)+\epsilon_i-\epsilon_{i+1}$. See for
example~\cite[Chapter 3]{Bump.Schilling.2017}.
In the same spirit, an $\mathfrak{sl}_r$-crystal structure can be imposed on
\[
\mathsf{SSYT}^{(r)}(1^{c_1},\ldots,1^{c_\ell},\gamma)
:= \mathsf{SSYT}^{(r)}(1^{c_1}) \times \cdots \times \mathsf{SSYT}^{(r)}(1^{c_\ell}) \times \mathsf{SSYT}^{(r)}(\gamma)
\]
by concatenating the reading words of the tableaux in the tuple. This yields crystal operators
\[
e_i,f_i \colon \mathsf{SSYT}^{(r)}(1^{c_1},\ldots,1^{c_\ell},\gamma) \to
\mathsf{SSYT}^{(r)}(1^{c_1},\ldots,1^{c_\ell},\gamma) \cup \{0\}.
\]
Via the bijection $\varphi$ of Proposition~\ref{P:biject}, this also imposes crystal operators on ordered
multiset partitions
\[
\tilde{e}_i,\tilde{f}_i \colon \mathcal{OP}_{n,k}^{(r)} \to \mathcal{OP}_{n,k}^{(r)} \cup \{0\}
\]
as $\tilde{e}_i = \varphi^{-1} \circ e_i \circ \varphi$ and $\tilde{f}_i = \varphi^{-1} \circ f_i \circ \varphi$.
An example of a crystal structure on $\mathcal{OP}_{n,k}^{(r)}$ is given in Figure~\ref{figure.crystal}.
\begin{figure}
\scalebox{0.7}{
\begin{tikzpicture}[>=latex,line join=bevel,]
\node (node_13) at (323.0bp,9.0bp) [draw,draw=none] {$\left(32 \mid 23\right)$};
\node (node_14) at (114.0bp,305.0bp) [draw,draw=none] {$\left(1 \mid 123\right)$};
\node (node_9) at (319.0bp,231.0bp) [draw,draw=none] {$\left(31\mid 12\right)$};
\node (node_8) at (32.0bp,231.0bp) [draw,draw=none] {$\left(23 \mid 12\right)$};
\node (node_7) at (360.0bp,157.0bp) [draw,draw=none] {$\left(31 \mid 13\right)$};
\node (node_6) at (196.0bp,231.0bp) [draw,draw=none] {$\left(2\mid 123\right)$};
\node (node_5) at (278.0bp,157.0bp) [draw,draw=none] {$\left(312\mid 2\right)$};
\node (node_4) at (114.0bp,231.0bp) [draw,draw=none] {$\left(12\mid 23\right)$};
\node (node_3) at (32.0bp,157.0bp) [draw,draw=none] {$\left(23 \mid 13\right)$};
\node (node_2) at (196.0bp,157.0bp) [draw,draw=none] {$\left(3\mid 123\right)$};
\node (node_1) at (114.0bp,157.0bp) [draw,draw=none] {$\left(123 \mid 3\right)$};
\node (node_0) at (32.0bp,305.0bp) [draw,draw=none] {$\left(231\mid 1\right)$};
\node (node_11) at (319.0bp,305.0bp) [draw,draw=none] {$\left(21 \mid 12\right)$};
\node (node_10) at (323.0bp,83.0bp) [draw,draw=none] {$\left(31\mid 23\right)$};
\node (node_12) at (196.0bp,305.0bp) [draw,draw=none] {$\left(21\mid 13\right)$};
\draw [blue,->] (node_10) ..controls (323.0bp,62.872bp) and (323.0bp,42.801bp) .. (node_13);
\definecolor{strokecol}{rgb}{0.0,0.0,0.0};
\pgfsetstrokecolor{strokecol}
\draw (332.0bp,46.0bp) node {$1$};
\draw [red,->] (node_6) ..controls (196.0bp,210.87bp) and (196.0bp,190.8bp) .. (node_2);
\draw (205.0bp,194.0bp) node {$2$};
\draw [red,->] (node_9) ..controls (330.34bp,210.54bp) and (341.93bp,189.61bp) .. (node_7);
\draw (354.0bp,194.0bp) node {$2$};
\draw [red,->] (node_8) ..controls (32.0bp,210.87bp) and (32.0bp,190.8bp) .. (node_3);
\draw (41.0bp,194.0bp) node {$2$};
\draw [blue,->] (node_9) ..controls (307.66bp,210.54bp) and (296.07bp,189.61bp) .. (node_5);
\draw (287.0bp,194.0bp) node {$1$};
\draw [red,->] (node_11) ..controls (319.0bp,284.87bp) and (319.0bp,264.8bp) .. (node_9);
\draw (328.0bp,268.0bp) node {$2$};
\draw [red,->] (node_5) ..controls (290.44bp,136.54bp) and (303.17bp,115.61bp) .. (node_10);
\draw (287.0bp,120.0bp) node {$2$};
\draw [blue,->] (node_0) ..controls (32.0bp,284.87bp) and (32.0bp,264.8bp) .. (node_8);
\draw (41.0bp,268.0bp) node {$1$};
\draw [red,->] (node_4) ..controls (114.0bp,210.87bp) and (114.0bp,190.8bp) .. (node_1);
\draw (123.0bp,194.0bp) node {$2$};
\draw [blue,->] (node_12) ..controls (196.0bp,284.87bp) and (196.0bp,264.8bp) .. (node_6);
\draw (205.0bp,268.0bp) node {$1$};
\draw [blue,->] (node_7) ..controls (349.82bp,136.65bp) and (339.51bp,116.01bp) .. (node_10);
\draw (355.0bp,120.0bp) node {$1$};
\draw [blue,->] (node_14) ..controls (114.0bp,284.87bp) and (114.0bp,264.8bp) .. (node_4);
\draw (123.0bp,268.0bp) node {$1$};
\end{tikzpicture}
}
\caption{The crystal structure on $\mathcal{OP}_{4,2}^{(3)}$. The $\mm$ values of the connected components
are $2,0,1,1$, from left to right.
\label{figure.crystal}}
\end{figure}
\begin{theorem}
\label{theorem.crystal}
The operators $\tilde{e}_i, \tilde{f}_i$, and $\mathsf{wt}$ impose an $\mathfrak{sl}_r$-crystal structure on
$\mathcal{OP}_{n,k}^{(r)}$. In addition, $\tilde{e}_i$ and $\tilde{f}_i$ preserve the $\mm$ statistic.
\end{theorem}
\begin{proof}
The operators $\tilde{e}_i, \tilde{f}_i$, and $\mathsf{wt}$ impose an $\mathfrak{sl}_r$-crystal structure by construction
since $\varphi$ is a weight-preserving bijection. The Kashiwara operators $\tilde{e}_i$ and $\tilde{f}_i$ preserve
the $\mm$ statistic, since by Proposition~\ref{P:biject}, the bijection $\varphi$ restricts to $\mathrm{M(D,I)}^{(r)}$
which fixes the descents of the ordered multiset partitions in minimaj order.
\end{proof}
\subsection{Explicit crystal operators}
Let us now write down the crystal operator $\tilde{f}_i \colon \mathcal{OP}_{n,k} \to \mathcal{OP}_{n,k}$
of Theorem~\ref{theorem.crystal} explicitly on $\pi\in \mathcal{OP}_{n,k}$ in minimaj order.
Start by creating a word $w$ from right to left: first read the initial element of each block of $\pi$ from right to left
(so that $w$ ends with $b_1 b_2 \cdots b_k$), and then read the remaining elements of $\pi$ from left to right. Note that $w$ agrees with $\mathsf{row}(\varphi(\pi))$.
For example, $w=513165421434212$ for $\pi$ in Example~\ref{example.pi phi}.
Use the crystal operator $f_i$ on words to determine
which $i$ in $w$ to change to an $i+1$. Circle the corresponding letter $i$ in $\pi$. The crystal operator $\tilde{f}_i$
on $\pi$ changes the circled $i$ to $i+1$ unless we are in one of the following two cases:
\begin{subequations}
\label{equation.f explicit}
\begin{align}
\label{equation.f explicit a}
\cdots \encircle{i} \mid i &\quad \stackrel{\tilde{f}_i}{\longrightarrow} \quad \cdots \mid i \; \encircle{i\!\!+\!\!1} \;, \\
\label{equation.f explicit b}
\mid \encircle{i} \; i\!\!+\!\!1 & \quad \stackrel{\tilde{f}_i}{\longrightarrow} \quad i\!\!+\!\!1 \mid \encircle{i\!\!+\!\!1} \;.
\end{align}
\end{subequations}
Here ``$\cdots$'' indicates that the block is not empty in this region.
\begin{example}
In Figure~\ref{figure.crystal}, $\tilde{f}_2(31 \encircle{2} \mid 2) = (31 \mid 2 \; \encircle{3})$ is an example
of~\eqref{equation.f explicit a}. Similarly, $\tilde{f}_1(31 \mid \encircle{1}\;2) = (312 \mid \encircle{2})$ is an example
of~\eqref{equation.f explicit b}.
\end{example}
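The word $w$ is easy to generate; the following helper (our own illustrative code, with the blocks of $\pi$ given in minimaj order) can be combined with the operator \texttt{f} from the sketch in the previous subsection to locate the circled letter. On Example~\ref{example.pi phi} it returns $w=513165421434212$, as above.
\begin{verbatim}
def reading_word(minimaj_blocks):
    # w ends with the block-initial letters b_1 ... b_k; these are
    # preceded by the remaining letters of pi in reverse reading order.
    firsts = [block[0] for block in minimaj_blocks]
    rest = [x for block in minimaj_blocks for x in block[1:]]
    return rest[::-1] + firsts
\end{verbatim}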
\begin{proposition}
The above explicit description for $\tilde{f}_i$ is well defined and agrees with the definition of
Theorem~\ref{theorem.crystal}.
\end{proposition}
\begin{proof}
The word $w$ described above is precisely $\mathsf{row}(\varphi(\pi))$ on which $f_i$ acts.
Hence the circled letter $i$ is indeed the letter changed to $i+1$. It remains to check how $\varphi^{-1}$
changes the blocks. We will demonstrate this for the cases in~\eqref{equation.f explicit} as the other cases are similar.
In case \eqref{equation.f explicit a} the circled letter $i$ in block $\pi_j$ does not correspond to $b_j$ in $\pi_j$ as it is not at
the beginning of its block. Hence, it belongs to $\alpha_j$ or $\beta_j$. The circled letter is not a descent.
Changing it to $i+1$ would create a descent. The map $\varphi^{-1}$ distributes the letters in $\alpha_j$
and $\beta_j$ to preserve descents, hence the circled $i$ moves over to the next block on the right and becomes a
circled $i+1$. Note also that $i+1 \not \in \pi_{j+1}$, since otherwise the circled $i$ would have been bracketed in $w$,
contradicting the fact that $f_i$ is acting on it.
In case \eqref{equation.f explicit b} the circled letter $i$ in block $\pi_j$ corresponds to $b_j$ in $\pi_j$. Again,
$\varphi^{-1}$ now associates the $i+1 \in \pi_j$ to the previous block after applying $f_i$. Note that
$i+1 \not \in \pi_{j-1}$ since it would necessarily be $b_{j-1}$. But then the circled $i$ would have been bracketed
in $w$, contradicting the fact that $f_i$ is acting on it.
\end{proof}
\subsection{Schur expansion}
The \defn{character} of an $\mathfrak{sl}_r$-crystal $B$ is defined as
\[
\mathrm{ch} B = \sum_{b\in B} \mathbf{x}^{\mathsf{wt}(b)}.
\]
Denote by $B(\lambda)$ the $\mathfrak{sl}_\infty$-crystal on $\mathsf{SSYT}(\lambda)$ defined above.
This is a connected highest weight crystal with highest weight $\lambda$, and the character is the Schur
function $\mathsf{s}_\lambda(\mathbf{x})$ defined in~\eqref{equation.schur}
\[
\mathrm{ch}B(\lambda) = \mathsf{s}_\lambda(\mathbf{x}).
\]
Similarly, denoting by $B^{(r)}(\lambda)$ the $\mathfrak{sl}_r$-crystal on $\mathsf{SSYT}^{(r)}(\lambda)$,
its character is the Schur polynomial
\[
\mathrm{ch}B^{(r)}(\lambda) = \mathsf{s}_\lambda(x_1,\ldots,x_r).
\]
Let us define
\[
\mathsf{Val}^{(r)}_{n,k}(\mathbf{x};0,t) = \sum_{\pi\in \mathcal{OP}^{(r)}_{n,k+1}} t^{\mm(\pi)} \mathbf{x}^{\mathsf{wt}(\pi)},
\]
which satisfies $\mathsf{Val}_{n,k}(\mathbf{x};0,t) = \mathsf{Val}_{n,k}^{(r)}(\mathbf{x};0,t)$ for $r\geqslant n$,
where $\mathsf{Val}_{n,k}(\mathbf{x};0,t)$ is as in~\eqref{equation.val}.
As a consequence of Theorem~\ref{theorem.crystal}, we now obtain the Schur expansion of
$\mathsf{Val}_{n,k}^{(r)}(\mathbf{x};0,t)$.
\begin{corollary}
We have
\[
\mathsf{Val}_{n,k-1}^{(r)}(\mathbf{x};0,t)
= \sum_{\substack{\pi \in \mathcal{OP}^{(r)}_{n,k}\\ \tilde{e}_i(\pi) = 0 \;\; \forall \;1\leqslant i <r}} t^{\mm(\pi)}
\mathsf{s}_{\mathsf{wt}(\pi)}.
\]
\end{corollary}
When $r\geqslant n$, by~\cite{Wilson.2016} and~\cite[Proposition 3.18]{Rhoades.2016} this is also equal to
\[
\mathsf{Val}_{n,k-1}(\mathbf{x};0,t) = \sum_{\lambda \vdash n} \;\; \sum_{T \in \mathsf{SYT}(\lambda)}
t^{\mathsf{maj}(T) + \binom{n-k}{2} -(n-k) \mathsf{des}(T)} \left[ \begin{array}{c} \mathsf{des}(T)\\ n-k \end{array} \right]
\mathsf{s}_\lambda(\mathbf{x}),
\]
where $\mathsf{SYT}(\lambda)$ is the set of standard Young tableaux of shape $\lambda$ (that is, the elements in
$\mathsf{SSYT}(\lambda)$ of weight $(1^{|\lambda|})$), $\mathsf{des}(T)$ is the number of descents of $T$,
$\mathsf{maj}(T)$ is the major index of $T$ (the sum of the descent positions of $T$), and the $t$-binomial coefficients in the sum
are defined using the rule
\[
\left[ \begin{array}{c} m \\ p \end{array} \right] = \frac{[m]!}{[p]!\ [m-p]!} \ \ \text{where $[p]! = [p][p-1] \cdots [2][1]$
and \ $[p] = 1 + t + \cdots + t^{p-1}$}.
\]
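The $t$-binomial coefficients satisfy the Pascal-type recurrence
$\left[ \begin{array}{c} m \\ p \end{array} \right]
= \left[ \begin{array}{c} m-1 \\ p-1 \end{array} \right]
+ t^p \left[ \begin{array}{c} m-1 \\ p \end{array} \right]$,
which the following Python sketch (our own helper, continuing the snippets above) uses to compute them as coefficient lists.
\begin{verbatim}
def t_binomial(m, p):
    # Coefficients [a_0, a_1, ...] of the t-binomial coefficient,
    # via [m, p] = [m-1, p-1] + t^p [m-1, p].
    if p < 0 or p > m:
        return [0]
    if p == 0 or p == m:
        return [1]
    left = t_binomial(m - 1, p - 1)
    right = t_binomial(m - 1, p)
    out = [0] * max(len(left), p + len(right))
    for j, a in enumerate(left):
        out[j] += a
    for j, a in enumerate(right):
        out[j + p] += a
    return out
\end{verbatim}
For example, \verb|t_binomial(4, 2)| returns \verb|[1, 1, 2, 1, 1]|, that is, $1+t+2t^2+t^3+t^4$.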
\begin{example}
The crystal $\mathcal{OP}_{4,2}^{(3)}$, displayed in Figure~\ref{figure.crystal}, has four highest weight elements
with weights $(2,1,1)$, $(2,1,1)$, $(2,1,1)$, $(2,2)$ from left to right. Hence, we obtain the Schur expansion
\[
\mathsf{Val}^{(3)}_{4,1}(\mathbf{x};0,t) = (1+t+t^2)\; \mathsf{s}_{(2,1,1)}(\mathbf{x}) + t \;\mathsf{s}_{(2,2)}(\mathbf{x}).
\]
\end{example}
\section{Equidistributivity of the minimaj and maj statistics}
\label{section.equi}
In this section, we describe a bijection $\psi \colon \calOP_{n,k} \to \calOP_{n,k}$ in Theorem~\ref{theorem.bij OP}
with the property that $\mm(\pi) = \mathsf{maj}(\psi(\pi))$ for $\pi \in \calOP_{n,k}$. This proves the link between
$\mm$ and $\mathsf{maj}$ that was missing in~\cite{Wilson.2016}.
We can interpret $\psi$ as a crystal isomorphism, where $\calOP_{n,k}$ on the left is the $\mm$ crystal of
Section~\ref{section.crystal} and $\calOP_{n,k}$ on the right is viewed as a crystal of $k$ columns with elements
written in major index order.
The bijection $\psi$ is the composition of $\varphi$ of Proposition~\ref{P:biject} with a certain shift operator.
When applying $\varphi$ to $\pi \in \calOP_{n,k}$, we obtain the tuple $T^\bullet=T_1 \times \cdots \times T_{\ell+1}$
in~\eqref{equation.T picture}.
We would like to view each column in the tuple of tableaux as a block of a new ordered multiset partition. However, note
that some columns could be empty, namely if $c_j=d_{\ell+2-j}-i_{\ell+2-j}$ in Proposition~\ref{P:biject} is zero for some
$1\leqslant j \leqslant \ell$. For this reason, let us introduce the set of \defn{weak ordered multiset partitions}
$\mathcal{WOP}_{n,k}$, where we relax the condition that all blocks need to be nonempty sets.
Let $T^\bullet = T_1 \times \cdots \times T_{\ell+1}$ be a tuple of skew tableaux. Define $\read(T^\bullet)$ to be the
weak ordered multiset partition whose blocks are obtained from $T^\bullet$ by reading the columns from the
left to the right and from the bottom to the top; each column constitutes one of the blocks in $\read(T^\bullet)$.
Note that given $\pi= (\pi_1 | \pi_2| \cdots | \pi_k) \in \calOP_{n,k}$ in minimaj order, $\read(\varphi(\pi))$ is a weak
ordered multiset partition in major index order.
\begin{example}
\label{example.pi ex}
Let $\pi = (1\mid 56.\mid 4.\mid 37.12\mid 2.1\mid 1\mid 34) \in \calOP_{13,7}$, written in minimaj order.
We have $\mm(\pi)=22$. Then
\ytableausetup{boxsize=1.1em}
\[
T^\bullet = \varphi(\pi) =
\ytableaushort{1,4} \times \ytableaushort{1,2}
\times \ytableaushort{7} \times \emptyset \times
\ytableaushort{\none 13,\none 2,\none 3,\none 4,15,6}
\]
and $\pi'=\read(T^\bullet) = (4.1\mid 2.1\mid 7.\mid \emptyset \mid 6.1\mid 5.4.3.2.1 \mid 3)$.
\end{example}
\begin{lemma}
\label{lem.majproperties}
Let $\mathcal{I}=\{\read(\varphi(\pi)) \mid \pi \in \calOP_{n,k}\} \subseteq \mathcal{WOP}_{n,k}$,
$\pi' = \read(\varphi(\pi)) \in \mathcal{I}$, and $b_i$ the first elements in each block of $\pi$ in minimaj order as
in Lemma~\ref{lemma.minimaj order}. Then $\pi'$ has the following properties:
\begin{enumerate}
\item The last $k$ elements of $\pi'$ are $b_1,\ldots,b_k$, and $b_i$ and $b_{i+1}$ are in different blocks if and only
if $b_i \leqslant b_{i+1}$.
\item If $b_1,\ldots,b_k$ are contained in precisely $k-j$ blocks, then there are at least $j$ descents in the blocks
containing the $b_i$'s.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $\pi\in \calOP_{n,k}$, written in minimaj order. Then by~\eqref{equation.T picture},
$\pi'=\read(\varphi(\pi))$ is of the form
\[
\pi'=
(\alpha^{\mathrm{rev}}_{\eta_{\ell+1}}\beta^{\mathrm{rev}}_{\eta_{\ell+1}-1}\cdots\beta^{\mathrm{rev}}_{\eta_\ell} \mid
\cdots\mid
\alpha^{\mathrm{rev}}_{\eta_1}\beta^{\mathrm{rev}}_{\eta_1-1}\cdots\beta^{\mathrm{rev}}_1b_1 \cdots \mid
\cdots \mid
b_{\eta_1}b_{\eta_1-1} \cdots \mid
\cdots \mid
\cdots b_k),
\]
where the superscript $\mathrm{rev}$ indicates that the elements are listed in decreasing order (rather than increasing order).
Since the rows of a semistandard tableau are weakly increasing and the columns are strictly increasing, the blocks
of $\pi'=\read(\varphi(\pi))$ are empty or in strictly decreasing order. This implies that $b_i$ and $b_{i+1}$ are in different
blocks of $\pi'$ precisely when $b_i\leqslant b_{i+1}$, so a block of $\pi'$ that contains a $b_i$ cannot have a descent
at its end. This proves~(1).
In a weak ordered multiset partition written in major index order, any block of size $r\geqslant 2$ has $r-1$ descents.
So if $b_1,\ldots, b_k$ are contained in precisely $k-j$ blocks, then at least $j$ of these elements are contained in blocks
of size at least two, so there are at least $j$ descents in the blocks containing the $b_i$'s. This proves~(2).
\end{proof}
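For instance, for $\pi$ and $\pi'$ as in Example~\ref{example.pi ex}, the last $k=7$ elements of $\pi'$ are $1,5,4,3,2,1,3$, which are precisely $b_1,\ldots,b_7$. Here $b_1=1$ and $b_2=5$ lie in different blocks since $b_1\leqslant b_2$, while $b_2=5$ and $b_3=4$ share a block since $b_2>b_3$. Moreover, the $b_i$ occupy $k-j=3$ blocks, and the blocks $(6.1)$, $(5.4.3.2.1)$, and $(3)$ containing them have $1+4+0=5\geqslant 4=j$ descents, as guaranteed by~(2).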
\begin{remark}
Let $\pi'\in \mathcal{WOP}_{n,k}$ be in major index order such that
there are at least $k$ elements after the rightmost occurrence of a block that is either empty or has a descent at its end.
In this case, there exists a skew tableau $T^\bullet$ such that $\pi'=\read(T^\bullet)$. In fact, this characterizes
$\mathcal{I} := \mathrm{im} (\read \circ \varphi)$.
\end{remark}
\begin{lemma}
\label{lemma.read}
The map $\read$ is invertible.
\end{lemma}
\begin{proof}
Suppose $\pi' \in \mathcal{WOP}_{n,k}$ is in major index order such that there are at least $k$ elements after the
rightmost occurrence of a block that is either empty or has a descent at its end. Since there are no occurrences of an
empty block or a descent at the end of a block amongst the last $k$ elements of $\pi'$, the blocks of $\pi'$ containing
the last $k$ elements form the columns of a skew ribbon tableau $T\in \mathsf{SSYT}(\gamma)$, and the remaining blocks
of $\pi'$ form the column tableaux to the left of the skew ribbon tableau, so $\read$ is invertible.
\end{proof}
We are now ready to introduce the shift operators.
\begin{definition}
\label{definition.Lshift}
We define the \defn{left shift operation} $\L$ on $\pi'\in \mathcal{I} = \{ \read(\varphi(\pi)) \mid \pi \in \mathcal{OP}_{n,k}\}$
as follows. Suppose $\pi'$ has $m \geqslant 0$ blocks $\pi_{p_m}',\ldots, \pi_{p_1}'$ that are either empty or have
a descent at the end, and $1 \leqslant p_m < \cdots < p_2 < p_1<k$. Set
\[
\L(\pi') = \L^{(m)}(\pi'),
\]
where $\L^{(i)}$ for $0\leqslant i\leqslant m$ are defined as follows:
\begin{enumerate}
\item
Set $\L^{(0)}(\pi')=\pi'$.
\item
Suppose $\L^{(i-1)}(\pi')$ for $1\leqslant i \leqslant m$ is defined. By induction, the $p_i$-th block of $\L^{(i-1)}(\pi')$ is
$\pi'_{p_i}$. Let $S_i$ be the sequence of elements starting immediately to the right of block $\pi'_{p_i}$ in $\L^{(i-1)}(\pi')$
up to and including the $p_i$-th descent after the block $\pi_{p_i}'$. Let $\L^{(i)}(\pi')$ be the weak ordered multiset
partition obtained by moving each element in $S_i$ one block to its left. Note that all blocks with index smaller than
$p_i$ in $\L^{(i)}(\pi')$ are the same as in $\pi'$.
\end{enumerate}
\end{definition}
\begin{example}
\label{example.pi ex2}
Continuing Example~\ref{example.pi ex}, we have
$\pi'=(4.1\mid 2.1\mid 7.\mid \emptyset \mid {\color{blue}6}.{\color{blue}1}\mid {\color{blue}5}.{\color{blue}4}.{\color{blue}3}
.2.1 \mid 3)$,
which is in major index order. We have $m=2$ with $p_2=3<4=p_1$, $S_1={\color{blue}61543}$,
$S_2={\color{red}6154}$ and
\begin{equation*}
\begin{split}
\L^{(1)}(\pi') &= (4.1\mid 2.1\mid 7. \mid {\color{red}6}.{\color{red}1}\mid {\color{red}5}.{\color{red}4}.3.\mid 2.1\mid 3),\\
\L(\pi') = \L^{(2)}(\pi') &= (4.1\mid 2.1\mid 7.6.1 \mid 5.4.\mid 3.\mid 2.1\mid 3).
\end{split}
\end{equation*}
Note that $\mathsf{maj}(\pi')=28$, $\mathsf{maj}(\L^{(1)}(\pi'))=25$, and $\mathsf{maj}(\L(\pi')) = 22 = \mm(\pi)$.
\end{example}
\begin{proposition}
\label{proposition.L}
The left shift operation $\L \colon \mathcal{I} \to \mathcal{OP}_{n,k}$ is well defined.
\end{proposition}
\begin{proof}
Suppose $\pi'\in\mathcal{I}$ has $m \geqslant 0$ blocks $\pi_{p_1}',\ldots, \pi_{p_m}'$ that are either empty or have a
descent at the end, and $1\leqslant p_m < \cdots < p_2 < p_1 < k$.
If $m=0$, then $\L(\pi')=\pi' \in \mathcal{OP}_{n,k}$ and we are done.
We proceed by induction on $m$. Note that $\L^{(1)}$ acts on the rightmost block $\pi_{p_1}'$.
Notice that $\pi_{p_1}'$ cannot contain any of the $b_i$'s by Lemma~\ref{lem.majproperties}~(1).
Hence, since there are at least $k$ elements in the $k-p_1$ blocks following $\pi_{p_1}'$,
by Lemma~\ref{lem.majproperties}~(2), there are at least $p_1$ descents after $\pi_{p_1}'$, so $\L^{(1)}$ can be applied
to $\pi'$.
Observe that applying $\L^{(1)}$ to $\pi'$ does not create any new empty blocks to the right of $\pi_{p_1}'$, because
creating a new empty block means that the last element of $S_1$, which is a descent, is at the end of a block.
This cannot happen, since the rightmost occurrence of an empty block or a descent at the end of its block was assumed
to be in $\pi_{p_1}'$. However, note that applying $\L^{(1)}$ to $\pi'$ does create a new block with a descent at its end,
and this descent is given by the $p_1$-th descent after the block $\pi_{p_1}'$ (which is the last element of $S_1$).
Now suppose $\L^{(i-1)}(\pi')$ is defined for $i \geqslant 2$. By induction, there are at least $p_1>p_i$ descents following
the block $\pi_{p_i}'$, so the set $S_i$ of Definition~\ref{definition.Lshift} exists and we can move the elements in $S_i$
left one block to construct $\L^{(i)}(\pi')$ from $\L^{(i-1)}(\pi')$. Furthermore, $\L^{(i)}(\pi')$ does not have any
new empty blocks to the right of $\pi_{p_i}'$. To see this, note that the number of descents in $S_i$ is $p_i$, so the
number of descents in $S_i$ is strictly decreasing as $i$ increases. This implies that the $i-1$ newly created descents
at the ends of blocks of $\L^{(i-1)}(\pi')$ occur strictly to the right of $S_i$, and so the last element of $S_i$ cannot
be a descent at the end of a block of $\L^{(i-1)}(\pi')$.
Lastly, $\L(\pi') = \L^{(m)}(\pi')\in \calOP_{n,k}$, since it does not have any empty blocks, and every block of
$\L(\pi')$ is in decreasing order because either we moved every element of a block into an empty block or we moved
elements into a block with a descent at the end.
\end{proof}
\begin{definition}
\label{definition.Rshift}
We define the \defn{right shift operation} $\mathsf{R}$ on $\mu\in \mathcal{OP}_{n,k}$ in major index order as follows.
Suppose $\mu$ has $m\geqslant 0$ blocks $\mu_{q_1}, \ldots, \mu_{q_m}$ that have a descent at the end and
$q_1 < q_2 < \cdots < q_m$. Set
\[
\mathsf{R}(\mu) = \mathsf{R}^{(m)}(\mu),
\]
where $\mathsf{R}^{(i)}$ for $0\leqslant i \leqslant m$ are defined as follows:
\begin{enumerate}
\item
Set $\mathsf{R}^{(0)}(\mu)=\mu$.
\item
Suppose $\mathsf{R}^{(i-1)}(\mu)$ for $1\leqslant i \leqslant m$ is defined. Let $U_i$ be the sequence of $q_i$ elements
to the left of, and including, the last element in the $q_i$-th block of $\mathsf{R}^{(i-1)}(\mu)$. Let $\mathsf{R}^{(i)}(\mu)$ be the
weak ordered multiset partition obtained by moving each element in $U_i$ one block to its right.
Note that all blocks to the right of the $(q_i+1)$-th block are the same in $\mu$ and $\mathsf{R}^{(i)}(\mu)$.
\end{enumerate}
\end{definition}
Note that $\mathsf{R}$ can potentially create empty blocks.
\begin{example}
Continuing Example~\ref{example.pi ex2}, let $\mu = \L(\pi') = (4.1\mid 2.1\mid 7.6.1 \mid 5.4.\mid 3.\mid 2.1\mid 3)$.
We have $m=2$ with $q_1=4<5=q_2$, $U_1=6154$, $U_2=61543$ and
\begin{equation*}
\begin{split}
\mathsf{R}^{(1)}(\mu) &= (4.1\mid 2.1\mid 7. \mid 6.1 \mid 5.4.3.\mid 2.1\mid 3),\\
\mathsf{R}(\mu) = \mathsf{R}^{(2)}(\mu) &= (4.1\mid 2.1\mid 7. \mid \emptyset \mid 6.1 \mid 5.4.3.2.1\mid 3),
\end{split}
\end{equation*}
which is the same as $\pi'$ in Example~\ref{example.pi ex2}.
\end{example}
\begin{proposition}
\label{proposition.R}
The right shift operation $\mathsf{R}$ is well defined and is the inverse of $\L$.
\end{proposition}
\begin{proof}
Suppose $\mu\in \mathcal{OP}_{n,k}$ in major index order has descents at the end of the blocks
$\mu_{q_1},\ldots, \mu_{q_m}$. If $m=0$, then $\mathsf{R}(\mu) = \mu \in\mathcal{OP}_{n,k} \subseteq \mathcal{WOP}_{n,k}$
and there is nothing to show.
We proceed by induction on $m$. The ordered multiset partition $\mu$ does not have empty blocks, so there are at
least $q_1$ elements in the first $q_1$ blocks of $\mu$, and $\mathsf{R}^{(1)}$ can be applied to $\mu$.
Now suppose $\mathsf{R}^{(i-1)}(\mu)$ is defined for $i\geqslant2$. By induction, there are at least $q_{i-1}+1$ elements in
the first $q_{i-1}+1$ blocks of $\mathsf{R}^{(i-1)}(\mu)$. Since the blocks $\mu_{q_{i-1}+2},\ldots, \mu_{q_i}$ in $\mu$
are all nonempty, there are at least $q_{i-1}+1+(q_i-(q_{i-1}+1)) = q_i$ elements in the first $q_i$ blocks of $\mathsf{R}^{(i-1)}(\mu)$,
so the set $U_i$ of Definition~\ref{definition.Rshift} exists and we can move the elements in $U_i$ one block to the
right to construct $\mathsf{R}^{(i)}(\mu)$ from $\mathsf{R}^{(i-1)}(\mu)$.
Furthermore, every nonempty block of $\mathsf{R}(\mu)$ is in decreasing order because the rightmost element of each $U_i$ is
a descent. So $\mathsf{R}(\mu)\in\mathcal{OP}_{n,k}$ remains in major index order. This completes the proof that $\mathsf{R}$
is well defined.
Next we show that $\mathsf{R}$ is the inverse of $\L$. Observe that if $\pi' \in \mathcal{I}$ has $m$ occurrences of either an
empty block or a block with a descent at its end, then $\mu=\L(\pi')$ has $m$ blocks with a descent at the end.
Hence it suffices to show that $\mathsf{R}^{(m+1-i)}$ is the inverse operation to $\L^{(i)}$ for each $1\leqslant i \leqslant m$.
The property that the last element of $S_i$ cannot be a descent at the end of a block of $\L^{(i-1)}(\pi')$ in the proof of
Proposition~\ref{proposition.L} similarly holds for every element in $S_i$. Therefore, if the last element of $S_i$ is in the
$r_i$-th block of $\L^{(i-1)}(\pi')$, then $|S_i| = p_i + (r_i-1-p_i) = r_i-1$ because the blocks are decreasing and none of the
elements in $S_i$ can be descents at the end of a block.
Since the last element of $S_i$ becomes a descent at the end of the $(r_i-1)$-th block of $\L^{(i)}(\pi')$, this implies
$r_i-1 = q_{m-i+1}$, so $U_{m-i+1} = S_i$ for every $1\leqslant i \leqslant m$. As the operation $\L^{(i)}$ is a left shift
of the elements of $S_i$ by one block and the operation $\mathsf{R}^{(m+1-i)}$ is a right shift of the same set of elements
by one block, they are inverse operations of each other.
\end{proof}
For what follows, we need to extend the definition of the major index to the set $\mathcal{WOP}_{n,k}$ of weak ordered
multiset partitions of length $n$ and $k$ blocks, in which some of the blocks may be empty. Given
$\pi' \in \mathcal{WOP}_{n,k}$ whose nonempty blocks are in major index order, if the block $\pi_j'\neq\emptyset$, then
the last element in $\pi_j'$ is assigned the index $j$, and the remaining elements in $\pi_j'$ are assigned the index $j-1$
for $j=1,\ldots, k$. Then $\mathsf{maj}(\pi')$ is the sum of the indices where a descent occurs. This agrees
with~\eqref{equation.maj} in the case when all blocks are nonempty.
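To illustrate the extended definition, take $\pi'=(4.1\mid 2.1\mid 7.\mid \emptyset \mid 6.1\mid 5.4.3.2.1 \mid 3)$ from Example~\ref{example.pi ex}, where a descent contributes the index assigned to its left letter. The descents $4>1$, $2>1$, $7>6$, $6>1$, $5>4$, $4>3$, $3>2$, and $2>1$ occur at letters with indices $0$, $1$, $3$, $4$, $5$, $5$, $5$, $5$, respectively, so that
\[
\mathsf{maj}(\pi') = 0+1+3+4+5+5+5+5 = 28,
\]
in agreement with Example~\ref{example.pi ex2}.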
\begin{lemma}
\label{lemma.maj change}
Let $\pi'\in \mathcal{I}$. With the same notation as in Definition~\ref{definition.Lshift},
we have for $1\leqslant i \leqslant m$
\[
\mathsf{maj}(\L^{(i)}(\pi')) = \begin{cases}
\mathsf{maj}(\L^{(i-1)}(\pi'))-p_i+1, & \text{if $\pi_{p_i}'=\emptyset$,}\\
\mathsf{maj}(\L^{(i-1)}(\pi'))-p_i, &\text{if $\pi_{p_i}'$ has a descent at the end of its block}.
\end{cases}
\]
\end{lemma}
\begin{proof}
Assume $\pi_{p_i}'=\emptyset$. In the transformation from $\L^{(i-1)}(\pi')$ to $\L^{(i)}(\pi')$, the index of each of the first
$p_i-1$ descents in $S_i$ decreases by one, while the index of the last descent remains the same, since it is not at the
end of a block in $\L^{(i-1)}(\pi')$, but it becomes the last element of a block in $\L^{(i)}(\pi')$. The indices of elements
not in $S_i$ remain the same, so $\mathsf{maj}(\L^{(i)}(\pi'))=\mathsf{maj}(\L^{(i-1)}(\pi'))-p_i+1$ in this case.
Next assume that $\pi_{p_i}'$ has a descent at the end of the block. In the transformation from $\L^{(i-1)}(\pi')$ to $\L^{(i)}(\pi')$,
the indices of the descents in $S_i$ change in the same way as in the previous case, but in addition, the index of the last
descent in $\pi_{p_i}'$ decreases by one, so $\mathsf{maj}(\L^{(i)}(\pi'))=\mathsf{maj}(\L^{(i-1)}(\pi'))-p_i$ in this case.
\end{proof}
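For instance, in Example~\ref{example.pi ex2} the block $\pi_{p_1}'=\pi_4'$ is empty, so the first shift decreases the major index by $p_1-1=3$ (from $28$ to $25$), while $\pi_{p_2}'=\pi_3'$ ends in a descent, so the second shift decreases it by $p_2=3$ (from $25$ to $22$), matching the values recorded there.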
\begin{theorem}
\label{theorem.bij OP}
Let $\psi \colon \calOP_{n,k}\rightarrow \calOP_{n,k}$ be the map defined by
\[
\psi(\pi) = \L(\read(\varphi(\pi))) \qquad \text{for $\pi\in \calOP_{n,k}$ in minimaj order.}
\]
Then $\psi$ is a bijection that maps ordered multiset partitions in minimaj order to ordered multiset partitions in
major index order. Furthermore, $\mm(\pi) = \mathsf{maj}(\psi(\pi))$.
\end{theorem}
\begin{proof}
By Proposition~\ref{P:biject}, $\varphi$ is a bijection. By Lemma~\ref{lemma.read}, the map $\read$ is invertible, and
by Proposition~\ref{proposition.R} the shift operation $\L$ has an inverse. This implies that $\psi$ is a bijection.
It remains to show that $\mm(\pi) = \mathsf{maj}(\psi(\pi))$ for $\pi \in \mathcal{OP}_{n,k}$ in minimaj order.
First suppose that $\pi' = \read(\varphi(\pi))$ has no empty blocks and no descents at the end of any block.
In this case $\L(\pi')=\pi'$, so that in fact $\pi' = \psi(\pi)$. Using the definition of major index~\eqref{equation.maj} and
the representation~\eqref{equation.T picture} (where the columns in the ribbon are viewed as separate columns due
to $\read$), we obtain
\begin{equation}
\label{equation.base maj}
\mathsf{maj}(\pi') = \sum_{j=1}^\ell (\ell+1-j) ( d_j - i_j -1) + \ell + \sum_{j=1}^\ell ( \ell+\eta_j-j),
\end{equation}
where $d_j,i_j,\eta_j = i_1 + \cdots + i_j$ are defined in Proposition~\ref{P:biject} for $\pi$.
Here, the first sum in the formula arises from the contributions of the first $\ell$ blocks and the summand $\ell$
compensates for the fact that $b_1$ is in the $\ell$-th block. The second sum in the formula comes from the
contributions of the $b_i$'s. Comparing with~\eqref{equation.minimaj}, we find
\[
\mathsf{maj}(\pi') = \mm(\pi) - \binom{\ell+1}{2} - \sum_{j=1}^\ell (\ell+1-j) i_j + \binom{\ell+1}{2}
+ \sum_{j=1}^\ell \eta_j = \mm(\pi),
\]
proving the claim.
Now suppose that $\pi' = \read(\varphi(\pi))$ has a descent at the end of block $\pi'_p$. This will contribute an extra
$p$ compared to the major index in~\eqref{equation.base maj}. If $\pi'_p=\emptyset$, then
$c_p = d_{\ell+2-p} - i_{\ell+2-p} = 0$ and the term $j=\ell+2-p$ in~\eqref{equation.base maj} should be
$(\ell+1-j)(d_j-i_j)$ instead of $(\ell+1-j)(d_j-i_j-1)$ yielding a correction term of $\ell+1-j = \ell+1-\ell-2+p=p-1$.
Hence, with the notation of Definition~\ref{definition.Lshift}, we have
\[
\mathsf{maj}(\pi') = \mm(\pi) + \sum_{i=1}^m p_i - e,
\]
where $e$ is the number of empty blocks in $\pi'$. Since $\psi(\pi) = \L(\pi')$, the claim follows by
Lemma~\ref{lemma.maj change}.
\end{proof}
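Tracing the running example through the theorem: for $\pi$ as in Example~\ref{example.pi ex}, we have $\psi(\pi) = \L(\read(\varphi(\pi))) = (4.1\mid 2.1\mid 7.6.1 \mid 5.4.\mid 3.\mid 2.1\mid 3)$, and indeed $\mathsf{maj}(\psi(\pi)) = 22 = \mm(\pi)$, as computed in Example~\ref{example.pi ex2}.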
\bibliographystyle{alpha}
\section*{Introduction}
\label{sec-intro}
Every smooth curve of genus two admits a unique degree-two \emph{hyperelliptic} map to $\mathbb{P}^1$. The Riemann-Hurwitz formula forces such a map to have six ramification points called \emph{Weierstrass points}; each non-Weierstrass point $p$ exists as part of a \emph{conjugate pair} $(p,p')$ such that the images of $p$ and $p'$ agree under the hyperelliptic map.
The locus of curves of genus two with $\ell$ marked Weierstrass points is codimension $\ell$ inside the moduli space $\mathcal{M}_{2,\ell}$, and in \cite{chentarasca} it is shown that the class of the closure of this locus is rigid and extremal in the cone of effective classes of codimension $\ell$. Our main theorem extends their result to $\mathcal{H}_{2,\ell,2m,n}\subseteq \mathcal{M}_{2,\ell+2m+n}$, the locus of genus-two curves with $\ell$ marked Weierstrass points, $m$ marked conjugate pairs, and $n$ free marked points (see Definition \ref{def-hyp}). \\
\noindent\textbf{Main Theorem.}
\emph{For $\ell,m,n\geq 0$, the class $\overline{\mathcal{H}}_{2,\ell,2m,n}$, if non-empty, is rigid and extremal in the cone of effective classes of codimension $\ell+m$ in $\overline{\mathcal{M}}_{2,\ell+2m+n}$.} \\
In \cite{chencoskun2015}, the authors show that the effective cone of codimension-two classes of $\overline{\mathcal{M}}_{2,n}$ has infinitely many extremal cycles for every $n$. Here we pursue a perpendicular conclusion: although in genus two $\ell \leq 6$, the number of conjugate pairs and number of free marked points are unbounded, so that the classes $\overline{\mathcal{H}}_{2,\ell,2m,n}$ form an infinite family of rigid and extremal cycles in arbitrarily-high codimension. Moreover, the induction technique used to prove the main result is genus-agnostic, pointing towards a natural extension of the main theorem to higher genus given a small handful of low-codimension cases.
When $\ell + m \geq 3$, our induction argument (Theorem \ref{thm-main}) is a generalization of that used in \cite[Theorem 4]{chentarasca} to include conjugate pairs and free points; it relies on pushing forward an effective decomposition of one hyperelliptic class onto other hyperelliptic classes and showing that the only term of the decomposition to survive all pushforwards is the original class itself. This process is straightforward when there are at least three codimension-one conditions available to forget; however, when $\ell + m = 2$, and in particular when $\ell = 2$ and $m = 0$, more care must be taken. The technique used in \cite[Theorem 5]{chentarasca} to overcome this problematic subcase relies on an explicit expression for $\left[\overline{\mathcal{H}}_{2,2,0,0}\right]$ which becomes cumbersome when a non-zero number of free marked points are allowed. Although adding free marked points can be described via pullback, pullback does not preserve rigidity and extremality in general, so we introduce an intersection-theoretic calculation using tautological $\omega$-classes to handle this case instead.
The base case of the induction (Theorem \ref{thm-base}) is shown via a criterion (Lemma \ref{lem-divisor}) given by \cite{chencoskun2014} for rigidity and extremality for divisors; it amounts to an additional pair of intersection calculations. We utilize the theory of moduli spaces of admissible covers to construct a suitable curve class for the latter intersection, a technique which generalizes that used in \cite{rulla01} for the class of $\overline{\mathcal{H}}_{2,1,0,0}$.
\subsection*{Structure of the paper.} We begin in \textsection \ref{sec-prelim} with some background on $\overline{\mathcal{M}}_{g,n}$ and cones of effective cycles. This section also contains the important Lemma \ref{lem-divisor} upon which Theorem \ref{thm-base} depends. In \textsection \ref{sec-main}, we prove Theorem \ref{thm-base}, which establishes the base case for the induction argument of our main result, Theorem \ref{thm-main}. Finally, we conclude in \textsection \ref{sec-highergenus} with a discussion of extending these techniques for $g\geq 3$ and possible connections to a CohFT-like structure.
\subsection*{Acknowledgments.} The author wishes to thank Nicola Tarasca, who was kind enough to review an early version of the proof of the main theorem and offer his advice. The author is also greatly indebted to Renzo Cavalieri for his direction and support.
\section{Preliminaries on $\overline{\mathcal{M}}_{g,n}$ and effective cycles}
\label{sec-prelim}
\subsection*{Moduli spaces of curves, hyperelliptic curves, and admissible covers.} We work throughout in $\overline{\mathcal{M}}_{g,n}$, the moduli space of isomorphism classes of stable genus $g$ curves with $n$ (ordered) marked points. If $2g-2+n > 0$ this space is a smooth Deligne-Mumford stack of dimension $3g-3+n$. We denote points of $\overline{\mathcal{M}}_{g,n}$ by $[C; p_1,\dots,p_n]$ with $p_1,\dots,p_n\in C$ smooth marked points. For fixed $g$, we may vary $n$ to obtain a family of moduli spaces related by \emph{forgetful morphisms}: for each $1\leq i \leq n$, the map $\pi_{p_i}:\overline{\mathcal{M}}_{g,n}\to\overline{\mathcal{M}}_{g,n-1}$ forgets the $i$th marked point and stabilizes the curve if necessary. The maps $\rho_{p_i}:\overline{\mathcal{M}}_{g,n} \to \overline{\mathcal{M}}_{g,\{p_i\}}$ are the \emph{rememberful morphisms} which are the composition of all possible forgetful morphisms other than $\pi_{p_i}$.
Due to the complexity of the full Chow ring of $\overline{\mathcal{M}}_{g,n}$, the \emph{tautological ring} $R^*(\overline{\mathcal{M}}_{g,n})$ is often considered instead \cite{faberpandharipande} (for both rings we assume rational coefficients). Among other classes, this ring contains the classes of the boundary strata, as well as all $\psi$- and $\lambda$-classes. For $1\leq i \leq n$ the class $\psi_{p_i}$ is defined to be the first Chern class of the line bundle on $\overline{\mathcal{M}}_{g,n}$ whose fiber over a given isomorphism class of curves is the cotangent line bundle at the $i$th marked point of the curve; $\lambda_1$ is the first Chern class of the Hodge bundle. The tautological ring also includes pullbacks of all $\psi$- and $\lambda$-classes, including the $\omega$\emph{-classes}, sometimes called \emph{stable} $\psi$\emph{-classes}. The class $\omega_{p_i}$ is defined on $\overline{\mathcal{M}}_{g,n}$ for $g,n\geq 1$ as the pullback of $\psi_{p_i}$ along $\rho_{p_i}$. Several other notable cycles are known to be tautological, including the hyperelliptic classes considered below (\cite{faberpandharipande}).
\emph{Hyperelliptic curves} are those which admit a degree-two map to $\mathbb{P}^1$. The Riemann-Hurwitz formula implies that a hyperelliptic curve of genus $g$ contains $2g+2$ Weierstrass points which ramify over the branch locus in $\mathbb{P}^1$. For a fixed genus, specifying the branch locus allows one to recover the complex structure of the hyperelliptic curve and hence the hyperelliptic map. Thus for $g\geq 2$, the codimension of the locus of hyperelliptic curves in $\overline{\mathcal{M}}_{g,n}$ is $g-2$. In this context, requiring that a marked point be Weierstrass (resp. two marked points be a conjugate pair) is a codimension-one condition for genus at least two.
We briefly use the theory of \emph{moduli spaces of admissible covers} to construct a curve in $\overline{\mathcal{M}}_{2,n}$ in Theorem \ref{thm-base}. These spaces are particularly nice compactifications of Hurwitz schemes. For a thorough introduction, the standard references are \cite{harrismumford} and \cite{acv}. For a more hands-on approach in the same vein as our usage, see also \cite{cavalierihodgeint}.
\subsection*{Notation.} We use the following notation for boundary strata on $\overline{\mathcal{M}}_{g,n}
; all cycle classes are given as stack fundamental classes. For $g\geq 1$, the divisor class of the closure of the locus of irreducible nodal curves is denoted by $\delta_{irr}$. By $\delta_{h,P}$ we mean the class of the divisor whose general element has one component of genus $h$ attached to another component of genus $g-h$, with marked points $P$ on the genus $h$ component and marked points $\{p_1,\dots,p_n\}\backslash P$ on the other. By convention $\delta_{h,P} = 0$ for unstable choices of $h$ and $P$.
Restrict now to the case of $g=2$. We use $W_{2,P}$ to denote the codimension-two class of the stratum whose general element agrees with that of $\delta_{2,P}$, with the additional requirement that the node be a Weierstrass point. We denote by $\gamma_{1,P}$ the class of the closure of the locus of curves whose general element has a genus $1$ component containing the marked points $P$, meeting a rational component with marked points $\{p_1,\dots,p_n\}\backslash P$ in two points that are conjugate under the hyperelliptic map (see Figure \ref{wfigure}).
\begin{figure}[t]
\begin{tikzpicture}
\draw[very thick] (0,0) ellipse (4cm and 1.5cm);
\draw[very thick] (5,0) circle (1);
\node at (3.7,0) {$w$};
\draw[very thick] (-2.3,-0.15) .. controls (-2.1,0.2) and (-0.9,0.2) .. (-0.7,-0.15);
\draw[very thick] (-2.5,0) .. controls (-2.2,-0.4) and (-0.8,-0.4) .. (-0.5,0);
\draw[very thick] (2.3,-0.15) .. controls (2.1,0.2) and (0.9,0.2) .. (0.7,-0.15);
\draw[very thick] (2.5,0) .. controls (2.2,-0.4) and (0.8,-0.4) .. (0.5,0);
\fill (-1.8,0.7) circle (0.10); \node at (-1.4,0.7) {$p_1$};
\fill (-1.4,-0.7) circle (0.10); \node at (-1,-0.7) {$p_2$};
\fill (1.1,-0.7) circle (0.10); \node at (1.5,-0.7) {$p_3$};
\fill (5.05,0.5) circle (0.10); \node at (5.45,0.5) {$p_4$};
\fill (5.05,-0.5) circle (0.10); \node at (5.45,-0.5) {$p_5$};
\draw[very thick] (0,-4.5) ellipse (4cm and 1.5cm);
\draw[very thick] (3.2,-3.6) .. controls (5.4,-3.8) and (5.4,-5.2) .. (3.2,-5.4);
\draw[very thick] (3.2,-3.6) .. controls (7.0,-2.6) and (7.0,-6.4) .. (3.2,-5.4);
\node at (3.2,-3.4) {$+$};
\node at (3.2,-5.8) {$-$};
\draw[very thick] (-0.8,-4.65) .. controls (-0.6,-4.3) and (0.6,-4.3) .. (0.8,-4.65);
\draw[very thick] (-1,-4.5) .. controls (-0.7,-4.9) and (0.7,-4.9) .. (1,-4.5);
\fill (-1.8,-3.8) circle (0.10); \node at (-1.4,-3.8) {$p_1$};
\fill (-1.4,-5.2) circle (0.10); \node at (-1,-5.2) {$p_2$};
\fill (1.1,-5.2) circle (0.10); \node at (1.5,-5.2) {$p_3$};
\fill (5.05,-4) circle (0.10); \node at (5.45,-4) {$p_4$};
\fill (5.05,-5) circle (0.10); \node at (5.45,-5) {$p_5$};
\draw[very thick] (9,0) circle (0.30);
\node at (9,0) {$2$};
\draw[very thick] (8.79,0.21) -- (8.5,0.5); \node at (8.2,0.5) {$p_1$};
\draw[very thick] (8.7,0) -- (8.3,0); \node at (8,0) {$p_2$};
\draw[very thick] (8.79,-0.21) -- (8.5,-0.5); \node at (8.2,-0.5) {$p_3$};
\draw[very thick] (9.30,0) -- (10.5,0);
\fill (10.5,0) circle (0.10);
\draw[very thick] (10.5,0) -- (11,0.5); \node at (11.3,0.5) {$p_4$};
\draw[very thick] (10.5,0) -- (11,-0.5); \node at (11.4,-0.5) {$p_5$};
\node at (9.8,0.2) {$w$};
\draw[very thick] (9,-4.5) circle (0.30);
\node at (9,-4.5) {$1$};
\draw[very thick] (8.79,-4.29) -- (8.5,-4); \node at (8.2,-4) {$p_1$};
\draw[very thick] (8.7,-4.5) -- (8.3,-4.5); \node at (8,-4.5) {$p_2$};
\draw[very thick] (8.79,-4.71) -- (8.5,-5); \node at (8.2,-5) {$p_3$};
\draw[very thick] (9.2,-4.3) .. controls (9.6,-3.9) and (10.2,-3.9) .. (10.5,-4.5);
\draw[very thick] (9.2,-4.7) .. controls (9.6,-5.1) and (10.2,-5.1) .. (10.5,-4.5);
\fill (10.5,-4.5) circle (0.10);
\draw[very thick] (10.5,-4.5) -- (11,-4); \node at (11.3,-4) {$p_4$};
\draw[very thick] (10.5,-4.5) -- (11,-5); \node at (11.4,-5) {$p_5$};
\node at (9.8,-3.8) {$+$};
\node at (9.8,-5.2) {$-$};
\end{tikzpicture}
\caption{On the left-hand side, the topological pictures of the general elements of $W_{2,P}$ (top) and $\gamma_{1,P}$ (bottom) in $\overline{\mathcal{M}}_{2,5}$ with $P = \{p_1,p_2,p_3\}$. On the right-hand side, the corresponding dual graphs.}
\label{wfigure}
\end{figure}
The space $\overline{Adm}_{2\xrightarrow{2} 0,t_1,\dots,t_6,u_{1\pm},\dots,u_{n\pm}}$ is the moduli space of degree-two admissible covers of genus two with marked ramification points (Weierstrass points) $t_i$ and marked pairs of points (conjugate pairs) $u_{j+}$ and $u_{j-}$. This space comes with a finite map $c$ to $\overline{\mathcal{M}}_{0,\{t_1,\dots,t_6,u_{1},\dots,u_{n}\}}$ which forgets the cover and remembers only the base curve and its marked points, which are the images of the markings on the source. It also comes with a degree $2^n$ map $s$ to $\overline{\mathcal{M}}_{2,1+n}$ which forgets the base curve and all $u_{j+}$ and $t_i$ other than $t_1$, and remembers the (stabilization of the) cover. \\
\begin{figure*}[t]
\begin{tikzpicture}
\draw[very thick] (0,0) circle (0.30);
\node at (0,0) {$1$};
\draw[very thick] (-0.21,0.21) -- (-0.5,0.5); \node at (-0.8,0.5) {$t_1$};
\draw[very thick] (-0.30,0) -- (-0.7,0); \node at (-1,0) {$t_2$};
\draw[very thick] (-0.21,-0.21) -- (-0.5,-0.5); \node at (-0.8,-0.5) {$t_6$};
\draw[very thick] (0.30,0) -- (1.5,0);
\fill (1.5,0) circle (0.10);
\draw[very thick] (1.5,0) -- (1,0.5); \node at (0.8,0.6) {$t_3$};
\draw[very thick] (3.5,0) .. controls (3.2,0.9) and (1.8,0.9) .. (1.5,0);
\draw[very thick] (3.5,0) .. controls (3.2,-0.9) and (1.8,-0.9) .. (1.5,0);
\fill (3.5,0) circle (0.10);
\draw[very thick] (3.5,0) -- (4,0.5); \node at (4.2,0.6) {$t_4$};
\draw[very thick] (3.5,0) -- (4.7,0);
\fill (4.7,0) circle (0.10);
\draw[very thick] (4.7,0) -- (5.2,0.5); \node at (5.5,0.5) {$t_5$};
\draw[very thick] (4.7,0) -- (5.4,0); \node at (5.8,0) {$u_{1+}$};
\draw[very thick] (4.7,0) -- (5.2,-0.5); \node at (5.6,-0.5) {$u_{1-}$};
\end{tikzpicture}
\end{figure*}
\begin{figure*}[t]
\begin{tikzpicture}
\draw[very thick] (0,0) -- (0,-1.5);
\draw[very thick] (0,-1.5) -- (-0.15,-1.3);
\draw[very thick] (0,-1.5) -- (0.15,-1.3);
\end{tikzpicture}
\end{figure*}
\vspace{-0.25cm}
\begin{figure}[t]
\begin{tikzpicture}
\fill (.3,0) circle (0.10);
\draw[very thick] (.3,0) -- (-0.2,0.5); \node at (-0.5,0.5) {$t_1$};
\draw[very thick] (.3,0) -- (-0.4,0); \node at (-.7,0) {$t_2$};
\draw[very thick] (.3,0) -- (-0.2,-0.5); \node at (-0.5,-0.5) {$t_6$};
\draw[very thick] (.3,0) -- (1.5,0);
\fill (1.5,0) circle (0.10);
\draw[very thick] (1.5,0) -- (1.5,0.5); \node at (1.5,0.7) {$t_3$};
\draw[very thick] (1.5,0) -- (3.5,0);
\fill (3.5,0) circle (0.10);
\draw[very thick] (3.5,0) -- (3.5,0.5); \node at (3.5,0.7) {$t_4$};
\draw[very thick] (3.5,0) -- (4.7,0);
\fill (4.7,0) circle (0.10);
\draw[very thick] (4.7,0) -- (5.2,0.5); \node at (5.5,0.5) {$t_5$};
\draw[very thick] (4.7,0) -- (5.2,-0.5); \node at (5.6,-0.5) {$u_{1}$};
\end{tikzpicture}
\caption{An admissible cover in $\overline{Adm}_{2\xrightarrow{2} 0,t_1,\dots,t_6,u_{1\pm}}$ represented via dual graphs. In degree two the topological type of the cover is uniquely recoverable from the dual graph presentation.}
\label{admfigure}
\end{figure}
\subsection*{$\omega$-class lemmas.} The following two lemmas concerning basic properties of $\omega$-classes prove useful in the last subcase of Theorem \ref{thm-main}. The first is a unique feature of these classes, and the second is the $\omega$-class version of the dilaton equation.
\begin{lemma}
\label{lem-fallomega}
Let $g\geq 1$, $n\geq 2$, and $P\subset \{p_1,\dots,p_n\}$ such that $|P|\leq n-2$. Then for any $p_i,p_j\not\in P$
\begin{align*}
\omega_{p_i}\cdot \delta_{g,P} = \omega_{p_j}\cdot \delta_{g,P}
\end{align*}
on $\overline{\mathcal{M}}_{g,n}$.
\end{lemma}
\begin{proof}
This follows immediately from Lemma 1.9 in \cite{bcomega}.
\end{proof}
\begin{lemma}
\label{lem-omegadil}
Let $g,n\geq 2$. Then on $\overline{\mathcal{M}}_{g,n}$,
\begin{align*}
\pi_{p_i*}\omega_{p_j} = 2g - 2
\end{align*}
if $i=j$, and $0$ otherwise.
\end{lemma}
\begin{proof}
Let $P = \{p_1,\dots,p_n\}$. When $i=j$, the pushforward reduces to the usual dilaton equation for $\psi_{p_i}$ on $\overline{\mathcal{M}}_{g,\{p_i\}}$. If $\pi$ is the morphism which forgets all marked points, the diagram
\begin{center}
\begin{tikzcd}[row sep=1cm, column sep=1cm]
\overline{\mathcal{M}}_{g,n} \arrow[d, "\rho_{p_i}"] \arrow[r, "\pi_{p_i}"] & \overline{\mathcal{M}}_{g,P\backslash\{p_i\}} \arrow[d, "\pi"] \\
\overline{\mathcal{M}}_{g,\{p_i\}} \arrow[r, "\pi_{p_i}"] & \overline{\mathcal{M}}_{g}
\end{tikzcd}
\end{center}
commutes, so $\pi_{p_i*}\omega_{p_i} = \pi_{p_i*}\rho_{p_i}^*\psi_{p_i} = \pi^*\pi_{p_i*}\psi_{p_i} = (2g-2)\mathds{1}$.
If $i\neq j$, then $\pi_{p_i*}\omega_{p_j} = \pi_{p_i*}\pi_{p_i}^*\omega_{p_j} = 0$.
\end{proof}
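For instance, on $\overline{\mathcal{M}}_{2,2+n}$ this gives $\pi_{p_n*}\omega_{p_n} = 2$, which is precisely the instance used in the proof of Theorem~\ref{thm-main} below.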
\subsection*{Cones and properties of effective classes.} For a projective variety $X$, the sum of two effective codimension-$d$ classes is again effective, as is any $\mathbb{Q}_+$-multiple of the same. This gives a natural convex cone structure on the set of effective classes of codimension $d$ inside the $\mathbb{Q}$ vector space of all codimension-$d$ classes, called the \emph{effective cone of codimension-$d$ classes} and denoted $\text{Eff}^d(X)$. The closure of this cone (in the usual $\mathbb{R}^n$ topology) is $\overline{\text{Eff}}^d(X)$, the \emph{pseudo-effective cone of codimension-$d$ classes}. Given an effective class $E$ in the Chow ring of $X$, an \emph{effective decomposition of} $E$ is an equality
\begin{align*}
E = \sum_{s=1}^m a_sE_s
\end{align*}
with $a_s > 0$ and $E_s$ irreducible effective cycles on $X$ for all $s$. The main properties we are interested in for classes in the pseudo-effective cone are rigidity and extremality.
\begin{definition}
\label{def-rigex}
Let $E\in\text{Eff}^d(X)$.
$E$ is \emph{rigid} if any effective cycle with class $rE$ is supported on the support of $E$.
$E$ is \emph{extremal} if, for any effective decomposition of $E$, all $E_s$ are proportional to $E$.
\end{definition}
When $d=1$, elements of the cone correspond to divisor classes, and the study of $\text{Eff}^1(\overline{\mathcal{M}}_{g,n})$ is fundamental in the theory of the birational geometry of these moduli spaces. For example, $\overline{\mathcal{M}}_{0,n}$ is known to fail to be a Mori dream space for $n\geq 10$ (first for $n\geq 134$ in \cite{castravettevelev}, then for $n\geq 13$ in \cite{gonzalezkaru}, and the most recent bound in \cite{hkl2016}). For $n\geq 3$ in genus one, \cite{chencoskun2014} show that $\overline{\mathcal{M}}_{1,n}$ is not a Mori dream space; the same statement is true for $\overline{\mathcal{M}}_{2,n}$ by \cite{mullane2017}. In these and select other cases, the pseudo-effective cone of divisors has been shown to have infinitely many extremal cycles and thus is not rational polyhedral (\cite{chencoskun2015}).
These results are possible due in large part to the following lemma, which plays an important role in Theorem \ref{thm-base}. Here a \emph{moving curve $\mathcal{C}$ in $D$} is a curve $\mathcal{C}$, the deformations of which cover a Zariski-dense subset of $D$.
\begin{lemma}[{{\cite[Lemma 4.1]{chencoskun2014}}}]
\label{lem-divisor}
Let $D$ be an irreducible effective divisor in a projective variety $X$, and suppose that $\mathcal{C}$ is a moving curve in $D$ satisfying $\displaystyle \int_{X}[D]\cdot[\mathcal{C}] < 0$. Then $[D]$ is rigid and extremal. \hfill $\square$
\end{lemma}
\begin{remark}
Using Lemma \ref{lem-divisor} to show a divisor $D$ is rigid and extremal in fact shows more: if the lemma is satisfied, the boundary of the pseudo-effective cone is polygonal at $D$. We do not rely on this fact, but see \cite[\textsection 6]{opie2016} for further discussion.
\end{remark}
Lemma \ref{lem-divisor} allows us to change a question about the pseudo-effective cone into one of intersection theory and provides a powerful tool in the study of divisor classes. Unfortunately, it fails to generalize to higher-codimension classes, where entirely different techniques are needed. Consequently, much less is known about $\text{Eff}^d(\overline{\mathcal{M}}_{g,n})$ for $d\geq 2$. This paper is in part inspired by \cite{chentarasca}, where the authors show that certain hyperelliptic classes of higher codimension are rigid and extremal in genus two. In \cite{chencoskun2015}, the authors develop additional extremality criteria to show that in codimension-two there are infinitely many extremal cycles in $\overline{\mathcal{M}}_{1,n}$ for all $n\geq 5$ and in $\overline{\mathcal{M}}_{2,n}$ for all $n\geq 2$, as well as showing that two additional hyperelliptic classes of higher genus are extremal. These criteria cannot be used directly for the hyperelliptic classes we consider; this is illustrative of the difficulty of proving rigidity and extremality results for classes of codimension greater than one.
\section{Main theorem}
\label{sec-main}
In this section we prove our main result, which culminates in Theorem \ref{thm-main}. The proof proceeds via induction, with the base cases given in Theorem \ref{thm-base}. We begin by defining hyperelliptic classes on $\overline{\mathcal{M}}_{g,n}$.
\begin{definition}
\label{def-hyp}
Fix integers $\ell,m,n\geq 0$. Denote by $\overline{\mathcal{H}}_{g,\ell,2m,n}$ the closure of the locus of hyperelliptic curves in $\overline{\mathcal{M}}_{g,\ell+2m+n}$ with marked Weierstrass points $w_1,\dots,w_\ell$; pairs of marked points $+_1,-_1,\dots,+_m,-_m$ with $+_j$ and $-_j$ conjugate under the hyperelliptic map; and \emph{free} marked points $p_1,\dots,p_n$ with no additional constraints. By \emph{hyperelliptic class}, we mean a non-empty class equivalent to some $\left[\overline{\mathcal{H}}_{g,\ell,2m,n}\right]$ in the Chow ring of $\overline{\mathcal{M}}_{g,\ell+2m+n}$.
\end{definition}
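For instance, since every smooth curve of genus two is hyperelliptic, marking $\ell$ Weierstrass points and $m$ conjugate pairs imposes $\ell+m$ codimension-one conditions, so $\overline{\mathcal{H}}_{2,\ell,2m,n}$ has codimension $\ell+m$ in $\overline{\mathcal{M}}_{2,\ell+2m+n}$; it is non-empty precisely when $\ell\leq 6$, as a genus-two curve carries only six Weierstrass points.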
\begin{figure}[h]
\begin{tikzpicture}
\draw[very thick] (0,0) ellipse (4cm and 1.5cm);
\draw[very thick] (-2.3,-0.15) .. controls (-2.1,0.2) and (-0.9,0.2) .. (-0.7,-0.15);
\draw[very thick] (-2.5,0) .. controls (-2.2,-0.4) and (-0.8,-0.4) .. (-0.5,0);
\draw[very thick] (2.3,-0.15) .. controls (2.1,0.2) and (0.9,0.2) .. (0.7,-0.15);
\draw[very thick] (2.5,0) .. controls (2.2,-0.4) and (0.8,-0.4) .. (0.5,0);
\fill (-4,0) circle (0.10); \node at (-4.4,0.2) {$w_1$};
\fill (4,0) circle (0.10); \node at (4.4,0.2) {$w_2$};
\fill (-1.8,0.7) circle (0.10); \node at (-1.4,0.7) {$p_1$};
\fill (-1.4,-0.7) circle (0.10); \node at (-1,-0.7) {$p_2$};
\fill (1.1,-0.7) circle (0.10); \node at (1.5,-0.7) {$p_3$};
\end{tikzpicture}
\caption{The general element of $\overline{\mathcal{H}}_{2,2,0,3}$.}
\label{hfigure}
\end{figure}
Lemma \ref{lem-divisor} allows us to establish the rigidity and extremality of the two divisor hyperelliptic classes for genus two, which together provide the base case for Theorem \ref{thm-main}.
\begin{theorem}
\label{thm-base}
For $n \geq 0$, the class of $\overline{\mathcal{H}}_{2,0,2,n}$ is rigid and extremal in $\mbox{\emph{Eff}}^1(\overline{\mathcal{M}}_{2,2+n})$ and the class of $\overline{\mathcal{H}}_{2,1,0,n}$ is rigid and extremal in $
\mbox{\emph{Eff}}^1(\overline{\mathcal{M}}_{2,1+n})$.
\end{theorem}
\begin{proof}
Define a moving curve $\mathcal{C}$ in $\overline{\mathcal{H}}_{2,0,2,n}$ by fixing a general genus-two curve $C$ with $n$ free marked points $p_1,\dots,p_n$ and varying the conjugate pair $(+,-)$.
Since $\left[\overline{\mathcal{H}}_{2,0,2,n}\right] = \pi_{p_n}^*\cdots\pi_{p_1}^*\left[\overline{\mathcal{H}}_{2,0,2,0}\right]$, by the projection formula and the identity (see \cite{logan2003})
\begin{align*}
\left[\overline{\mathcal{H}}_{2,0,2,0}\right] = -\lambda + \psi_+ + \psi_- - 3\delta_{2,\varnothing} - \delta_{1,\varnothing},
\end{align*}
we compute
\begin{align*}
\int_{\overline{\mathcal{M}}_{2,2+n}} \left[\overline{\mathcal{H}}_{2,0,2,n}\right]\cdot [\mathcal{C}] &= \int_{\overline{\mathcal{M}}_{2,2}} \left[\overline{\mathcal{H}}_{2,0,2,0}\right] \cdot \pi_{p_1*}\cdots\pi_{p_n*}[\mathcal{C}] \\
&= 0 + (4-2+6) + (4-2+6) - 3(6) - 0 \\
&= -2.
\end{align*}
In particular, the intersection with $\lambda$ vanishes by the projection formula. The intersection with either $\psi$-class can be seen as follows: $\psi_i$ pulls back from $\overline{\mathcal{M}}_{2,1}$ to $\psi_i - \delta_{2,\varnothing}$, and applying the projection formula to the pulled-back class yields $2g-2 = 4-2$, the degree of the first Chern class of the cotangent bundle of $C$ at the moving point. The intersection with $\delta_{2,\varnothing}$ contributes $6$, corresponding to the $2g+2$ Weierstrass points, at which the conjugate pair collides. Finally, $\delta_{1,\varnothing}$ intersects trivially, since by fixing $C$ we have only allowed rational tail degenerations.
As $\left[\overline{\mathcal{H}}_{2,0,2,n}\right]$ is irreducible, it is rigid and extremal by Lemma \ref{lem-divisor}.
We next apply Lemma \ref{lem-divisor} by constructing a moving curve $\mathcal{B}_n$ which intersects $\overline{\mathcal{H}}_{2,1,0,n}$ negatively, using the following diagram. Note that the image of $s$ is precisely $\overline{\mathcal{H}}_{2,1,0,n} \subset \overline{\mathcal{M}}_{2,1+n}$.
\begin{center}
\begin{tikzcd}[row sep=1cm, column sep=1cm]
\overline{Adm}_{2\xrightarrow{2} 0,t_1,\dots,t_6,u_{1\pm},\dots,u_{n\pm}} \arrow[d, "c"] \arrow[r, "s"] & \overline{\mathcal{M}}_{2,1+n} \\
\overline{\mathcal{M}}_{0,\{t_1,\dots,t_6,u_{1},\dots,u_{n}\}} \arrow[d, "\pi_{t_6}"] \\
\overline{\mathcal{M}}_{0,\{t_1,\dots,t_5,u_{1},\dots,u_{n}\}}
\end{tikzcd}
\end{center}
Fix the point $[b_n]$ in $\overline{\mathcal{M}}_{0,\{t_1,\dots,t_5,u_{1},\dots,u_{n}\}}$ corresponding to a chain of $\mathbb{P}^1$s with $n+3$ components and marked points as shown in Figure \ref{pfigure} (if $n=0$, $t_4$ and $t_5$ are on the final component; if $n=1$, $t_5$ and $u_1$ are on the final component; etc.).
Then $[\mathcal{B}_n] = s_*c^*\pi_{t_6}^*[b_n]$ is a moving curve in $\overline{\mathcal{H}}_{2,1,0,n}$ (after relabeling $t_1$ to $w_1$ and $u_{j-}$ to $p_{j}$).
\begin{figure}[h]
\begin{tikzpicture}
\fill (0,0) circle (0.10);
\draw[very thick] (0,0) -- (-0.5,0.5); \node at (-0.8,0.5) {$t_1$};
\draw[very thick] (0,0) -- (-0.5,-0.5); \node at (-0.8,-0.5) {$t_2$};
\draw[very thick] (0,0) -- (1.2,0);
\fill (1.2,0) circle (0.10);
\draw[very thick] (1.2,0) -- (1.2,0.5); \node at (1.2,0.7) {$t_3$};
\draw[very thick] (1.2,0) -- (2.4,0);
\fill (2.4,0) circle (0.10);
\draw[very thick] (2.4,0) -- (2.4,0.5); \node at (2.4,0.7) {$t_4$};
\draw[very thick] (2.4,0) -- (3.2,0);
\fill (3.4,0) circle (0.05); \fill (3.6,0) circle (0.05); \fill (3.8,0) circle (0.05);
\draw[very thick] (4,0) -- (4.8,0);
\fill (4.8,0) circle (0.10);
\draw[very thick] (4.8,0) -- (4.8,0.5); \node at (4.8,0.7) {$u_{n-3}$};
\draw[very thick] (4.8,0) -- (6,0);
\fill (6,0) circle (0.10);
\draw[very thick] (6,0) -- (6,0.5); \node at (6,0.7) {$u_{n-2}$};
\draw[very thick] (6,0) -- (7.2,0);
\fill (7.2,0) circle (0.10);
\draw[very thick] (7.2,0) -- (7.7,0.5); \node at (8.2,0.5) {$u_{n-1}$};
\draw[very thick] (7.2,0) -- (7.7,-0.5); \node at (8,-0.5) {$u_{n}$};
\end{tikzpicture}
\caption{The point $[b_n]$ in $\overline{\mathcal{M}}_{0,\{t_1,\dots,t_5,u_{1},\dots,u_{n}\}}$.}
\label{pfigure}
\end{figure}
The intersection $\left[\overline{\mathcal{H}}_{2,1,0,n}\right] \cdot [\mathcal{B}_n]$ is not transverse, since $\mathcal{B}_n$ lies inside $\overline{\mathcal{H}}_{2,1,0,n}$, so we correct with minus the Euler class of the normal bundle of $\overline{\mathcal{H}}_{2,1,0,n}$ in $\overline{\mathcal{M}}_{2,1+n}$ restricted to $\mathcal{B}_n$. In other words,
\begin{align*}
\int_{\overline{\mathcal{M}}_{2,1+n}} \left[\overline{\mathcal{H}}_{2,1,0,n}\right] \cdot [\mathcal{B}_n] &= \int_{\overline{\mathcal{M}}_{2,1+n}} -\pi_{p_n}^*\cdots\pi_{p_1}^*\psi_{w_1}\cdot[\mathcal{B}_n] \\
&= \int_{\overline{\mathcal{M}}_{2,1}} -\psi_{w_1}\cdot[\mathcal{B}_{0}].
\end{align*}
By passing to the space of admissible covers, this integral is seen to be a positive multiple (a power of two) of
\begin{align*}
\int_{\overline{\mathcal{M}}_{1,2}} -\psi_{w_1}\cdot \left[\overline{\mathcal{H}}_{1,2,0,0}\right] &= \int_{\overline{\mathcal{M}}_{1,2}} -\psi_{w_1}\cdot (3\psi_{w_1}) = -3\int_{\overline{\mathcal{M}}_{1,2}} \psi_{w_1}^2 \\
&= -3\cdot\frac{1}{24} = -\frac{1}{8},
\end{align*}
where we have used the fact that $\left[\overline{\mathcal{H}}_{1,2,0,0}\right] = 3\psi_{w_1}$ \cite{cavalierihurwitz}, together with the evaluation $\int_{\overline{\mathcal{M}}_{1,2}} \psi_{w_1}^2 = \frac{1}{24}$, which follows from $\int_{\overline{\mathcal{M}}_{1,1}} \psi = \frac{1}{24}$ and the string equation.
\end{proof}
This establishes the base case for the inductive hypothesis in Theorem \ref{thm-main}. The induction procedure differs fundamentally for the codimension-two classes, so we first prove the following short lemma to simplify the most complicated of those.
\begin{lemma}
\label{lem-notprop}
The class $W_{2,\{p_1,\dots,p_n\}}$ is not proportional to $\left[\overline{\mathcal{H}}_{2,2,0,n}\right]$ on $\overline{\mathcal{M}}_{2,2+n}$.
\end{lemma}
\begin{proof}
Let $P = \{p_1,\dots,p_n\}$. Note that in $W_{2,P}$ the marked points $w_1$ and $w_2$ carry no special restrictions, and the class is of codimension two. By dimensionality on the rational component of the general element of $W_{2,P}$,
\begin{align*}
W_{2,P} \cdot \psi_{w_1}^{n+3} = 0.
\end{align*}
However, using the equality
\begin{align*}
\left[\overline{\mathcal{H}}_{2,2,0,0}\right] = 6\psi_{w_1}\psi_{w_2} - \frac{3}{2}(\psi_{w_1}^2+\psi_{w_2}^2) - (\psi_{w_1} + \psi_{w_2})\left(\frac{21}{10}\delta_{1,\{w_1\}} + \frac{3}{5}\delta_{1,\varnothing} + \frac{1}{20}\delta_{irr}\right)
\end{align*}
established in \cite[Equation 4]{chentarasca} and Faber's Maple program \cite{faberprogram}, we compute
\begin{align*}
\int_{\overline{\mathcal{M}}_{2,2+n}} \left[\overline{\mathcal{H}}_{2,2,0,n}\right] \cdot \psi_{w_1}^{n+3} &= \int_{\overline{\mathcal{M}}_{2,2+n}} \pi_{p_1}^*\cdots\pi_{p_n}^*\left[\overline{\mathcal{H}}_{2,2,0,0}\right] \cdot \psi_{w_1}^{n+3} \\
&= \int_{\overline{\mathcal{M}}_{2,2}} \left[\overline{\mathcal{H}}_{2,2,0,0}\right] \cdot \pi_{p_1*}\cdots\pi_{p_n*}\psi_{w_1}^{n+3} \\
&= \int_{\overline{\mathcal{M}}_{2,2}} \Bigg(6\psi_{w_1}\psi_{w_2} - \frac{3}{2}(\psi_{w_1}^2+\psi_{w_2}^2) \\
& \hspace{1.5cm} - (\psi_{w_1} + \psi_{w_2})\left(\frac{21}{10}\delta_{1,\{w_1\}} + \frac{3}{5}\delta_{1,\varnothing} + \frac{1}{20}\delta_{irr}\right)\Bigg) \cdot \psi_{w_1}^3 \\
&= \frac{1}{384},
\end{align*}
so $W_{2,P}$ is not a non-zero multiple of $\left[\overline{\mathcal{H}}_{2,2,0,n}\right]$.
\end{proof}
We are now ready to prove our main result. The bulk of the effort is in establishing extremality, though the induction process does require rigidity at every step as well. Although we do not include it until the end, the reader is free to interpret the rigidity argument as being applied at each step of the induction.
The overall strategy of the extremality portion of the proof is as follows. Suppose $\left[\overline{\mathcal{H}}_{2,\ell,2m,n}\right]$ is given an effective decomposition. We show (first for the classes of codimension at least three, then for those of codimension two) that any terms of this decomposition which survive pushforward by $\pi_{w_i}$ or $\pi_{+_j}$ must be proportional to the hyperelliptic class itself. Therefore we may write $\left[\overline{\mathcal{H}}_{2,\ell,2m,n}\right]$ as an effective decomposition using only classes which vanish under pushforward by the forgetful morphisms; this is a contradiction, since the hyperelliptic class itself survives pushforward.
\begin{theorem}
\label{thm-main}
For $\ell,m,n\geq 0$, the class $\overline{\mathcal{H}}_{2,\ell,2m,n}$, if non-empty, is rigid and extremal in $\mbox{\emph{Eff}}^{\ell+m}(\overline{\mathcal{M}}_{2,\ell+2m+n})$.
\end{theorem}
\allowdisplaybreaks
\begin{proof}
We induct on codimension; assume the claim holds when the class is codimension $\ell+m-1$. Theorem \ref{thm-base} is the base case, so we may further assume $\ell+m \geq 2$. Now, suppose that
\begin{align}
\label{eq-decomp}
\left[\overline{\mathcal{H}}_{2,\ell,2m,n}\right] = \sum_{s} a_s\left[X_s\right] + \sum_{t} b_t\left[Y_t\right]
\end{align}
is an effective decomposition with $\left[X_s\right]$ and $\left[Y_t\right]$ irreducible codimension-$(\ell+m)$ effective cycles on $\overline{\mathcal{M}}_{2,\ell+2m+n}$, with $[X_s]$ surviving pushforward by some $\pi_{w_i}$ or $\pi_{+_j}$ and $[Y_t]$ vanishing under all such pushforwards, for each $s$ and $t$.
Fix an $\left[X_s\right]$ appearing in the right-hand side of $\eqref{eq-decomp}$. If $\ell\neq 0$, suppose without loss of generality (on the $w_i$) that $\pi_{w_1*}\left[X_s\right] \neq 0$. Since
\begin{align*}
\pi_{w_1*}\left[\overline{\mathcal{H}}_{2,\ell,2m,n}\right] = (6-(\ell-1))\left[\overline{\mathcal{H}}_{2,\ell-1,2m,n}\right]
\end{align*}
is rigid and extremal by hypothesis, $\pi_{w_1*}\left[X_s\right]$ is a positive multiple of the class of $\overline{\mathcal{H}}_{2,\ell-1,2m,n}$ and $X_s\subseteq (\pi_{w_1})^{-1}\overline{\mathcal{H}}_{2,\ell-1,2m,n}$. By the commutativity of the following diagrams and the observation that hyperelliptic classes survive pushforward by all $\pi_{w_i}$ and $\pi_{+_j}$, we have that $\pi_{w_i*}\left[X_s\right] \neq 0$ and $\pi_{+_j*}\left[X_s\right] \neq 0$ for all $i$ and $j$.
\begin{center}
\begin{tikzcd}[row sep=1cm, column sep=1cm]
\overline{\mathcal{H}}_{2,\ell,2m,n} \arrow[d, "\pi_{w_1}"] \arrow[r, "\pi_{+_j}"] & \overline{\mathcal{H}}_{2,\ell,2(m-1),n+1} \arrow[d, "\pi_{w_1}"] & & \overline{\mathcal{H}}_{2,\ell,2m,n} \arrow[d, "\pi_{w_1}"] \arrow[r, "\pi_{w_i}"] & \overline{\mathcal{H}}_{2,\ell-1,2m,n} \arrow[d, "\pi_{w_1}"] \\
\overline{\mathcal{H}}_{2,\ell-1,2m,n} \arrow[r, "\pi_{+_j}"] & \overline{\mathcal{H}}_{2,\ell-1,2(m-1),n+1} & & \overline{\mathcal{H}}_{2,\ell-1,2m,n} \arrow[r, "\pi_{w_i}"] & \overline{\mathcal{H}}_{2,\ell-2,2m,n}
\end{tikzcd}
\end{center}
If $\ell = 0$, suppose without loss of generality (on the $+_j$) that $\pi_{+_1*}\left[X_s\right] \neq 0$. Then the same conclusion holds that $\left[X_s\right]$ survives all pushforwards by $\pi_{+_j}$, since
\begin{align*}
\pi_{+_1*}\left[\overline{\mathcal{H}}_{2,\ell,2m,n}\right] = \left[\overline{\mathcal{H}}_{2,\ell,2(m-1),n+1}\right]
\end{align*}
is rigid and extremal by hypothesis, and $\pi_{+_1}$ commutes with $\pi_{+_j}$.
It follows that for any $\ell+m\geq 2$
\begin{align*}
\displaystyle X_s \subseteq \bigcap_{i,j}\left((\pi_{w_i})^{-1}\overline{\mathcal{H}}_{2,\ell-1,2m,n}\cap (\pi_{+_j})^{-1}\overline{\mathcal{H}}_{2,\ell,2(m-1),n+1}\right).
\end{align*}
We now have two cases. If $\ell+m \geq 3$, any $\ell + 2m - 1$ Weierstrass or conjugate pair marked points in a general element of $X_s$ are distinct, and hence all $\ell + 2m$ such marked points in a general element of $X_s$ are distinct. We conclude that $\left[X_s\right]$ is a positive multiple of $\left[\overline{\mathcal{H}}_{2,\ell,2m,n}\right]$. If $\ell+m = 2$, we must analyze three subcases.
If $\ell = 0$ and $m = 2$, then
\begin{align*}
\displaystyle X_s\subseteq (\pi_{+_1})^{-1}\overline{\mathcal{H}}_{2,0,2,n+1} \cap (\pi_{+_2})^{-1}\overline{\mathcal{H}}_{2,0,2,n+1}.
\end{align*}
The modular interpretation of the intersection leaves three candidates for $\left[X_s\right]$: $W_{2,P}$ or $\gamma_{1,P}$ for some $P$ containing neither conjugate pair, or $\left[\overline{\mathcal{H}}_{2,0,4,n}\right]$ itself. However, for the former two, $\dim W_{2,P} \neq \dim \pi_{+_1}(W_{2,P})$ and $\dim \gamma_{1,P} \neq \dim \pi_{+_1}(\gamma_{1,P})$ for all such $P$, contradicting our assumption that the class survived pushforward. Thus $\left[X_s\right]$ is proportional to $\left[\overline{\mathcal{H}}_{2,0,4,n}\right]$.
If $\ell = 1$ and $m = 1$, similar to the previous case, $\left[X_s\right]$ could be $\left[\overline{\mathcal{H}}_{2,1,2,n}\right]$ or $W_{2,P}$ or $\gamma_{1,P}$ for some $P$ containing neither the conjugate pair nor the Weierstrass point. However, if $X_s$ is of either of the latter two types, we have $\dim X_s \neq \dim \pi_{+_1}(X_s)$, again contradicting our assumption about the non-vanishing of the pushforward, and so again $[X_s]$ must be proportional to $\left[\overline{\mathcal{H}}_{2,1,2,n}\right]$.
If $\ell = 2$ and $m = 0$, as before, $\left[X_s\right]$ is either $\left[\overline{\mathcal{H}}_{2,2,0,n}\right]$ itself or $W_{2,P}$ or $\gamma_{1,P}$ for $P = \{p_1,\dots,p_n\}$. Now $\dim W_{2,P} = \dim \pi_{w_i}W_{2,P},$ so the argument given in the other subcases fails (though $\gamma_{1,P}$ is still ruled out as before). Nevertheless, we claim that $W_{2,P}$ cannot appear on the right-hand side of (\ref{eq-decomp}) for $\overline{\mathcal{H}}_{2,2,0,n}$; to show this we induct on the number of free marked points $n$. The base case of $n=0$ is established in \cite[Theorem 5]{chentarasca}, so assume that $\overline{\mathcal{H}}_{2,2,0,n-1}$ is rigid and extremal for some $n\geq 1$. Suppose for the sake of contradiction that
\begin{align}
\label{eq-decomp2}
\left[\overline{\mathcal{H}}_{2,2,0,n}\right] = a_0W_{2,P} + \sum_{s} a_s\left[Z_s\right]
\end{align}
is an effective decomposition with each $\left[Z_s\right]$ an irreducible codimension-two effective cycle on $\overline{\mathcal{M}}_{2,2+n}$. Note that
\begin{align*}
W_{2,P} = \pi_{p_n}^*W_{2,P\backslash\{p_n\}} - W_{2,P\backslash\{p_n\}},
\end{align*}
since the pullback of $W_{2,P\backslash\{p_n\}}$ along $\pi_{p_n}$ distributes the point $p_n$ onto either component of the general element.
Multiply (\ref{eq-decomp2}) by $\omega_{p_n}$ and push forward by $\pi_{p_n}$. On the left-hand side,
\begin{align*}
\pi_{p_n*}\left(\omega_{p_n}\cdot\left[\overline{\mathcal{H}}_{2,2,0,n}\right]\right) &= \pi_{p_n*}\left(\omega_{p_n}\cdot \pi_{p_n}^*\left[\overline{\mathcal{H}}_{2,2,0,n-1}\right]\right) \\
&= \pi_{p_n*} \left(\omega_{p_n}\right) \cdot \left[\overline{\mathcal{H}}_{2,2,0,n-1}\right] \\
&= 2 \left[\overline{\mathcal{H}}_{2,2,0,n-1}\right],
\end{align*}
having applied Lemma \ref{lem-omegadil}. Combining this with the right-hand side,
\begin{align*}
2\left[\overline{\mathcal{H}}_{2,2,0,n-1}\right] &= a_0\pi_{p_n*}\left(\omega_{p_n}\cdot \pi_{p_n}^*W_{2,P\backslash\{p_n\}} - \omega_{p_n}\cdot W_{2,P\backslash\{p_n\}}\right) + \sum_{s} a_s\pi_{p_n*}\left(\omega_{p_n}\cdot \left[Z_s\right]\right) \\
&= 2a_0W_{2,P\backslash\{p_n\}} - a_0\pi_{p_n*}\left(\omega_{p_n}\cdot W_{2,P\backslash\{p_n\}}\right) + \sum_{s} a_s\pi_{p_n*}\left(\omega_{p_n}\cdot \left[Z_s\right]\right).
\end{align*}
The term $\pi_{p_n*}\left(\omega_{p_n}\cdot W_{2,P\backslash\{p_n\}}\right)$ vanishes by Lemma \ref{lem-fallomega}:
\begin{align*}
\pi_{p_n*}\left(\omega_{p_n}\cdot W_{2,P\backslash\{p_n\}}\right) &= \pi_{p_n*} \left(\omega_{w_1}\cdot W_{2,P\backslash\{p_n\}}\right) \\
&= \pi_{p_n*} \left(\pi_{p_n}^*\omega_{w_1}\cdot W_{2,P\backslash\{p_n\}}\right) \\
&= \omega_{w_1}\cdot \pi_{p_n*} W_{2,P\backslash\{p_n\}} \\
&= 0,
\end{align*}
where $w_1$ is the Weierstrass singular point on the genus two component of $W_{2,P\backslash\{p_n\}}$. Altogether, we have
\begin{align*}
2\left[\overline{\mathcal{H}}_{2,2,0,n-1}\right] &= 2a_0W_{2,P\backslash\{p_n\}} + \sum_{s} a_s\pi_{p_n*}\left(\omega_{p_n}\cdot \left[Z_s\right]\right).
\end{align*}
\cite{rulla01} establishes that $\psi_{p_n}$ is semi-ample on $\overline{\mathcal{M}}_{2,\{p_n\}}$, so $\omega_{p_n}$ is semi-ample, and hence this is an effective decomposition. By hypothesis, $\overline{\mathcal{H}}_{2,2,0,n-1}$ is rigid and extremal, so $W_{2,P\backslash\{p_n\}}$ must be a non-zero multiple of $\left[\overline{\mathcal{H}}_{2,2,0,n-1}\right]$, which contradicts Lemma \ref{lem-notprop}. Therefore $W_{2,P}$ cannot appear as an $\left[X_s\right]$ in (\ref{eq-decomp}).
Thus for all cases of $\ell+m = 2$ (and hence for all $\ell+m\geq 2$), we conclude that each $\left[X_s\right]$ in (\ref{eq-decomp}) is a positive multiple of $\left[\overline{\mathcal{H}}_{2,\ell,2m,n}\right]$. Now subtract these $\left[X_s\right]$ from \eqref{eq-decomp} and rescale, so that
\begin{align*}
\left[\overline{\mathcal{H}}_{2,\ell,2m,n}\right] = \sum_{t} b_t\left[Y_t\right].
\end{align*}
Recall that each $\left[Y_t\right]$ is required to vanish under all $\pi_{w_i*}$ and $\pi_{+_j*}$. But the pushforward of $\left[\overline{\mathcal{H}}_{2,\ell,2m,n}\right]$ by any of these morphisms is non-zero, so there are no $[Y_t]$ in \eqref{eq-decomp}. Hence $\left[\overline{\mathcal{H}}_{2,\ell,2m,n}\right]$ is extremal in $\text{Eff}^{\ell+m}(\overline{\mathcal{M}}_{2,\ell+2m+n})$.
For rigidity, suppose that $E:= r\left[\overline{\mathcal{H}}_{2,\ell,2m,n}\right]$ is effective. Since $\pi_{w_i*}E = (6-(\ell-1))r\left[\overline{\mathcal{H}}_{2,\ell-1,2m,n}\right]$ and $\pi_{+_j*}E = r\left[\overline{\mathcal{H}}_{2,\ell,2(m-1),n+1}\right]$ are rigid and extremal for all $i$ and $j$, we have that $\pi_{w_i*}E$ is supported on $\overline{\mathcal{H}}_{2,\ell-1,2m,n}$ and $\pi_{+_j*}E$ is supported on $\overline{\mathcal{H}}_{2,\ell,2(m-1),n+1}$. This implies that $E$ is supported on the intersection of $(\pi_{w_i})^{-1}\left[\overline{\mathcal{H}}_{2,\ell-1,2m,n}\right]$ and $(\pi_{+_j})^{-1}\left[\overline{\mathcal{H}}_{2,\ell,2(m-1),n+1}\right]$ for all $i$ and $j$. Thus $E$ is supported on $\overline{\mathcal{H}}_{2,\ell,2m,n}$, so $\left[\overline{\mathcal{H}}_{2,\ell,2m,n}\right]$ is rigid.
\end{proof}
\section{Higher genus}
\label{sec-highergenus}
The general form of the inductive argument in Theorem \ref{thm-main} holds independent of genus for $g\geq 2$. However, for genus greater than one, the locus of hyperelliptic curves in $\mathcal{M}_{g}$ is of codimension $g-2$, so that the base cases increase in codimension as $g$ increases. The challenge in showing the veracity of the claim for hyperelliptic classes in arbitrary genus is therefore wrapped up in establishing the base cases of codimension $g-1$ (corresponding to Theorem \ref{thm-base}) and codimension $g$ (corresponding to the three $\ell+m = 2$ subcases in Theorem \ref{thm-main}).
In particular, our proof of Theorem \ref{thm-base} relies on the fact that $\overline{\mathcal{H}}_{2,0,2,n}$ and $\overline{\mathcal{H}}_{2,1,0,n}$ are divisors, and the subcase $\ell = 2$ in Theorem \ref{thm-main} depends on our ability to prove Lemma \ref{lem-notprop}. This in turn requires the description of $\overline{\mathcal{H}}_{2,2,0,0}$ given by \cite{chentarasca}. More subtly, we require that $\psi_{p_n}$ be semi-ample in $\overline{\mathcal{M}}_{2,\{p_n\}}$, which is known to be false in genus greater than two in characteristic 0 \cite{keel99}. In genus three, \cite{chencoskun2015} show that the base case $\overline{\mathcal{H}}_{3,1,0,0}$ is rigid and extremal, though it is unclear if their method will extend to $\overline{\mathcal{H}}_{3,1,0,n}$. Moreover, little work has been done to establish the case of a single conjugate pair in genus three, and as the cycles move farther from divisorial classes, such analysis becomes increasingly more difficult.
One potential avenue to overcome these difficulties is suggested by work of Renzo Cavalieri and Nicola Tarasca (currently in preparation). They use an inductive process to describe hyperelliptic classes in terms of decorated graphs using the usual dual graph description of the tautological ring of $\overline{\mathcal{M}}_{g,n}$. Such a formula for the three necessary base cases would allow for greatly simplified intersection-theoretic calculations, similar to those used in Theorem \ref{thm-base} and Lemma \ref{lem-notprop}. Though such a result would be insufficient to completely generalize our main theorem, it would be a promising start.
We also believe the observation that pushing forward and pulling back by forgetful morphisms moves hyperelliptic classes to (multiples of) hyperelliptic classes is a useful one. There is evidence that a more explicit connection between marked Weierstrass points, marked conjugate pairs, and the usual gluing morphisms between moduli spaces of marked curves exists as well, though concrete statements require a better understanding of higher genus hyperelliptic loci. Although it is known that hyperelliptic classes do not form a cohomological field theory over the full $\overline{\mathcal{M}}_{g,n}$, a deeper study of the relationship between these classes and the natural morphisms among the moduli spaces may indicate a CohFT-like structure, which in turn would shed light on graph formulas or other additional properties.
\bibliographystyle{amsalpha}
\section{Introduction}
The following instructions are directed to authors of papers submitted to
and accepted for publication in the EMNLP 2017 proceedings. All authors
are required to adhere to these specifications. Authors are required to
provide a Portable Document Format (PDF) version of their papers. \textbf{The
proceedings are designed for printing on A4 paper}. Authors from countries
where access to word-processing systems is limited should contact the
publication chairs as soon as possible. Grayscale readability of all
figures and graphics will be encouraged for all accepted papers
(Section \ref{ssec:accessibility}).
Submitted and camera-ready formatting is similar, however, the submitted
paper should have:
\begin{enumerate}
\item Author-identifying information removed
\item A `ruler' on the left and right margins
\item Page numbers
\item A confidentiality header.
\end{enumerate}
In contrast, the camera-ready {\bf should not have} a ruler, page numbers,
nor a confidentiality header. By uncommenting {\small\verb|\emnlpfinalcopy|}
at the top of the \LaTeX source of this document, it will compile to
produce a PDF document in the camera-ready formatting; by leaving it
commented out, the resulting PDF document will be anonymized for initial
submission. Authors should place this command after the
{\small\verb|\usepackage|} declarations when preparing their camera-ready
manuscript with the EMNLP 2017 style.
\section{General Instructions}
Manuscripts must be in two-column format. Exceptions to the two-column
format include the title, as well as the authors' names and complete
addresses (only in the final version, not in the version submitted for
review), which must be centered at the top of the first page (see the
guidelines in Subsection~\ref{ssec:first}), and any full-width figures or
tables. Type single-spaced. Start all pages directly under the top margin.
See the guidelines later regarding formatting the first page. Also see
Section~\ref{sec:length} for the page limits.
Do not number the pages in the camera-ready version.
By uncommenting {\small\verb|\emnlpfinalcopy|} at the top of this document,
it will compile to produce an example of the camera-ready formatting; by
leaving it commented out, the document will be anonymized for initial
submission. When you first create your submission on softconf, please fill
in your submitted paper ID where {\small\verb|***|} appears in the
{\small\verb|\def***{***}|} definition at the top.
The review process is double-blind, so do not include any author information
(names, addresses) when submitting a paper for review. However, you should
maintain space for names and addresses so that they will fit in the final
(accepted) version. The EMNLP 2017 \LaTeX\ style will create a titlebox
space of 2.5in for you when {\small\verb|\emnlpfinalcopy|} is commented out.
\subsection{The Ruler}
The EMNLP 2017 style defines a printed ruler which should be present in the
version submitted for review. The ruler is provided in order that
reviewers may comment on particular lines in the paper without
circumlocution. If you are preparing a document without the provided
style files, please arrange for an equivalent ruler to
appear on the final output pages. The presence or absence of the ruler
should not change the appearance of any other content on the page. The
camera ready copy should not contain a ruler. (\LaTeX\ users may uncomment
the {\small\verb|\emnlpfinalcopy|} command in the document preamble.)
Reviewers:
note that the ruler measurements do not align well with lines in the paper
--- this turns out to be very difficult to do well when the paper contains
many figures and equations, and, when done, looks ugly. In most cases one
would expect that the approximate location will be adequate, although you
can also use fractional references ({\em e.g.}, the body of this section
begins at mark $112.5$).
\subsection{Electronically-Available Resources}
EMNLP provides this description to authors in \LaTeX2e{} format
and PDF format, along with the \LaTeX2e{} style file used to format it
({\small\tt emnlp2017.sty}) and an ACL bibliography style
({\small\tt emnlp2017.bst}) and example bibliography
({\small\tt emnlp2017.bib}).
A Microsoft Word template file (emnlp17-word.docx) and an example submission PDF (emnlp17-word.pdf) are available at http://emnlp2017.org/downloads/acl17-word.zip. We strongly recommend the use of these style files, which have been appropriately tailored for the EMNLP 2017 proceedings.
\subsection{Format of Electronic Manuscript}
\label{sect:pdf}
For the production of the electronic manuscript, you must use Adobe's
Portable Document Format (PDF). This format can be generated from
postscript files: on Unix systems, you can use {\small\tt ps2pdf} for this
purpose; under Microsoft Windows, you can use Adobe's Distiller, or
if you have cygwin installed, you can use {\small\tt dvipdf} or
{\small\tt ps2pdf}. Note
that some word processing programs generate PDF that may not include
all the necessary fonts (esp.\ tree diagrams, symbols). When you print
or create the PDF file, there is usually an option in your printer
setup to include none, all, or just non-standard fonts. Please make
sure that you select the option of including ALL the fonts. {\em Before
sending it, test your {\/\em PDF} by printing it from a computer different
from the one where it was created}. Moreover, some word processors may
generate very large postscript/PDF files, where each page is rendered as
an image. Such images may reproduce poorly. In this case, try alternative
ways to obtain the postscript and/or PDF. One way on some systems is to
install a driver for a postscript printer, send your document to the
printer specifying ``Output to a file'', then convert the file to PDF.
For reasons of uniformity, Adobe's {\bf Times Roman} font should be
used. In \LaTeX2e{} this is accomplished by putting
\small
\begin{verbatim}
\usepackage{times}
\usepackage{latexsym}
\end{verbatim}
\normalsize
in the preamble.
It is of utmost importance to specify the \textbf{A4 format} (21 cm
x 29.7 cm) when formatting the paper. When working with
{\tt dvips}, for instance, one should specify {\tt -t a4}.
Or using the command \verb|\special{papersize=210mm,297mm}| in the latex
preamble (directly below the \verb|\usepackage| commands). Then using
{\tt dvipdf} and/or {\tt pdflatex} which would make it easier for some.
Print-outs of the PDF file on A4 paper should be identical to the
hardcopy version. If you cannot meet the above requirements about the
production of your electronic submission, please contact the
publication chairs as soon as possible.
\subsection{Layout}
\label{ssec:layout}
Format manuscripts with two columns to a page, following the manner in
which these instructions are formatted. The exact dimensions for a page
on A4 paper are:
\begin{itemize}
\item Left and right margins: 2.5 cm
\item Top margin: 2.5 cm
\item Bottom margin: 2.5 cm
\item Column width: 7.7 cm
\item Column height: 24.7 cm
\item Gap between columns: 0.6 cm
\end{itemize}
\noindent Papers should not be submitted on any other paper size.
If you cannot meet the above requirements about the production of
your electronic submission, please contact the publication chairs
above as soon as possible.
\subsection{The First Page}
\label{ssec:first}
Center the title, author name(s) and affiliation(s) across both
columns (or, in the case of initial submission, space for the names).
Do not use footnotes for affiliations.
Use the two-column format only when you begin the abstract.
\noindent{\bf Title}: Place the title centered at the top of the first
page, in a 15 point bold font. (For a complete guide to font sizes and
styles, see Table~\ref{font-table}.) Long titles should be typed on two
lines without a blank line intervening. Approximately, put the title at
2.5 cm from the top of the page, followed by a blank line, then the author
name(s), and the affiliation(s) on the following line. Do not use only
initials for given names (middle initials are allowed). Do not format
surnames in all capitals (e.g., ``Mitchell,'' not ``MITCHELL''). The
affiliation should contain the author's complete address, and if possible,
an email address. Leave about 7.5 cm between the affiliation and the body
of the first page.
\noindent{\bf Abstract}: Type the abstract at the beginning of the first
column. The width of the abstract text should be smaller than the
width of the columns for the text in the body of the paper by about
0.6 cm on each side. Center the word {\bf Abstract} in a 12 point
bold font above the body of the abstract. The abstract should be a
concise summary of the general thesis and conclusions of the paper.
It should be no longer than 200 words. The abstract text should be in
10 point font.
\begin{table}
\centering
\small
\begin{tabular}{cc}
\begin{tabular}{|l|l|}
\hline
{\bf Command} & {\bf Output}\\\hline
\verb|{\"a}| & {\"a} \\
\verb|{\^e}| & {\^e} \\
\verb|{\`i}| & {\`i} \\
\verb|{\.I}| & {\.I} \\
\verb|{\o}| & {\o} \\
\verb|{\'u}| & {\'u} \\
\verb|{\aa}| & {\aa} \\\hline
\end{tabular} &
\begin{tabular}{|l|l|}
\hline
{\bf Command} & {\bf Output}\\\hline
\verb|{\c c}| & {\c c} \\
\verb|{\u g}| & {\u g} \\
\verb|{\l}| & {\l} \\
\verb|{\~n}| & {\~n} \\
\verb|{\H o}| & {\H o} \\
\verb|{\v r}| & {\v r} \\
\verb|{\ss}| & {\ss} \\\hline
\end{tabular}
\end{tabular}
\caption{Example commands for accented characters, to be used in, e.g., \BibTeX\ names.}\label{tab:accents}
\end{table}
\noindent{\bf Text}: Begin typing the main body of the text immediately
after the abstract, observing the two-column format as shown in the present
document. Do not include page numbers in the camera-ready manuscript.
Indent when starting a new paragraph. For reasons of uniformity,
use Adobe's {\bf Times Roman} fonts, with 11 points for text and
subsection headings, 12 points for section headings and 15 points for
the title. If Times Roman is unavailable, use {\bf Computer Modern
Roman} (\LaTeX2e{}'s default; see section \ref{sect:pdf} above).
Note that the latter is about 10\% less dense than Adobe's Times Roman
font.
\subsection{Sections}
\noindent{\bf Headings}: Type and label section and subsection headings in
the style shown on the present document. Use numbered sections (Arabic
numerals) in order to facilitate cross references. Number subsections
with the section number and the subsection number separated by a dot,
in Arabic numerals.
\noindent{\bf Citations}: Citations within the text appear in parentheses
as~\cite{Gusfield:97} or, if the author's name appears in the text itself,
as Gusfield~\shortcite{Gusfield:97}. Using the provided \LaTeX\ style, the
former is accomplished using {\small\verb|\cite|} and the latter with
{\small\verb|\shortcite|} or {\small\verb|\newcite|}. Collapse multiple
citations as in~\cite{Gusfield:97,Aho:72}; this is accomplished with the
provided style using commas within the {\small\verb|\cite|} command, e.g.,
{\small\verb|\cite{Gusfield:97,Aho:72}|}. Append lowercase letters to the
year in cases of ambiguities. Treat double authors as in~\cite{Aho:72}, but
write as in~\cite{Chandra:81} when more than two authors are involved.
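For instance, the following minimal usage sketch shows the three commands side by side (the rendered output, indicated in the comments, is approximate and depends on the corresponding bibliography entries):
\small
\begin{verbatim}
\cite{Gusfield:97}        % (Gusfield, 1997)
\shortcite{Gusfield:97}   % Gusfield (1997)
\cite{Gusfield:97,Aho:72} % (Gusfield, 1997; Aho and Ullman, 1972)
\end{verbatim}
\normalsize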
\noindent{\bf References}: We recommend
including references in a separate~{\small\texttt .bib} file, and include
an example file in this release ({\small\tt emnlp2017.bib}). Some commands
for names with accents are provided for convenience in
Table~\ref{tab:accents}. References stored in the separate~{\small\tt .bib}
file are inserted into the document using the following commands:
\small
\begin{verbatim}
\bibliography{emnlp2017}
\bibliographystyle{emnlp2017}
\end{verbatim}
\normalsize
\section{Introduction}
Producing true causal explanations requires deep understanding of the domain.
This is beyond the capabilities of modern AI.
However, it is possible to collect large amounts of causally related events and, given sufficiently rich representational variability,
to construct cause-effect chains by selecting individual pairs appropriately and linking them together.
Our hypothesis is that chains composed of locally coherent pairs can suggest overall causation.
In this paper, we view \textit{causality} as (commonsense) cause-effect expressions that occur frequently in online text such as news articles or tweets. For example, ``\textit{greenhouse gases cause global warming}'' is a sentence that provides an `atomic' link that can be used in a larger chain.
By connecting such causal facts in a sequence, the result can be regarded as a \textit{causal explanation} between the two ends of the sequence
(see Table~\ref{tab:exchains} for examples).
\noindent This paper makes the following contributions:
\begin{itemize}[leftmargin=*,noitemsep,topsep=0pt]
\item we define the problem of causal explanation generation,
\item we detect causal features of a time series event (\textsc{CSpikes}) using the Granger causality~\cite{granger1988some} method with features extracted from text, such as N-grams, topics, sentiments, and their compositions,
\item we produce a large graph called \textsc{CGraph}~of local cause-effect units derived from text and develop a method to produce causal explanations by selecting and linking appropriate units, using neural representations to enable unit matching and chaining.
\end{itemize}
\begin{figure}[t]
\small
\centering
{\includegraphics[width=.89\linewidth]{figs/{words_FB_False_3_2013-01-01_2013-12-31}.png}}
\caption{\label{fig:example}
Example of causal features for Facebook's stock change in 2013.
The causal features (e.g., \textit{martino}, \textit{k-rod}) rise before Facebook's rapid stock rise in August.}
\end{figure}
The problem of causal explanation generation arises for systems that seek to determine causal factors for events of interest automatically.
For a given time series, such as a company's stock price, our system, called \textsc{CSpikes}, detects events that are deemed causally related by time series analysis using Granger causality regression~\cite{granger1988some}.
We consider a large amount of text and tweets related to each company, and produce for each company time series of values for hundreds of thousands of word n-grams, topic labels, sentiment values, etc.
Figure~\ref{fig:example} shows an example of causal features that temporally cause Facebook's stock rise in August.
\begin{table}[t]
\centering
\small
\caption{\label{tab:exchains}Examples of generated causal explanations between temporal causes and target companies' stock prices.
}
\begin{tabularx}{\columnwidth}{@{}X@{}}
\toprule
\textbf{\color{Sepia}party} $\xmapsto[]{cut}$ budget\_cuts $\xmapsto[]{lower}$ budget\_bill $\xmapsto[]{decreas}$ republicans $\xmapsto[]{caus}$ obama $\xmapsto[]{lead to}$ facebook\_polls $\xmapsto[]{caus}$ \textbf{\color{BlueViolet}facebook's stock} $\downarrow$\\\hline
\bottomrule
\end{tabularx}
\end{table}
However, it is difficult to understand how the statistically verified factors actually cause the changes, and whether there is a latent causal structure relating the two.
This paper addresses the challenge of finding such latent causal structures, in the form of \textit{causal explanations} that connect a given cause-effect pair.
Table~\ref{tab:exchains} shows an example causal explanation that our system found between \textit{party} and \textit{Facebook's stock fall ($\downarrow$)}.
To construct a general causal graph, we extract all potential causal expressions from a large corpus of text. We refer to this graph as \textsc{CGraph}. We use FrameNet~\cite{baker1998berkeley} semantics to provide various causative expressions (verbs, relations, and patterns),
which we apply to a resource of $183,253,995$ sentences of text and tweets.
These expressions are considerably richer than previous rule-based patterns~\cite{riaz2013toward,kozareva2012cause}.
\textsc{CGraph}~ contains 5,025,636 causal edges.
Our experiments demonstrate that our causality detection algorithm outperforms baseline methods for forecasting future time series values. We also test the neural reasoner on the inference generation task using the BLEU score.
Additionally, our human evaluation shows the relative effectiveness of neural reasoners in generating appropriate lexicons in explanations.
\section{\textsc{CSpikes}: Temporal Causality Detection from Textual Features}
\label{sec:method}
The objective of our model is, given a target time series $y$, to find the best set of textual features $F = \{f_1, ..., f_k\} \subseteq X$ that maximizes the sum of causality of the features on $y$, where $X$ is the set of all features. Note that each feature is itself a time series:
\begin{equation}\label{objective}
\argmax_{F}\, \mathbf{C}(y, \Phi(X, y))
\end{equation}
where $\mathbf{C}(y,x)$ is a causality value function between $y$ and $x$, and $\Phi$ is a linear composition function over the features $f$.
$\Phi$ takes the target time series $y$ as well because of our graph-based feature selection algorithm described in the next sections.
We first introduce the basic principles of Granger causality in Section~\ref{subsec:granger}. Section~\ref{subsec:feature} describes how to extract good source features $F = \{f_1, ..., f_k\}$ from text. Section~\ref{subsec:causality} describes the causality function $\mathbf{C}$ and the feature composition function $\Phi$.
\subsection{Granger Causality}\label{subsec:granger}
The essential assumption behind Granger causality is that a cause must occur before its effect, and can be used to predict the effect.
Granger showed that, given a target time series $y$ (effect) and a source time series $x$ (cause), \textit{forecasting} the future target value $y_t$ with both the past target and past source time series, $E(y_t | y_{<t}, x_{<t})$, is significantly more powerful than forecasting with only the past target time series, $E(y_t | y_{<t})$ (plain auto-regression), if $x$ and $y$ are indeed a cause-effect pair.
First, we learn the parameters $\alpha$ and $\beta$ to maximize the prediction expectation:
\begin{align}\label{granger}
&E(y_t | y_{<t}, x_{<t}) = \sum_{j=1}^{m} \alpha_j y_{t-j} + \sum_{i=1}^{n} \beta_i x_{t-i}
\end{align}
where $m$ and $n$ are the lag sizes for the past observations. Given a candidate cause $x$ and a target $y$, if $\beta$ has magnitude significantly higher than zero (according to a confidence threshold), we can say that $x$ causes $y$.
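For concreteness, the following minimal Python sketch compares the prediction-error variance of the auto-regression with and without the candidate cause; it is illustrative only, with fixed lag sizes and no statistical significance test:
\small
\begin{verbatim}
# Toy Granger check: does adding x's past shrink
# the prediction-error variance for y?
import numpy as np

def granger_gain(y, x, m=3, n=3):
    y, x = np.asarray(y, float), np.asarray(x, float)
    L = max(m, n)
    A = np.array([[y[t - j] for j in range(1, m + 1)]
                  for t in range(L, len(y))])   # past y only
    B = np.array([[y[t - j] for j in range(1, m + 1)]
                  + [x[t - i] for i in range(1, n + 1)]
                  for t in range(L, len(y))])   # past y and x
    b = y[L:]
    r_a = b - A @ np.linalg.lstsq(A, b, rcond=None)[0]
    r_f = b - B @ np.linalg.lstsq(B, b, rcond=None)[0]
    # Positive gain suggests x Granger-causes y.
    return r_a.var() - r_f.var()
\end{verbatim}
\normalsize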
\subsection{Feature Extraction from Text}\label{subsec:feature}
Extracting meaningful features is a key component of detecting causality.
For example, to predict the future trend of the presidential election polls for \textit{Donald Trump}, we need to consider his past poll data as well as people's reactions to his pledges on \textit{Immigration}, \textit{Syria}, etc.
To extract such ``good'' features from crawled on-line media data, we propose three different types of features: $F_{words}$, $F_{topic}$, and $F_{senti}$.
$F_{words}$ consists of time series of N-gram words that reflect the popularity of each word over time in on-line media.
For each word, the number of items (e.g., tweets, blogs, and news articles) that contain the N-gram word is counted to get the day-by-day time series.
For example, $x^{\small Michael\_Jordan} = [ 12,51,..]$ is a time series for the bi-gram \textit{Michael Jordan}.
We filter out stationary words by using simple measures to estimate how dynamically the time series of each word changes over time. Some of the simple measures include Shannon entropy, mean, standard deviation, maximum slope, and number of rise and fall peaks.
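As an illustration, a minimal Python sketch of this filtering step follows; the measures mirror the list above, while the threshold values are hypothetical placeholders rather than the ones used in our experiments:
\small
\begin{verbatim}
# Toy temporal-dynamics filter for candidate words.
import numpy as np

def dynamics(ts):
    ts = np.asarray(ts, float)
    p = ts / ts.sum() if ts.sum() > 0 else \
        np.full(len(ts), 1.0 / len(ts))
    ent = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    slope = np.abs(np.diff(ts)).max() if len(ts) > 1 else 0.0
    peaks = sum(ts[i - 1] < ts[i] > ts[i + 1]   # local maxima
                for i in range(1, len(ts) - 1))
    return ent, ts.mean(), ts.std(), slope, peaks

def keep_word(ts, ent_lo=1.0, ent_hi=6.0,
              mean_lo=1.0, peaks_lo=2):
    ent, mean, std, slope, peaks = dynamics(ts)
    return (ent_lo < ent < ent_hi and mean > mean_lo
            and peaks >= peaks_lo)
\end{verbatim}
\normalsize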
$F_{topic}$ consists of time series of latent topics with respect to the target time series.
A latent topic is a group of semantically similar words as identified by a standard topic clustering method such as LDA~\cite{blei2003latent}.
To obtain the temporal trend of the latent topics, we choose the top ten most frequent words in each topic and count their occurrences in the text to get the day-by-day time series.
For example, $x^{healthcare}$ captures how popular the topic \textit{healthcare}, which consists of \textit{insurance}, \textit{obamacare}, etc., is over time.
$F_{senti}$ consists of time series of sentiment (positive or negative) for each topic.
The top ten most frequent words in each topic are used as keywords, and tweets, blogs, and news articles that contain at least one of these keywords are chosen to calculate the sentiment score.
The day-by-day sentiment series are then obtained by counting positive and negative words using OpinionFinder~\cite{wilson2005recognizing}, normalized by the total number of items that day.
\subsection{Temporal Causality Detection}\label{subsec:causality}
We define a causality function $\mathbf{C}$ for calculating causality score between target time series $y$ and source time series $x$.
The causality function $\mathbf{C}$ uses Granger causality~\cite{granger1988some} by fitting the two time series with a Vector AutoRegressive model with exogenous variables (VARX)~\cite{hamilton1994time}: $y_t = \alpha y_{t-l} + \beta x_{t-l} + \epsilon_t$,
where $\epsilon_t$ is a white Gaussian random vector at time $t$ and $l$ is a lag term.
In our problem, there are multiple source time series, so the prediction involves the $k$ multivariate features $X=(f_1, ... f_k)$:
\begin{align}
y_t & = \alpha y_{t-l} + \bm{\beta} (f_{1,t-l} + ... + f_{k,t-l}) + \epsilon_t
\end{align}
where $\bm{\alpha}$ and $\bm{\beta}$ are the coefficient matrices of the target $y$ and source $X$ time series, respectively, and $\epsilon$ is a residual (prediction error) for each time series.
$\bm{\beta}$ captures the contributions of each lagged feature $f_{k,t-l}$ to the predicted value $y_t$.
If the variance of $\bm{\beta_k}$ is reduced by the inclusion of the feature terms $f_{k,t-l} \in X $, then it is said that $f_{k,t-l}$ Granger-causes $y$.
Our causality function $\mathbf{C}$ is then $\mathbf{C}(y, f, l) =\Delta(\beta_{y,f,l})$, where $\Delta$ is the change of variance due to the feature $f$ with lag $l$.
The total Granger causality of target $y$ is computed by summing the change of variance over all lags and all features:
\begin{align}\label{eq:causality}
\mathbf{C}(y, X) =\sum_{k,l} \mathbf{C}(y, f_k, l)
\end{align}
We compose the best set of features $\Phi$ by choosing the top $k$ features with the highest causality scores for each target $y$.
In practice, due to the large amount of computation required for pairwise Granger calculation, we build a bipartite graph between features and targets and address two practical problems: \textit{noisiness} and \textit{hidden edges}.
We filter out noisy edges based on TF-IDF and fill in missing values using non-negative matrix factorization (NMF)~\cite{hoyer2004non}.
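To make the imputation step concrete, a minimal scikit-learn sketch follows (illustrative only; the matrix sizes, rank, and the use of zeros for unobserved pairs are simplifying assumptions):
\small
\begin{verbatim}
# Toy densification of the feature-target causality
# matrix; zeros stand in for unobserved pairs.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
C = rng.random((100, 10))            # features x targets
C[rng.random(C.shape) < 0.5] = 0.0   # hide half the edges

nmf = NMF(n_components=5, init="nndsvda", max_iter=500)
W = nmf.fit_transform(C)             # feature factors
H = nmf.components_                  # target factors
C_filled = W @ H                     # low-rank fill

top10 = np.argsort(-C_filled[:, 0])[:10]  # target 0
\end{verbatim}
\normalsize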
\begin{table*}[h!]
\centering
\small
\caption{\label{tab:causalgraph} Example (relation, cause, effect) tuples in different categories (manually labeled): \textit{general}, \textit{company}, \textit{country}, and \textit{people}. FrameNet labels related to causation are listed inside parentheses. The number of distinct relation types is 892.}
\resizebox{\textwidth}{!}{%
\begin{tabular}{@{}r|r||r|l@{}}
\toprule
&\textbf{Relation} & \multicolumn{2}{c}{\textbf{Cause $\mapsto$ Effect $\qquad$}} \\\hline
\midrule
\parbox[t]{1mm}{\multirow{3}{*}{\rotatebox[origin=c]{90}{\tiny{General}}}} &
causes (Causation) & the virus (Cause) & aids (Effect) \\
&cause (Causation) & greenhouse gases (Cause) & global warming (Effect)\\
&forced (Causation) & the reality of world war ii (Cause) & the cancellation of the olympics (Effect)\\
\midrule
\parbox[t]{1mm}{\multirow{3}{*}{\rotatebox[origin=c]{90}{\tiny{Company}}}} &heats (Cause\_temperature\_change) & microsoft vague on windows (Item) & legislation battle (Agent) \\
&promotes (Cause\_change\_of\_position\_on\_a\_scale) & chrome (Item) & google (Agent)\\
&makes (Causation) & twitter (Cause) & love people you 've never met facebook (Effect)\\
\midrule
\parbox[t]{1mm}{\multirow{3}{*}{\rotatebox[origin=c]{90}{\tiny{Country}}}}
&developing (Cause\_to\_make\_progress) & north korea (Agent) & nuclear weapons (Project)\\
&improve (Cause\_to\_make\_progress) & china (Agent) & its human rights record (Project)\\
&forced (Causation) & war with china (Cause) & the japanese to admit , in july 1938 (Effect)\\
\midrule
\parbox[t]{1mm}{\multirow{3}{*}{\rotatebox[origin=c]{90}{\tiny{People}}}}
&attracts (Cause\_motion) & obama (Agent) & more educated voters (Theme)\\
&draws (Cause\_motion) & on america 's economic brains (Goal) & barack obama (Theme) \\
&made (Causation) & michael jordan (Cause) & about \$ 33 million (Effect)\\
\bottomrule
\end{tabular}
}
\end{table*}
\section{\textsc{CGraph}~Construction}\label{sec:graph}
Formally, given source $x$ and target $y$ events that are causally related in time series, if we could find a sequence of cause-effect pairs $(x \mapsto e_1)$, $(e_1 \mapsto e_2)$, ... $(e_t \mapsto y)$, then $e_1 \mapsto e_2, ... \mapsto e_t$ might be a good causal explanation between $x$ and $y$.
Section~\ref{sec:graph} and \ref{sec:reasoning} describe how to bridge the causal gap between given events ($x$, $y$) by (1) constructing a large general cause-effect graph (\textsc{CGraph}) from text, (2) linking the given events to their equivalent entities in the causal graph by finding the internal paths ($x \mapsto e_1, ... e_t \mapsto y$) as causal explanations, using neural algorithms.
\textsc{CGraph}~is a knowledge base graph whose edges are directed causal relations between entities.
To address the limited representational variability of rule-based methods~\cite{girju2003automatic,blanco2008causal,sharp2016creating} in causal graph construction, we use FrameNet~\cite{baker1998berkeley} semantics.
Using a semantic parser such as SEMAFOR~\cite{chen2010semafor}, which produces a FrameNet-style analysis of semantic predicate-argument structures, we obtain lexical tuples of causation from each sentence.
Since our goal is to collect only causal relations, we extract a total of 36 causation-related frames\footnote{Causation, Cause\_change, Causation\_scenario, Cause\_benefit\_or\_detriment, Cause\_bodily\_experience, etc.} from the parsed sentences.
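The extraction step can be sketched as follows in Python; the record layout is a hypothetical stand-in for the parser output, not SEMAFOR's exact schema:
\small
\begin{verbatim}
# Toy extraction of causal tuples from frame parses.
CAUSAL_FRAMES = {"Causation", "Cause_change",
                 "Causation_scenario", "Cause_motion",
                 "Cause_to_make_progress"}  # subset of 36

def extract_tuples(parses):
    # each p: {"frame": ..., "trigger": ...,
    #          "roles": {role_name: text_span}}
    out = []
    for p in parses:
        if p["frame"] in CAUSAL_FRAMES:
            c = p["roles"].get("Cause") or \
                p["roles"].get("Agent")
            e = p["roles"].get("Effect") or \
                p["roles"].get("Project")
            if c and e:
                out.append((c, p["trigger"], e))
    return out
\end{verbatim}
\normalsize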
\begin{table}[h]
\centering
\small
\caption{\label{tab:graphstats} Number of sentences parsed, number of entities and tuples, and number of edges (\textit{KB-KB}, \textit{KBcross}) expanded by Freebase in \textsc{CGraph}.}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{c|c|c|c|c}
\toprule
\# Sentences & \# Entities & \# Tuples & \# \textit{KB-KB} & \# \textit{KBcross}\\
\midrule
183,253,995 & 5,623,924 & 5,025,636 & 470,250 & 151,752\\
\bottomrule
\end{tabular}
}
\end{table}
To generate meaningful explanations, high coverage of the knowledge is necessary.
We collect six years of tweets, and NYT news articles from 1989 to 2007 (see the Experiment section for details).
In total, our corpus has 1.5 billion tweets and 11 million sentences from news articles.
Table~\ref{tab:graphstats} reports the number of sentences processed and the numbers of entities, relations, and tuples in the final \textsc{CGraph}.
Since the tuples extracted from text are very noisy\footnote{SEMAFOR has around $62\%$ accuracy on a held-out set.}, we construct the causal graph by linking the tuples with string matching and filter out noisy nodes and edges based on graph statistics.
We filter out nodes with very high degree, which are mostly stop-words or auto-generated sentences.
Sentences that are too long or too short are also filtered out.
Table~\ref{tab:causalgraph} shows (cause, relation, effect) tuples with manually annotated categories such as \textit{General}, \textit{Company}, \textit{Country}, and \textit{People}.
\section{Causal Reasoning}\label{sec:reasoning}
To generate a causal explanation using \textsc{CGraph}, we need to traverse the graph to find paths between given source and target events.
This section describes how to traverse the graph efficiently by expanding entities with an external knowledge base, and how to find (or generate) appropriate causal paths to suggest an explanation using symbolic and neural reasoning algorithms.
\subsection{Entity Expansion with Knowledge Base}
A simple choice for traversing a graph is a traditional graph search algorithm such as Breadth-First Search (BFS). However, the graph search procedure is likely to be incomplete (\textit{low recall}), because simple string matching is insufficient to match an effect to all its related entities; it misses cases where an entity is semantically related but has a lexically different name.
To address the \textit{low recall} problem and generate better explanations, we propose the use of a knowledge base to augment our text-based causal graph with real-world semantic knowledge.
We use Freebase~\cite{freebase} as the external knowledge base for this purpose.
Among the $1.9$ billion edges in the original Freebase dump, we collect the first- and second-hop neighbours of each target event.
While our \textsc{CGraph}~is lexical in nature, Freebase entities appear as identifiers (MIDs).
For entity linking between the two knowledge graphs, we annotate Freebase entities with their lexical names by looking at their wiki URLs.
We refer to the edges from the Freebase expansion as \textit{KB-KB} edges, and link the \textit{KB-KB} edges with our \textsc{CGraph}~using lexical matching, referring to the resulting edges as \textit{KBcross} edges (see Table~\ref{tab:graphstats} for the number of edges).
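The lexical linking itself is straightforward; a minimal sketch (lowercased exact-name matching only, with fuzzy matching and disambiguation omitted) is:
\small
\begin{verbatim}
# Toy KBcross construction: link Freebase MIDs to
# CGraph nodes by (lowercased) lexical name match.
def link_kb(cgraph_nodes, mid_to_name):
    by_name = {}
    for node in cgraph_nodes:
        by_name.setdefault(node.lower(), []).append(node)
    return [(mid, node)
            for mid, name in mid_to_name.items()
            for node in by_name.get(name.lower(), [])]
\end{verbatim}
\normalsize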
\subsection{Symbolic Reasoning}
Simple traversal algorithms such as an unconstrained BFS are infeasible on the \textsc{CGraph}~due to the large number of nodes and edges.
To reduce the search space $k$ in $e_{t} \mapsto \{e_{t+1}^1, ...e_{t+1}^k\}$, we restrict the search by the depth of paths, the length of entity names, and edge weights.
\begin{algorithm}[h]
\caption{ Backward Causal Inference. $y$ is the target event, $d$ is the depth of BFS, $l$ is the lag size, $BFS_{back}$ performs one level of breadth-first search in the backward direction, and $\sum_l\mathbf{C}$ is the sum of Granger causality over the lags. \label{alg:inference}}
\begin{algorithmic}[1]
\State $\mathbb{S} \gets {\textit{y}}$, $d = 0 $
\While {($\mathbb{S} \neq \emptyset$) and ($d \leq D_{max} $)}
\State $\{e_{-d}^1, ...e_{-d}^k\} \gets BFS_{back}(\mathbb{S})$
\State $d = d + 1$, $\mathbb{S} \gets \emptyset $
\For{\texttt{$j$ in $\{1,...,k\}$}}
\If {\textit{$\sum_l\mathbf{C}(y,e_{-d}^j,l) \geq \epsilon$}} $\mathbb{S} \gets \mathbb{S} \cup \{e_{-d}^j\} $
\EndIf
\EndFor
\EndWhile
\end{algorithmic}
\end{algorithm}
For more efficient inference, we propose a backward algorithm that searches for potential causes (instead of effects), $\{e_{t}^1, ...e_{t}^k\} \mapsfrom e_{t+1}$, starting from the target node $y = e_{t+1}$ using breadth-first search (BFS).
It keeps searching backward until a node $e_{i}^j$ has too low a Granger causality confidence with the target node $y$ (see Eq.~(\ref{eq:causality}) for the causality calculation).
This is only possible because our system has a temporal causality measure between two time series events.
See Algorithm~\ref{alg:inference} for details.
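For illustration, a minimal Python sketch of this backward search follows; the \texttt{parents} and \texttt{confidence} callables are hypothetical stand-ins for the \textsc{CGraph}~edge lookup and the summed Granger score of Eq.~(\ref{eq:causality}):
\small
\begin{verbatim}
# Toy backward inference: expand candidate causes
# breadth-first while Granger confidence stays high.
from collections import deque

def backward_infer(target, parents, confidence,
                   eps=0.1, max_depth=4):
    frontier, chains, d = deque([[target]]), [], 0
    while frontier and d < max_depth:
        nxt = deque()
        for chain in frontier:
            grew = False
            for cause in parents(chain[0]):
                if confidence(cause) >= eps:
                    nxt.append([cause] + chain)
                    grew = True
            if not grew:
                chains.append(chain)  # chain stops here
        frontier, d = nxt, d + 1
    chains.extend(frontier)
    return chains  # cause -> ... -> target paths
\end{verbatim}
\normalsize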
\subsection{Neural Reasoning}
While symbolic inference is fast and straightforward, the sparsity of edges may make our inference semantically poor.
To address this \textit{lexical sparseness}, we propose a lexically relaxed reasoning method using a neural network.
Inspired by the recent success of alignment tasks such as machine translation~\cite{bahdanau2014neural}, our model learns the causal alignment between a cause phrase and an effect phrase for each type of relation between them.
Rather than traversing the \textsc{CGraph}, our neural reasoner uses \textsc{CGraph}~as a training resource.
The encoder, a recurrent neural network such as an LSTM~\cite{hochreiter1997long}, takes the cause phrase, while the decoder, another LSTM, takes the effect phrase with relation-specific attention.
In the original attention model~\cite{bahdanau2014neural}, the context vector $c$ is computed as $c_i = \sum_j a_{ij} h_j$, where $h_j$ is the hidden state of the causal sequence at time $j$ and $a_{ij}$ is a soft attention weight, trained by a feed-forward network $a_{ij} = FF (h_j, s_{i-1})$ between the input hidden state $h_j$ and the output hidden state $s_{i-1}$.
The global attention matrix $a$, however, tends to mix up the local alignment patterns of the individual relations.
For example, a tuple,
\textit{\small{(north korea (Agent)
$\xmapsto[(Cause\_to\_make\_progress)]{developing}$ nuclear weapons (Project))}},
differs from another tuple,
\textit{\small{(chrome (Item) $\xmapsto[(Cause \_change\_of\_position)]{promotes}$ google (Agent))}}, in terms of the local type of causality.
To handle such \textit{local attention}, we decompose the attention weight $a_{ij}$ via a relation-specific transformation in the feed-forward network:
\begin{align*}
&a_{ij} = FF (h_j, s_{i-1}, r)
\end{align*}
where $FF$ has a relation-specific hidden layer and $r \in R$ is a relation type from the distinct set of relations $R$ in the training corpus (see Figure~\ref{fig:proposedmodel}).
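A minimal NumPy sketch of this relation-specific scorer follows (illustrative shapes and parameter names; training code omitted):
\small
\begin{verbatim}
# Toy relation-specific attention: each relation r
# selects its own hidden layer W_rel[r] in FF.
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def rel_attention(H, s_prev, r, W_rel, v):
    # H: (T, d) encoder states; s_prev: (d,);
    # W_rel[r]: (d_a, 2d); v: (d_a,)
    scores = np.array([
        v @ np.tanh(W_rel[r] @ np.concatenate([h, s_prev]))
        for h in H])
    a = softmax(scores)  # relation-aware weights a_ij
    return a @ H         # context c_i = sum_j a_ij h_j
\end{verbatim}
\normalsize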
\begin{figure}[t]
\centering
\includegraphics[trim=2.2cm 5.5cm 2.1cm 5cm,clip,width=.96\linewidth]{figs/srs.pdf}
\caption{\label{fig:proposedmodel} Our neural reasoner. The encoder takes cause phrases and the decoder takes effect phrases, learning the causal alignment between them. The MLP layer in the middle takes different types of FrameNet relations and locally attends the cause to the effect w.r.t.\ the relation (e.g., ``because of'', ``led to'', etc.).}
\end{figure}
Since training only on our causal graph may not be rich enough to deal with the lexical variation in text, we use pre-trained word embeddings such as word2vec~\cite{mikolov2013distributed}, trained on the GoogleNews corpus\footnote{https://code.google.com/archive/p/word2vec/}, for initialization.
For example, given a cause phrase \textit{weapon equipped}, our model can generate multiple effect phrases with their likelihoods: \textit{($\xmapsto[0.54]{result}$war)}, \textit{($\xmapsto[0.12]{force}$army reorganized)}, etc., even though no exactly matching tuples exist in \textsc{CGraph}.
We train our neural reasoner in either the forward or the backward direction.
At prediction time, the decoder infers by predicting the effect (or cause) phrase in the forward (or backward) direction.
As described in Algorithm~\ref{alg:inference}, the backward inference continues predicting the previous causal phrases until it has a high enough Granger confidence with the target event.
\section{Experiment}\label{sec:experiment}
\textbf{Data}. We collect on-line social media data from tweets, news articles, and blogs.
Our Twitter data contains one million tweets per day from 2008 to 2013, crawled using Twitter's Garden Hose API.
The news and blog datasets were crawled from 2010 to 2013 using Google's news API.
For target time series, we collect companies' stock prices on NASDAQ and NYSE from 2001 until the present for 6,200 companies.
For presidential election polls, we collect polling data for the 2012 presidential election from 6 different websites, including USA Today, Huffington Post, Reuters, etc.
\begin{table}[t]
\centering
\setlength\tabcolsep{2pt}
\caption{\label{tab:dynamics} Examples of $F_{words}$ with their temporal dynamics: Shannon entropy, mean, standard deviation, slope of peak, and number of peaks.}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{@{}r|cccccc@{}}
\toprule
& \textbf{entropy} & \textbf{mean} & \textbf{STD} & \textbf{max\_slope} & \textbf{\#-peaks} \\
\midrule
\#lukewilliamss & 0.72 & 22.01 & 18.12 & 6.12 & 31 \\
happy\_thanksgiving & 0.40 & 61.24& 945.95 &3423.75 &414 \\
michael\_jackson & 0.46 &141.93 &701.97 &389.19 &585 \\
\bottomrule
\end{tabular}
}
\end{table}
\textbf{Features}. For the N-gram word features $F_{words}$, we choose spiking words based on their temporal dynamics (see Table~\ref{tab:dynamics}).
For example, if a word is too frequent or its time series is too bursty, the word is filtered out because the trend is too general to be an event.
We choose five types of temporal dynamics: Shannon entropy, mean, standard deviation, maximum slope of peak, and number of peaks;
we delete words that have too low or too high entropy, too low a mean and deviation, or whose number of peaks and slope are below a certain threshold.
We also filter out words whose frequency is less than five.
From the $1,677,583$ original words, we retain $21,120$ words as final candidates for $F_{words}$, including uni-gram and bi-gram words.
For the sentiment $F_{senti}$ and topic $F_{topic}$ features, we generate 50 topics for politicians and companies separately using LDA, and then use the top 10 words of each topic to calculate the topic's sentiment score.
We can then analyze the causality between the sentiment series of a specific topic and the collected target time series.
\textbf{Tasks}. To show the validity of our causality detector, we first conduct a random analysis between target time series and randomly generated time series.
Then, we test forecasting stock prices and election poll values with and without the detected textual features, to check the effectiveness of our causal features.
We evaluate the generation ability of our reasoning algorithm against held-out cause-effect tuples using the BLEU metric.
Then, for some companies' time series, we present qualitative results: interesting causal text features found via Granger causation, and explanations generated by our reasoners between the targets and the causal features.
We also conduct a human evaluation of the explanations.
\subsection{Random Causality Analysis}
\begin{figure}[t]
\centering
\subfloat[\footnotesize{$y \xleftarrow[]{lag=3} rf_1, ..., rf_k$ }]{
\fbox{\includegraphics[clip,trim=2.4cm 0.3cm 0.5cm 1.5cm,width=.84\linewidth, height=60px]{figs/{random_GOOGL_False_3_2013-01-01_2013-12-31}.jpg}
}}\\
\subfloat[\footnotesize{$y \xrightarrow[]{lag=3} rf_1, ..., rf_k$ }]{
\fbox{\includegraphics[clip,trim=2.4cm 0.3cm 0.5cm 0.7cm,width=.84\linewidth, height=60px]{figs/{random_GOOGL_True_3_2013-01-01_2013-12-31}.jpg}
}}\\
\caption{\label{fig:random} Random causality analysis of \textbf{Google}'s stock price change ($y$) and randomly generated features ($rf$) during 2013-01-01 to 2013-12-31.
(a) shows how the random features $rf$ cause the target $y$, while (b) shows how the target $y$ causes the random features $rf$, with a lag size of 3 days.
The color changes according to the causality confidence with respect to the target (blue is the strongest, yellow is the weakest).
The target time series is plotted on a scale of prices, while the random features are plotted on a scale of causality degree $\mathbf{C}(y,rf) \in [ 0,1 ]$.}
\end{figure}
To check whether our causality scoring function $\mathbf{C}$ detects temporal causality well, we conduct a random analysis between a target time series and randomly generated time series (see Figure~\ref{fig:random}).
For Google's stock time series, we repeatedly slide a window of size 30 over the timeline and generate five days of time series with a random peak strength using the SpikeM model~\cite{DBLP:conf/kdd/MatsubaraSPLF12a}\footnote{SpikeM has specific parameters for modeling a time series, such as peak strength, length, etc.}.
The color of a random time series $rf$ changes from blue to yellow according to its causality degree with the target, $\mathbf{C}(y,rf)$.
For example, blue indicates the strongest causality with the target time series, while yellow indicates the weakest.
We observe that strongly causal (blue) features are detected just before (or after) the rapid rise of Google's stock price in mid-October in (a) (or in (b)).
With a lag size of three days, we observe that the strength of a random time series gradually decreases as it moves away from the peak of the target event.
This random analysis shows that our causality function $\mathbf{C}$ appropriately finds cause or effect relations between two time series with regard to their strength and distance.
\subsection{Forecasting with Textual Features}\label{sec:forecasting}
\begin{table}[h]
\footnotesize
\caption{\label{tab:forecasting} Forecasting errors (RMSE) on \textbf{Stock} and \textbf{Poll} data with time series only (\textit{SpikeM} and \textit{LSTM}) and with time series plus text features (\textit{random}, \textit{words}, \textit{topics}, \textit{sentiment}, and \textit{composition}).}
\centering
\setlength\tabcolsep{2pt}
\begin{tabular}{r|r|cc|ccccc}
\toprule
\multicolumn{2}{r}{\textit{}} & \multicolumn{2}{c}{\textbf{Time Series}} & \multicolumn{5}{c}{\textbf{Time Series + Text}} \\\hline
\multicolumn{2}{r}{\textit{Step}} & SpikeM & LSTM & $\mathbf{C}_{rand}$ & $\mathbf{C}_{words}$ & $\mathbf{C}_{topics}$ & $\mathbf{C}_{senti}$ & $\mathbf{C}_{comp}$\\
\midrule
\multirow{3}{2mm}{\rotatebox[origin=c]{90}{\textbf{Stock}}}
&1 & 102.13 & 6.80 & 3.63 & 2.97 & 3.01 & 3.34 & \underline{1.96} \\
&3 & 99.8 & 7.51 & 4.47 & 4.22 & 4.65 & 4.87 & \underline{3.78} \\
&5& 97.99 & 7.79 & 5.32 & \underline{5.25} & 5.44 & 5.95 & 5.28 \\
\hline
\multirow{3}{2mm}{\rotatebox[origin=c]{90}{\textbf{Poll}}}
&1 &10.13 & 1.46 &1.52 & 1.27 & 1.59 & 2.09 & \underline{1.11} \\
&3 & 10.63 & 1.89 & 1.84 & 1.56 & 1.88 & 1.94 & \underline{1.49}\\
&5 & 11.13 & 2.04 & 2.15 & 1.84 & 1.88 & 1.96 &\underline{1.82}\\
\bottomrule
\end{tabular}
\end{table}
We use the time series forecasting task to evaluate whether our textual features appropriately cause the target time series.
Our feature composition function $\Phi$ is used to extract good causal features for forecasting.
We test forecasting on companies' stock prices (\textbf{Stock}) and on poll values for the presidential election (\textbf{Poll}).
For the stock data, we collect daily closing stock prices during 2013 for ten IT companies\footnote{Company symbols used: TSLA, MSFT, GOOGL, YHOO, FB, IBM, ORCL, AMZN, AAPL and HPO}.
For the poll data, we choose ten candidate politicians\footnote{Names of politicians used: Santorum, Romney, Paul, Perry, Obama, Huntsman, Gingrich, Cain, Bachmann} in the period of the 2012 presidential election.
\begin{table}[t]
\centering
\small
\caption{\label{tab:beam} Beam search results in neural reasoning. These examples could be filtered out by graph heuristics before generating the final explanation, though.}
\begin{tabular}{@{}l@{}|l@{}}
\toprule
Cause$\mapsto$Effect in \textsc{CGraph} & Beam Predictions\\\hline
\midrule
\specialcell{the dollar's \\$\xmapsto[]{caus}$ against the yen} & \specialcell{$[1]$$\xmapsto[]{caus}$ against the yen\\ $[2]$$\xmapsto[]{caus}$ against the dollar \\ $[3]$$\xmapsto[]{caus}$ against other currencies} \\\hline
\specialcell{without any exercise \\$\xmapsto[]{caus}$ news article} & \specialcell{$[1]$$\xmapsto[]{lead to}$ a difference \\ $[2]$$\xmapsto[]{caus}$ the risk \\ $[3]$$\xmapsto[]{make}$ their weight} \\
\bottomrule
\end{tabular}
\end{table}
For each of the stock and poll datasets, the future trend of the target is predicted either with the target's past time series only, or with the target's past time series plus the past time series of the textual features found by our system.
Forecasting only with the target's past time series uses \textit{SpikeM}~\cite{DBLP:conf/kdd/MatsubaraSPLF12a}, which models a time series with a small number of parameters, and a simple \textit{LSTM}-based time series model~\cite{hochreiter1997long,nnet}.
Forecasting with the target's and textual features' time series uses a Vector AutoRegressive model with exogenous variables (VARX)~\cite{hamilton1994time} with different composition functions: $\mathbf{C}_{random}$, $\mathbf{C}_{words}$, $\mathbf{C}_{topics}$, $\mathbf{C}_{senti}$, and $\mathbf{C}_{composition}$.
Each composition function except $\mathbf{C}_{random}$ uses the top ten textual features that cause each target time series.
We also tested an LSTM with past time series and textual features, but VARX outperforms the LSTM.
Table~\ref{tab:forecasting} shows the root mean square error (RMSE) of forecasting with different step sizes (time steps to predict), different sets of features, and different regression algorithms on the stock and poll data.
The forecasting error is the sum of errors obtained while moving a window (30 days) by 10 days over the period.
Our $\mathbf{C}_{composition}$ method outperforms both the time-series-only models and the other time-series-plus-text models on the stock and poll data.
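For reference, a minimal Python sketch of a one-step VARX forecast in this spirit follows (a single lag and plain least squares, for brevity; not our full pipeline):
\small
\begin{verbatim}
# Toy one-step VARX forecast: regress y_t on lagged y
# and lagged exogenous text features.
import numpy as np

def varx_forecast(y, X, l=1):
    y = np.asarray(y, float)       # (T,)
    X = np.asarray(X, float)       # (T, k)
    Z = np.hstack([y[:-l, None], X[:-l],
                   np.ones((len(y) - l, 1))])
    coef, *_ = np.linalg.lstsq(Z, y[l:], rcond=None)
    z_next = np.concatenate([[y[-1]], X[-1], [1.0]])
    return z_next @ coef           # forecast of y_{T+1}
\end{verbatim}
\normalsize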
\subsection{Generating Causality with Neural Reasoner}
The reasoner needs to predict the next effect phrase (or the previous cause phrase), so the model should be evaluated on a generation task.
We use the BLEU~\cite{papineni2002bleu} metric to evaluate the predicted phrases against held-out phrases in our \textsc{CGraph}.
Since our \textsc{CGraph}~has many edges, there may be many good paths (explanations), which can make our predictions diverse. To evaluate such diversity, we use a ranking-based BLEU on the set of $k$ phrases predicted by beam search.
For example, $B@k$ denotes the BLEU score for generating $k$ sentences, and $B@kA$ denotes their average.
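Concretely, $B@kA$ can be computed as in the following sketch (NLTK's sentence-level BLEU with smoothing; illustrative whitespace tokenization):
\small
\begin{verbatim}
# Toy ranking-based BLEU over top-k beam hypotheses.
from nltk.translate.bleu_score import (
    sentence_bleu, SmoothingFunction)

def bleu_at_k_avg(reference, beam_hyps, k=3):
    smooth = SmoothingFunction().method1
    ref = [reference.split()]
    scores = [sentence_bleu(ref, h.split(),
                            smoothing_function=smooth)
              for h in beam_hyps[:k]]
    return sum(scores) / len(scores)
\end{verbatim}
\normalsize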
Table~\ref{tab:beam} shows some examples of our beam search results for $k=3$.
Given a cause phrase, the neural reasoner sometimes predicts semantically similar phrases (e.g., \textit{against the yen}, \textit{against the dollar}), while at other times it predicts very diverse phrases (e.g., \textit{a difference}, \textit{the risk}).
Table~\ref{tab:bleu} shows the BLEU ranking results for different reasoning algorithms: \textbf{S2S} is a sequence-to-sequence model trained on \textsc{CGraph}~by default, \textbf{S2S+WE} adds word embedding initialization, and \textbf{S2S+WE+REL} adds relation-specific attention.
Initializing with pre-trained word embeddings (\textbf{+WE}) helps improve the predictions.
Our relation-specific attention model outperforms the others, indicating that different types of relations have different alignment patterns.
\begin{table}[t]
\centering
\caption{\label{tab:bleu} BLEU ranking.
Additional word representations (\textbf{+WE}) and relation-specific alignment (\textbf{+REL}) help the model learn the cause and effect generation task, especially for diverse patterns.}
\begin{tabular}{@{}l|c|c|c}
\toprule
&B@1 & B@3A & B@5A\\
\midrule
\textbf{S2S} & 10.15 & 8.80 & 8.69 \\
\textbf{S2S + WE} & 11.86 & 10.78 & 10.04 \\
\textbf{S2S + WE + REL} & 12.42 & 12.28 & 11.53 \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Generating Explanation by Connecting}
\begin{table*}[h]
\centering
\caption{\label{tab:explanation} Example causal chains for explaining the rise ($\uparrow$) and fall ($\downarrow$) of companies' stock prices. The temporally causal {\color{Sepia}$feature$} and {\color{BlueViolet}$target$} are linked through a sequence of predicted cause-effect tuples by different reasoning algorithms: a symbolic graph traversal algorithm \textit{SYMB} and a neural causality reasoning model \textit{NEUR}.}
\resizebox{\linewidth}{!}{%
\begin{tabular}{@{}c|l@{}}
\toprule
\parbox[t]{1.0mm}{\multirow{3}{*}{\rotatebox[origin=c]{90}{\textit{SYMB}}}}
& \textbf{\color{Sepia}medals} $\xmapsto[]{match}$ gold\_and\_silver\_medals $\xmapsto[]{swept}$ korea $\xmapsto[]{improving}$ relations $\xmapsto[]{widened}$ gap $\xmapsto[]{widens}$ \textbf{\color{BlueViolet}facebook} $\uparrow$ \\
& \textbf{\color{Sepia}excess}$\xmapsto[]{match}$excess\_materialism$\xmapsto[]{cause}$people\_make\_films$\xmapsto[]{make}$money $\xmapsto[]{changed}$ twitter $\xmapsto[]{turned} $\textbf{\color{BlueViolet}facebook} $\downarrow$\\
& \textbf{\color{Sepia}clinton} $\xmapsto[]{match}$president\_clinton $\xmapsto[]{raised}$antitrust\_case $\xmapsto[]{match}$government's\_antitrust\_case\_against\_microsoft $\xmapsto[]{match}$microsoft $\xmapsto[]{beats}$\textbf{\color{BlueViolet}apple} $\downarrow$\\
\hline
\parbox[t]{1.0mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{\textit{NEUR}}}}
& \textbf{\color{Sepia}google} $\xmapsto[]{forc}$ microsoft\_to\_buy\_computer\_company\_dell\_announces\_recall\_of\_batteries $\xmapsto[]{cause}$ \textbf{\color{BlueViolet}microsoft} $\uparrow$\\
& \textbf{\color{Sepia}the\_deal} $\xmapsto[]{make}$ money $\xmapsto[]{rais}$ at\_warner\_music\_and\_google\_with\_protest\_videos\_things $\xmapsto[]{caus}$ \textbf{\color{BlueViolet}google} $\downarrow$\\
& \textbf{\color{Sepia}party} $\xmapsto[]{cut}$ budget\_cuts$\xmapsto[]{lower}$ budget\_bill$\xmapsto[]{decreas}$ republicans$\xmapsto[]{caus}$ obama$\xmapsto[]{lead to}$ facebook\_polls$\xmapsto[]{caus}$ \textbf{\color{BlueViolet}facebook} $\downarrow$\\
& \textbf{\color{Sepia}company} $\xmapsto[]{forc}$ to\_stock\_price $\xmapsto[]{lead to}$ investors $\xmapsto[]{increas}$ oracle\_s\_stock $\xmapsto[]{increas}$ \textbf{\color{BlueViolet}oracle} $\uparrow$
\\
\bottomrule
\end{tabular}
}
\end{table*}
Evaluating whether a sequence of phrases is reasonable as an explanation is a very challenging task.
Unfortunately, due to the lack of quantitative evaluation measures for this task, we conduct a human annotation experiment.
Table~\ref{tab:explanation} shows example causal chains for the rise ($\uparrow$) and fall ($\downarrow$) of companies' stock prices, produced by the two reasoners: \textit{SYMB}, the symbolic reasoner, and \textit{NEUR}, the neural reasoner.
\begin{table}[h]
\centering
\caption{\label{tab:eval} Human evaluation of explanation chains generated by the symbolic and neural reasoners.}
\begin{tabular}{r|c|c}
\toprule
\textbf{Reasoners} &SYMB & NEUR\\
\midrule
\textbf{Accuracy (\%)}& 42.5 & 57.5 \\
\bottomrule
\end{tabular}
\end{table}
We also conduct a human assessment of the explanation chains produced by the two reasoners, asking people to choose the more convincing explanation chain for each feature-target pair.
Table~\ref{tab:eval} shows the relative preferences.
\section{Related Work}\label{sec:related}
Prior work on causality detection~\cite{acharya2014causal,websummary,qiu2012granger} in time series data (e.g., gene sequences, stock prices, temperature) mainly uses Granger causality~\cite{granger1988some}: the ability to predict future values of a time series using past values of its own and of another time series.
\cite{hlavavckova2007causality} provides a more theoretical investigation of measuring causal influence in multivariate time series based on entropy and mutual information estimation.
However, none of these works attempts to generate explanations of the temporal causality.
Previous work on textual causality detection uses syntactic patterns such as $X \xmapsto[]{verb} Y$, where the $verb$ is causative~\cite{girju2003automatic,riaz2013toward,kozareva2012cause,do2011minimally}, with additional features~\cite{blanco2008causal}.
\cite{kozareva2012cause} extracted cause-effect relations, where the pattern for bootstrapping has the form $X^* \xmapsto[Z^*]{verb} Y$, from which the terms $X^*$ and $Z^*$ were learned.
Such syntax-based approaches, however, are not robust to semantic variation.
As part of SemEval~\cite{girju2007semeval}, \cite{mirza2016catena} also uses syntactic causative patterns~\cite{mirza2014analysis} and a supervised classifier to achieve state-of-the-art performance.
Extracting cause-effect tuples with such syntactic features or temporality~\cite{bethard2008building} is a natural next step toward better causal graph construction.
\cite{grivaz2010human} conducts a very insightful annotation study of which features are used in human reasoning about causation.
Beyond the linguistic tests and causal chains for explaining causality in our work, other features such as counterfactuality, temporal order, and ontological asymmetry remain future directions to study.
Textual entailment also seeks a directional relation between two given text fragments~\cite{dagan2006pascal}.
Recently, \cite{rocktaschel2015reasoning} developed an attention-based neural network method, trained on a large corpus of annotated textual-entailment pairs, for classifying the types of relations, with decomposable attention~\cite{parikh2016decomposable} or sequential tree structures~\cite{chen2016enhancing}.
However, the dataset~\cite{bowman2015large} used for training entailment models covers just three categories, \textit{contradiction}, \textit{neutral}, and \textit{entailment}, and focuses on relatively simple lexical and syntactic transformations~\cite{kolesnyk2016generating}.
Our causal explanation generation task is also similar to \textit{future scenario generation}~\cite{hashimoto2014toward,hashimoto2015generating}; however, their scoring function relies on heuristic filters and is not robust to lexical variation.
\section{Conclusion}\label{sec:conclusion}
This paper defines the novel task of detecting and explaining the causes of a time series from text.
First, we detect causal features in online text. Then, we construct a large cause-effect graph using FrameNet semantics. By training our relation-specific neural network on paths from this graph, our model generates causality with richer lexical variation. We can produce a chain of cause-effect pairs as an explanation, which our human evaluation indicates has some degree of plausibility. Incorporating aspects such as time, location, and other event properties remains a point for future work.
In follow-up work, we plan to collect a sequence of causal chains verified by domain experts, enabling a more rigorous evaluation of generated explanations.
\section{Introduction}
When a high-quality direct semiconductor 2D quantum well (QW) is placed inside an optical microcavity, the strong coupling of photons and QW excitations gives rise to a new quasiparticle: the polariton.
The properties of this fascinating half-light, half-matter particle strongly depend on the nature of the involved matter excitations.
If the Fermi energy is in the semiconductor band gap, the matter excitations are excitons. This case is theoretically well understood \cite{carusotto2013quantum,byrnes2014exciton}, and the first observation of the
resulting microcavity exciton-polaritons was already accomplished in 1992 by Weisbuch
\textit{et al.}\ \cite{PhysRevLett.69.3314}. Several studies on exciton-polaritons revealed remarkable results. For example, exciton-polaritons can form a Bose-Einstein condensate \cite{Kasprzak2006}, and were proposed as a
platform for high-$T_c$ superconductivity \cite{PhysRevLett.104.106402}.
The problem gets more involved if the Fermi energy is above the conduction band bottom, i.e., a conduction band Fermi sea is present. Then the matter excitations have a complex many-body structure, arising from the complementary phenomena of Anderson orthogonality~\cite{Anderson1967} and the Mahan exciton effect, entailing the Fermi-edge singularity~\cite{PhysRev.163.612,PhysRev.178.1072,NOZIERES1969,
PhysRev.178.1097,combescot1971infrared}.
An experimental study of the resulting ``Fermi-edge polaritons'' in a GaAs QW was first conducted in 2007
by Gabbay \textit{et al.}~\cite{PhysRevLett.99.157402}, and subsequently extended by Smolka \textit{et al.}~\cite{Smolka} (2014). A similar experiment on transition metal dichalcogenide monolayers was recently published by Sidler \textit{et al.}~\cite{sidler2017fermi} (2016).
From the theory side, Fermi-edge polaritons have been investigated in Ref.~\cite{PhysRevB.76.045320, PhysRevB.89.245301}. However, in these works only the case of infinite valence band hole mass was considered, which is the standard assumption in the Fermi-edge singularity or X-ray edge problem. Such a model is valid for low-mobility samples only and thus fails to explain the experimental findings in~\cite{Smolka}: there, a high-mobility sample was
studied, for which an almost complete vanishing of the polariton splitting was reported. Some consequences of a finite hole mass for polaritons were considered in a recent treatment~\cite{baeten2015mahan}, but without fully accounting for the so-called crossed diagrams that describe the Fermi sea shakeup, as we further elaborate below.
The aim of the present paper is therefore to study the effects of both finite mass and Fermi-edge singularity on polariton spectra in a systematic fashion. This is done analytically for a simplified model involving a contact interaction, which nethertheless preserves the qualitative features of spectra stemming from the finite hole mass and the presence of a Fermi sea. In doing so, we distinguish two regimes, with the Fermi energy $\mu$ being either much smaller or much larger than the exciton binding energy $E_B$.
For the regime where the Fermi energy is much larger than the exciton binding energy, $\mu \gg E_B$, several treatments of finite-mass effects on the Fermi-edge singularity alone (i.e., without polaritons) are available, both analytical and numerical. Without claiming completeness, we list~\cite{gavoret1969optical,PhysRevB.44.3821,
PhysRevLett.65.1048,PhysRevB.35.7551, Nozi`eres1994}. In our work we have mainly followed the approach of
Ref.~\cite{gavoret1969optical}, extending it by going from 3D to 2D and, more importantly, by addressing the cavity coupling which gives rise to polaritons.
For infinite hole mass the sharp electronic spectral feature caused by the Fermi edge singularity can couple with the cavity mode to create sharp polariton-type spectral peaks~\cite{PhysRevB.76.045320, PhysRevB.89.245301}.
We find that the finite hole mass cuts off the Fermi edge singularity and suppresses these polariton features.
In the opposite regime of $\mu \ll E_B$, where the Fermi energy is much smaller than the exciton binding energy, we are not aware of any previous work addressing the modification of the Fermi-edge singularity due to finite mass.
Here, we propose a way to close this gap using a diagrammatic approach. Interestingly, we find that in this regime the excitonic singularities are not cut off, but are rather enhanced by finite hole mass, in analogy to the heavy valence band hole propagator treated in~\cite{PhysRevLett.75.1988}.
This paper has the following structure: First, before embarking into technical details, we will give an intuitive overview of the main results in Sec.~\ref{Pisummarysec}. Detailed computations will be performed in subsequent sections:
In Sec.~\ref{Model sec}, the full model describing the coupled cavity-QW system is presented. The key quantity that determines its optical properties is the cavity-photon self-energy $\Pi$, which we will approximate by the electron-hole correlator in the absence of a cavity. Sec.~\ref{Photon self-energy zero mu sec} shortly recapitulates how $\Pi$ can be obtained in the regime of vanishing Fermi energy, for infinite and finite hole masses. Then we turn to the many-body problem in the presence of a Fermi sea in the regimes of small (Sec.~\ref{Photon self-energy small mu sec}) and large Fermi energy (Sec.\ref{Photon self-energy large mu sec}). Using the results of the previous sections, polariton properties are addressed in Sec.~\ref{Polariton properites sec}. Finally, we summarize our findings and list several possible venues for future study in Sec.~\ref{Conclusion sec}.
\section{Summary of results}
\label{Pisummarysec}
In a simplified picture, polaritons arise from the hybridization of two quantum excitations with energies close to each other, the cavity photon and a QW resonance~\cite{carusotto2013quantum,byrnes2014exciton}. The resulting energy spectrum consists of two polariton branches with an avoided crossing, whose light and matter content are determined by the energy detuning of the cavity mode from the QW mode.
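As a minimal illustration (a bare two-mode sketch that ignores the many-body structure developed in this paper; the coupling $\Delta$ anticipates the notation of Sec.~\ref{Polariton properites sec}, and $\omega_x$ denotes a single QW mode), one may diagonalize a cavity mode $\omega_c$ coupled to $\omega_x$,
\begin{align}
\begin{pmatrix}
\omega_c & \Delta \\
\Delta & \omega_x
\end{pmatrix}
\quad \Rightarrow \quad
E_{\pm} = \frac{\omega_c + \omega_x}{2} \pm \sqrt{\left(\frac{\omega_c - \omega_x}{2}\right)^2 + \Delta^2},
\end{align}
which yields two polariton branches with an avoided crossing of size $2\Delta$ at zero detuning, $\omega_c = \omega_x$.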
While the cavity photon can be approximated reasonably by a bare mode with quadratic dispersion and a Lorentzian broadening due to cavity losses, the QW resonance has a complicated structure of many-body origin. The QW optical response function is rather sensitive to nonzero density of conduction band (CB) electrons. Roughly, it tends to broaden QW spectral features, which contribute to the spectral width of polariton lines.
A more detailed description of the polariton lines requires finding first the optical response function $\Pi({\textbf{Q}}, \Omega)$ of the QW alone (without polaritons).
Here, ${\textbf{Q}}$ and $\Omega$ are, respectively, the momentum and the energy of an incident photon probing the optical response. The imaginary part of $\Pi({\textbf{Q}},\Omega)$, $A({\textbf{Q}}, \Omega) = -\text{Im}\left[\Pi({\textbf{Q}}, \Omega)\right]/\pi$, defines the spectral function of particle-hole excitations in the QW. In the following, we discuss the evolution of $A({\textbf{Q}}, \Omega)$ as the chemical potential $\mu$ is varied, concentrating on the realistic case of a finite ratio of the electron and hole masses. We assume that the temperature is low, and consider the zero-temperature limit in the entire work. In addition, we will limit ourselves to the case where the photon is incident perpendicular to the QW, i.e.\ its in-plane momentum is zero, and study $A(\Omega) \equiv A(Q=0, \Omega)$.
In the absence of free carriers ($\mu$ is in the gap), a CB electron and a hole in the valence band (VB) create a hydrogen-like spectrum of bound states. In the case of a QW it is given by the 2D Elliot formula (see, e.g.,~\cite{haug1990quantum}). Being interested in the spectral function close to the main exciton resonance, we replace the true Coulomb interaction by a model of short-ranged interaction potential of strength $g$ [see Eqs.~(\ref{Helectronic}) and (\ref{gdef})].
As a result, there is a single bound state at an energy $E_G - E_B(g)$, which we identify with the the lowest-energy exciton state. Here, $E_G$ is the VB-CB gap, and energies are measured with respect to the minimum of the conduction band.
A sketch of $A(\Omega)$ is shown in Fig.~\ref{mahanexciton1}.
\begin{figure}[H]
\centering
\includegraphics[width=\columnwidth]{fig1.pdf}
\caption{(Color online) Absorption spectrum for short-range electron-hole interaction and $\mu<0$, given by the imaginary part of Eq.\ (\ref{ladderseries}). }
\label{mahanexciton1}
\end{figure}
For $\mu>0$, electrons start to populate the CB. If the chemical potential lies within the interval $0<\mu \ll E_B$, then the excitonic Bohr radius $r_B$ remains small compared to the Fermi wavelength $\lambda_F$ of the electron gas, and the exciton is well defined. Its interaction with the particle-hole excitations in the CB modifies the spectral function $A(\Omega)$ in the vicinity of the exciton resonance. The limit of an infinite hole mass was considered
by Nozi\`{e}res \textit{et al.}~\cite{PhysRev.178.1072, NOZIERES1969, PhysRev.178.1097}: Due to particle-hole excitations of the CB Fermi sea, which can happen at infinitesimal energy cost, the exciton resonance is replaced by a power law spectrum, see inset of Fig.\ \ref{finmasssmallmu1}.
In terms of the detuning from the exciton threshold,
\begin{align}
\omega = \Omega - \Omega_T^{\text{exc}} \ , \quad \Omega_T^{\text{exc}} = E_G + \mu - E_B,
\end{align}
the spectral function, $A_{\text{exc}}(\omega) = - \text{Im}\left[\Pi_{\text{exc}}(\omega)\right]/\pi$, scales as:
\begin{align}
\label{Aexcsummary}
A_{\text{exc}}(\omega)\bigg|_{M = \infty} \sim \theta(\omega) \frac{E_B}{\omega} \left(\frac{\omega} {\mu}\right)^{\alpha^2 }, \quad
\omega \ll \mu.
\end{align}
The effective exciton-electron interaction parameter $\alpha$ was found by Combescot \textit{et al.}~\cite{combescot1971infrared}, making use of final-state Slater determinants.
In their work, $\alpha$ is obtained in terms of the scattering phase shift $\delta$ of Fermi level electrons off the hole potential, in the presence of a bound state, as $\alpha = |\delta/\pi-1|$. For the system discussed here this gives \cite{adhikari1986quantum}:
\begin{align}
\alpha = 1/\left|\ln\left(\frac{\mu}{E_B}\right)\right|.
\label{excitonleadingbehaviour}
\end{align}
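To give a sense of scale (an illustrative number, not tied to a specific sample): for $\mu/E_B = 0.1$, Eq.~(\ref{excitonleadingbehaviour}) yields $\alpha = 1/\ln(10) \approx 0.43$, so the exponent in Eq.~(\ref{Aexcsummary}) is $\alpha^2 \approx 0.19$.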
We re-derive the result for $\alpha$ diagrammatically (see Sec.~\ref{Photon self-energy small mu sec}), in order to extend the result of Combescot \textit{et al.}\ to the case of a small but nonzero CB electron-VB hole mass ratio $\beta$, where
\begin{align}
\beta = m/M.
\end{align}
While the deviation of $\beta$ from zero does not affect the effective interaction constant $\alpha$, it brings qualitatively new features to $A(\Omega)$, illustrated in Fig.\ \ref{finmasssmallmu1}. The origin of these changes is found in the kinematics of the interaction of the exciton with the CB electrons. Momentum conservation for finite exciton mass results in phase-space constraints for the CB particle-hole pairs which may be excited in the process of exciton creation. As a result, the effective density of states $\nu(\omega)$ of the pairs with pair energy $\omega$ (also corresponding to the exciton decay rate) is reduced from $\nu(\omega) \sim \omega$ at $\beta=0$ \cite{combescot1971infrared} to $\nu(\omega) \sim \omega^{3/2}$ when $\omega$ is small compared to the recoil energy $E_R=\beta\mu$. A smaller density of states for pairs leads to a reduced transfer of the spectral weight to the tail; therefore, the delta function singularity at the exciton resonance survives the interaction with CB electrons, i.e.\ $\beta >0$ tends to restore the exciton pole, and one finds:
\begin{subequations}
\label{bothfives}
\begin{align}
\label{Excgeneral}
&A_{\text{exc}}(\omega)\bigg|_{M<\infty} = A_{\text{exc,incoh.}}(\omega) \theta(\omega) + \beta^{\alpha^2} E_B \delta(\omega),
\\ \notag \\
&A_{\text{exc,incoh.}}(\omega) \sim E_B
\label{Exccases}
\begin{cases}
\frac{\alpha^2}{\sqrt{\omega \beta\mu}} \beta^{\alpha^2} \quad & \omega \ll \beta\mu \\
\frac{\alpha^2}{\omega} \left(\frac{\omega}{\mu}\right)^{\alpha^2} \quad &\beta\mu \ll \omega \ll \mu.
\end{cases}
\end{align}
\end{subequations}
The main features of this spectral function are summarized in Fig.\ \ref{finmasssmallmu1}:
As expected, the exciton recoil only plays a role for small frequencies $\omega \ll \beta\mu$, while the infinite mass edge singularity is recovered for larger frequencies. The spectral weight of the delta peak is suppressed by the interaction. For $\beta \rightarrow 0$ and $\alpha \ne 0$, we recover the infinite mass result, where no coherent part shows up. If, on the opposite, $\alpha^2 \rightarrow 0$ but $\beta \neq 0$, the weight of the delta peak goes to one: The exciton does not interact with the Fermi sea, and its spectral function becomes a pure delta peak, regardless of the exciton mass. A partial survival of the coherent peak at $\alpha,\beta\neq 0$ could be anticipated from the results of Rosch and Kopp~\cite{PhysRevLett.75.1988} who considered the motion of a heavy particle in a Fermi gas of light particles. This problem was also analyzed by Nozi\`eres \cite{Nozi`eres1994}, and the coherent peak can be recovered by Fourier transforming his time domain result for the heavy particle Green's function.
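The quoted reduction of the pair density of states follows from a simple phase-space estimate (a sketch, using the standard small-momentum form of the 2D particle-hole susceptibility, $\text{Im}\,\chi(q,\epsilon) \propto \epsilon/(v_F q)$ for $\epsilon \ll v_F q$, with $v_F$ the CB Fermi velocity): the recoiling exciton absorbs the pair momentum $q$ at energy cost $q^2/2M_\text{exc}$, so that
\begin{align}
\nu(\omega) &\propto \int d^2q\ \text{Im}\,\chi\!\left(q,\, \omega - \frac{q^2}{2M_\text{exc}}\right) \notag \\
&\propto \int_{0}^{\sqrt{2M_\text{exc}\omega}}\! dq\ \frac{\omega - q^2/2M_\text{exc}}{v_F} \propto \frac{\sqrt{M_\text{exc}}}{v_F}\, \omega^{3/2},
\end{align}
valid for $\omega \ll \beta\mu$; without the recoil term, the $q$ integration is instead cut off at $\sim k_F$, and one recovers $\nu(\omega) \propto \omega$.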
At this point, let us note the following: for $\mu > 0$, the hole can bind two electrons with opposite spin, giving rise to trion features in the spectrum. We will not focus on those, since, for weak doping, their spectral weight is small in $\mu$ (more precisely, in $\mu/E_T$, where $E_T \ll E_B$ is the trion binding energy), and they are red-detuned w.r.t.\ the spectral features highlighted in this work. In the regime of $\mu\gg E_B \gg E_T$, trions should be negligible as well. Some further discussion of trion properties can be found in Appendix \ref{trion-contribution}.
\begin{figure}[H]
\centering
\includegraphics[width=\columnwidth]{fig2.pdf}
\caption{(Color online) Absorption for $\mu \ll E_B$ and finite hole mass, illustrating Eq.\ (\ref{bothfives}). The full green curve shows the delta peak (broadened for clarity), while the dashed blue line is the incoherent part. Frequencies are measured from the exciton threshold frequency $\Omega_T^{\text{exc}} = E_G + \mu - E_B$. The inset shows the infinite mass spectrum for comparison. The dashed region in the inset indicates the continuous part of the spectrum, whose detailed form is beyond the scope of this paper, as we only consider the leading singular parts of all spectra.}
\label{finmasssmallmu1}
\end{figure}
Upon increase of the chemical potential $\mu$, the CB continuum part (inset of Fig.\ \ref{finmasssmallmu1}) starts building up into the well-known Fermi-edge singularity (FES) at the Burstein-Moss \cite{PhysRev.93.632,moss1954interpretation} shifted threshold, $\Omega_T^{\text{FES}} = E_G + \mu$. For finite mass ($\beta \neq 0$), the FES will however be broadened by recoil effects (see below). At the same time, the delta function singularity of Eq.\ (\ref{Excgeneral}) at the absorption edge vanishes at some value of $\mu$. So, at higher electron densities, it is only the FES which yields a nonmonotonic behavior of the absorption coefficient, while the absorption edge is described by a converging power law with fixed exponent, see Eq.\ (\ref{AFES}). This evolution may be contrasted to the one at $\beta=0$: according to \cite{combescot1971infrared,PhysRevB.35.7551}, the counterparts of the absorption edge and the broadened FES are two power-law nonanalytic points of the spectrum, which are present at any $\mu$ and characterized by exponents evolving continuously with $\mu$.
A more detailed discussion of the evolution of absorption spectra as $\mu$ increases from small to intermediate to large values is presented in Appendix \ref{muincapp}.
Let us now consider the limit $\mu\gg E_B$, where the FES is the most prominent spectral feature, in closer detail. In the case of infinite hole mass ($\beta=0$), and in terms of the detuning from the FES threshold,
\begin{align}
\omega = \Omega - \Omega_T^{\text{FES}}, \quad \Omega_T^{\text{FES}} = E_G + \mu,
\end{align}
the FES absorption scales as \cite{PhysRev.178.1072, NOZIERES1969, PhysRev.178.1097}:
\begin{align}
\label{FESfirst}
A_{\text{FES}}(\omega)\bigg|_{M = \infty} \sim \theta(\omega) \left(\frac{\omega}{\mu}\right)^{-2g},
\end{align}
as illustrated in the inset of Fig.\ \ref{FEScomp}.
In the above formula, the interaction contribution to the threshold shift, which is of order $g\mu$, is implicitly contained in a renormalized gap $E_G$.
What happens for finite mass? This question was answered in \cite{gavoret1969optical,PhysRevB.35.7551, Nozi`eres1994}: As before, the recoil comes into play, effectively cutting off the logarithms contributing to Eq.\ (\ref{FESfirst}). Notably, the relevant quantity is now the \emph{VB hole} recoil, since the exciton is no longer a well-defined entity.
The FES is then replaced by a rounded feature, sketched in Fig.\ \ref{FEScomp}, which sets in continuously:
\begin{align}
\label{AFES}&A_\text{FES}(\omega)\bigg|_{M<\infty} \hspace{-1em} \sim
\begin{cases}
\left(\!\frac{\omega}{\beta\mu}\right)^3 \beta^{-2g} \cdot \theta(\omega) & \omega \ll \beta\mu \\
\ \ \left(\frac{\sqrt{(\omega - \beta\mu)^2 + (\beta\mu)^2}}{\mu}\right)^{\!-2g} &\beta\mu \ll \omega \ll \mu.
\end{cases}
\end{align}
Eq.\ (\ref{AFES}) can be obtained by combining and extending to 2D the results presented in Refs. \cite{gavoret1969optical,PhysRevB.35.7551}.
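As a simple consistency check, the two asymptotic forms in Eq.\ (\ref{AFES}) match at the crossover scale $\omega \sim \beta\mu$, where both reduce to $\beta^{-2g}$.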
\begin{figure}[H]
\centering
\includegraphics[width=\columnwidth]{fig3.pdf}
\caption{(Color online) Finite mass absorption in the case $E_B \ll \mu$. Frequencies are measured from $\Omega_T^{\text{FES}} = E_G + \mu$. The inset shows the infinite mass case for comparison.}
\label{FEScomp}
\end{figure}
The maximum of Eq.\ (\ref{AFES}) is found at the so-called direct threshold, $\omega_D = \beta\mu$ (see Fig.\ \ref{twothresholds}(a)). This shift is a simple effect of the Pauli principle: the photoexcited electron needs to be placed on top of the CB Fermi sea. The VB hole created this way, with momentum $k_F$, can subsequently decay into a zero momentum hole, scattering with conduction band electrons [see Fig.\ \ref{twothresholds}(b)]. These processes render the lifetime of the hole finite, with a decay rate $\sim g^2 \beta\mu$.
Within the logarithmic accuracy of the Fermi-edge calculations, this is equal to $\beta \mu$, the cutoff of the power law in Eq.~(\ref{AFES}) (see Sec.~\ref{FESfiniteholemasssubseq} for a more detailed discussion).
As a result, the true threshold of absorption is found at the indirect threshold, $\omega_I = 0$.
Due to VB hole recoil, the CB hole-electron pair density of states now scales as $\nu(\omega) \sim\omega^3$, leading to a similar behavior of the spectrum, see Fig.\ \ref{FEScomp}.
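The quoted decay rate can be recovered from a golden-rule estimate (an order-of-magnitude sketch, again using $\text{Im}\,\chi(q,\epsilon) \sim \rho\,\epsilon/(v_F q)$ for the CB particle-hole susceptibility): the hole can release at most its recoil energy $E_{k_F} = \beta\mu$, so
\begin{align}
\Gamma \sim V_0^2 \int \frac{d^2q}{(2\pi)^2}\ \text{Im}\,\chi(q, \omega_{\textbf{q}}), \quad \omega_{\textbf{q}} = E_{\textbf{k}_F} - E_{\textbf{k}_F - \textbf{q}} \lesssim \beta\mu,
\end{align}
and typical momentum transfers $q \sim k_F$ (for which $v_F q \sim \mu$) indeed give $\Gamma \sim g^2 \beta\mu$.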
\begin{figure}[H]
\centering
\includegraphics[width=\columnwidth]{fig4.pdf}
\caption{(Color online) (a): The direct threshold $\Omega_D = \Omega_T^{\text{FES}} + \beta\mu$ and the indirect threshold $\Omega_I = \Omega_T^{\text{FES}}$ [in the main text, $\omega_{D\!/\!I} = \Omega_{D\!/\!I} - \Omega_T^{\text{FES}}$]. (b): The VB hole can undergo inelastic processes which reduce its energy, smearing the infinite-mass edge singularity.}
\label{twothresholds}
\end{figure}
We note that at finite mass ratio $\beta = m/M$, raising the chemical potential $\mu$ from $\mu \ll E_B$ to $\mu \gg E_B$ results in a qualitative change of the threshold behavior: from the singular form of Eq.\ (\ref{Exccases}) to a converging power law, see the first line of Eq.\ (\ref{AFES}). Simultaneously, a broadened FES feature appears in the continuum, at $\omega>0$.
The difference in the value of the exponent in the excitonic result [Eq.~(\ref{Exccases})], as compared to the FES low-energy behavior [Eq.~(\ref{AFES}) for $\omega \ll \beta\mu$], can be understood from the difference in the kinematic structure of the excitations: In the exciton case, the relevant scattering partners are an exciton and a CB electron-hole pair. In the FES case, one has the photoexcited electron as an additional scattering partner, which leads to further kinematic constraints and eventually results in a different low-energy power law.
In the frequency range $\beta\mu \ll \omega \lesssim \mu$, the physics is basically the same as in the infinite hole mass case ($\beta=0$). There, the behavior near the lowest threshold (which is the exciton energy for $\mu \ll E_B$ and the onset of the CB continuum for $\mu \gg E_B$) is always $\sim \omega^{(1-\delta/\pi)^2-1}=\omega^{(\delta/\pi)^2-2\delta/\pi}$. But in the first case ($\mu \ll E_B$), $\delta \simeq \pi(1-\alpha)$ is close to $\pi$ (due to the presence of a bound state), so the threshold singularity is in some sense close to the delta peak, $\sim \text{Im}[1/(\omega+i0^+)]$, that one would have for $\mu=0$, whereas in the second case ($\mu \gg E_B$), $\delta \sim g$ is close to zero, so the threshold singularity is similar to a discontinuity.
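As a quick check on these exponents: inserting $\delta \simeq \pi(1-\alpha)$ gives $(\delta/\pi)^2 - 2\delta/\pi = \alpha^2 - 1$, i.e., the near-pole behavior of Eq.~(\ref{Aexcsummary}), while $\delta \simeq \pi g \ll \pi$ gives $(\delta/\pi)^2 - 2\delta/\pi \simeq -2g$, i.e., the FES exponent of Eq.~(\ref{FESfirst}).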
Having discussed the spectral properties of the QW alone, we can now return to polaritons. Their spectra $A_p(\omega)$ can be obtained by inserting the QW polarization as the photon self-energy.
While a full technical account will be given in Sec.~\ref{Polariton properites sec}, the main results can be summarized as follows:
In the first case of study, $\mu \ll E_B$ with finite $\beta$, the
polaritons arise from a mixing of the cavity mode and the sharp exciton mode.
The smaller the hole mass, the more singular the exciton features,
leading also to sharper polariton features. Furthermore, the enhanced
exciton quasiparticle weight pushes the two polariton branches further
apart. Conversely, in the singular limit of infinite hole mass, the pole
in the exciton spectrum turns into the pure power law familiar from
previous work, resulting in broader polariton features. A comparison of
the infinite and finite hole mass versions of the polariton spectra
$A_p(\omega)$ when the cavity photon is tuned into resonance with the
exciton is presented in Fig.\ \ref{summary_excpol}.
Notably, the above effects are rather
weak, since the exciton is a relatively sharp resonance even for
infinite hole mass.
\begin{figure}[H]
\centering
\includegraphics[width=\columnwidth]{fig5.pdf}
\caption{(Color online) Comparison of the polariton spectrum for $\mu \ll E_B$, at zero cavity detuning.
Frequencies are measured from the exciton threshold, $\Omega_T^\text{exc} = E_G + \mu-E_B$. The energy unit $\Delta$ corresponds to the half mode splitting at zero detuning in the bare exciton case ($\mu = 0$).
}
\label{summary_excpol}
\end{figure}
In the second case, $\mu \gg E_B$, the matter component of the polaritons corresponds to the FES singularity, which is much less singular than the exciton. Consequently, the polaritons (especially the upper one, which sees the high-frequency tail of the FES) are strongly washed out already at $\beta = 0$.
For finite hole mass, the hole recoil cuts off the FES singularity, resulting in further broadening of the polaritons. In addition, there is an overall upward frequency shift by $\beta\mu$, reflecting the direct threshold effect.
Fig.~\ref{FESpol1} shows the two polariton spectra at zero detuning.
\begin{figure}[H]
\centering
\includegraphics[width=\columnwidth]{fig6.pdf}
\caption{(Color online) Comparison of the polariton spectrum for $\mu \gg E_B$, at \textbf{zero cavity detuning}. Frequencies are measured from the indirect threshold, $\Omega_T^{\text{FES}} = E_G + \mu$. The energy unit $\tilde{\Delta}$, which determines the polariton splitting at zero detuning, is defined in Sec.~\ref{Polariton properites sec}, Eq.\ (\ref{deltatildedef}). The dotted vertical line indicates the position of the direct threshold, $\omega_D = \beta\mu$.
}
\label{FESpol1}
\end{figure}
The cutoff of the lower polariton for finite masses is even more drastic when the cavity is blue-detuned with respect to the threshold: Indeed, at large positive cavity detuning, the lower polariton is mostly matter-like, and thus more sensitive to the FES broadening. It therefore almost disappears, as seen in Fig.~\ref{FESpol2}.
\begin{figure}[H]
\centering
\includegraphics[width=\columnwidth]{fig7.pdf}
\caption{(Color online) Comparison of the polariton spectrum for $\mu \gg E_B$, at \textbf{large positive cavity detuning}. Frequencies are measured from the indirect threshold, $\Omega_T^{\text{FES}} = E_G + \mu$.
}
\label{FESpol2}
\end{figure}
\section{Model}
\label{Model sec}
After the qualitative overview in the previous section, let us now go into more detail, starting with the precise model in question.
To describe the coupled cavity-QW system, we study the following 2D Hamiltonian:
\begin{align}
\label{fullhamil}
H &= H_M + H_L, \\
H_M &=
\label{Helectronic} \sum_{\textbf{k}} \epsilon_\textbf{k} a^{\dagger}_{\textbf{k}}a_{\textbf{k}} - \sum_{\textbf{k}} \left[E_\textbf{k} +E_G\right] b^{\dagger}_{\textbf{k}}b_{\textbf{k}} \\ \nonumber &\quad - \frac{V_0}{\mathcal{S}}\sum_{\textbf{k}, \textbf{p}, \textbf{q}} a^{\dagger}_\textbf{k} a_\textbf{p} b_{\textbf{k}-\textbf{q}}b^\dagger_{\textbf{p} - \textbf{q}}, \\
H_L &= \sum_{\textbf{Q}}\omega_{\textbf{Q}} c^\dagger_\textbf{Q} c_{\textbf{Q}} -i\frac{d_0}{\sqrt{\mathcal{S}}}\sum_{ \textbf{p},\textbf{Q}} a^\dagger_{\textbf{p}+ \textbf{Q}}b_{\textbf{p}}c_{\textbf{Q}}
+ \text{h.c.}
\end{align}
Here, $H_M$, adapted from the standard literature on the X-ray edge problem \cite{gavoret1969optical}, represents the matter part of the system, given by a semiconductor in a two-band approximation: $a_\textbf{k}$ annihilates a conduction band (CB) electron with dispersion $\epsilon_{\textbf{k}} = \frac{k^2}{2m}$, while $b_{\textbf{k}}$ annihilates a valence band (VB) electron with dispersion $-(E_{\textbf{k}} + E_G) = -(\frac{k^2}{2M} + E_G)$. $E_G$ is the gap energy, which is the largest energy scale under consideration: In GaAs, $E_G \simeq 2$eV, while all other electronic energies are on the order of meV. The energies are measured from the bottom of the conduction band. $\mathcal{S}$ is the area of the QW, and we work in units where $\hbar = 1$. Unless explicitly stated otherwise, we assume spinless electrons, and concentrate on the zero temperature limit.
When a valence band hole is created via cavity photon absorption, it interacts with the conduction band electrons through an attractive Coulomb interaction. Taking into account screening, we model the interaction as point-like, with a constant positive matrix element $V_0$. The effective potential strength is then given by the dimensionless quantity
\begin{align}
\label{gdef}
g = \rho V_0, \quad \rho = \frac{m}{2 \pi},
\end{align}
$\rho$ being the 2D DOS. The appropriate value of $g$ will be further discussed in the subsequent sections.
Interactions of CB electrons with each other are completely disregarded in Eq.~(\ref{fullhamil}), presuming a Fermi-liquid picture. This is certainly a crude approximation. It can be justified if one is mostly interested in the form of singularities in the spectral function. These are dominated by various power laws, which arise from low-energy particle-hole excitations of electrons close to the Fermi energy, where a Fermi-liquid description should be valid.
The photons are described by $H_L$: We study lossless modes with QW in-plane momenta $\textbf{Q}$ and energies $\omega_\textbf{Q} = \omega_c + Q^2/2m_c$, where $m_c$ is the cavity mode effective mass. Different in-plane momenta $\textbf{Q}$ can be achieved by tilting the light source w.r.t.\ the QW. In the final evaluations we will mostly set $\textbf{Q}=0$, which is a valid approximation since $m_c$ is tiny compared to electronic masses.
The interaction term of $H_L$ describes the process of absorbing a photon while creating a VB-CB electron-hole pair, and vice versa. $d_0$ is the interband electric dipole matrix element, whose weak momentum dependence is disregarded. This interaction term can be straightforwardly derived from a minimal-coupling Hamiltonian, keeping interband processes only and employing the rotating-wave and electric dipole approximations (see, e.g., \cite{yamamoto1999mesoscopic}).
The optical properties of the full system are determined by the retarded dressed photon Green's function~\cite{PhysRevB.89.245301, baeten2015mahan}:
\begin{align}
\label{dressedphot}
D^R(\textbf{Q},\Omega) = \frac{1}{\Omega - \omega_\textbf{Q} + i0^+ - \Pi(\textbf{Q},\Omega)},
\end{align}
where $\Pi(\textbf{Q},\Omega)$ is the retarded photon self-energy. This dressed photon is nothing but the polariton. The spectral function corresponding to (\ref{dressedphot}) is given by
\begin{align}
\label{Polaritonspectralfunction}
\mathcal{A}(\textbf{Q},\omega) = -\frac{1}{\pi}\text{Im}\left[D^R(\textbf{Q},\omega)\right].
\end{align}
$\mathcal{A}(\textbf{Q},\omega)$ determines the absorption and reflection of the coupled cavity-QW system, which are the quantities typically measured in polariton experiments such as those of Refs.~\cite{PhysRevLett.99.157402,Smolka}.
Our goal is to determine $\Pi(\textbf{Q},\Omega)$. To second order in $d_0$ it takes the form
\begin{align}
\label{Kubo-formula}
\Pi(\textbf{Q},\Omega) \simeq &-i\frac{d_0^2}{\mathcal{S}}\int_{-\infty}^{\infty}\! dt \theta(t) e^{i\Omega t}\\
\nonumber&
\times \sum_{\textbf{k},\textbf{p}}\braket{0| b_{\textbf{k}}^\dagger(t) a_{\textbf{k}+\textbf{Q}}(t) a^\dagger_{\textbf{p}+\textbf{Q}}(0) b_{\textbf{p}} (0)|0} ,
\end{align}
where $\ket{0}$ is the noninteracting electronic vacuum with a filled VB, and the time dependence of the operators is generated by $H_M$.
Within this approximation, $\Pi(\textbf{Q},\Omega)$ is given by the ``dressed bubble'' shown in Fig.~\ref{bubble}.
The imaginary part of $\Pi(\textbf{Q},\Omega)$ can also be seen as the linear-response absorption of the QW alone, with the cavity modes tuned away.
\begin{figure}[H]
\centering
\includegraphics[width=.7\columnwidth]{fig8.pdf}
\caption{The photon self-energy $\Pi(\textbf{Q},\Omega)$ in linear response. Full lines denote CB electrons, dashed lines VB electrons, and wavy lines photons. The grey-shaded area represents the full CB-VB vertex.}
\label{bubble}
\end{figure}
Starting from Eq.~(\ref{Kubo-formula}), in the following we will study in detail how $\Pi(\textbf{Q},\Omega)$ behaves as the chemical potential $\mu$ is increased, distinguishing between finite and infinite VB masses $M$. We will also discuss the validity of the approximation of calculating $\Pi$ to lowest order in $d_0$.
\section{Electron-hole correlator in the absence of a Fermi sea}
\label{Photon self-energy zero mu sec}
We start by briefly reviewing the diagrammatic approach in the case where the chemical potential lies within the gap (i.e., $-E_G<\mu<0$). This is mainly done in order to set the stage for the more involved diagrammatic computations in the subsequent sections.
In this regime of $\mu$,
$\Pi$ is exactly given by the sum of the series of ladder diagrams shown in Fig.~\ref{excladder2}, first computed by Mahan \cite{PhysRev.153.882}.
Indeed, all other diagrams are absent here, since they contain either VB or CB loops, which are forbidden for $\mu$ in the gap.
This is seen using the following expressions for the zero-temperature time-ordered free Green's functions:
\begin{align}
\label{standardformula}
G_{c}^{(0)}(\textbf{k},\Omega) &= \frac{1}{\Omega - \epsilon_\textbf{k} + i0^+\text{sign}(\epsilon_\textbf{k}-\mu)},
\\
G_{v}^{(0)}(\textbf{k},\Omega) &= \frac{1}{\Omega + E_G + E_{\textbf{k}} + i0^+\text{sign}(-E_G-E_\textbf{k}-\mu)},
\end{align}
where the indices $c$ and $v$ stand for conduction and valence band, respectively, and $0^+$ is an infinitesimal positive constant. For $-E_G < \mu< 0$, CB electrons are purely retarded, while VB electrons are purely advanced. Thus, no loops are possible. Higher-order terms in $d_0$ vanish as well.
\begin{figure}[H]
\centering
\includegraphics[width=\columnwidth]{fig9.pdf}
\caption{The series of ladder diagrams. Dotted lines represent the electron-hole interaction.}
\label{excladder2}
\end{figure}
One can easily sum up the series of ladder diagrams assuming the simplified interaction $V_0$~\cite{gavoret1969optical}. Let us start from the case of infinite VB mass ($\beta=0$), and concentrate on energies $|\Omega - E_G| \ll \xi$, where $\xi$ is an appropriate UV cutoff on the order of the CB bandwidth.
Since the interaction is momentum independent, all integrations in higher-order diagrams factorize. Therefore, the $n$-th order diagram of Fig. \ref{excladder2} is readily computed:
\begin{align}
\label{ladder contribution}
\Pi^{(n)}_\text{ladder}(\Omega) = d_0^2
\rho (-g)^n \ln^{n+1}\!\left(\frac{\Omega - E_G + i0^+}{-\xi}\right).
\end{align}
Here and henceforth, the branch cut of the complex logarithm and power laws is chosen to be on the negative real axis.
The geometric series of ladder diagrams can be easily summed:
\begin{align}
\label{ladderseries}
\Pi_{\text{ladder}}(\Omega) = \sum_{n=0}^{\infty} \Pi^{(n)}_{\text{ladder}}(\Omega) = \frac{d_0^2\rho \ln\left(\frac{\Omega-E_G + i0^+}{-\xi}\right)}{1+g\ln\left(\frac{\Omega-E_G + i0^+}{-\xi}\right)}.
\end{align}
A sketch of the corresponding QW absorption $A_{\text{ladder}}= -\text{Im}[\Pi_{\text{ladder}}]/\pi$ was already shown in Fig.~\ref{mahanexciton1}.
$\Pi_{\text{ladder}}(\Omega)$ has a pole,
the so-called Mahan exciton \cite{PhysRev.153.882, gavoret1969optical}, at an energy of
\begin{align}
\label{EBfctofg}
\Omega - E_G = -E_B = -\xi e^{-1/g}.
\end{align}
In the following, we will treat $E_B$ as a phenomenological parameter. To match the results of the short-range interaction model with experiment, one should equate $E_B$ with $E_0$, the energy of the lowest hydrogenic VB hole-CB electron bound state (the exciton).
Expanding Eq.~(\ref{ladderseries}) near the pole, we obtain:
\begin{align}
\label{infmassmahanexciton}
\Pi_{\text{ladder}}(\omega) &= \frac{d_0^2E_B \rho}{g^2} G^0_{\text{exc}}(\omega) + \mathcal{O}\left(\frac{\omega}{E_B}\right),\\ \nonumber G^0_{\text{exc}}(\omega) &= \frac{1}{\omega+i0^+},
\end{align}
where $\omega = \Omega-E_G + E_B$, and we have introduced the bare exciton Green's function $G^0_{\text{exc}}$, similar to Ref. \cite{Betbeder-Matibet2001}.
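For completeness, we spell out the expansion: writing $\Omega - E_G = -E_B + \omega$ with $|\omega| \ll E_B$ and using Eq.~(\ref{EBfctofg}),
\begin{align}
\ln\left(\frac{-E_B + \omega + i0^+}{-\xi}\right) \simeq -\frac{1}{g} - \frac{\omega + i0^+}{E_B},
\end{align}
so that the denominator of Eq.~(\ref{ladderseries}) becomes $-g(\omega + i0^+)/E_B$ while its numerator tends to $-d_0^2\rho/g$; their ratio reproduces Eq.~(\ref{infmassmahanexciton}).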
In this regime of $\mu$, a finite hole mass only results in a weak renormalization of the energy by factors of $1+\beta$, where $\beta= m/M$ is the small CB/VB mass ratio. Furthermore, if finite photon momenta $\textbf{Q}$ are considered, the exciton Green's function is easily shown to be (near the pole):
\begin{align}
\label{finmassmahanexciton}
G_{\text{exc}}^{0}(\textbf{Q},\omega) = \frac{1}{\omega - Q^2/2M_\text{exc} + i0^+},
\end{align}
with $M_\text{exc} = M + m = M (1+\beta)$.
\section{Electron-hole correlator for small Fermi energy}
\label{Photon self-energy small mu sec}
\subsection{Infinite VB hole mass}
Let us now slightly increase the chemical potential $\mu$, and study the resulting absorption. More precisely, we consider the regime
\begin{align}
\label{scales1}
0<\mu \ll E_B \ll \xi.
\end{align}
We first give an estimate of the coupling constant $g = \rho V_0$. Accounting for screening of the VB hole 2D Coulomb potential by the CB Fermi sea in the static RPA approximation, and averaging over the Fermi surface~\cite{gavoret1969optical,PhysRev.153.882}, one finds:
\begin{align}
\label{gestimate}
g \sim
\begin{cases}
1-8x/\pi & x\rightarrow 0,\\
\ln(x)/x & x\rightarrow \infty,
\end{cases}
\end{align}
where $x= \sqrt{\mu/E_0}$ with $E_0$ being the true 2D binding energy of the lowest exciton in the absence of a CB Fermi sea.
In the regime under study we may assume $E_B \simeq E_0 \gg \mu$, and therefore $g \lesssim 1$ \footnote{Strictly speaking, this also means $E_B \lesssim \xi$, contradicting Eq.~(\ref{scales1}). However, this clearly is a non-universal property, and we will not pay any attention to it in the following}. As a result, perturbation theory in $g$ is meaningless. Instead, we will use $\mu/E_B$ as our small parameter, and re-sum all diagrams which contribute to the lowest nontrivial order in it.
We will now restrict ourselves to the study of energies close to $E_B$
in order to understand how a small density of CB electrons modifies the shape of the bound state resonance; we will not study in detail the VB continuum in the spectrum (cf.\ Fig.~\ref{finmasssmallmu1}).
We first compute the contribution of the ladder diagrams;
as compared to Eqs.~(\ref{infmassmahanexciton})--(\ref{finmassmahanexciton}), the result differs only by a shift of energies:
\begin{align}
\label{muexcitonpole}
\omega = \Omega - \Omega_T^{\text{exc}}, \quad \Omega_T^{\text{exc}} = (E_G + \mu) - E_B.
\end{align}
Also, the continuum now sets in when $\Omega$ equals $\Omega_{T}^{\text{FES}} = E_G + \mu$, which is known as the Burstein-Moss shift \cite{PhysRev.93.632,moss1954interpretation}.
However, for finite $\mu$ one clearly needs to go beyond the ladder approximation, and take into account the ``Fermi sea shakeup''. To do so, we first consider the limit of infinite $M$ ($\beta=0$). In this regime, the QW absorption in the presence of a bound state for the model under consideration was found by Combescot and Nozières \cite{combescot1971infrared}, using a different approach \footnote{In fact, their computation is in 3D, but the case of infinite hole mass is effectively 1D anyway.}.
For finite $\mu$, the physics of the Fermi-edge singularity comes into play: Due to the presence of the CB Fermi sea, CB electron-hole excitations are possible at infinitesimal energy cost.
As a result,
the exciton Green's function, which we analogously to (\ref{infmassmahanexciton}) define as proportional to the dressed bubble in the exciton regime,
\begin{align}
&\Pi_{\text{exc}}(\omega) = \frac{d_0^2 E_B \rho}{g^2} G_{\text{exc}}(\omega) + \mathcal{O}\left(\frac{\omega}{E_B}\right) , \\ &G_{\text{exc}}(\omega) = \frac{1}{\omega - \Sigma^{\text{exc}}(\omega)} , \label{dressedexcwithsigma}
\end{align}
gets renormalized by a self-energy $\Sigma^{\text{exc}}(\omega)$, which turns the
exciton pole into a divergent power law~\cite{combescot1971infrared}:
\begin{align}
\label{Nozieresresult}
G_{\text{exc}}(\omega) \sim \frac{1}{\omega+i0^+}\cdot \left(\frac{\omega+i0^+}{-\mu}\right)^{(\delta/\pi -1)^2},
\end{align}
where $\delta$ is the scattering phase shift of electrons at the Fermi-level off the point-like hole potential. One should note that no delta-peak will appear for $\delta/\pi \neq 1$. A sketch of the resulting absorption $A$ is shown in Fig.~\ref{Infmasssmallmu}.
\begin{figure}[H]
\centering
\includegraphics[width=\columnwidth]{fig10.pdf}
\caption{(Color online) QW Absorption for $\mu\ll E_B$ and $M=\infty$. The power law (\ref{Nozieresresult}) is valid asymptotically close to the left peak. The dashed region indicates the continuous part of the spectrum, compare caption of Fig.\ \ref{finmasssmallmu1}.}
\label{Infmasssmallmu}
\end{figure}
Let us further discuss the result~(\ref{Nozieresresult}). It was obtained in~\cite{combescot1971infrared} using an elaborate analytical evaluation of final state Slater determinants, and actually holds for any value of $\mu$. A numerical version of this approach for the infinite VB mass case was recently applied by Baeten and Wouters~\cite{PhysRevB.89.245301} in their treatment of polaritons. In addition, the method was numerically adapted to finite masses by Hawrylak~\cite{PhysRevB.44.3821}, who, however, mostly considered the mass effects for $\mu\gg E_B$.
However, due to the more complicated momentum structure, it seems difficult to carry over the method of~\cite{combescot1971infrared} to finite masses analytically. Instead, we will now show how to proceed diagrammatically.
Our analysis will give (\ref{Nozieresresult}) to leading order in the small parameter $\mu/E_B$, or, equivalently, $\alpha = \delta/\pi -1$ (recall that by Levinson's theorem~\cite{adhikari1986quantum} $\delta=\pi$ for $\mu=0$ due to the presence of a bound state --- the exciton):
\begin{align}
\label{excspec}
G_{\text{exc}}(\omega) \simeq \frac{1}{\omega+i0^+}\left(1+ \alpha^2 \ln\left(\frac{|\omega|}{\mu}\right)-i\alpha^2\pi\theta(\omega)\right).
\end{align}
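Indeed, Eq.~(\ref{excspec}) is just the expansion of Eq.~(\ref{Nozieresresult}) to first order in $\alpha^2$, using $\ln\left[(\omega + i0^+)/(-\mu)\right] = \ln\left(|\omega|/\mu\right) - i\pi\theta(\omega)$ for our choice of branch cut.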
The merit of the diagrammatic computation is twofold: First, it gives an explicit relation between
$\alpha$ and the experimentally measurable parameters $\mu$ and $E_B$. Second, the approach can be straightforwardly generalized to finite masses, as we show in the next subsection.
Let us note that a similar diagrammatic method was also examined by Combescot, Betbeder-Matibet \textit{et al.}\ in a series of recent papers~\cite{Betbeder-Matibet2001,Combescot2002,combescot2003commutation,Combescot2008215,5914361120110215}. Their model Hamiltonians are built from realistic Coulomb electron-hole and electron-electron interactions. As a result, they assess the standard methods of electron-hole diagrams as too complicated \cite{Betbeder-Matibet2001}, and subsequently resort to exciton diagrams and the so-called commutation technique, where the composite nature of the excitons is treated with care. However, the interaction of excitons with a Fermi sea is only treated at a perturbative level, assuming that the interaction is small due to, e.g., spatial separation~\cite{Combescot2002}. This is not admissible in our model, where the interaction of the VB hole with all relevant electrons (photoexcited and Fermi sea) has to be treated on the same footing. Rather, we stick to the simplified form of contact interaction, and show how one can use the framework of standard electron-hole diagrams to calculate all quantities of interest for infinite as well as for finite VB mass. The results presented below then suggest that for $\mu \ll E_B$ the finite mass does not weaken, but rather strengthens the singularities, which is in line with results on the heavy hole found in~\cite{PhysRevLett.75.1988}.
Here we only present the most important physical ingredients for our approach, and defer the more technical details to Appendix~\ref{technical}.
In the regime of interest, we can perform a low-density computation, employing the small parameter $\mu/E_B$. Since all energies are close to $E_B$, the leading-order exciton self-energy is then given by the sum of all diagrams with one CB electron loop. One can distinguish two channels, direct and exchange, to be denoted by $D$ and $X$, as depicted in Fig.~\ref{directtimedomain}.
All such diagrams with an arbitrary number of interactions connecting the VB line with the CB lines in arbitrary order have to be summed. Factoring out $E_B\rho/g^2 \cdot G^{0}_{\text{exc}}(\omega)^2$, the remaining factor can be identified as the exciton self-energy diagram.
\begin{figure}[H]
\centering
\includegraphics[width=\columnwidth]{fig11.pdf}
\caption{Leading-order self-energy diagrams: (a) direct contribution $D$ and (b) exchange contribution $X$.}
\label{directtimedomain}
\end{figure}
An evaluation of these diagrams is possible either in the time or in the frequency domain. Of course, both approaches must give the same result. In practice, however, the time-domain evaluation is more instructive and requires fewer approximations, which is why we will discuss it first. The frequency-domain evaluation is far more convenient for obtaining finite-mass results, and will be discussed thereafter.
The time domain approach is similar in spirit to the classical one-body solution of the Fermi-edge problem by Nozières and de Dominicis \cite{PhysRev.178.1097}. Since the infinite-mass hole propagator is trivial, $G_v(t) = i\theta(-t)e^{i E_G t}$, the direct diagrams just describe the independent propagation of two electrons in the time-dependent hole potential. Thus, in the time domain the sum of all direct diagrams $D(t)$ factorizes into two parts representing the propagation of these two electrons:
\begin{align}
\label{Dnew}
D(t) = \int_{k_1<k_F} \frac{d\textbf{k}_1}{(2 \pi)^2} i e^{-i(E_G-\epsilon_{\textbf{k}_1})t} B(t) C(t),
\end{align}
where $B(t)$, $C(t)$ are infinite sums of convolutions (denoted by an asterisk) of the form
\begin{align} &
B(t) = \sum_{m=1}^{\infty} (-V_0)^m \int_{k_2>k_F} \frac{d\textbf{k}_2}{(2 \pi)^2} ... \int_{k_{m}>k_F} \frac{d\textbf{k}_{m}}{(2 \pi)^2} \\ \nonumber &\left[G_c^{0, R}(\textbf{k}_1,\ ) \ast \cdots \ast G_c^{0, R}(\textbf{k}_{m},\ )\ast G_c^{0, R} (\textbf{k}_1,\ )\right](t),
\end{align}
and similarly for $C(t)$. $G_c^{{0},R}$ is the retarded bare CB Green's function in the time domain. Fourier-transforming, $D(\omega)$ is then given by a convolution of $B(\omega)$ and $C(\omega)$, each of which in turn reduces to simple summations of ladder diagrams. The full convolution $D(\omega)$ is difficult to compute; one can proceed by noting that $B(\omega)$, $C(\omega)$ have poles at $\omega \simeq 0$ and continuum contributions at $\omega \gtrsim E_B$. These are readily identified with the pole and continuum contributions of the exciton absorption, c.f.\ Fig.~\ref{mahanexciton1}. Combining these, there are four combinations contributing to $D(\omega)$: pole-pole, pole-continuum (two possibilities), and continuum-continuum. The imaginary part of the latter, which is of potential importance for the line shape of the exciton spectrum, can be shown to vanish in our main regime of interest, $\omega \gtrsim 0$.
It is instructive to study the pole-pole combination, which corresponds to a would-be ``trion'' (a bound state of the exciton and an additional electron) and is further discussed in Appendix~\ref{trion-contribution}.
Adding to it the pole-continuum contributions we find, for small $\omega$:
\begin{align}
\label{Ddirectfinal}
D(\omega) = \frac{\rho E_B}{g^2} \frac{1}{(\omega + i0^+)^2} \Sigma_\text{exc}^{\text{D}}(\omega).
\end{align}
This corresponds to a contribution to the exciton self-energy which reads:
\begin{align}
\label{SigmaDint}
\Sigma^\text{D}_{\text{exc}}(\omega) = -\frac{1}{\rho}\int_{k_1<k_F} \frac{d\textbf{k}_1}{(2\pi)^2} \frac{1}{\ln\left(\frac{\omega + \epsilon_{\textbf{k}_1} - \mu + i0^+}{-E_B}\right)} .
\end{align}
Before discussing this term further, we consider the contribution of the exchange diagrams, $X(\omega)$, of Fig.\ \ref{directtimedomain}(b). Their structure is more involved compared to the direct channel, since these diagrams do not just represent the independent propagation of two electrons in the hole potential. However, relying on a generalized convolution theorem which we prove, the computation can be performed in the same vein as before (see Appendix~\ref{technical}), leading to the following results:
First, the pole-pole contribution cancels that of the direct diagrams (see Appendix~\ref{trion-contribution}), which holds in the spinless case only (in the spinful case, the direct diagrams will come with an extra factor of two). This could be expected: trion physics is only recovered in the spinful case, where two electrons can occupy the single bound state created by the attractive potential of the hole.
In a realistic 2D setup trion features will become important for large enough values of $\mu$ (see, e.g., \cite{sidler2017fermi,suris2001excitons, PhysRevB.91.115313,efimkin2017many}). Although we do not focus on trions here, let us stress that all standard results on trions can be recovered within our diagrammatic approach, if electrons and holes are treated as spin-$1/2$ particles; see Appendix \ref{trion-contribution} for further details.
The dominant contribution to $X(\omega)$ then arises from the pole-continuum contribution. It is given by:
\begin{align}
\label{Xomegamaintext}
X(\omega) = -\frac{\rho E_B}{g^2} \frac{1}{(\omega + i 0^+)^2} \mu.
\end{align}
Thus, the self-energy contribution to the exciton Green's function is simply
\begin{align}
\label{Fumitypeshift}
\Sigma_{\text{exc}}^{\text{X}}(\omega) = -\mu.
\end{align}
Since it is purely real, it will essentially just red-shift the exciton pole by $\mu$. A discussion of this result is presented in Appendix~\ref{Fumidiscussion}.
Now, it should be noted that $\Sigma_\text{exc}^{\text{X}}(\omega)$ is not proportional to the small parameter $\mu/E_B$ -- the latter was effectively canceled when factoring out the bare exciton Green's function. Thus, it is inconsistent to treat $\Sigma_\text{exc}^{\text{X}}(\omega)$ as a perturbative self-energy correction. Instead, one should repeat the calculation, but replace all ladders by ladders dressed with exchange-type diagrams. It can be expected, however, that the structure of the calculation will not change. The only change that should happen is the appearance of the renormalized binding energy $\tilde{E}_B = E_B + \mu$, in accordance with~\cite{combescot1971infrared}, as discussed in Appendix~\ref{Fumidiscussion}. In the following, we will assume this is accounted for, and therefore suppress all exchange diagrams.
Let us now return to the direct self-energy contribution $\Sigma_\text{exc}^{\text{D}}(\omega)$, Eq.~(\ref{SigmaDint}), writing
\begin{align}
\Sigma_{\text{exc}}(\omega) = \Sigma^D_{\text{exc}}(\omega)
\end{align}
henceforth. We may apply the following asymptotic expansion for the logarithmic integral (generalized from~\cite{R.Wong1989}), which will also prove useful later:
\begin{align}
\label{theorem}
\int _0^\omega dx \frac{x^n}{\ln^m(x)} = \frac{1}{\ln^m(\omega)}\frac{\omega^{n+1}}{(n+1)} + \mathcal{O}\left(\frac{\omega^{n+1}}{\ln(\omega)^{m+1}}\right).
\end{align}
This can be shown easily by integrating by parts and comparing orders.
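Explicitly, a single integration by parts yields
\begin{align}
\int_0^\omega dx\, \frac{x^n}{\ln^m(x)} = \frac{\omega^{n+1}}{(n+1)\ln^m(\omega)} + \frac{m}{n+1}\int_0^\omega dx\, \frac{x^n}{\ln^{m+1}(x)},
\end{align}
and the remaining integral is suppressed by one further power of the logarithm, as stated.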
Based on this result we find, to leading logarithmic accuracy,
\begin{align}
\label{sigmadendlich}
\Sigma_\text{exc}(\omega) \simeq& -\frac{\mu}{\ln\left(\frac{\mu}{E_B}\right)} + \frac{\omega \ln\left(\frac{|\omega|}{\mu}\right)}{\ln\left(\frac{\mu}{E_B}\right)\!\ln\left(\frac{|\omega|}{E_B}\right)} \\&- i\frac{\pi \omega \theta(\omega)}{\ln^2\left(\frac{|\omega|}{E_B}\right)}. \notag
\end{align}
This result has several interesting features.
First, we see the appearance of a small parameter $\alpha \equiv 1/|\ln(\mu/E_B)|$, which can be interpreted as follows:
the scattering phase shift at the Fermi level, $\delta$, which determines the Anderson orthogonality power law [cf.\ Eq.~(\ref{Nozieresresult})], is approximately given by~\cite{adhikari1986quantum}
\begin{align}
\delta \simeq \frac{\pi}{\ln\left(\frac{\mu}{E_B}\right)} + \pi ,
\end{align}
which holds for small Fermi energies, where $\delta$ is close to $\pi$.
Therefore, $\delta$ and $\alpha$ are related by:
\begin{align}
\label{alphaisphase}
\alpha \simeq 1-\frac{\delta}{\pi} .
\end{align}
The small pole shift of order $\alpha\mu$ contained in Eq.~(\ref{sigmadendlich}) could be expected from Fumi's theorem (see, e.g., \cite{G.D.Mah2000} and the discussion in Appendix~\ref{Fumidiscussion}). We now perform an energy shift
\begin{align}
\omega \rightarrow \omega + \alpha\mu.
\end{align}
To leading order in $\alpha$, we may then rewrite $\Sigma_\text{exc}$ with logarithmic accuracy as
\begin{align}
\label{Sigmanice}
\Sigma_\text{exc}(\omega) \simeq \alpha^2\omega \ln\left(\frac{|\omega|}{\mu}\right) - i\alpha^2\pi \omega \theta(\omega).
\end{align}
Here, the imaginary part can be identified with the density of states of CB electron-hole excitations as a function of $\omega$, as discussed in Sec.~\ref{Pisummarysec}.
Upon inserting (\ref{Sigmanice}) into the exciton Green's function (\ref{dressedexcwithsigma}), we recover (\ref{Nozieresresult}) to leading (quadratic) order in $\alpha$:
\begin{align}
\label{excspec2}
G_{\text{exc}}(\omega) \simeq \frac{1}{\omega+i0^+}\left(1+ \alpha^2 \ln\left(\frac{|\omega|}{\mu}\right)-i\alpha^2\pi\theta(\omega)\right).
\end{align}
As a result, our one-loop computation has given the first logarithm of the orthogonality power law, in complete analogy to the standard Fermi-edge problem (see Sec.~\ref{Photon self-energy large mu sec}). All higher loop contributions, evaluated to leading logarithmic order, should then add up to give the full power law; since we are more interested in finite mass effects here, we will not go into the details of this calculation.
To carry the diagrammatics over to finite mass, as done in the next section, it is convenient to switch to the frequency domain. A summation of all one-loop diagrams is possible by evaluating the series shown in Fig.\ \ref{alldirect}.
\begin{figure}[H]
\centering
\includegraphics[width=\columnwidth]{fig12.pdf}
\caption{(Color online) Series of diagrams contributing to the direct self-energy in the frequency domain. Vertical blue bars denote interaction ladders.}
\label{alldirect}
\end{figure}
To perform the evaluation, we make use of the following simplification:
the complicated logarithmic integrals we encounter have imaginary parts whose integrands are just delta functions, so, upon integration, one finds step functions.
Since the integrands are retarded, it is then possible to recover the full expressions from their imaginary parts using the Kramers-Kronig relation; the step functions then become logarithms.
With that, the sum over diagrams appearing in Fig.~\ref{alldirect} assumes the form
\begin{align}
\label{Dfinalguessed}
D(\omega) &= \frac{E_B}{g^2} \frac{1}{(\omega + i 0^+)^2} \int_{k_1<k_F} \frac{d\textbf{k}_1}{(2\pi)^2} \left\{I + I^3 + ...\right\},
\end{align}
where
\begin{align}
\label{thisisI}
I &= \ln\left(\frac{\epsilon_{\textbf{k}_1} + \omega - \mu+i0^+}{-E_B}\right).
\end{align}
Summing up the geometric series
exactly reproduces the time-domain result, Eq.~(\ref{Ddirectfinal}).
Thus, we have established how the
photon self-energy can be calculated diagrammatically for the case of infinite VB mass $M$ (to leading order in $d_0$).
\subsection{Finite hole mass}
\label{excfinitemasssec}
We are now in a position to tackle finite VB mass $M$. Let us also consider a finite incoming momentum $\textbf{Q}$. Clearly, the one-loop criterion for choosing diagrams still holds, since we are still considering the low-density limit, $\mu \ll E_B$. We also disregard any exchange contributions, for the same reasons as in the infinite mass case. As a result, we only have to recompute the series of direct diagrams of Fig.~\ref{alldirect}. We start with the first one, which gives:
\begin{widetext}
\begin{align}
\label{someI}
I = &-\frac{E_B V_0}{g}\!\int \displaylimits_{k_2 > k_F}\!\frac{d\textbf{k}_2}{(2\pi)^2} \frac{1}{\left(-\omega + E_B + E(\textbf{k}_2 - \textbf{Q}) + \epsilon_{\textbf{k}_2} - \mu - i0^+\right)^2} \frac{1}{\ln\left(\frac{-E_B + \omega - \left(\textbf{Q} - \textbf{q}\right)^2/2M_\text{exc} - \epsilon_{\textbf{k}_2} + \epsilon_{\textbf{k}_1} + i0^+}{-E_B}\right)},
\end{align}
\end{widetext}
where $\textbf{q} = \textbf{k}_2 - \textbf{k}_1$. The imaginary part of (\ref{someI}) reads:
\begin{align}
\label{simplified}
\nonumber
\text{Im}[I] = -\frac{V_0}{g} \int \displaylimits_{k_2 > k_F} \frac{d\textbf{k}_2}{(2\pi)^2} \pi &\delta\left(\omega - \frac{(\textbf{Q}-\textbf{q})^2}{2M_\text{exc}} - \epsilon_{\textbf{k}_2} + \epsilon_{\textbf{k}_1}\right) \\ & + \mathcal{O}\left(\frac{\mu}{E_B}\right).
\end{align}
By Eq.~(\ref{simplified}), $I$ can be rewritten in a simpler form (ensuring retardation), valid for small $\omega$:
\begin{align}
\label{datisI}
I \simeq \frac{V_0}{g} \int\displaylimits_{k_2>k_F} \frac{d\textbf{k}_2}{(2\pi)^2} \frac{1}{\omega - \frac{(\textbf{Q} -\textbf{q} )^2}{2M_\text{exc}} - \epsilon_{\textbf{k}_2} + \epsilon_{\textbf{k}_1} + i0^+}.
\end{align}
This form can be integrated with logarithmic accuracy, which, however, only gives $\text{Re}[I]$. Specializing to $Q \ll k_F$ for simplicity, one obtains:
\begin{align}
\label{Resimplified}
\text{Re}[I] \simeq \ln\left(\frac{\max(|\omega + \epsilon_{\textbf{k}_1} - \mu |, \beta\mu)}{E_B}\right).
\end{align}
As for the infinite mass case, the higher order diagrams of Fig.~\ref{alldirect}
give higher powers of $I$. Similarly to Eq.~(\ref{Dfinalguessed}), one then obtains for the self-energy part, to leading logarithmic accuracy:
\begin{align}
\label{oneoverI}
\Sigma_\text{exc}(\textbf{Q},\omega) = -\int_{k_1 < k_F} \frac{d\textbf{k}_1}{(2\pi)^2} \cdot \frac{1}{I}.
\end{align}
The imaginary part, which determines the lineshape of $G_{\text{exc}}$, is given by
\begin{align}
\nonumber
& \text{Im}\left[\Sigma_\text{exc}(\textbf{Q},\omega)\right] \simeq - \frac{\pi V_0}{\rho g} \int_{k_1 < k_F} \frac{d\textbf{k}_1}{(2\pi)^2} \int_{k_2 > k_F} \frac{d\textbf{k}_2}{(2\pi)^2} \\ & \frac{\delta(\omega - (\textbf{Q} - \textbf{q})^2/2M_\text{exc} - \epsilon_{\textbf{k}_2} + \epsilon_{\textbf{k}_1})}{\ln^2\left(\frac{\max(|\omega + \epsilon_{\textbf{k}_1} - \mu|, \beta\mu)}{E_B}\right)}.
\label{ImSigmaComplicated}
\end{align}
We now apply the analogue of the logarithmic identity, Eq.~(\ref{theorem}), for a 2D integral. Thus, in leading order we may simply pull the logarithm out of the integral of Eq.~(\ref{ImSigmaComplicated}) and rewrite it as
\begin{align}
\label{imende} \nonumber
&\text{Im}[\Sigma_\text{exc}](\textbf{Q},\omega) \simeq -\frac{\pi V_0}{\rho g} \alpha^2 \int_{k_1 < k_F} \frac{d\textbf{k}_1}{(2\pi)^2} \int_{k_2 > k_F} \frac{d\textbf{k}_2}{(2\pi)^2} \\ &\qquad\qquad \delta(\omega - (\textbf{Q}-\textbf{q})^2/2M_\text{exc} - \epsilon_{\textbf{k}_2} + \epsilon_{\textbf{k}_1}).
\end{align}
The result (\ref{imende}) is physically transparent: It is just a phase-space integral giving the total rate of scattering of an exciton with momentum $\textbf{Q}$ by a CB Fermi sea electron. The prefactor is determined by the scattering phase shift $\delta$.
At least for sufficiently small momenta $\textbf{Q}$, the integral in Eq.~(\ref{imende}) can be straightforwardly computed. For the most important case $\textbf{Q} = 0$, one obtains for small energies (see Appendix~\ref{phasespacesec}):
\begin{align}
\label{correctnum}
\text{Im}[\Sigma_\text{exc}](\textbf{Q}=0,\omega) \sim -\alpha^2 \frac{1}{\sqrt{\beta\mu}} \theta(\omega) \omega^{3/2}, \quad \omega \ll \beta\mu,
\end{align}
where we suppressed an irrelevant prefactor of order one.
For $\omega \gg \beta\mu$ one recovers the infinite mass case as in (\ref{Sigmanice}).
Compared to the infinite mass case, where $\text{Im}[\Sigma_{\text{exc}}]\sim \omega\ln(\omega)$, the self-energy (\ref{correctnum}) shows a suppression of the low-frequency scattering phase space, as reflected in the larger power-law exponent.
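This suppression can be checked by direct Monte Carlo sampling of the phase-space integral in Eq.~(\ref{imende}) at $\textbf{Q}=0$. A minimal sketch in units $2m=1$, $k_F=1$ (so that $\mu=1$ and the exciton recoil is $\approx\beta q^2$), with an assumed mass ratio $\beta$; the histogram of scattering energies should scale as $\omega^{3/2}$ for $\omega\ll\beta\mu$ (statistics in the smallest bins are rough, so a large sample is needed):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
beta, n = 0.2, 4_000_000                # mass ratio (assumed), sample size

# k1 uniform in the Fermi disk, k2 uniform in an annulus above k_F
r1 = np.sqrt(rng.uniform(0.0, 1.0, n))
r2 = np.sqrt(rng.uniform(1.0, 4.0, n))
t1, t2 = rng.uniform(0.0, 2.0 * np.pi, (2, n))
qx = r2 * np.cos(t2) - r1 * np.cos(t1)
qy = r2 * np.sin(t2) - r1 * np.sin(t1)

# energy cost of a scattering event at Q = 0: recoil + pair energy
E = beta * (qx**2 + qy**2) + (r2**2 - r1**2)

hist, edges = np.histogram(E, bins=100, range=(0.0, beta))
wc = 0.5 * (edges[1:] + edges[:-1])
sel = hist > 0
slope = np.polyfit(np.log(wc[sel]), np.log(hist[sel]), 1)[0]
print("phase-space exponent:", slope)   # expect ~ 3/2
\end{verbatim}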
Physically, the phase space suppression is understood as follows: We have found that, after accounting for the exchange diagrams, it is admissible to view the exciton as elementary particle with mass $M_\text{exc}$, which interacts with the Fermi sea with an effective interaction strength $\alpha$ [Eq.~(\ref{alphaisphase})]. As can be seen from Fig.~\ref{recoilenergy}, scatterings of the exciton with CB electrons involving a large momentum transfer necessarily cost a finite amount of energy (the so-called recoil energy). By contrast, in the infinite mass case such scatterings could still happen at infinitesimal energy cost, since the exciton dispersion was flat. Thus, the finite-mass phase space is reduced as compared to the infinite mass case.
This change eventually leads to the previously asserted reappearance of the exciton delta peak.
\begin{figure}[H]
\centering
\includegraphics[width=\columnwidth]{fig13.pdf}
\caption{(Color online) Scattering process of an exciton by a CB electron with large momentum transfer. The lower band represents the exciton dispersion. The scattering significantly increases the exciton energy.}
\label{recoilenergy}
\end{figure}
This phase space reduction also affects the exciton spectral function, and hence the absorption: We first restrict ourselves to the leading behavior, i.e., we disregard any small renormalizations that arise from including
$\text{Re}[\Sigma_{\text{exc}}]$ or from higher-loop corrections. Inserting Eq.~(\ref{correctnum}) into
Eq.\ (\ref{dressedexcwithsigma}) we then obtain, for small energies $\omega$:
\begin{align}
\label{oneoversqrt}
A(\textbf{Q} &= 0, \omega) \simeq - \Delta^2 \frac{\text{Im}[\Sigma(\omega)]}{\omega^2} \sim \Delta^2 \alpha^2 \frac{\theta(\omega)}{\sqrt{\beta\mu\cdot \omega}},
\end{align}
with
\begin{align}
\Delta^2 &= \frac{d_0^2\rho E_B}{g^2}. \label{Deltadef}
\end{align}
The factor $\Delta$ (with units of energy) determines the polariton splitting at zero detuning, and will be discussed in Sec.~\ref{Polariton properites sec}.
The $1/\sqrt{\omega}$ divergence seen in (\ref{oneoversqrt}) was also found by Rosch and Kopp using a path-integral approach \cite{PhysRevLett.75.1988} for a related problem, that of a heavy hole propagating in a Fermi sea. In addition, Rosch and Kopp find a quasiparticle delta peak with a finite weight. This peak can also be recovered within our approach upon inclusion of the correct form of $\text{Re}[\Sigma_{\text{exc}}]$.
From Eqs.~(\ref{Resimplified}) and (\ref{oneoverI}) we may infer it to be
\begin{align}
\label{Reinfer}
\text{Re}[\Sigma_{\text{exc}}(\textbf{Q}=0,\omega)] = \alpha^2 \omega \ln\left(\frac{\sqrt{\omega^2 + (\beta\mu)^2}}{\mu}\right),
\end{align}
where we have rewritten the maximum function with logarithmic accuracy using a square root.
This cut-off of logarithmic singularities (which are responsible for edge power laws) by recoil effects is a generic feature of our model,
and will reoccur in the regime of $\mu \gg E_B$ presented in
Sec.~\ref{Photon self-energy large mu sec}. In qualitative terms, this is also discussed in Ref.\ \cite{Nozi`eres1994} (for arbitrary dimensions).
Our results are in full agreement with this work.
We may now deduce the full photon self-energy $\Pi_{\text{exc}}$ as follows: In the full finite-mass version of the power law (\ref{Nozieresresult}), the real part of the logarithm in the exponent will be replaced by the cut-off logarithm from Eq.~(\ref{Reinfer}). The imaginary part of this logarithm will be some function $f(\omega)$ which continuously interpolates between the finite-mass regime for $\omega \ll \beta \mu$ [given by Eq.~(\ref{correctnum}) times $\omega^{-1}$], and the infinite mass regime for $\omega \gg \beta\mu$.
Therefore, we arrive at
\begin{align}
\label{Piexcfinitemass}
&\Pi_{\text{exc}}(\textbf{Q} = 0,\omega) = \\ &\frac{\Delta^2}{\omega+i0^+} \exp \left[\alpha^2 \left(\ln\left(\frac{\sqrt{\omega^2 + (\beta\mu)^2}}{\mu}\right) - if(\omega)\right)\right] \nonumber,
\end{align}
where
\begin{align}
&f(\omega) =
\begin{cases}
\pi \sqrt{\frac{\omega}{\beta\mu}} \theta(\omega) \quad &\omega \ll \beta\mu \\
\pi \quad &\omega \gg \beta\mu.
\end{cases}
\end{align}
It is seen by direct inspection that (\ref{Piexcfinitemass}) has a delta peak at $\omega =0$ with weight $\Delta^2 \beta^{\alpha^2}$.
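Indeed, for $|\omega|\ll\beta\mu$ one has $\sqrt{\omega^2+(\beta\mu)^2}\to\beta\mu$ and $f(\omega)\to 0$, so that
\begin{align}
\Pi_{\text{exc}}(\textbf{Q}=0,\omega) \simeq \frac{\Delta^2}{\omega+i0^+}\,e^{\alpha^2\ln\left(\beta\mu/\mu\right)} = \frac{\Delta^2\beta^{\alpha^2}}{\omega+i0^+},
\end{align}
whose imaginary part is $-\pi\Delta^2\beta^{\alpha^2}\delta(\omega)$.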
One can also assess the weight of the delta peak by comparing the spectral weights of the exciton spectral function in the infinite and finite mass cases:
the weight of the delta peak must correspond to the spectral weight that is removed from the absorption power law
once $\beta$ becomes finite.
In the infinite mass case, the absorption scales as
\begin{align}
A_\infty(\omega)\sim\frac{\Delta^2 \alpha^2}{\omega} \left(\frac{\omega}{\mu}\right)^{\alpha^2}\theta(\omega),
\end{align}
as follows from Eq.~(\ref{Nozieresresult}) above.
Thus, the spectral weight in the relevant energy region is given by
\begin{align}
\label{masspoleweight}
\int_0^{\beta\mu} d\omega A_{\infty}(\omega) = \Delta^2 \beta^{{\alpha}^2}.
\end{align}
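This follows since the integrand scales as $\omega^{\alpha^2-1}$:
\begin{align}
\int_0^{\beta\mu} d\omega\, \Delta^2\alpha^2\,\mu^{-\alpha^2}\,\omega^{\alpha^2-1} = \Delta^2\left(\frac{\beta\mu}{\mu}\right)^{\alpha^2} = \Delta^2\beta^{\alpha^2}.
\end{align}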
In contrast, using Eq.~(\ref{correctnum}), the spectral weight of the finite mass case is
\begin{align}
\int_0^{\beta\mu} d\omega A(\textbf{Q}=0,\omega) = \Delta^2 \alpha^2.
\end{align}
For scattering phase shifts $\delta$ close to $\pi$ (i.e., $\alpha \rightarrow 0$), and for finite mass, $\beta>0$, a pole with weight proportional to $\beta^{\alpha^2}$ [Eq.~(\ref{masspoleweight})] at $\omega =0$ should be present in the spectrum, if $\beta$ is not exponentially small in $\alpha$.
This weight is exactly the same as for the heavy hole when computed in a second order cumulant expansion~\cite{PhysRevLett.75.1988}.
The full imaginary part of $\Pi_\text{exc}(\textbf{Q}=0,\omega)$ was already given explicitly in Eqs.~(\ref{Excgeneral}) and (\ref{Exccases}), and plotted in Fig.~\ref{finmasssmallmu1}.
That plot illustrates the main conclusion of this section: For finite mass, Fermi sea excitations with large momentum transfer are energetically unfavorable, and are therefore absent from the absorption power law. As a result, the pole-like features of the absorption are recovered.
\subsection{Validity of the electron-hole correlator as a photon self-energy}
Let us now assess the validity of the expressions for the CB electron-VB hole correlator [Eqs.~(\ref{Nozieresresult}) and (\ref{Piexcfinitemass})] as a photon self-energy. Using them, one assumes that only electron-hole interactions within one bubble are of relevance, and electron-hole interactions connecting two bubbles (an example is shown in Fig.~\ref{twobubbles}) can be disregarded.
\begin{figure}[H]
\centering
\includegraphics[width=\columnwidth]{fig14.pdf}
\caption{Two dressed bubbles, connected by one electron-hole interaction (dotted line).
This is an example of a photon self-energy diagram that is not contained in our approximation for $\Pi(\textbf{Q},\omega)$.}
\label{twobubbles}
\end{figure}
The regime where such an approximation is valid may be inferred from the following physical argument:
Electronic processes (i.e., electron-hole interactions) happen on the Fermi time scale $1/\mu$. On the other hand, the time scale for the emission and reabsorption of a photon (which is the process separating two bubbles) is given by $1/\rho d_0^2$ (where $d_0$ is the dipole matrix element). If the second scale is much larger than the first one, electrons and holes in distinct bubbles do not interact. Thus, our approach is valid as long as
\begin{align}
\label{M0smallerthanmu}
\rho d_0^2 \ll \mu.
\end{align}
Under this condition, the following physical picture is applicable: an exciton interacts with the Fermi sea, giving rise to a broadened exciton, which in turn couples to the cavity photons. When Eq.~(\ref{M0smallerthanmu}) is violated, one should think in different terms: excitons couple to photons, leading to exciton-polaritons. These then interact with the Fermi sea. The second scenario is, however, beyond the scope of this paper.
The above discussion is likewise valid for the regime of large Fermi energy, which is studied below.
\section{Electron-hole correlator for large Fermi energy}
\label{Photon self-energy large mu sec}
We now switch to the opposite regime, where $\mu \gg E_B$, and excitons are not well-defined.
For simplicity, we also assume that $\mu$ is of the order of the CB bandwidth.
Hence, $E_B \ll \mu \simeq \xi$.
Within our simplified model, the finite mass problem in 3D was solved in \cite{gavoret1969optical}. This treatment can be straightforwardly carried over to 2D \cite{Pimenov2015}. To avoid technicalities, we will, however, just show how to obtain the 2D results in a ``Mahan guess'' approach~\cite{PhysRev.163.612}, matching known results from~\cite{PhysRevB.35.7551}. To this end, we will first recapitulate the main ingredients of the infinite mass solution.
\subsection{Infinite hole mass}
The FES builds up at the Burstein-Moss shifted threshold $\Omega_T^{\text{FES}} = E_G + \mu$.
Its diagrammatic derivation relies on a weak-coupling ansatz: The parameter $g = \rho V_0$ is assumed to be small. As seen from Eq.~(\ref{gestimate}), this is indeed true for $\mu \gg E_0$.
In principle, below the FES there will still be the exciton peak; however, this peak will be broadened into a weak power law, and thus merge with the FES. For finite mass (see below), the position of the would-be exciton may even lie inside the FES continuum, which makes the exciton disappear completely. What is more, the exciton weight, being proportional to $E_B$, is exponentially small in $g$ (since $\mu \simeq \xi$). We may therefore safely disregard the exciton altogether (see also the discussion in Appendix \ref{muincapp}).
To leading order in $g\ln(\omega/\mu)$, the dominant contribution comes from the so-called ``parquet'' diagrams, containing all possible combinations of ladder and crossed diagrams~\cite{PhysRev.178.1072, NOZIERES1969}.
The value of the pure ladder diagrams is given by Eq.~(\ref{ladder contribution}), with $\Omega - E_G$ replaced by $\omega = \Omega -\Omega_T^{\text{FES}}$.
The lowest-order crossed diagram is shown in Fig.~\ref{crossed_infmass}.
With logarithmic accuracy the contribution of this diagram is easily computed:
\begin{align}
\Pi_{\text{crossed}}
= -\frac{1}{3}d_0^2\rho g^2 \left[\ln(\omega/\mu)\right]^3.
\end{align}
This is $-1/3$ times the contribution of the second order ladder diagram, cf.\ Eq.~(\ref{ladder contribution}). Thus, the ladder and crossed channels partially cancel each other, a feature which persists to all orders. This also shows that the FES is qualitatively different from the broadened exciton discussed in the previous section: now the exciton effects (ladder diagrams) and the Fermi sea shakeup (crossed diagrams) have to be treated on equal footing.
\begin{figure}[H]
\centering
\includegraphics[width=.6\columnwidth]{fig15.pdf}
\caption{Lowest order crossed diagram contributing to the FES.}
\label{crossed_infmass}
\end{figure}
In his original paper Mahan computed all leading diagrams to third order and guessed the full series from an exponential ansatz~\cite{PhysRev.163.612}. The corresponding result for the photon self-energy $\Pi_{\text{FES}}(\omega)$ reads
\begin{align}
\label{Mahanresult}
\Pi_{\text{FES}}(\omega) = \frac{d_0^2\rho}{2g}\left(1-\exp\left[-2g\ln\left(\frac{\omega+i0^+}{-\mu}\right)\right]\right).
\end{align}
Relying on coupled Bethe-Salpeter equations in the two channels (ladder and crossed), Nozi\`{e}res \textit{et al.}\ then summed all parquet diagrams, where a bare vertex is replaced by (anti-)parallel bubbles any number of times~\cite{PhysRev.178.1072, NOZIERES1969}. The result corresponds exactly to Mahan's conjecture, Eq.~(\ref{Mahanresult}).
By the standard FES identification
$\delta/\pi = g + \mathcal{O}(g^3)$, the power law in Eq.~(\ref{Mahanresult}) coincides with the one given in Eq.~(\ref{Nozieresresult}); the phase shift is now small.
One should also point out that the peaks in the spectra in the regimes of small $\mu$ (Fig.~\ref{finmasssmallmu1}) and large $\mu$ (Fig.~\ref{FEScomp}) are not continuously connected, since the FES arises from the continuous threshold, whereas the exciton does not.
Let us finally note that since $\mu$ is a large scale,
Eq.~(\ref{Mahanresult}) should be a good approximation for the
photon self-energy, since the condition (\ref{M0smallerthanmu}) is easily satisfied.
\subsection{Finite hole mass}
\label{FESfiniteholemasssubseq}
As in the regime of the exciton, in the finite mass case the result~(\ref{Mahanresult}) will be modified due to the recoil energy $\beta\mu$. However, it will now be the \textit{VB hole} recoil (or the hole lifetime, see below) instead of the exciton recoil --- the latter is meaningless since the exciton is no longer a well-defined entity. This point is crucial:
Since CB states with momenta smaller than $k_F$ are occupied, VB holes created by the absorption of zero-momentum photons must have momenta larger than $k_F$. Therefore, the hole energy can actually be lowered by scatterings with the Fermi sea that change the hole momenta to some smaller value, and these scattering processes will cut off the sharp features of $\Pi_{\text{FES}} (\omega)$.
The actual computation of the photon self-energy with zero photon momentum, $\Pi_{\text{FES}}(\textbf{Q}=0,\omega)$, proceeds in complete analogy to the 3D treatment of~\cite{gavoret1969optical}. Limiting ourselves to the ``Mahan guess'' for simplicity, the main steps are as follows.
The first major modification is the appearance of two thresholds: as is easily seen from the calculation of the ladder diagrams, the finite mass entails a
shift of the pole of the logarithm from $\omega =0$ to $\omega = \beta\mu$, which is the minimal energy for direct transitions obeying the Pauli principle. Correspondingly, $\omega_D=\beta\mu$ is called the direct threshold. Near this threshold, logarithmic terms can be large, and a non-perturbative resummation of diagrams is required. However, the true onset of 2DEG absorption will actually be the indirect threshold $\omega_I=0$. There, the valence band hole will have zero momentum, which is compensated
by a low-energy conduction electron-hole pair, whose net momentum is $-k_F$. The two thresholds were shown in Fig.~\ref{twothresholds}.
It should be noted that for $E_B < \beta\mu$
the exciton energy, $\approx \omega_D - E_B$,
lies between $\omega_I$ and $\omega_D$. Hence, in this case the exciton overlaps with the continuum and is completely lost.
Near $\omega_I$, the problem is completely perturbative. In leading (quadratic) order in $g$, the absorption is determined by two diagrams only. The first one is the crossed diagram of Fig.~\ref{crossed_infmass}. The second one is shown in Fig.~\ref{omega3self}.
When summing these two diagrams, one should take into account spin, which will simply multiply the diagram of Fig.~\ref{omega3self} by a factor of two (if the spin is disregarded, the diagrams will cancel in leading order). Up to prefactors of order one, the phase-space restrictions then result in a 2DEG absorption~(see \cite{PhysRevB.35.7551} and Appendix~\ref{phasespacesec}):
\begin{align}
\label{Abspowerlaw}
A(\textbf{Q} = 0,\omega) = d_0^2 g^2 \left(\frac{\omega}{\beta\mu}\right)^3 \theta(\omega).
\end{align}
The phase space power law $\omega^3$ is specific to 2D. Its 3D counterpart has a larger exponent, $\omega^{7/2}$ \cite{PhysRevB.35.7551}, due to an additional restriction of an angular integration.
\begin{figure}[H]
\centering
\includegraphics[width=.6\columnwidth]{fig16.pdf}
\caption{(Color online) Second diagram (in addition to Fig.~\ref{crossed_infmass}) contributing to the absorption at the indirect threshold $\omega_I$. The blue ellipse marks the VB self-energy insertion used below.}
\label{omega3self}
\end{figure}
Let us now turn to the vicinity of
$\omega_D$, where one has to take into account the logarithmic singularities and the finite hole lifetime in a consistent fashion. Regarding the latter,
one can dress all VB lines with self-energy diagrams as shown in Fig.~\ref{omega3self}. The self-energy insertion at the dominant momentum $k = k_F$ reads
\begin{align}
\label{VBself-energy}
\text{Im}[\Sigma_{\rm{VB}}(k_F, \omega)] = \frac{1}{\sqrt{3}}\theta(\omega) g^2 \beta\mu \frac{\omega^2}{(\beta\mu)^2}, \quad \omega \ll \beta\mu.
\end{align}
As can be shown by numerical integration, this expression reproduces the correct order of magnitude for $\omega = \beta\mu$, such that it can be safely used in the entire interesting regime $\omega \in [0,\beta\mu]$. The power law in Eq.~(\ref{VBself-energy}) is again specific to 2D. In contrast, the order of magnitude of the inverse lifetime is universal,
\begin{align}
\label{imselfhole}
\text{Im}[\Sigma_{\rm{VB}}(k_F, \beta\mu)] \sim g^2\beta\mu.
\end{align}
Disregarding the pole shift arising from $\text{Re}[\Sigma]$, the self-energy (\ref{imselfhole}) can be used to compute the ``dressed bubble'' shown in Fig.~\ref{dressedbubble}.
With logarithmic accuracy, the dressed bubble can be evaluated analytically. In particular, its real part
reads:
\begin{align}
\label{relogdressed}
\text{Re}\left[\Pi_{\text{db}}\right](\omega) \simeq
{\rho d_0^2}\ln\left(\frac{\sqrt{(\omega - \beta\mu)^2 + \left(g^2 \beta\mu \right)^2}}{\mu}\right).
\end{align}
This is just a logarithm whose low-energy divergence is cut by the VB hole lifetime, in full analogy to Eq.~(\ref{Reinfer}), and in agreement with Ref.\ \cite{Nozi`eres1994}.
\begin{figure}[H]
\centering
\includegraphics[width=.5\columnwidth]{fig17.pdf}
\caption{The CB electron-VB hole bubble, with the hole propagator dressed by the self-energy, Eq.~(\ref{imselfhole}).}
\label{dressedbubble}
\end{figure}
For the computation of polariton spectra later on, it turns out to be more practical to obtain both the real and the imaginary parts of $\Pi_{\text{db}}(\omega)$ by numerically integrating the approximate form \cite{Pimenov2015}:
\begin{align}
\label{contclosed}
&\Pi_{\text{db}}(\omega) \simeq \\&
\notag
\frac{d_0^2}{(2\pi)^2}\hspace{-.8em}\int\displaylimits_{k > k_F}\hspace{-.8em}d\textbf{k} \frac{1}{\omega - (\epsilon_{\textbf{k}}-\mu) - \frac{k^2}{2M} + i\text{Im}[\tilde{\Sigma}_{\rm{VB}}(\omega - \epsilon_\textbf{k} + \mu)]}, \\& \notag
\text{Im}[\tilde{\Sigma}_{\rm{VB}}(x)]= \begin{cases} \tfrac{g^2}{\sqrt{3}}\theta(x) \frac{x^2}{(\beta\mu)} & x<
\beta\mu\\
\tfrac{g^2}{\sqrt{3}}\beta\mu & x>\beta\mu,
\end{cases}
\end{align}
to avoid unphysical spikes arising from the leading logarithmic approximation. A corresponding plot of $-\text{Im}\left[\Pi_{\text{db}}\right]$ is shown in Fig.~\ref{Imdressedbubbleplot}.
The numerically evaluated $-\text{Im}\left[\Pi_{\text{db}}\right]$ reduces to the correct power law (\ref{Abspowerlaw}) in the limit $\omega \rightarrow 0$, and approaches the infinite mass value $d_0^2\rho\pi$ at large frequencies.
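A minimal sketch of this numerical integration at $\textbf{Q}=0$, in units $2m=k_F=1$ (so $\mu=1$), with an assumed momentum cutoff $k_{\max}$ standing in for the finite bandwidth and illustrative parameter values ($n_k$ must be large enough to resolve the small hole linewidth):
\begin{verbatim}
import numpy as np

g, beta, d0, kmax = 0.25, 0.2, 1.0, 4.0   # illustrative values

def sigma_vb(x):
    # hole self-energy insertion, cases as in Eq. (contclosed)
    x = np.asarray(x, dtype=float)
    return (g**2 / np.sqrt(3)) * np.where(
        x <= 0.0, 0.0, np.where(x < beta, x**2 / beta, beta))

def pi_db(omega, nk=20000):
    k = np.linspace(1.0, kmax, nk)        # k > k_F = 1
    eps = k**2 - 1.0                      # epsilon_k - mu
    denom = omega - eps - beta * k**2 + 1j * sigma_vb(omega - eps)
    # angular integral is trivial at Q = 0: d^2k -> 2 pi k dk
    return d0**2 * np.trapz(2.0 * np.pi * k / denom, k) / (2.0 * np.pi)**2

ws = np.linspace(0.0, 1.0, 400)
absorption = np.array([-pi_db(w).imag for w in ws])   # cf. Fig. 18
\end{verbatim}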
Higher-order diagrams will contain higher powers of the rounded logarithm~(\ref{relogdressed}). The parameter controlling the leading log scheme now reads
\begin{align}
l\equiv g\ln(\beta g^2).
\end{align}
One can distinguish different regimes of $l$.
The simplest is
$l \ll 1$, which holds in the limit $g \rightarrow 0$ (or, put differently, if $\beta$ is not exponentially small in $g$). In this limit, no singularity is left. The large value of the Fermi energy (small $g$) and the large value of the hole decay $\beta\mu$ have completely overcome all interaction-induced excitonic effects. A decent approximation to the 2DEG absorption is then already given by the imaginary part of the dressed bubble. Fig.~\ref{Imdressedbubbleplot} shows the corresponding absorption.
\begin{figure}[H]
\centering
\includegraphics[width=\columnwidth]{fig18.pdf}
\captionsetup[justification]{justified}
\caption{(Color online) Imaginary part of the dressed bubble for two values of $g$, obtained from numerical integration of $\Pi_{
\text{db}}$, using the hole self-energy insertion of (\ref{VBself-energy}).}
\label{Imdressedbubbleplot}
\end{figure}
The more interesting regime corresponds to $g\ln(\beta g^2) \gtrsim 1$, where arbitrary numbers of conduction band excitations contribute alike to the absorption
\footnote{The regime of $g\ln(\beta g^2) \gg 1$ is out of reach for the methods used in~\cite{gavoret1969optical}. To study it, a consistent treatment of the divergences is needed, similar to~\cite{NOZIERES1969}. We will not attempt this here.}. A non-perturbative summation is needed, which is, however, obstructed by the following fact:
As found by straightforward computation, the crossed diagrams are not only cut by $g^2\beta\mu$ due to the hole decay, but also acquire an inherent cutoff of order $\beta\mu$ due to the hole recoil. A standard parquet summation is only possible in a regime where these two cutoffs cannot be distinguished with logarithmic accuracy, i.e.\ where $\beta \ll g^2$. For small enough $g$ this will, however, always be the case in the truly non-perturbative regime where $\beta$ must be exponentially small in $g$.
As a result of these considerations, the logarithms of the parquet summation have to be replaced by the cut-off logarithms~(\ref{relogdressed}), with $g^2\beta\mu$ replaced by $\beta\mu$. The imaginary part of the logarithm is then given by the function plotted in Fig.~\ref{Imdressedbubbleplot}.
The resulting full photon self-energy in the non-perturbative FES regime reads:
\begin{align}
\label{Pillow}
\Pi_\text{FES}(\textbf{Q}=0,\omega) &\simeq -\frac{d_0^2\rho}{2g}\left(\exp\left[-2g\left(\frac{\Pi_{\text{db}}(\omega)}{\rho d_0^2}\right)\right] -1\right).
\end{align}
A sketch of $\text{Im} \left[\Pi_{\text{FES}}\right]$ is shown in Fig.~\ref{FEScomp}.
\section{Polariton properties}
\label{Polariton properites sec}
When the cavity energy $\omega_c$ is tuned into resonance with the excitonic 2DEG transitions, the matter and light modes hybridize, resulting in two polariton branches. We will now explore their properties in the different regimes.
\subsection{Empty conduction band}
To gain some intuition, it is first useful to recapitulate the properties of the exciton-polariton in the absence of a Fermi sea. Its (exact) Green's function is given by Eq.~(\ref{dressedphot}), with
$\omega_\textbf{Q=0} = \omega_c$ and $\Pi(\omega) = \Delta^2/{(\omega+i0^+)}
$, where $\Delta$ is a constant (with units of energy) which determines the polariton splitting at zero detuning. In terms of our exciton model, one has $\Delta = \sqrt{d_0^2\rho E_B/g^2}$. The energy $\omega$ is measured from the exciton pole.
A typical density plot of the polariton spectrum $A_p = -\text{Im}\left[D^R(\omega,\omega_c)\right]/\pi$, corresponding to optical (absorption) measurements as e.g.\ found in \cite{Smolka}, is shown in Fig.\ \ref{pureexcitonpolariton}.
A finite cavity photon linewidth $\Gamma_c = \Delta$ is used.
The physical picture is transparent: the bare excitonic mode (corresponding to the vertical line) and the bare photonic mode repel each other, resulting in a symmetric avoided crossing of two polariton modes.
For analytical evaluations, it is more instructive to consider an infinitesimal cavity linewidth $\Gamma_c$. The lower and upper polaritons will then appear as delta peaks in the polariton spectral function, at positions
\begin{align}
\omega_\pm = \frac{1}{2} \left(\omega_c \pm \sqrt{\omega_c^2 + 4\Delta^2}\right),
\end{align}
and with weights
\begin{align}
\label{Weightsexcitonexact}
W_\pm = \frac{1}{1 + \frac{4 \Delta^2}{(\omega_c \pm \sqrt{4 \Delta^2 + \omega_c^2})^2}}.
\end{align}
We note that the maximum of the polariton spectra scales as $1/\Gamma_c$ for finite $\Gamma_c$.
Our spectral functions are normalized such that the total weight is unity. From Eq.~(\ref{Weightsexcitonexact}) it is seen that the weight of the ``excitonic'' polaritons (corresponding to the narrow branches of Fig.~\ref{pureexcitonpolariton}) decays as $\Delta^2/\omega_c^2$ for large absolute values of $\omega_c$.
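These statements are easily verified numerically. A minimal sketch, assuming the standard pole structure $D^R(\omega)=\left[\omega-\omega_c-\Delta^2/(\omega+i0^+)\right]^{-1}$ consistent with the pole positions above, so that $W_\pm$ are the residues at $\omega_\pm$:
\begin{verbatim}
import numpy as np

Delta = 1.0

def poles_and_weights(wc):
    s = np.sqrt(wc**2 + 4.0 * Delta**2)
    w_pm = 0.5 * (wc + np.array([s, -s]))       # upper, lower polariton
    W_pm = 1.0 / (1.0 + Delta**2 / w_pm**2)     # residues of D^R
    return w_pm, W_pm

for wc in (-5.0, 0.0, 5.0):
    w_pm, W_pm = poles_and_weights(wc)
    print(wc, w_pm, W_pm, W_pm.sum())           # weights add up to unity
\end{verbatim}
For large $|\omega_c|$, the printed weights exhibit the $\Delta^2/\omega_c^2$ decay of the excitonic branch quoted above.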
\begin{figure}[H]
\centering
\includegraphics[width=\columnwidth]{fig19.pdf}
\caption{(Color online) $\mu = 0$: Exciton-polariton spectrum as function of cavity detuning $\omega_c$ and energy $\omega$, measured in units of the half polariton splitting $\Delta$, with
$\Gamma_c = \Delta$.
}
\label{pureexcitonpolariton}
\end{figure}
\subsection{Large Fermi energy}
Let us study polariton properties in the presence of a Fermi sea. Reversing the order of presentation taken previously in the paper, we first turn to the regime of large Fermi energy, $E_B \ll \mu$.
This is because for $E_B \ll \mu$ the inequality $\rho d_0^2 \ll \mu$~(\ref{M0smallerthanmu}) is more easily satisfied than in the opposite limit of $E_B \gg \mu$, facilitating experimental realization. We compute the polariton properties using the electron-hole correlators as cavity photon self-energy.
A similar approach was applied recently by Averkiev and Glazov~\cite{PhysRevB.76.045320}, who computed cavity transmission coefficients semiclassically, phenomenologically absorbing the effect of the Fermi-edge singularity into the dipole matrix element. Two further recent treatments of polaritons for nonvanishing Fermi energies are found in \cite{PhysRevB.89.245301} and \cite{baeten2015mahan}. In the first numerical paper \cite{PhysRevB.89.245301}, the Fermi-edge singularity as well as the excitonic bound state are accounted for, computing the electron-hole correlator as in~\cite{combescot1971infrared}, but an infinite mass is assumed. The second paper~\cite{baeten2015mahan} is concerned with finite mass. However, the authors only use the ladder approximation and neglect the crossed diagrams, partially disregarding the physical ingredients responsible for the appearance of the Fermi-edge power laws. We aim here to bridge these gaps and describe the complete picture in the regime of large Fermi energy (before turning to the opposite regime of $\mu \ll E_B$).
In the infinite mass limit we will use Eq.~(\ref{Mahanresult}) as the photon self-energy. It is helpful to explicitly write down the real and imaginary parts of the self-energy in leading order in $g$:
\begin{align}
\label{ReFermiInfinite}
\text{Re}\left[\Pi_{\text{FES}}\right](\omega) &= \tilde{\Delta} \left(1-\left(\frac{|\omega|}{\mu }\right)^{-2g}\right),\\
\label{AbsFermiInfinite}
\text{Im}\left[\Pi_{\text{FES}}\right](\omega) &= - \tilde{\Delta} \cdot 2\pi g\left(\frac{\omega}{\mu }\right)^{-2g}\theta(\omega), \\
\label{deltatildedef}
\tilde{\Delta} &\equiv \frac{d_0^2\rho}{2g},
\end{align}
where we have introduced the parameter $\tilde{\Delta}$, which determines the splitting of the polaritons, playing a similar role to $\Delta$ in the previous case of an empty CB. In the following, $\tilde{\Delta}$ will serve as the unit of energy.
For a cavity linewidth $\Gamma_c = 1\tilde{\Delta}$, a typical spectral plot of the corresponding ``Fermi-edge polaritons'' is shown in Fig.~\ref{Fermipolaritoninfinitemass}. It is qualitatively similar to the results of~\cite{PhysRevB.76.045320}.
A quantitative comparison to the empty CB case is obviously not meaningful due to the appearance of the additional parameters $\mu$ (units of energy) and $g$ (dimensionless). Qualitatively, one may say the following: The lower polariton is still a well-defined spectral feature. For zero cavity linewidth (see below), its lifetime is infinite. The upper polariton, however, is sensitive to the high-energy tail of the 2DEG absorption power law~(\ref{AbsFermiInfinite}), and can decay into the continuum of CB particle-hole excitations. Its linewidth is therefore strongly broadened. Only when the 2DEG absorption is cut off by finite bandwidth effects (i.e., away from the Fermi edge) does a photonic-like mode reappear in the spectrum (seen in the upper right corner of Fig.~\ref{Fermipolaritoninfinitemass}).
\begin{figure}[H]
\centering
\includegraphics[width=\columnwidth]{fig20.pdf}
\caption{(Color online) $\mu \gg E_B$: Infinite hole mass Fermi-edge-polariton spectrum $A_p(\omega,\omega_c)$ as function of cavity detuning $\omega_c$ and energy $\omega$, measured in units of the effective splitting $\tilde{\Delta}$. It was obtained by inserting Eqs.~(\ref{ReFermiInfinite}) and~(\ref{AbsFermiInfinite}) into Eq.~(\ref{Polaritonspectralfunction}). Parameter values: $\mu = 30\tilde{\Delta}$, $\Gamma_c = 1\tilde{\Delta}$, and $g=0.25$.}
\label{Fermipolaritoninfinitemass}
\end{figure}
For more detailed statements, one can again consider the case of vanishing cavity linewidth $\Gamma_c$. A spectral plot with the same parameters as in Fig.~\ref{Fermipolaritoninfinitemass}, but with small cavity linewidth, $\Gamma_c = 0.01 \tilde{\Delta}$, is shown in Fig.~\ref{combined_spectra}(a).
\begin{figure}[H]
\centering
\includegraphics[width=\columnwidth]{fig21.pdf}
\caption{(Color online) $\mu \gg E_B$:
(a) Fermi-edge-polariton spectrum with the same parameters as in Fig.~\ref{Fermipolaritoninfinitemass}, but $\Gamma_c=0.01\tilde{\Delta}$. The white dashed lines denote the location of the spectral cuts presented in Fig.~\ref{combined_cuts}.
(b) Spectrum with a nonzero mass-ratio $\beta = 0.2$, and otherwise the same parameters as in (a). This plot was obtained by inserting the finite mass photon self-energy of Eq.~(\ref{Pillow}) into Eq.~(\ref{Polaritonspectralfunction}), with $\omega_c$ replaced by $\omega_c + \beta\mu$ to make sure that the cavity detuning is measured from the \textit{pole} of the photon self-energy. Note that the frequency range of panel~(b) is shifted as compared to~(a).}
\label{combined_spectra}
\end{figure}
\begin{figure*}[!]
\centering
\includegraphics[width=\textwidth]{fig22.pdf}
\caption{(Color online) $\mu \gg E_B$: Spectral cuts at fixed cavity detuning through the polariton spectra of Fig.~\ref{combined_spectra}, for both infinite (continuous blue lines) and finite (dashed orange lines) hole mass.
(a) Large negative cavity detuning. The dotted vertical line always indicates the position of the direct threshold at $\omega = \beta\mu$. The inset
is a zoom-in on the absorption onset at the indirect threshold.
(b) Zero cavity detuning.
(c) Large positive cavity detuning.
}
\label{combined_cuts}
\end{figure*}
We first examine the lower polariton (assuming zero linewidth), which is a pure delta peak. Its position is determined by the requirement
\begin{align}
\label{findlowerpole}
\omega-\omega_c - \text{Re}\left[\Pi_\text{FES}(\omega)\right] = 0.
\end{align}
One may study the solution of this equation in three distinct regimes, corresponding to $\omega_c \rightarrow -\infty$, $\omega_c = 0$, and $\omega_c \rightarrow + \infty$.
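Before discussing these regimes, note that the pole position is easily obtained numerically from Eqs.~(\ref{findlowerpole}) and (\ref{ReFermiInfinite}). A minimal sketch (energies in units of $\tilde{\Delta}$, parameters as in Fig.~\ref{Fermipolaritoninfinitemass}; the root always lies at $\omega<0$, where the pole equation is monotonic):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

g, mu = 0.25, 30.0                      # mu in units of tilde-Delta

def re_pi(w):                           # Eq. (ReFermiInfinite)
    return 1.0 - (abs(w) / mu)**(-2.0 * g)

def lower_polariton(wc):
    # pole condition, Eq. (findlowerpole), searched on w < 0
    return brentq(lambda w: w - wc - re_pi(w), -1e4, -1e-12)

for wc in (-20.0, 0.0, 20.0):
    print(wc, lower_polariton(wc))
\end{verbatim}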
For $\omega_c \rightarrow - \infty$, the solution of Eq.~(\ref{findlowerpole}) approaches $\omega = \omega_c$, and the lower polariton acquires the full spectral weight (unity): For strong negative cavity detunings, the bare cavity mode is probed. The corresponding spectral cut is shown in Fig.~\ref{combined_cuts}(a) (continuous line). We will refrain from making detailed statements about the way the bare cavity mode is approached, since this would require the knowledge of the photon self-energy at frequencies far away from the threshold.
As the cavity detuning is decreased, the lower polariton gets more matter-like. At zero detuning [see Fig.~\ref{combined_cuts}(b)], and for $g$ not too small (w.r.t.\ $g\tilde{\Delta}/\mu$), the weight of the lower polariton is approximately given by $1/(1+2g)$.
For large positive cavity detunings [see Fig.~\ref{combined_cuts}(c)], the position of the matter-like lower polariton approaches $\omega=0$,
\begin{align}
\label{peaklargewc}
\omega \sim -\omega_c^{-1/(2g)} \quad \text{as} \quad \omega_c \rightarrow \infty.
\end{align}
The lower polariton weight also scales in a power-law fashion,
$\sim \omega_c^{-1-1/(2g)}$, distinct from the excitonic regime, where the weight falls off quadratically [Eq.~(\ref{Weightsexcitonexact})].
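Both asymptotics follow directly from the pole condition: for $\omega_c\rightarrow\infty$ the pole sits at small negative $\omega$, where $\text{Re}\left[\Pi_{\text{FES}}\right]\simeq-\tilde{\Delta}(|\omega|/\mu)^{-2g}$ dominates, so that
\begin{align}
\omega_c \simeq \tilde{\Delta}\left(\frac{|\omega|}{\mu}\right)^{-2g}
\quad\Rightarrow\quad
|\omega| \simeq \mu\left(\frac{\tilde{\Delta}}{\omega_c}\right)^{1/(2g)} \propto \omega_c^{-1/(2g)},
\end{align}
while the weight $W=\left[1-\partial_\omega\text{Re}\,\Pi_{\text{FES}}\right]^{-1}\propto|\omega|^{2g+1}\propto\omega_c^{-1-1/(2g)}$.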
Due to the finite imaginary part of the self-energy $\Pi_{\text{FES}}(\omega)$, the upper polariton is much broader than the lower one: the photonic mode can decay into the continuum of matter excitations. At large negative detunings [see the inset to Fig.~\ref{combined_cuts}(a)], the upper polariton has a power-law-like shape (with the same exponent as the Fermi-edge singularity), and for $\omega_c \rightarrow - \infty$ its maximum approaches $\omega = 0$ from the high-energy side. As the detuning is increased (made less negative), the maximum shifts away from $\omega=0$, approaching the free cavity mode frequency $\omega = \omega_c$ for $\omega_c \rightarrow \infty$. Since the weight and height are determined by the value of $\text{Im}[\Pi_{\text{FES}}]$ at the maximum, they increase correspondingly.
Let us now consider the case of finite mass. Using the finite mass photon self-energy (\ref{Pillow}) instead of (\ref{Mahanresult}), the Fermi-edge-polariton spectrum with a nonzero mass-ratio of $\beta = 0.2$ is plotted in Fig.~\ref{combined_spectra}(b).
Compared to the infinite mass case of Fig.~\ref{combined_spectra}(a), Fig.~\ref{combined_spectra}(b) has the following important features:
(i) The boundary line separating the lower and upper polariton branches is shifted to the high-energy side, from $\omega = 0$ in the infinite mass case to $\omega = \beta\mu$ in the finite mass case, reflecting the Burstein-Moss shift in the 2DEG absorption.
(ii) As opposed to the infinite mass case, the lower polariton is strongly broadened at large positive detunings.
These points are borne out more clearly in Fig.~\ref{combined_cuts}(a)--(c) (dashed lines), which presents cuts through Fig.~\ref{combined_spectra}(b) at fixed detuning.
The situation at large negative detuning is shown in Fig.~\ref{combined_cuts}(a): Compared to the infinite mass case, shown as full line, the polaritons are shifted towards higher energies. In addition, the shape of the upper polariton is slightly modified --- its onset reflects the convergent phase-space power law $\omega^3$ of Eq.~(\ref{Abspowerlaw}) found for the 2DEG absorption. This is emphasized in the inset.
At zero cavity detuning [Fig.~\ref{combined_cuts}(b)], the situation of the finite and infinite mass cases is qualitatively similar.
When the cavity detuning is further increased, the position of the pole-like lower polariton approaches the direct threshold at $\omega = \beta\mu$ (indicated by the vertical dotted line). When the pole is in the energy interval $[0,\beta\mu]$, the lower polariton overlaps with the
2DEG continuum absorption, and is therefore broadened. This is clearly seen in Fig.~\ref{combined_cuts}(c): instead of a sharp feature, there is just a small remnant of the lower polariton at $\omega = \beta\mu$.
As a result, one may say that in the regime of the Fermi-edge singularity, i.e., large $\mu$, the finite mass will cut off the excitonic features from the polariton spectrum -- instead of the avoided crossing of Fig.~\ref{pureexcitonpolariton}, Fig.~\ref{combined_spectra}(b) exhibits an almost photonic-like spectrum, with a small (cavity) linewidth below the threshold at $\omega = \beta\mu$, and a larger linewidth above the threshold, reflecting the step-like 2DEG absorption spectrum of Fig.~\ref{FEScomp}.
The finite mass thus leads to a general decrease of the mode splitting between the two polariton branches. This trend continues
when the Fermi energy is increased further.
It is instructive to compare this behavior with the experimental results reported in~\cite{Smolka}. There, two differential reflectivity measurements were conducted, which can be qualitatively identified with the polariton spectra. The first measurement was carried out using a low-mobility GaAs sample (which should behave similarly to the limit of large VB hole mass), and moderate Fermi energies. A clear avoided crossing was seen, with the upper polariton having a much larger linewidth than the lower one (see Fig.~2(A) of \cite{Smolka}). In the second measurement, the Fermi energy was increased further, and a high-mobility sample was studied, corresponding to finite mass. A substantial reduction of the mode splitting between the polaritons was observed (Fig.~2(C) of \cite{Smolka}). While a detailed comparison to the experiment of \cite{Smolka} is
challenging, due to the approximations we made and the incongruence of the parameter regimes (in the experiment one has $\mu \simeq E_B$), the general trend of reduced mode splitting is correctly accounted for by our theory.
\subsection{Small Fermi energy}
We now switch to the regime of small Fermi energy discussed in Sec.~\ref{Photon self-energy small mu sec}, a regime in which the polariton spectra have not been studied analytically before. We again assume that the condition~(\ref{M0smallerthanmu}), required for approximating
the photon self-energy by Eq.~(\ref{Kubo-formula}), is fulfilled. This may be appropriate for systems with large exciton binding energies, e.g., transition metal dichalcogenide monolayers as recently studied in \cite{sidler2017fermi}.
For infinite mass, we may use Eq.~(\ref{Nozieresresult}) as the photon self-energy, multiplied by a prefactor $\Delta^2 = d_0^2 \rho E_B/ g^2 $ [cf.\ Eq.~(\ref{Deltadef})], and expand the real and imaginary parts to leading order in $\alpha^2 = (\delta/\pi-1)^2$. The energy $\omega$ is now measured from the exciton pole: $\omega = \Omega-\Omega_T^{\text{exc}}$, $\Omega_T^{\rm{exc}} = E_G + \mu - E_B$. The corresponding polariton spectrum for a small cavity linewidth is shown in Fig.~\ref{combined_spectra2}(a). Qualitatively, it strongly resembles the bare exciton case of Fig.~\ref{pureexcitonpolariton} (note that in Fig.~\ref{combined_spectra2} the cavity linewidth was chosen to be 100 times smaller than in Fig.~\ref{pureexcitonpolariton}), but with a larger linewidth of the upper polariton. This is due to the possible polariton decay into the particle-hole continuum contained in the excitonic power law, Eq.~(\ref{Nozieresresult}).
\begin{figure}[H]
\centering
\includegraphics[width=\columnwidth]{fig23.pdf}
\caption{(Color online) $\mu \ll E_B$: Exciton-polariton spectrum for small Fermi energy.
The white dashed lines denote the location of the spectral cuts presented in Fig.~\ref{combined_cuts2}.
(a) Infinite mass. This plot was obtained by inserting the exciton Green's function for $\mu\gtrsim 0$, given by Eq.~(\ref{Nozieresresult}) multiplied by $\Delta^2 = d_0^2\rho E_B/g^2$, into the photon Green's function, Eq.~(\ref{Polaritonspectralfunction}). Parameters: $\mu = 10 \Delta$, $\Gamma_c = 0.01\Delta$, $\alpha^2 = (\delta/\pi -1)^2 = 0.25$.
(b) Finite mass, with mass ratio $\beta=0.4$. In this plot, the finite mass exciton Green's function, Eq.~(\ref{Piexcfinitemass}), was used, with the same parameters as in (a).
}
\label{combined_spectra2}
\end{figure}
\begin{figure*}[!]
\centering
\includegraphics[width=\textwidth]{fig24.pdf}
\caption{(Color online) $\mu \ll E_B$: Spectral cuts at fixed cavity detuning through the polariton spectra of Fig.~\ref{combined_spectra2}, for both infinite (continuous blue lines) and finite hole mass (dashed orange lines).
(a) Large negative cavity detuning. The inset shows a zoom onto the upper polaritons.
(b) Zero cavity detuning.
(c) Large positive cavity detuning. }
\label{combined_cuts2}
\end{figure*}
The detailed discussion of polariton properties in the regime of $\mu \ll E_B$ parallels the previous discussion in the regime $E_B \ll \mu$. For small negative detuning $\omega_c$ [Fig.~\ref{combined_cuts2}(a)], the lower polariton is found at approximately $\omega = \omega_c$. The upper polariton has a significantly smaller weight; its shape reflects the excitonic power law of Eq.~(\ref{Nozieresresult}). However, compared to the previous spectral cuts (Fig.~\ref{combined_cuts}) the upper polariton peak is much more pronounced. This results from the exciton being now pole-like,
as compared to the power-law Fermi-edge singularity. Increasing the detuning, weight is shifted to the upper polariton. At zero detuning [Fig.~\ref{combined_cuts2}(b)], the weight of the lower polariton exceeds that of the upper polariton only by a term of order $\mathcal{O}\left(\alpha^2\right)$. At large positive detuning, the position of the lower polariton is found at approximately
\begin{align}
\label{peaklargewc_no}
\omega \sim -\omega_c^{-1/(1-\alpha^2)} \quad \text{as} \quad \omega_c \rightarrow \infty.
\end{align}
The lower polariton thus approaches the exciton line faster than in the pure exciton case, but slower than in the Fermi-edge regime [Eq.~(\ref{peaklargewc})].
A similar statement holds for the weight of the lower polariton, which scales as $\omega_c^{-2-\alpha^2}$.
The spectrum in the finite mass case is qualitatively similar, see Fig.~\ref{combined_spectra2}(b).
Quantitatively, a stronger peak repulsion can be seen, which may be attributed to the enhanced excitonic quasiparticle weight in the finite mass case.
A comparison of spectral cuts in the finite mass case [Fig.~\ref{combined_cuts2}(a)--(c)] further corroborates this statement [especially in
Fig.~\ref{combined_cuts2}(c)]. Indeed, one finds that the position of the lower polariton at large cavity detuning is approximately given by
\begin{align}
\label{peaklargewc_nofinite}
\omega \sim -\beta^{\alpha^2} \cdot \omega_c^{-1} \qquad
\text{as} \quad \omega_c \rightarrow \infty ,
\end{align}
i.e., the excitonic line at $\omega =0$ is approached more slowly than in the infinite mass case, Eq.~(\ref{peaklargewc_no}). The corresponding weight falls off as $\omega_c^{-2}$. Thus, the lower polariton has a slightly enhanced weight compared to the infinite mass case.
In addition, in the spectral cut at large negative detuning [inset to Fig.~\ref{combined_cuts2}(a)], the upper polariton appears as a sharper peak compared to the infinite mass case, which again results from the enhanced quasiparticle weight of the finite mass case.
\section{Conclusion}
\label{Conclusion sec}
In this paper we have studied the exciton-polariton spectra of a 2DEG in an optical cavity in the presence of finite CB electron density.
In particular, we have elucidated the effects of finite VB hole mass, distinguishing between two regimes.
In the first regime (small Fermi energy as compared to the exciton binding energy), we have found that excitonic features in the 2DEG absorption are enhanced by the exciton recoil and the resulting suppression of the Fermi-edge singularity physics. In contrast, in the second regime of Fermi energy larger than the exciton binding energy,
it is the VB hole which recoils at finite mass. This cuts off the excitonic features. These modifications also translate to polariton spectra, especially to the lower polariton at large cavity detuning, which is exciton-like. Our findings reproduce a trend seen in a recent experiment~\cite{Smolka}.
We would like to mention several possible extensions of this work.
To begin with, it would be promising to study the effect of long-range interactions on the power laws, and hence on polariton spectra, from an analytical perspective. Long-range interactions are expected to be most important in the regime of small Fermi energy, leading to additional bound states and to the Sommerfeld enhancement effects~\cite{haug1990quantum}. Moreover, one should try to explore trionic features, for which it is necessary to incorporate the spin degree of freedom (to allow an electron to bind to an exciton despite the Pauli principle).
Another interesting direction would be to tackle the limit of equal electron and hole masses, which is relevant to transition metal dichalcogenides, whose polariton spectra in the presence of a Fermi sea were measured in a recent experiment~\cite{sidler2017fermi}. Lastly, one should address the behavior of the polariton in the regime of small Fermi energy and strong light-matter interactions. Then, not the exciton, but rather the polariton interacts with the Fermi sea, and different classes of diagrams have to be resummed to account for this change in physics.
\begin{acknowledgments}
This work was initiated by discussions with A.~Imamo\u{g}lu.
The authors also acknowledge many helpful comments from F.~Kugler, A.~Rosch, D.~Schimmel, and O.~Yevtushenko.
This work was supported by funding from the German Israeli Foundation (GIF) through I-1259-303.10.
D.P.\ was also supported by the German Excellence Initiative via the Nanosystems Initiative Munich (NIM).
M.G.\ received funding from the Israel Science
Foundation (Grant 227/15), the US-Israel Binational
Science Foundation (Grant 2014262), and the Israel Ministry of Science and Technology (Contract 3-12419), while L.G.\ was supported by NSF Grant DMR-1603243.
\end{acknowledgments}
Active matter consists of a large number of self-driven agents converting chemical energy, usually stored in the surrounding environment, into mechanical motion \cite{Ram2010,MarJoaRamLivProRaoSim2013,ElgWinGom2015}.
In the last decade various realizations of active matter have been studied including living self-propelled particles as well as synthetically manufactured ones. Living agents are for example bacteria \cite{DomCisChaGolKes2004,sokolov2007concentration}, microtubules in biological cells \cite{SurNedLeiKar2001,SanCheDeCHeyDog2012}, spermatozoa \cite{2005Riedel_Science,Woolley,2008Friedrich_NJP} and animals \cite{CavComGiaParSanSteVia2010,CouKraJamRuxFra2002,VisZaf2012}.
Such systems are out-of-equilibrium and show a variety of collective effects, from clustering \cite{BocquetPRL12,Bialke_PRL2013,Baskaran_PRL2013,Palacci_science}
to swarming, swirling and turbulent type motions \cite{ElgWinGom2015, DomCisChaGolKes2004,sokolov2007concentration,VisZaf2012,wensink2012meso,SokAra2012,SaiShe08,RyaSokBerAra13}, reduction of effective viscosity \cite{sokolov2009reduction,GacMinBerLinRouCle2013,LopGacDouAurCle2015,HaiAraBerKar08,HaiSokAraBer09,HaiAroBerKar10,RyaHaiBerZieAra11}, extraction of useful energy \cite{sokolov2010swimming,di2010bacterial,kaiser2014transport},
and enhanced mixing \cite{WuLib2000,SokGolFelAra2009,pushkin2014stirring}.
Besides the behavior of microswimmers in the bulk, the influence of confinement has been studied intensively in experiments \cite{DenissenkoPNAS,Chaikin2007} and numerical simulations \cite{ElgetiGompper13,Lee13Wall,Ghosh,Wensink2008}.
There are two distinguishing features of swimmers confined by walls and exposed to an external flow: accumulation at the walls and upstream motion (rheotaxis). Microorganisms such as bacteria \cite{BerTur1990,RamTulPha1993,FryForBerCum1995,VigForWagTam2002,BerTurBerLau2008} and sperm cells \cite{Rot1963} are typically attracted by no-slip surfaces. Such accumulation was also observed for larger organisms such as worms \cite{YuaRaiBau2015} and for synthetic particles \cite{DasGarCam2015}. The propensity of active particles to turn themselves against the flow (rheotaxis) is also typically observed. \textcolor{black}{While for larger organisms, such as fish, rheotaxis is caused by a deliberate response to a stream to hold their position
\cite{JiaTorPeiBol2015}, for micron-sized swimmers rheotaxis has a purely mechanical origin \cite{HilKalMcMKos2007,fu2012bacterial,YuaRaiBau2015rheo,TouKirBerAra14,PalSacAbrBarHanGroPinCha2015}.}
These phenomena observed in living active matter can also be achieved using synthetic swimmers, such as self-thermophoretic \cite{Sano_PRL2010} and self-diffusiophoretic \cite{paxton2004catalytic,HowsePRL2007,Bechinger_JPCM,Baraban_SM2012} micron sized particles as well as particles set into active motion due to the influence of an external field \cite{bricard2013emergence,bricard2015emergent,KaiserSciAdv2017}.
Using simple models we describe the extrusion of a dilute active suspension through a trapezoid nozzle.
We analyze the qualitative behavior of trajectories of an individual active particle in the nozzle and study the statistical properties of the particles in the nozzle.
The accumulation at walls and rheotaxis are important for understanding how an active suspension is extruded through a nozzle. Wall accumulation may eliminate all possible benefits caused by the activity of the particles in the bulk.
\textcolor{black}{Due to rheotaxis, active particles may never reach the outlet and leave the nozzle through the inlet, so that the properties of the suspension coming out through the outlet will not differ from those of the background fluid.}
The specific geometry of the nozzle is also important for our study. The nozzle is a finite domain with two open ends (the inlet and the outlet) and the walls of the nozzle are not parallel but convergent, that is, the distance between walls decreases from the inlet to the outlet. The statistical properties of an active suspension (e.g., the concentration of active particles) extruded in an infinite channel with parallel straight or periodic walls are well-established, see e.g., \cite{EzhSai2015} and \cite{MalSta2017}, respectively. The finite nozzle size leads to a ``proximity effect'', i.e., the equilibrium distribution of active particles changes significantly in proximity of both the inlet and the outlet. The fact that the walls are convergent results in a ``focusing effect'', i.e., the background flow, compared to the pressure-driven flow in the straight channel (the Poiseuille flow), has an additional convergent component that turns a particle toward the centerline. \textcolor{black}{Specifically, in this work it is shown that due to this convergent component of the background flow both up- and downstream swimming at the centerline are stable. Stability of the upstream swimming at the centerline is somewhat surprising since from observations in the Poiseuille flow it is expected that an active particle turns against the flow only while swimming towards the walls, where the shear rate is higher. This means that we find rheotaxis in the bulk of an active suspension.}
\section{Model}
\label{sec:model}
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{nozzle_sketch_title.jpg}
\caption{Sketch of a trapezoid nozzle filled with a dilute suspension of rodlike active particles in the presence of a converging background flow.}
\label{fig:nozzle-sketch}
\end{figure}
To study the dynamics of active particles in a converging flow, two modeling approaches are exploited. In both, an active particle is represented by a rigid rod of length $\ell$ swimming in the $xy$-plane. In the first (simpler) approach, the rod is a one-dimensional segment which cannot penetrate a wall, whereas in the second (more sophisticated) approach we use the Yukawa segment model \cite{Kirchhoff1996} to take into account both the finite length and width of the rod, as well as a more accurate description of particle-wall steric interactions.
The active particle's center location and its unit orientation vector are denoted by ${\bf r}=(x,y)$ and ${\bf p}=(\cos \varphi,\sin\varphi)$, respectively. The active particles are self-propelled with velocity $v_{0}{\bf p}$, directed along their orientation.
\textcolor{black}{The active particles are confined by a nozzle, see Fig.~\ref{fig:nozzle-sketch}, which is an isosceles trapezoid $\Omega$, placed in the $xy$-plane so that the inlet $x=x_{\text{in}}$ and the outlet $x=x_{\text{out}}$ are its bases and the $x$-axis is the line of symmetry:
\begin{equation}
\Omega=\left\{x_{\text{in}}<x<x_{\text{out}}, \; \alpha^2 x^2 -y^2>0\right\}.
\end{equation}
The nozzle length, the distance between the inlet and the outlet, is denoted by $L$, i.e., $L=|x_{\text{out}}-x_{\text{in}}|$. The width of the outlet and the inlet are denoted by $w_{\text{out}}$ and $w_{\text{in}}$, respectively, and their ratio is denoted by $k={w_{\text{out}}}/{w_{\text{in}}}$.}
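For later reference, these definitions fix the geometry explicitly: the local width of the nozzle is $2\alpha|x|$ (both $x_{\text{in}}$ and $x_{\text{out}}$ are negative, the walls intersecting at the origin), so that
\begin{align}
w_{\text{in}}=2\alpha|x_{\text{in}}|,\qquad w_{\text{out}}=2\alpha|x_{\text{out}}|,\qquad
\alpha=\frac{w_{\text{in}}-w_{\text{out}}}{2L},\qquad k=\frac{|x_{\text{out}}|}{|x_{\text{in}}|}.
\end{align}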
\textcolor{black}{Furthermore, the active particles are exposed to an external background flow. We approximate the resulting converging background flow due to the trapezoid geometry of the nozzle by
\begin{equation}\label{convergent_flow}
{\bf u}_{\text{BG}}({\bf r})=(u_x(x,y),u_y(x,y))=(-u_0 (\alpha^2 x^2-y^2)/x^3, -u_0 y (\alpha^2x^2-y^2)/x^4),
\end{equation}
where $u_0$ is a constant coefficient related to the flow rate and $\alpha$ is the slope of walls of the nozzle.
Equation \eqref{convergent_flow} is an extension of the Poiseuille flow to channels with convergent walls\footnote{In order to recover the Poiseuille flow (for channels of width $2H$) from Eq.~\eqref{convergent_flow}, take $x=H/\alpha$, $u_0=H^3/\alpha^3$ and pass to the limit $\alpha\to 0$. Note that the walls of the nozzle are placed so that they intersect at the origin, so in the limit of parallel walls, $\alpha \to 0$, both the inlet and the outlet locations, $x_{\text{in}}$ and $x_{\text{out}}$, go to $-\infty$.}}.
Active particles swim in the low Reynolds-number regime. The corresponding overdamped equations of motion for the locations ${\bf r}$ and orientations ${\bf p}$ are given by:
\begin{equation}
\label{orig-location}
\dfrac{\text{d}\bf r}{\text{d}t}={\bf u}_{\text{BG}}({\bf r})+v_{0}{\bf p},
\end{equation}
\begin{equation}
\label{orig-orientation}
\dfrac{\text{d}\bf p}{\text{d}t} =(\text{I}-{\bf p}{\bf p}^{\text{T}})\nabla_{\bf r}{\bf u}_{\text{BG}}({\bf r}){\bf p}\,+\sqrt{2D_r}\,\zeta \,{\bf e}_{\varphi}.
\end{equation}
Here Eq.~\eqref{orig-orientation} is Jeffery's equation \cite{SaiShe08,Jef1922,KimKar13} for rods with an additional term due to random re-orientation with rotational diffusion coefficient $D_r$; $\zeta$ is uncorrelated noise with intensity $\langle \zeta(t)\zeta(t')\rangle=\delta(t-t')$, and ${\bf e}_{\varphi}=(-\sin \varphi, \cos \varphi)$. Equation \eqref{orig-orientation} can also be rewritten for the orientation angle $\varphi$:
\begin{equation}
\label{orig-orientation-angle}
\dfrac{\text{d}\varphi}{\text{d}t}=\omega+ \nu\, \sin 2\varphi + \gamma \,\cos 2\varphi+\sqrt{2D_r}\,\zeta.
\end{equation}
Here $\omega=\dfrac{1}{2}\left(\dfrac{\partial u_y}{\partial x}-\dfrac{\partial u_x}{\partial y}\right)$, $\nu=\dfrac{1}{2}\left(\dfrac{\partial u_y}{\partial y}-\dfrac{\partial u_x}{\partial x}\right)=\dfrac{\partial u_y}{\partial y}=-\dfrac{\partial u_x}{\partial x}$, and $\gamma=\dfrac{1}{2}\left(\dfrac{\partial u_y}{\partial x}+\dfrac{\partial u_x}{\partial y}\right)$ are local vorticity, vertical expansion (or, equivalently, horizontal compression; similar to Poisson's effect in elasticity) and shear.
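As a consistency check, the converging flow of Eq.~\eqref{convergent_flow} is incompressible and vanishes on the walls $y=\pm\alpha x$; the following symbolic sketch (our illustration, not part of the original analysis) verifies both properties and produces the coefficients $\omega$, $\nu$, and $\gamma$ for this flow.
\begin{verbatim}
# Symbolic sanity check of the converging background flow (sympy sketch).
import sympy as sp

x = sp.Symbol('x', negative=True)          # x < 0 inside the nozzle
y = sp.Symbol('y', real=True)
u0, alpha = sp.symbols('u_0 alpha', positive=True)

u_x = -u0 * (alpha**2 * x**2 - y**2) / x**3
u_y = -u0 * y * (alpha**2 * x**2 - y**2) / x**4

# The flow is divergence-free and vanishes on the walls y = +/- alpha*x.
assert sp.simplify(sp.diff(u_x, x) + sp.diff(u_y, y)) == 0
assert sp.simplify(u_x.subs(y, alpha * x)) == 0

# Coefficients entering the orientation equation:
omega = sp.simplify((sp.diff(u_y, x) - sp.diff(u_x, y)) / 2)   # vorticity
nu    = sp.simplify(sp.diff(u_y, y))                           # expansion
gamma = sp.simplify((sp.diff(u_y, x) + sp.diff(u_x, y)) / 2)   # shear
print(omega, nu, gamma)
\end{verbatim}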
The strength of the background flow is
\textcolor{black}{quantified by the inverse Stokes number, defined as the ratio between the background flow velocity at the center of the inlet and the self-propulsion velocity $v_0$. Specifically,}
\begin{equation}
\sigma = \dfrac{u_x(x_{\text{in}},0)}{v_{0}}=\dfrac{u_0\alpha^2}{v_{0}|x_{\text{in}}|},
\end{equation}
where $(x_{\text{in}},0)$ denotes the location at the center of the inlet.
In the first modeling approach we include the particle-wall interaction in the following way: an active particle is not allowed to penetrate the walls of the nozzle. To enforce this, we require that both the front and the back of the particle, ${\bf r}(t)\pm(\ell/2) {\bf p}$, are located inside the nozzle. In numerical simulations of the system \eqref{orig-location}-\eqref{orig-orientation-angle} this requirement translates into the following rule: if during numerical integration of \eqref{orig-location}-\eqref{orig-orientation-angle} a particle penetrates one of the two walls, then this particle is instantaneously shifted back along the inward normal by the minimal distance such that its front and back are again located inside the nozzle, while its orientation is kept fixed.
\textcolor{black}{Unless mentioned otherwise, in this modeling approach we consider a nozzle whose inlet width $w_{\text{in}}=0.2$ mm and outlet width $w_{\text{out}}=0.1$ mm are fixed. The following nozzle lengths are considered: $L=0.2$ mm, $L=0.5$ mm and $L=1.0$ mm. The length of the active particles is $\ell = 20$ $\mu$m, they swim with a self-propulsion velocity $v_{0}=10$ $\mu$m $\text{s}^{-1}$ and their rotational diffusion coefficient is given by $D_r=0.1$ $\text{s}^{-1}$.}
All active particles are initially placed at the inlet, $x(0)=x_{\text{in}}$, with random $y$-component $y(0)$ and orientation angle $\varphi(0)$. The probability distribution function for initial conditions $y(0)$ and $\varphi(0)$ is given by $\Psi\propto 1$ (uniform). The trajectory of an active particle is studied until it leaves the nozzle either through the inlet or the outlet. To gather statistics we use 96,000 trajectories.
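For concreteness, the following Python sketch integrates the system \eqref{orig-location}-\eqref{orig-orientation-angle} with the wall rule above using an Euler--Maruyama scheme. It is our illustration rather than the code used for the reported simulations; in particular, the time step, the finite-difference evaluation of the flow-gradient coefficients, and the number of trajectories are illustrative choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Geometry (mm): the walls y = +/- alpha*x meet at the origin, so the
# nozzle occupies x_in < x < x_out < 0.
w_in, w_out, L = 0.2, 0.1, 0.5
k = w_out / w_in
x_in = -L / (1.0 - k)                  # chosen so that |x_out|/|x_in| = k
x_out = k * x_in
alpha = w_in / (2.0 * abs(x_in))

v0, Dr, ell = 0.01, 0.1, 0.02          # mm/s, 1/s, mm
sigma = 1.0                            # inverse Stokes number
u0 = sigma * v0 * abs(x_in) / alpha**2
dt, h = 1e-2, 1e-6                     # time step (s), difference step (mm)

def u_bg(x, y):
    f = alpha**2 * x**2 - y**2
    return np.array([-u0 * f / x**3, -u0 * y * f / x**4])

# Outward unit normals of the upper and lower walls (for x < 0):
# n.dot(point) > 0 means the point lies outside the nozzle.
normals = [np.array([alpha, 1.0]) / np.hypot(alpha, 1.0),
           np.array([alpha, -1.0]) / np.hypot(alpha, 1.0)]

def project_back(r, p):
    """Shift the rod center so both ends lie inside; orientation fixed."""
    for n in normals:
        for s in (0.5, -0.5):
            d = n @ (r + s * ell * p)
            if d > 0.0:
                r = r - d * n
    return r

def simulate_one(max_steps=10**6):
    r = np.array([x_in, rng.uniform(-w_in / 2, w_in / 2)])
    phi = rng.uniform(-np.pi, np.pi)
    for _ in range(max_steps):
        if not (x_in <= r[0] <= x_out):
            break
        # vorticity, expansion and shear by central finite differences
        dux_dy = (u_bg(r[0], r[1] + h)[0] - u_bg(r[0], r[1] - h)[0]) / (2*h)
        duy_dx = (u_bg(r[0] + h, r[1])[1] - u_bg(r[0] - h, r[1])[1]) / (2*h)
        duy_dy = (u_bg(r[0], r[1] + h)[1] - u_bg(r[0], r[1] - h)[1]) / (2*h)
        omega, nu, gam = 0.5*(duy_dx - dux_dy), duy_dy, 0.5*(duy_dx + dux_dy)
        p = np.array([np.cos(phi), np.sin(phi)])
        r = project_back(r + (u_bg(*r) + v0 * p) * dt, p)
        phi += (omega + nu*np.sin(2*phi) + gam*np.cos(2*phi)) * dt \
               + np.sqrt(2 * Dr * dt) * rng.standard_normal()
    return r[0] > x_out                # True: particle exits at the outlet

hits = sum(simulate_one() for _ in range(200))
print("estimated P_out:", hits / 200)
\end{verbatim}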
\begin{figure}[ht!]
\centering
\includegraphics[width=0.6\textwidth]{nozzleSketchAK2.jpg}
\caption{Sketch of a discretized active rod (red) of length $\ell$ and width $\lambda$, which is propelled with a velocity $v_0$ along its orientation ${\bf p}$ and is exposed to a converging background flow ${\bf u}_{\text{BG}}$ in the presence of a trapezoid nozzle confinement (blue) of length $L$, with an inlet of size $w_{\text{in}}$ and an outlet of size $w_{\text{out}}$.
To study a system with a packing fraction $\rho=0.1$, a channel with a non-converging background flow is attached to the inlet.}
\label{fig:nozzle-sketchAK}
\end{figure}
We use the second approach to describe the particle-wall interactions and the torque induced by the flow more accurately. \textcolor{black}{For this purpose each rod representing an active particle, of length $\ell$, width $\lambda$, and corresponding aspect ratio $a=\ell/\lambda$, is discretized into $n_r$ spherical segments with $n_r = \lfloor 9 a /8 \rceil$ ($\lfloor x \rceil$ denotes the nearest integer function).} The resulting segment distance is also used to discretize the walls of the nozzle into $n_w$ segments in the same way. Between the segments of different objects a repulsive Yukawa potential is imposed. The resulting total pair potential is given by $U = U_0\sum_{i=1}^{n_r}\sum_{j=1}^{n_w} \exp [-r_{ij} / \lambda]/r_{ij}$, where $\lambda$ is the screening length defining the particle diameter, $U_0$ is the prefactor of the Yukawa potential, and $r_{ij} = |{\bf r}_{i} - {\bf r}_{j}|$ is the distance between segment $i$ of a rod and segment $j$ of the wall of the nozzle, see Fig.~\ref{fig:nozzle-sketchAK}.
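A minimal sketch of this discretization and of the resulting rod-wall interaction energy is given below; the uniform placement of the $n_r$ segments along the rod axis (with end caps of radius $\lambda/2$) is our assumption, as the exact segment offsets are not spelled out here.
\begin{verbatim}
import numpy as np

def discretize_rod(center, phi, ell, lam):
    """Place n_r = round(9a/8) spherical segments along the rod axis
    (uniform spacing with end caps of radius lam/2 is our assumption)."""
    a = ell / lam
    n_r = max(1, int(round(9 * a / 8)))
    p = np.array([np.cos(phi), np.sin(phi)])
    offsets = np.linspace(-0.5, 0.5, n_r) * (ell - lam)
    return [center + s * p for s in offsets]

def yukawa_energy(rod_segments, wall_segments, U0, lam):
    """Total repulsive Yukawa pair energy between rod and wall segments."""
    U = 0.0
    for ri in rod_segments:
        for rj in wall_segments:
            rij = np.linalg.norm(ri - rj)
            U += U0 * np.exp(-rij / lam) / rij
    return U
\end{verbatim}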
The equations of motion (\ref{orig-location}) and (\ref{orig-orientation}) are complemented by the respective derivatives of the total potential energy of a rod, along with the one-body translational and rotational friction tensors ${\bf f}_{\cal T}$ and ${\bf f}_{\cal R}$, which can be decomposed into parallel $f_\parallel$, perpendicular $f_\perp$, and rotational $f_{\cal R}$ contributions that depend solely on the aspect ratio $a$~\cite{tirado}.
For this approach we measure distances in units of $\lambda$, velocities in units of $v_0=F_0/f_\parallel$ (here $F_0$ is an effective self-propulsion force), and time in units of $\tau = \lambda f_\parallel / F_0$. While the width of the outlet $w_{\text{out}}$ is varied, the width of the inlet $w_{\text{in}}$ as well as the length of the nozzle $L$ are fixed to $100\lambda$ in our second approach.
\textcolor{black}{Initial conditions are the same as in the first approach.
To prevent a rod from initially intersecting a wall, the rod is allowed to reorient itself during an equilibration time $t_e = 10 \tau$ while its center of mass is kept fixed.}
\textcolor{black}{Furthermore, we use the second approach to study the impact of a finite density of swimmers. For this approach we initialize $N$ active rods in a channel confinement which is connected to the inlet of the nozzle, see Fig.~\ref{fig:nozzle-sketchAK}. Inside the channel we assume a regular (non-converging) Poiseuille flow~\cite{zottl2013periodic}. We restrict our study to a dilute active suspension with a two dimensional packing fraction $\rho=0.1$. To maintain this fraction, particles which leave the simulation domain are randomly placed at the inlet of the channel confinement.}
\section{Results}
\label{sec:results}
\subsection{Focusing of outlet distribution}
\label{sec:focusing}
Here we characterize the properties of the particles leaving the nozzle at either the outlet or the inlet. Specifically, our objective is to determine whether particles accumulate at the center or at the walls when they pass through the outlet or the inlet.
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{dependence_on_sigma_joint.jpg}
\caption{Histograms of the outlet distribution for $y|_{\text{out}}$ for given \textcolor{black}{inverse Stokes numbers} $\sigma$ and lengths $L$ of the nozzle. The histograms are obtained from numerical integration of \eqref{orig-location}-\eqref{orig-orientation-angle}.
}
\label{fig:dependence-on-sigma}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{y_phi_diagram.jpg}
\caption{Outlet distribution histograms for $(y,\varphi)|_{\text{out}}$ computed for given inverse Stokes numbers $\sigma$ and nozzle length $L=0.2$ mm.}
\label{fig:y_phi_diagram}
\end{figure}
We start with the first modeling approach. Figure~\ref{fig:dependence-on-sigma} shows the spatial distribution of active particles leaving the nozzle at the outlet for various \textcolor{black}{inverse Stokes numbers} $\sigma$ and three different lengths $L$ of the nozzle, while the widths of the inlet and the outlet are fixed.
For small inverse Stokes numbers $\sigma$, the background flow is negligible compared to the self-propulsion velocity. Active particles swim close to the walls, and peaks at the walls are still clearly visible for $\sigma=0.5$ for all nozzle lengths $L$, see Fig.~\ref{fig:dependence-on-sigma}(a). For $\sigma=1$, the self-propulsion velocity and the background flow are comparable; in this case the histogram shows a single peak at the center of the outlet, see Fig.~\ref{fig:dependence-on-sigma}(b). Further increasing the inverse Stokes number from $\sigma=1$ to $\sigma=9$ leads to a broadening of the central peak and then to the formation of two peaks with a well in the center of the outlet, see Fig.~\ref{fig:dependence-on-sigma}(c)-(e). Finally, for an even larger inverse Stokes number $\sigma$, the self-propulsion velocity is negligible and the histogram becomes close to the one in the passive case (no self-propulsion, $v_0 = 0$), see Fig.~\ref{fig:dependence-on-sigma}(f). Here the histogram for a nozzle length $L=0.2$ mm is uniform except at the edges, where it has local peaks due to accumulation at the walls caused by steric interactions.
Histograms for both the $y$-component and the orientation angle $\varphi$ of the active particles reaching the outlet are depicted in Fig.~\ref{fig:y_phi_diagram}(a)-(c). While active particles leave the nozzle with orientations away from the centerline for small inverse Stokes number, $\sigma = 0.5$, they are mostly oriented towards the centerline for larger values of the inverse Stokes number.
\textcolor{black}{In Fig.~\ref{fig:y_phi_diagram}(c), one can observe that the histogram is concentrated largely at downstream orientations $\varphi \approx 0$ and slightly at upstream orientations $\varphi \approx \pm \pi$. These local peaks at $\varphi \approx \pm \pi$ away from the walls are evidence of rheotaxis in the bulk. \textcolor{black}{These peaks are visible for large inverse Stokes numbers only, and the corresponding active particles are flushed out of the nozzle with upstream orientations.}
}
\begin{figure}[ht!]
\centering
\includegraphics[width=1.0\textwidth]{share_of_particles_edited.jpg}
\caption{(a) Probability of active particles to reach the outlet for various \textcolor{black}{inverse Stokes numbers} $\sigma$ (horizontal axis) and given lengths of the nozzle $L$. Insets: Trajectories for the case of $L=0.2$ mm. (b-d) Distribution histograms for particles leaving the nozzle through the inlet, $y|_{\text{in}}$, computed for given inverse Stokes numbers $\sigma$ and nozzle lengths $L$.
}
\label{fig:share-of-particles}
\end{figure}
\textcolor{black}{Due to rotational diffusion and rheotaxis it is possible for an active particle to leave the nozzle through the inlet. We therefore compute the probability of active particles to reach the outlet.} This probability, as a function of the inverse Stokes number $\sigma$ for the three considered nozzle lengths $L$, is shown in Fig.~\ref{fig:share-of-particles}(a), together with selected trajectories, see insets in Fig.~\ref{fig:share-of-particles}(a).
The figure shows that the probability that an active particle eventually reaches the outlet grows monotonically with \textcolor{black}{the inverse Stokes number} $\sigma$. Note that a passive particle always leaves the nozzle through the outlet. By comparing the probabilities for different nozzle lengths $L$, it becomes clear that an active particle is less likely to leave the nozzle through the outlet for longer nozzles. Due to the larger distance $L$ between the inlet and the outlet, an active particle spends more time within the nozzle, which makes it more likely to end up swimming upstream due to either rotational diffusion or rheotaxis.
In Fig.~\ref{fig:share-of-particles}(b)-(d), histograms for active particles leaving the nozzle through the inlet are shown. In the case of a small inverse Stokes number, $\sigma=0.5$, the majority of active particles leave the nozzle at the inlet. Specifically, most of them swim upstream due to rheotaxis close to the walls, but some active particles leave the nozzle at the inlet close to the center. These active particles are oriented upstream due to random reorientation. For larger inverse Stokes numbers, $\sigma \geq 1$, active particles are no longer able to leave the nozzle at the inlet close to the center.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{two_representative_trajectories2.jpg}
\caption{Examples of two trajectories for $L=1$ mm and $\sigma=1.0$. The red trajectory starts and ends at the inlet (the endpoint is near the lower wall). The blue trajectory has a zigzag shape with loops close to the walls; the particle that corresponds to the blue trajectory manages to reach the outlet.}
\label{fig:two_representative}
\end{figure}
Let us now consider specific examples of active particles' trajectories, see Fig.~\ref{fig:two_representative}. The first trajectory (red) starts and ends at the inlet.
Initially the active particle swims downstream and collides with the upper wall due to the torque induced by the background flow. Close to the wall it exhibits rheotactic behavior, but before it reaches the inlet it is expelled towards the center of the nozzle due to rotational diffusion, similar to bacteria that may escape from surfaces due to tumbling \cite{DreDunCisGanGol2011}.
Eventually, the active particle leaves the nozzle at the inlet.
As for the other depicted trajectory (blue), the active particle manages to reach the outlet. Along its course through the nozzle it swims upstream several times, but in the end the active particle is washed out through the outlet by the background flow. For larger flow rates the trajectories of active particles are less convoluted, since the flow becomes more dominant, see insets of Fig.~\ref{fig:share-of-particles}(a).
\begin{figure}[ht]
\centering
\includegraphics[width=1.0\textwidth]{AKratio.jpg}
\caption{(a) Probability for an active particle to reach the outlet of the nozzle, $P_{\text{out}}$, as a function of the \textcolor{black}{inverse Stokes number} $\sigma$ for three given aspect ratios $a$ of self-propelled rods and (b) for a fixed aspect ratio $a$ and three given nozzle ratios $k$. \textcolor{black}{Insets show close-ups.}}
\label{fig:AKratio}
\end{figure}
Next we present results of the second modeling approach which is based on the Yukawa-segment model.
So far we have concentrated on fixed widths of the inlet and outlet. Here we consider nozzles with fixed length $L$ and inlet width $w_{\text{in}}$ and vary the
nozzle ratio $k$. We study the behavior of active rods with varying aspect ratio $a$.
As shown in Fig.~\ref{fig:AKratio}, neither the aspect ratio $a$, see Fig.~\ref{fig:AKratio}(a), nor the nozzle ratio $k$, see Fig.~\ref{fig:AKratio}(b), has a significant impact on the probability $P_{\text{out}}$ that an active rod leaves the nozzle at the outlet. However, the aspect ratio $a$ is important for the location where the active rods leave the nozzle at the inlet and the outlet, see Fig.~\ref{fig:AK1d}. For short rods $(a=2)$ and small inverse Stokes numbers $(\sigma \leq 1)$ the distribution of active particles shows just a single peak located at the center. This peak broadens as the inverse Stokes number increases, which is in perfect agreement with the results obtained by the first approach, cf. Fig.~\ref{fig:dependence-on-sigma}.
It is more likely for short rods than for long ones to be expelled towards the center due to rotational diffusion.
Hence the distribution of particles at the outlet for long rods $(a=10)$ shows additional peaks close to the walls. These peaks become smaller as the inverse Stokes number increases. The distribution of particles leaving the nozzle at the inlet is similar to that of our first approach. While the distribution is almost flat for small inverse Stokes numbers, increasing this number makes it impossible for particles to leave the nozzle close to the center at the inlet. Similar to the outlet, the wall accumulation at the inlet is more pronounced for longer rods.
\begin{figure}[t!]
\centering
\includegraphics[width=1.0\textwidth]{AK1d.jpg}
\caption{ Comparison of the spatial distribution of active particles at (top row) the outlet and (bottom row) the inlet of the nozzle for given inverse Stokes numbers $\sigma$ and aspect ratios $a$, an outlet width $w_{\text{out}}=50\lambda$ and an inlet width $w_{\text{in}}=100\lambda$.}
\label{fig:AK1d}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{AK2d.jpg}
\caption{Outlet distribution histograms for $(y,\varphi)|_{\text{out}}$ computed for given \textcolor{black}{inverse Stokes numbers} $\sigma$ and a nozzle with an outlet width of $w_{\text{out}}=50\lambda$ for active rods with an aspect ratio (top row) $a=2$ and (bottom row) $a=10$.}
\label{fig:AK2d}
\end{figure}
By comparing the orientations of the particles at the outlet, the influence of the actual length of the rod becomes visible, see Fig.~\ref{fig:AK2d}. As seen before, for short rods ($a=2$) and small inverse Stokes numbers $\sigma$ there is no wall accumulation. Hence most particles leave the nozzle close to the center and are oriented in the direction of the outlet. This profile smears out if the inverse Stokes number is increased to $\sigma = 1$. For larger inverse Stokes numbers the figures are qualitatively similar to the one obtained by the first approach, cf. Fig.~\ref{fig:y_phi_diagram}(c). Particles in the bottom half of the nozzle tend to point upwards and particles in the top half tend to point downwards.
The same tendency is seen for long rods $a=10$ and small inverse Stokes number. However for long active rods, this is because they slide along the walls. The bright spots close to the walls for long rods and large inverse Stokes numbers indicate that particles close to the walls
are flushed through the outlet by the large background flow even if they are oriented upstream.
\textcolor{black}{In addition, there are blurred peaks away from the walls for large inverse Stokes numbers $\sigma$. The corresponding particles crossed the outlet with mostly upstream orientations. This is similar to Fig.~\ref{fig:y_phi_diagram}(c), where particles exhibiting in-bulk rheotactic characteristics were observed at the outlet of the nozzle.}
\textcolor{black}{By comparing the results for individual active rods, see again Fig.~\ref{fig:AK2d}, with those for interacting active rods at a finite packing fraction $\rho = 0.1$, see Fig.~\ref{fig:AK2dint}, we find that wall accumulation becomes more pronounced. Mutual collisions of the rods lead to a broader distribution of particles. For long rods, $a=10$, the peaks at $\varphi \approx 0$ and $\varphi \approx \pm \pi$ remain close to the walls and the blurred peaks at the center vanish. }
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{AK2dint.jpg}
\caption{Outlet distribution histograms for $(y,\varphi)|_{\text{out}}$ computed for given \textcolor{black}{inverse Stokes numbers} $\sigma$ and a nozzle with an outlet width $w_{\text{out}}=50\lambda$ for active rods with an aspect ratio (top row) $a=2$ and (bottom row) $a=10$ for a packing fraction $\rho = 0.1$.
}
\label{fig:AK2dint}
\end{figure}
\subsection{Optimization of focusing}
\label{sec:optimization}
Here we study the properties of the active particles in more detail and provide insight into the nozzle geometry, the background flow and the size of the swimmers that should be used in order to optimize the focusing at the outlet of the nozzle.
For this purpose we study three distinct quantities: the average dwell time $\langle T\rangle$, i.e., the time it takes for an active particle to reach the outlet; the mean alignment of the particles at the outlet, measured by $\langle \cos \varphi_{\text{out}}\rangle$; and the mean deviation from the center $y=0$ at the outlet, $\langle |y_{\text{out}}|\rangle$.
As depicted in Fig.~\ref{fig:share-of-particles}, for increasing inverse Stokes number the probability for active particles to reach the outlet increases. However, they are spread all over the outlet. This spread is quantified by $\langle |y_{\text{out}}|\rangle$: small values of $\langle |y_{\text{out}}|\rangle$ correspond to better focusing. If particles leave the nozzle with no preferred orientation, their mean orientation vanishes, $\langle \cos \varphi_{\text{out}}\rangle = 0$; if they are oriented upstream we obtain $\langle \cos \varphi_{\text{out}}\rangle = -1$, and finally $\langle \cos \varphi_{\text{out}}\rangle = 1$ if the particles are pointing in the direction of the outlet. In an experimental realization, a fast focusing process, and hence a small dwell time $T$, would be preferable.
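Given arrays of exit times, exit orientations, and exit positions collected from simulated trajectories, the three measures are obtained directly as sample averages; a small illustrative helper (ours, for clarity):
\begin{verbatim}
import numpy as np

def focusing_measures(T_exit, phi_exit, y_exit):
    return (np.mean(T_exit),            # average dwell time <T>
            np.mean(np.cos(phi_exit)),  # mean alignment <cos phi_out>
            np.mean(np.abs(y_exit)))    # mean deviation <|y_out|>
\end{verbatim}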
The numerical results obtained by the first modeling approach are depicted in Fig.~\ref{fig:optimization}. While the dwell time hardly depends on the size ratio $k$ of the nozzle, the strength of the background flow has a strong impact on it: large inverse Stokes numbers $\sigma$ lead to a faster passage of the active particles through the nozzle, see Fig.~\ref{fig:optimization}(a). The alignment of the active particles, $\langle \cos \varphi_{\text{out}}\rangle$, becomes better if the nozzle ratio $k$ is large and the flow is slow, see Fig.~\ref{fig:optimization}(b). The averaged deviation from the centerline $\langle |y_{\text{out}}|\rangle$ increases with increasing nozzle ratio $k$, since the width of the outlet becomes larger. As could already be seen in Fig.~\ref{fig:dependence-on-sigma}, the averaged deviation from the centerline is non-monotonic as a function of the inverse Stokes number and shows the smallest distance from the centerline for all nozzle ratios if the strength of the flow is comparable to the self-propulsion velocity of the swimmers, $\sigma=1$.
\begin{figure}[ht]
\centering
\includegraphics[width=1.0\textwidth]{optimization.jpg}
\caption{(a) Dwell time $\langle T\rangle$; (b) mean alignment at the outlet, $\langle \cos \varphi_{\text{out}}\rangle$; (c) mean deviation from the center $y=0$ at the outlet, $\langle |y_{\text{out}}|\rangle$.
}
\label{fig:optimization}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1.0\textwidth]{AKopti.jpg}
\caption{(a,d) Dwell time $\langle T\rangle$, (b,e) mean alignment $\langle \cos \varphi_{\text{out}}\rangle$, and (c,f) mean deviation from the center $y=0$ at the outlet, $\langle |y_{\text{out}}|\rangle$, for (top row) a fixed outlet width of $w_{\text{out}}=50\lambda$ and given aspect ratios $a$ of the swimmers and (bottom row) a fixed aspect ratio $a=2$ and varied nozzle ratio $k$, whereby the width of the outlet changes.}
\label{fig:AKopti}
\end{figure}
Let us now study how these three quantities depend on the aspect ratio of the swimmer. To this end, we use the second modeling approach. We consider all three quantities as functions of the \textcolor{black}{inverse Stokes number} $\sigma$. Longer rods have a shorter dwell time, so they reach the outlet faster, see Fig.~\ref{fig:AKopti}(a). Increasing the flow velocity obviously leads to a decreasing dwell time. The mean alignment likewise decreases with increasing inverse Stokes number, see Fig.~\ref{fig:AKopti}(b). Moreover, for small inverse Stokes numbers, $\sigma \leq 2$, the mean alignment is better for long rods. For large inverse Stokes numbers, long rods ($a=10$) are washed out with almost random orientation, whereas short rods ($a=2$) are slightly aligned with the flow. Short rods are focused better for small inverse Stokes numbers, $\sigma \leq 2$, see Fig.~\ref{fig:AKopti}(c), due to wall alignment and wall accumulation of longer rods. For larger inverse Stokes numbers it is the other way around: long rods are better focused.
Comparing various nozzle ratios $k$ at a fixed swimmers' aspect ratio $a$, we find that smaller ratios $k$ lead to smaller dwell times [Fig.~\ref{fig:AKopti}(d)] and better alignment [Fig.~\ref{fig:AKopti}(e)]. For narrow outlets (small $k$) the active particles leave the outlet closer to the center, see Fig.~\ref{fig:AKopti}(f).
\textcolor{black}{
\section{Discussion}
}
\textcolor{black}{We discuss the stability of particle trajectories around the centerline $y=0$ in the presence of a background flow and confining walls that converge with a non-zero slope $\alpha$. This stability is in contrast to a channel with parallel walls, where an active particle swims away from the centerline provided that its orientation angle $\varphi$ is different from $n\pi$, $n=0,\pm1,\pm2,\ldots$.}
Indeed, in the case of a straight channel, $\alpha = 0$, the background flow is defined as $u_x=u_0 (H^2-y^2)$, $u_y=0$ (Poiseuille flow; $u_0$ is the strength of the flow, $2H$ is the distance between the walls).
Then the system \eqref{orig-location}-\eqref{orig-orientation-angle} reduces to
\begin{eqnarray}
\dot{\varphi} &=& u_0 y (1-\cos 2\varphi) \label{varphi_poiseuille}\\
\dot{y} &=& v_{0}\sin \varphi. \label{y_poiseuille}
\end{eqnarray}
Here we omit the equation for $x(t)$ due to invariance of the infinite channel with respect to $x$ and neglect orientation fluctuations, that is $D_r=0$.
The phase portrait for this system is depicted in Fig.~\ref{fig:stability}(a). Dashed vertical lines $\varphi=n\pi$, $n=0,\pm1,\pm2,\ldots$ consist of stationary solutions: if an active particle is initially oriented parallel to the walls, it keeps swimming parallel to them. If initially $\varphi$ is different from $n\pi$, then the active particle swims away from the centerline, $y(t)\to\pm\infty$ as $t \to \infty$.
When the walls are converging, $\alpha > 0$, the $y$-component of the background flow is non-zero and directed towards the centerline. For the sake of simplicity we take $u_y=-\alpha y$, $\alpha>0$ and $u_x$ as in the Poiseuille flow, $u_x=u_0 (H^2-y^2)$. In this case, the system \eqref{orig-location}-\eqref{orig-orientation-angle} reduces to
\begin{eqnarray}
\dot{\varphi} &=& -(\alpha/2)\sin 2\varphi + u_0 y (1-\cos 2\varphi) \label{varphi_convergent_simple}\\
\dot{y} &=& -\alpha y+ v_{0}\sin \varphi. \label{y_convergent_simple}
\end{eqnarray}
The corresponding phase portrait for this system is depicted in Fig.~\ref{fig:stability}(b). Orientations $\varphi=n\pi$ represent stationary solutions only if $y=0$. In contrast to the Poiseuille flow in a straight channel, see Eqs.~(\ref{varphi_poiseuille}) and (\ref{y_poiseuille}), these stationary solutions $(\varphi=\pi n, y=0)$ are asymptotically stable with decay rate $\alpha$ (recall that $\alpha$ is the slope of the walls). In addition to these stable stationary points there are pairs of unstable (saddle) points with non-zero $y$ (provided that $v_{0}>0$). At these saddle points, the distance from the centerline $|y|$ does not change: the particle is oriented away from the centerline, so the propulsion force pushing it away from the centerline is balanced by the convergent component of the background flow, $u_y$, which moves the particle toward the centerline. The orientation angle $\varphi$ does not change either, since the torque from the Poiseuille component of the background flow, $u_x$, is balanced by the torque from the convergent component, $u_y$.
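The claimed decay rate can be checked by linearizing Eqs.~\eqref{varphi_convergent_simple}-\eqref{y_convergent_simple} about $(\varphi, y)=(0,0)$; the following short symbolic computation (ours, for illustration) shows that the Jacobian has the double eigenvalue $-\alpha$, and the same holds at $(\varphi,y)=(\pm\pi,0)$.
\begin{verbatim}
import sympy as sp

phi, y = sp.symbols('varphi y', real=True)
alpha, u0, v0 = sp.symbols('alpha u_0 v_0', positive=True)

phi_dot = -(alpha / 2) * sp.sin(2 * phi) + u0 * y * (1 - sp.cos(2 * phi))
y_dot = -alpha * y + v0 * sp.sin(phi)

J = sp.Matrix([phi_dot, y_dot]).jacobian([phi, y])
print(J.subs({phi: 0, y: 0}).eigenvals())      # {-alpha: 2}
print(J.subs({phi: sp.pi, y: 0}).eigenvals())  # {-alpha: 2}
\end{verbatim}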
\begin{figure}[ht]
\begin{center}
\includegraphics[width=\textwidth]{phase_portraits_new.jpg}
\caption{\footnotesize \textcolor{black}{Phase portraits $(\varphi,y)$ for $v_0=0.2$, $H=1.0$ and $u_0=0.6$. (a) System \eqref{varphi_poiseuille}-\eqref{y_poiseuille}, describing Poiseuille flow in a straight channel; dashed lines consist of stationary points. (b) System \eqref{varphi_convergent_simple}-\eqref{y_convergent_simple}, describing a simplified convergent flow with $\alpha=0.25$; stationary points: stable $(\pi n,0)$ (in red) and pairs of saddles with non-zero $y$ (in blue). Trajectories near the centerline converge to a stationary solution on the centerline. (c) System \eqref{orig-location}-\eqref{orig-orientation-angle} with the convergent flow ${\bf u}_{\text{BG}}=(u_x,u_y)$ used in Section~\ref{sec:focusing} with $x=-H/\alpha=-4.0$.}}
\label{fig:stability}
\end{center}
\end{figure}
We also draw the phase portrait for the converging flow ${\bf u}_{\text{BG}}=(u_x,u_y)$ introduced in Section~\ref{sec:model}, Fig.~\ref{fig:stability}(c).
One can compare the phase portraits in Fig.~\ref{fig:stability}(b) and Fig.~\ref{fig:stability}(c) around the stationary point $(\varphi=0, y=0)$ to see that the qualitative picture is the same: this stationary point is stable and is flanked by two saddle points.
{\textcolor{black}{The asymptotic stability of $(\varphi=0, y=0)$ means that if a particle is close to the centerline and its orientation angle is close to $0$ (the particle is oriented towards the outlet), it will keep swimming along the centerline pointing toward the outlet}, whereas in the Poiseuille flow the particle would swim away. The asymptotic stability of $(\varphi=\pm \pi, y=0)$ is evidence that in the converging flow there is rheotaxis not only at the walls but also in the bulk, specifically at the centerline.
Another consequence of this stability is the reduction of the effective rotational diffusion of an active particle in the region around the centerline; that is, the mean square angular displacement $\langle\Delta \varphi^2\rangle$ is bounded in time due to the presence of a restoring force coming from the converging component of the background flow (cf. diffusion quenching for Janus particles in \cite{DasGarCam2015}).}
\textcolor{black}{Finally, we note that the nozzle has a finite length $L$; thus, the conclusions of the stability analysis are valid only if the stability relaxation time, $1/\alpha$, does not exceed the average dwell time $\langle T \rangle$.
We introduce a lower bound $\tilde{T}$ for the dwell time $\langle T\rangle$ as the dwell time of an active particle swimming along the centerline oriented forward, $\varphi=0$:} \textcolor{black}{\begin{equation*}\tilde{T}=Lk/(\sigma v_0 (1-k))\ln |1+\sigma (1-k)/(k(\sigma+1))|.\end{equation*} }
\textcolor{black}{Our numerical simulations show that $\tilde{T}$ underestimates the average dwell time by a factor larger than two. Using this lower bound, we obtain the following sufficient condition for stability: $\dfrac{k w_{\text{in}}}{\sigma v_0}\ln\left|1+\dfrac{\sigma(1-k)}{k(\sigma+1)}\right|\geq 1$. }
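For a given geometry and flow strength, both the lower bound and the sufficient condition are straightforward to evaluate; below is a small illustrative helper (parameter values are examples of ours, not results from the paper):
\begin{verbatim}
import numpy as np

def T_tilde(L, k, sigma, v0):
    """Lower bound for the dwell time of a particle with phi = 0."""
    return (L * k / (sigma * v0 * (1 - k))) * \
           np.log(abs(1 + sigma * (1 - k) / (k * (sigma + 1))))

def stability_sufficient(w_in, k, sigma, v0):
    """Sufficient condition for stability stated above."""
    lhs = (k * w_in / (sigma * v0)) * \
          np.log(abs(1 + sigma * (1 - k) / (k * (sigma + 1))))
    return lhs >= 1.0

print(T_tilde(L=0.5, k=0.5, sigma=1.0, v0=0.01))                # seconds
print(stability_sufficient(w_in=0.2, k=0.5, sigma=1.0, v0=0.01))
\end{verbatim}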
\medskip
\medskip
\section{Conclusion}
In this work we study a dilute suspension of active rods in a viscous fluid extruded through a trapezoid nozzle.
Using numerical simulations we examined the probability that a particle leaves the nozzle through the outlet, which is the result of two counteracting phenomena. On the one hand, swimming downstream together with being focused by the converging flow increases the probability that an active rod leaves the nozzle at the outlet. On the other hand, rheotaxis results in a tendency of active rods to swim upstream.
Theoretical approaches introduced in this paper can be used to design experimental setups for the extrusion of active suspensions through a nozzle.
The optimal focusing is the result of a compromise. While for large flow rates it is very likely for active rods to leave the nozzle through the outlet very fast, their orientation is rather random and they pass through the outlet close to the walls. For small flow rates the particles are much better aligned with the flow and focused closer to the centerline of the nozzle; however, the dwell time of the particles becomes quite large. Based on our findings, the focusing is optimal if the velocity of the background flow and the self-propulsion velocity of the active rods are comparable. To reduce wall accumulation, the rods should have a small aspect ratio.
\textcolor{black}{We find that rheotaxis in the bulk is possible for simple rigid rodlike active particles.} We also established analytically the local stability of active particle trajectories in the vicinity of the centerline. This stability leads to a decrease of the effective rotational diffusion of the active particles in this region \textcolor{black}{as well as to the emergence of rheotaxis away from the walls.}
Our findings can be experimentally verified using biological or artificial swimmers in a converging flow.
\section*{Acknowledgements }
The work was supported by NSF DMREF grant DMS-1628411. A.K. gratefully acknowledges financial support through a Postdoctoral Research Fellowship (KA 4255/1-2) from the Deutsche Forschungsgemeinschaft (DFG).
\section*{Author contributions statement}
Simulations were performed by M.P. and A.K., the research was conceived by L.B. and I.S.A., and all authors wrote the manuscript.
\section{Summary of Major Changes}
We thank the reviewers for their detailed reviews. These reviews have significantly helped us improve the exposition of our method and validate our claims. We present a brief summary of the major changes made in this revision:
\begin{enumerate}
\item Presented a conceptual model of DeepMVI in Section~3 where we discuss the conditional dependency structure that is used to recover imputation signal, and the training methodology for generalization to missing values.
\item Rewrote the neural network architecture for easier understanding.
\item Ran empirical comparisons with two state-of-the-art deep learning methods for MVI. Introduced a second multidimensional dataset. Improved the exposition of the downstream analytics section.
\item Carefully proof-read the paper.
\end{enumerate}
A more detailed response to each reviewer's comments appear in each of the subsequent sections.
\section{Reviewer 1}
\begin{enumerate}
\item
\textbf{R1} I hope that the author could summarize contributions more carefully, for as I debated before, some claims like contribution (5) are not well-supported. Besides, too many experimental results are listed, I hope more theoretical analysis and highlights of your network design (instead of ‘careless’) can be added here.
\begin{answer}
We have updated the contributions, organized the experiments better, and created a new section (Section~3) that first presents the theoretical basis behind DeepMVI before going into the neural architecture.
\end{answer}
\item
\textbf{R2} The selection of downstream analytics should be rationally explained.
\begin{answer}
Section 4.7 has been updated and better explained with a figure.
\end{answer}
\item
\textbf{R3} The parameter setting should be further discussed and experimentally proved, such as The configuration of batch size, the number of filters, the number of attention heads and w.
\begin{answer}
Values of hyper-parameters such as the batch size, the number of filters, and the number of attention heads that we use in our algorithms are commonly used in the deep learning literature; we have hence skipped an analysis of these. The window size $w$, however, is a non-trivial hyperparameter, for which we show the errors for different window sizes in Fig.~\ref{fig:windowsize}. More discussion of hyper-parameters appears in Section~4.3.
\end{answer}
\begin{figure}[h]
\centering
\begin{tikzpicture}
\begin{groupplot}[group style={group size= 1 by 1,ylabels at=edge left},width=0.30\textwidth,height=0.25\textwidth]
\nextgroupplot[legend style={at={($(2.0,-0.5)$)},legend columns=2,fill=none,draw=black,anchor=north,align=left,legend cell align=left,font=\small},
ylabel = {MAE},
symbolic x coords={2,5,10,20,50},
xtick = {2,5,10,20,50},
legend to name=fred,
mark size=2pt]
\addplot [red,mark=square*] coordinates {(2, 0.568) (5, 0.390) (10, 0.287) (20,0.383) (50, 0.417)};
\addplot [green,mark=*] coordinates {(2,0.311) (5, 0.301) (10, 0.288) (20,0.44) (50, 0.611)};
\addlegendentry{Electricity};
\addlegendentry{Climate};
\end{groupplot}
{\pgfplotslegendfromname{fred}};
\end{tikzpicture}
\caption{MAE (y-axis) in the MCAR scenario with 100\% of the series containing missing values, for different window sizes $w$ (x-axis).}
\label{fig:windowsize}
\end{figure}
\item
\textbf{R4} This work would haven been significantly improved if the authors can highlight the problem of correlation ignorance and design a novel component to approach this. Even if this paper is accepted with no major revision on this part, some experimental results (comparing Transformer-only version with Transformer and Kernel-combined version) illustrating the significance of such multi-dimensional module is suggested.
\begin{answer}
Figure 7(left) shows the importance of the kernel regression module. Figure 7 (middle and right) show the role of the transformer module. Figure 8 shows the importance of the fine-grained module. Figure 9 highlights the importance of the multi-dimensional module.
\end{answer}
\item
\textbf{R5} Several claims are neither experimentally nor theoretical supported.
Details :
For example , in Sect. 3.1, it is said "When we tried to train time-embeddings, we found that the model was prone to over-fitting.." as the reason of not using time-embeddings. However, the intuition explained later are reluctant (maybe authors should highlight the existence ratio of blackout in real world datasets) compared to other parts, and no experimental results, or mathematical proof are given. I suggest to add this part.
\begin{answer}
We have rewritten most of Section 3 to exclude unsupported claims. Specifically, we have removed the claim about time-embeddings in our writeup. We do not include experiments to compare with time-embeddings because other reviewers have also complained that we have too many experiments and graphs.
\end{answer}
\item \textbf{D1} The scalability is not convincing, since only the combined scale-time. Absolute time is preferred. However, factor
increase in Fig.11(b) chart on synthetic dataset is given, and no combined time-performance chart is given.
\begin{answer}
We have added a scalability graph showing absolute time on real data in Figure~10b
\end{answer}
\end{enumerate}
\section{Reviewer 2}
\begin{enumerate}
\item \textbf{R1} Mathematical notations could be carefully revised.
\begin{answer}
We have rewritten section 3 completely and proof-read the new version.
\end{answer}
\item \textbf{R2} Detailed explanation on some assertions could be added.
Details :
The authors mentioned in the first paragraph of Section 3 that the proposed model is less prone to over-fitting on small X. However, it’s unclear to the audience according to which specific design the overfitting problem is alleviated.
\begin{answer}
Section 3 explains how our method of training alleviates over-fitting.
\end{answer}
\item \textbf{R2} (misnumbered) More motivative use cases or examples could be introduced to justify the category-based multidimensional scenario.
Details :
The category setting for utilizing the multidimensional relations limits the generalizability of the proposed methods. It would be useful to make it clear among the number of series, dimensions, and categories, More use cases, and more detailed motivations on this setting could better justify this setting and the corresponding design.
\begin{answer}
We have generalized our framework to support dimensions with real-valued members. Please see the modifications to Sections~2.1 and 4.2.
We have added another real-world dataset, Walmart M5 (Section~5.1.1), which has two categorical dimensions: the items sold and the stores where they are sold.
\end{answer}
\item \textbf{R3} More experiments could be added to solidify the improvements of the proposed transformer MVI over existing DL methods and the vanilla transformer (see W2) and on the category-based multidimensional scenario (W3).
Details :
Performances of another two DL-based methods are quoted from their original papers on one dataset under one missing setting. Considering this work is not the first to propose DL methods for time series MVI, comprehensive results (maybe with re-implementations of best efforts if the original authors' codes are not available) would better justify this new DL method. Extra related experiments would also be very interesting, e.g., enhancing other DL methods with fine-grain local signals and kernel regressions.
Comparisons on the multidimensional dataset, i.e., JanataHack, is only with the traditional methods (no DL methods) and only under one missing scenario.
\begin{answer}
We have added comparisons with two state-of-the-art DL methods and a vanilla transformer on two multidimensional datasets and three other datasets in Table~2.
\end{answer}
\end{enumerate}
\section{Reviewer 3}
\begin{enumerate}
\item \textbf{D1} While the basic structure of the new deep learning architecture is convincing, I believe that Sec. 3 requires a
more detailed elaboration. I was able to get a high-level understanding of the 3 components and the respective
design decisions, but was not able to get a deeper and detailed understanding. Hence, I believe that more details
are required here.
\begin{answer}
We have added a separate section (Section 3) to first explain the conceptual basis of the DeepMVI model. Also, we have rewritten the original section 3 extensively to be easier to understand.
\end{answer}
\item \textbf{D2} Sec. 3 merely describes the new system, including many parameter choices that have been taken, e.g.,
d\_feed=512, d\_out=32, cs=50w, w=10 or 20. What is missing is a systematic analysis (or at least a discussion)
working out some key properties of the proposed solution and why the chosen parameter settings are good. As a
consequence, it is difficult to understand how robust and general DeepMVI is for other datasets.
\begin{answer}
Section 3 has been completely rewritten. The rationale for hyper-parameters is discussed in the last subsection of Section~4.
We have experimented with ten datasets with widely varying data characteristics (Table~1). Our training algorithm is designed to be robust in the way we generate identically distributed labeled data with synthetic misses and in its use of a validation dataset to avoid over-fitting.
\end{answer}
\item \textbf{D3} While the experimental evaluation compares many different data sets, I have the impression that most of them
are rather short. I think that many time series in real-world applications are much longer and might slightly change
over time. Hence, it is not clear how the approach works in an online setting if the model is trained on "old" data.
\begin{answer}
We have included another real-world dataset, M5, comprising Walmart sales. Our method is not designed for an online setting in which the data change significantly over time. Online models are useful in an operational setting, whereas our focus in this work is the analytical or warehouse setting, where data collected in batch mode contain missing values. The batch setting also has multiple real-world applications, as motivated in references [3,10,18,21].
\end{answer}
\item \textbf{D4} There are no experiments on the time required for training the model.
\begin{answer}
The run-time we report in Figure~10b is end-to-end time, including the training time, which comprises the bulk of the total time.
\end{answer}
\item \textbf{D5} In Fig. 6, I am bit surprised that the MAE decreases if larger portions of the time series are missing in the
electricity dataset.
\begin{answer}
The decrease of MAE with an increasing fraction of missing values in Fig.~6 is an artifact of the benchmark\footnote{https://github.com/eXascaleInfolab/bench-vldb20} that we used for these experiments. Specifically, in the MCAR scenario with 10\% missing values, the first 10\% of the time series are selected to be incomplete, and blocks of missing values are placed randomly within those series. Hence, a lower MAE at larger missing fractions points to the imputation on the higher-indexed series being easier than on the lower-indexed series. We verify this experimentally in Fig.~\ref{fig:shuffle}, where we generate two additional versions of the Electricity dataset (Green and Brown) by shuffling the rows. The Red series corresponds to the unshuffled version of Electricity used in the paper. We can see that the first 10\% of rows of the Brown shuffle are much easier to impute than the first 10\% of Red, but with missing values in all the series, i.e., at 100\%, all the shuffles have similar MAE, with the remaining variation attributed to the different locations of the blocks.
\end{answer}
\begin{figure}[h]
\centering
\begin{tikzpicture}
\begin{groupplot}[group style={group size= 1 by 1,ylabels at=edge left},width=0.30\textwidth,height=0.25\textwidth]
\nextgroupplot[title=Electricity MCAR,
legend style={at={($(0,0)+(1cm,1cm)$)},legend columns=1,fill=none,draw=black,anchor=center,align=left,legend cell align=left,font=\small},
ylabel = {MAE},
legend to name=fred,
mark size=2pt]
\addplot [red,mark=square*] coordinates {(10, 0.390) (40, 0.349) (70, 0.338) (100, 0.309) };
\addplot [green,mark=*] coordinates {(10,0.344) (40, 0.366) (70, 0.316) (100, 0.341) };
\addplot [brown,mark=otimes*] coordinates {(10, 0.260) (40, 0.339) (70, 0.318) (100, 0.320) };
\end{groupplot}
\end{tikzpicture}
\caption{MAE (y-axis) on Electricity in the MCAR scenario. The x-axis is the percentage of time series with a missing block. The Red plot is the one presented in the paper; Green and Brown are obtained from row-shuffled versions of the Electricity dataset.}
\label{fig:shuffle}
\end{figure}
\item \textbf{D7} As described in Sec. 2.4, the work in [14] seems quite similar in spirit to DeepMVI. Therefore it is not clear why
this approach is not included in the experimental evaluation?
\begin{answer}
Reference [14] is designed for forecasting, where none of the series values in the future are available. In missing value imputation, we are able to exploit values at $t$ from other series using the kernel regression. Our method without KR is similar in spirit to [14], and we present that comparison in Figure~7. However, many important design details of our temporal transformer differ from [14]: convolution on values, block keys after discounting the immediate neighborhood, and our particular method of training.
\end{answer}
\item \textbf{D8} The authors classify previous works into matrix factorization techniques, statistical models, and deep learning methods. However, there exist some previous works that are based on pattern search across time series, similar to
what is proposed in this paper:
- Wellenzohn et al.: Continuous Imputation of Missing Values in Streams of Pattern-Determining Time Series. EDBT
2017.
How is DeepMVI related to such pattern-based approaches?
\begin{answer}
We have discussed this paper in the related work section.
\end{answer}
\item \textbf{D9} The paper contains many small language mistakes, which hamper the readability. A careful proofreading is needed.
\begin{answer}
We have carefully re-revised the paper.
\end{answer}
\end{enumerate}
\pagebreak
\end{document}
\section{Introduction}
In this paper we present a system for imputing missing values across multiple time series occurring in multidimensional databases.
Examples of such data include sensor recordings along time of different types of IoT devices at different locations, daily traffic logs of web pages from various device types and regions, and demand along time for products at different stores.
Missing values are commonplace in analytical systems that integrate data from multiple sources over long periods of time. Data may be missing because of errors or breakdowns at various stages of the data collection pipeline ranging from faulty recording devices to deliberate obfuscation. Analysis on such incomplete data may yield biased results misinforming data interpretation and downstream decision making. Therefore, missing value imputation is an essential tool in any analytical systems~\cite{Cambronero2017,milo2020automating,kandel2012profiler,Mayfield2010}.
Many techniques exist for imputing missing values in time-series datasets including several matrix factorization techniques~\cite{yu2016temporal,troyanskaya2001missing,khayati2019scalable,mei2017nonnegative,mazumder2010spectral,cai2010singular}, statistical temporal models~\cite{li2009dynammo}, and recent deep learning methods~\cite{cao2018brits,fortuin2020gp}. Unfortunately, even the best of existing techniques still incur high imputation errors. We show that top-level aggregates used in analytics could get worse after imputation with existing methods, compared to discarding missing data parts before aggregation. Inspired by the recent success of deep learning in other data analytical tasks like entity matching, entity extraction, and time series forecasting, we investigate if better deep learning architectures can reduce this gap for the missing value imputation task.
The pattern of missing blocks in a time series dataset can be quite arbitrary and varied. Also, datasets could exhibit very different characteristics in terms of the length and number of series, amount of repetitions (seasonality) in a series, and correlations across series. An entire contiguous block of entries might be missing within a time series, and/or across multiple time-series. The signals from the rest of the dataset that are most useful for imputing a missing block would depend on the size of the block, its position relative to other missing blocks, patterns within a series, and correlation (if any) with other series in the dataset. If a single entry is missing, interpolation with immediate neighbors might be useful. If a range of values within a single time series is missing, repeated patterns within the series and trends from correlated series might be useful. If the same time range across several series is missing, only patterns within a series will be useful.
Existing methods based on matrix factorization can exploit across-series correlations but are not as effective in combining them with temporal patterns within a series.
%
Modern deep learning methods, because of their higher capacity and flexibility, can in principle combine diverse signals when trained end to end. However, designing a neural architecture whose parameters can be trained accurately and scalably across diverse datasets and missing patterns proved to be non-trivial. Early solutions based on popular architectures for sequence data, such as recurrent neural networks (RNNs), have been shown to be worse both in terms of accuracy and running time. We explored a number of alternative architectures spanning Transformers, CNNs, and Kernel methods. A challenge we faced when training a network to combine a disparate set of potentially useful signals was that the network was quick to overfit on easy signals, whereas robust imputation requires that the network harness all available signals.
After several iterations, we converged on a model and training procedure, which we call \sysname, that is particularly suited to the missing value imputation task.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{figures/missing_patterns.pdf}
\caption{Grey-shaded regions denote missing blocks. Patterns of left and right windows around each missing block match with another part of the same series. Series 1-3 have high similarity and series 1,2,4 show good window match along time.
}
\label{fig:missing_patterns}
\end{figure}
\subsection{Contributions}
{\color{black}
(1) We propose a tractable model to express each missing value using a distribution conditioned on available values at other times within the series and values of similar series at the same time. (2) We design a flexible and modular neural architecture to extract fine-grained, coarse-grained, and cross-series signals to parameterize this distribution. (3) We provide a robust method of training generalizable parameters by simulating missing patterns around available indices that are identically distributed to the actual missing entries. (4) Our neural architecture includes a Temporal transformer that differs from off-the-shelf Transformers in our method of creating contextual keys used for self-attention.
(5)
We propose the use of Kernel regression for incorporating information from correlated time-series.
This method of extracting relatedness is scalable and extends naturally to multidimensional datasets, that none of the existing methods handle.
(6) We achieve 20--70\% reduction in imputation error as shown via an extensive comparison with both state of the art neural approaches and traditional approaches across ten real-life datasets and five different patterns of missing values. We show that this also translates to more accurate aggregates analytics on datasets with missing entries.
(7) Our method is six times faster than using off-the-shelf deep-learning components for MVI.}
\section{Preliminaries and Related Work}
We present a formal problem statement, discuss related work, and provide background on relevant neural sequence models.
\subsection{Problem Statement}
{\color{black}
We denote our multidimensional time series dataset as an $(n+1)$-dimensional data tensor of real values $X \in \RR^{|K_1|\times\cdots\times|K_{n+1}|}$. The dimensions of $X$ are denoted as ($K_1$,$K_2$,...,$K_n$, $K_{n+1}$). The dimension $K_{n+1}$ denotes a regularly spaced time index which, without loss of generality, we denote as $\{1,\ldots,T\}$.
Each $K_i$ is a dimension comprising a discrete set of members $\{m_{i,1},\ldots,m_{i,|K_i|}\}$. Each member $m_{i,j}$ could be either a categorical string or a real-valued vector.
For example, a retail sales dataset might consist of two such dimensions: $K_1$ comprising categorical members denoting the identity of items sold, and $K_2$ comprising stores,
where a store is defined in terms of its continuous latitude and longitude values.}
We denote a specific combination of members of each dimension as $\vek{k}=k_1,\ldots,k_n$ where each $k_i \in \mathrm{Dim}(K_i)$. We refer to the value at a combination $\vek{k}$ and time $t$ as $\X{\vek{k},t}$. For example, in Figure~\ref{fig:missing_patterns} we show four series of length 50 and their index $\vek{k}$ sampled from a two-dimensional categorical space of item-ids and region-ids.
We are given an $X$ with some fraction of values $\X{\vek{k},t}$ missing. Let $M$ and $A$ be tensors of ones and zeros with the same shape as $X$ that denote the missing and available values in $X$, respectively.
We use $\cI(M)$ to denote all missing values' $(\vek{k},t)$ indices.
The patterns of missing index combinations in $\cI(M)$ can be quite varied --- for example, missing values may occur in contiguous blocks or at isolated points; across time-series the missing time-ranges may overlap or be missing at random; or, in an extreme case called Blackout, a time range may be missing in all series.
Our goal is to design a procedure that can impute the missing values at the given indices $\cI(M)$ so that the error between the imputed values $\hatX{}$ and ground truth values $\gtX{}$ is minimized:
\begin{equation}
\sum_{(\vek{k},t)\in \cI(M)} \cE(\hatX{\vek{k},t}, \gtX{\vek{k},t})
\end{equation}
where $\cE$ denotes error functions such as root mean square error (RMSE) and mean absolute error (MAE). As motivated in Figure~\ref{fig:missing_patterns} both patterns within and across a time series may be required to fill a missing block.
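As an illustration, with $X$ stored as a dense numpy tensor and $M$ as a 0/1 mask of the same shape, the objective above can be evaluated as follows (a sketch of ours, not the benchmark's code):
\begin{verbatim}
import numpy as np

def imputation_error(X_hat, X_true, M, metric="mae"):
    """Error of the objective above, over the missing indices I(M) only."""
    diff = (X_hat - X_true)[M == 1]
    if metric == "mae":
        return np.abs(diff).mean()
    if metric == "rmse":
        return np.sqrt((diff ** 2).mean())
    raise ValueError(metric)
\end{verbatim}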
\input{relate}
\subsection{Background on Neural Sequence Models}
We review\footnote{Readers familiar with Deep Learning may skip this subsection.} two popular neural architectures for processing sequence data.
\subsubsection{Bidirectional Recurrent Neural Networks}
\newcommand{\vek{U}}{\vek{U}}
\newcommand{\vek{b}}{\vek{b}}
Bidirectional RNN \cite{graves2005framewise} is a special type of RNN that captures dependencies in a sequence in both forward and backward directions. Unlike in forecasting, context in both forward and backward directions is available in the MVI task.
Bidirectional RNN maintains two sets of parameters, one for forward and another for backward direction. Given a sequence $X$, the forward RNN maintains a state $\vek{h}^f_t$ summarizing $X_1\ldots X_{t-1}$, and backward RNN maintains a state $\vek{h}^b_t$ summarizing $X_T\ldots X_{t+1}$. These two states jointly can be used to predict a missing value at $t$.
Because each RNN models the dependency along only one direction, a bidirectional RNN can compute loss at each term in the input during training.
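A minimal PyTorch sketch of this scheme is shown below (our illustration; the layer sizes are arbitrary). The forward and backward states are shifted by one step so that the prediction at $t$ uses only $X_1\ldots X_{t-1}$ and $X_T\ldots X_{t+1}$, matching the description above.
\begin{verbatim}
import torch
import torch.nn as nn

class BiRNNImputer(nn.Module):
    def __init__(self, d_in=1, d_h=32):
        super().__init__()
        self.rnn = nn.GRU(d_in, d_h, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * d_h, d_in)

    def forward(self, x):              # x: (batch, T, d_in), zeros at misses
        h, _ = self.rnn(x)
        d = h.shape[-1] // 2
        hf = torch.zeros_like(h[..., :d])
        hf[:, 1:] = h[:, :-1, :d]      # h^f_t summarizes X_1 .. X_{t-1}
        hb = torch.zeros_like(h[..., d:])
        hb[:, :-1] = h[:, 1:, d:]      # h^b_t summarizes X_T .. X_{t+1}
        return self.out(torch.cat([hf, hb], dim=-1))

x = torch.randn(4, 50, 1)              # 4 series of length 50
print(BiRNNImputer()(x).shape)         # torch.Size([4, 50, 1])
\end{verbatim}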
\subsubsection{Transformers}
\label{sec:Transfomers}
A Transformer \cite{Vaswani2017} is a special type of feed-forward neural network that captures sequential dependencies through a combination of self-attention and feed-forward layers. Transformers are primarily used on text data for language modelling and various other NLP tasks~\cite{devlin2018bert}, but have recently also been used for time-series forecasting \cite{li2019enhancing}.
Given an input sequence $X$ of length $T$, a transformer processes it as follows. It first embeds the input $X_{t}$ for each $t \in [1, T]$ into a vector $E_t \in \RR^{p}$, called the input embedding. It also creates a positional encoding vector at position $t$, denoted $e_t \in \RR^p$:
\begin{align}
e_{t,r}=
\begin{cases}
\sin(t/10000^{\frac{r}{p}}), & \textrm{if}~~~ r\%2 == 0 \\
\cos(t/10000^{\frac{r-1}{p}}), & \textrm{if}~~~ (r-1)\%2 == 0
\end{cases}
\label{eqn:position_encoding}
\end{align}
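Eqn.~\ref{eqn:position_encoding} transcribes directly into code; a minimal sketch, assuming $p$ is even:
\begin{verbatim}
import numpy as np

def positional_encoding(T, p):
    # e[t-1] is the p-dimensional encoding of position t = 1..T:
    # even coordinates use sin, odd coordinates use cos (Eqn. above).
    e = np.zeros((T, p))
    t = np.arange(1, T + 1)[:, None]
    r = np.arange(0, p, 2)                     # even coordinates r
    e[:, 0::2] = np.sin(t / 10000 ** (r / p))
    e[:, 1::2] = np.cos(t / 10000 ** (r / p))  # (r-1)/p at odd coords
    return e
\end{verbatim}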
Then it uses linear transformations of the input embedding and the positional encoding vector to create query, key, and value vectors:
\begin{align}
\label{eq:oldKey}
Q_t &= (E_t + e_t) W^Q \quad K_t = (E_t + e_t) W^K \quad V_t = (E_t + e_t) W^V
\end{align}
where the $W$s denote trained parameters.
The key and value vectors at all times $t \in [1, T]$ are stacked to create matrices $K$ and $V$ respectively. Then the query vector at time $t$ and the keys at the other positions $t' \neq t$ are used to compute a self-attention distribution, which yields a vector at each $t$ as an attention-weighted sum of its neighbors:
\begin{align}
\vek{h}_t = \textrm{Softmax}(\frac{Q_tK^T}{\sqrt{p}})V
\label{eqn:vanilla_transformer_attn}
\end{align}
Such self-attention can capture the dependencies between various positions of the input sequence. Transformers use multiple such self-attentions to capture different kinds of dependencies; these are jointly referred to as multi-headed attention.
In general, multiple such layers of self-attention are stacked. The final vector $\vek{h}_t$ at each $t$ provides a contextual representation of position $t$.
For training the parameters of the transformer, a portion of the input is masked (replaced by 0). We denote the masked indices by $M$.
The training loss is computed only on the masked indices in $M$, because multiple layers of self-attention can compute $\vek{h}_t$ as a function of any of the input values. This is unlike bidirectional RNNs, where the forward and backward RNN states clearly demarcate the values used in each state, which allows a loss to be computed at every input position. Transformers, on the other hand, are faster to train because, unlike RNNs, they process all positions in parallel.
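To make the contrast concrete, a sketch of a single vanilla self-attention head (Eqn.~\ref{eqn:vanilla_transformer_attn}) and of a loss restricted to the masked positions follows; all names are ours.
\begin{verbatim}
import torch
import torch.nn.functional as F

def self_attention(E, e, W_q, W_k, W_v):
    # E: (T, p) input embeddings, e: (T, p) positional encodings.
    Q, K, V = (E + e) @ W_q, (E + e) @ W_k, (E + e) @ W_v
    p = Q.shape[-1]
    A = F.softmax(Q @ K.T / p ** 0.5, dim=-1)  # (T, T) weights
    return A @ V                               # h_t for every t

# Training sketch: zero out the inputs at masked indices M, then
#   h = self_attention(E_masked, e, W_q, W_k, W_v)
#   loss = ((h[M] - target[M]) ** 2).mean()    # loss only on M
\end{verbatim}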
\subsection{Related Work in Deep Learning}
\label{sec:relate:deep}
In spite of the recent popularity and success of deep learning (DL) on several difficult tasks, existing works on the MVI task are few in number, and there is limited evidence of DL methods surpassing conventional methods across the board.
MRNN~\cite{yoon2018estimating} is one of the earliest deep learning methods. MRNNs use
bidirectional RNNs to capture the context of a missing block within a series, and capture correlations across series using a fully connected network. However, a detailed empirical evaluation in \cite{khayati2020mind} found MRNN to be orders of magnitude slower than the matrix-completion methods discussed above, and also (surprisingly) much worse in accuracy.
More recently, BRITS~\cite{cao2018brits} also uses bidirectional RNNs. At each time step $t$ the RNN is fed a column $X_{:,t}$ of $X$; the RNN state is the black box charged with capturing the dependencies both across time and across series.
GP-VAE~\cite{fortuin2020gp} adds more structure to the dependency by first converting each data column $X_{:,t}$ of $X$ to a low-dimensional embedding, and then using a Gaussian process to capture dependencies along time in the embedding space. Training is via an elaborate structured variational method. On benchmark time-series datasets GP-VAE has been shown to be empirically worse than BRITS, and it appears geared towards image datasets.
\nocite{liu2019naomi}
Compared to these deep models, our network architecture is more modular and lightweight in design, trains stably without dataset-specific hyper-parameter tuning, and is both more accurate and significantly faster.
\myparagraph{Other Deep Temporal Models}
Much of the work on modeling time-series data has been in the context of the forecasting task. State-of-the-art methods for forecasting are still RNN-based~\cite{FlunkertSG17,deshpande2019streaming,salinas2019high,sen2019think}. The only exception is \cite{li2019enhancing}, which uses
convolution to extract local context features of the time series and then a transformer to capture longer-range features. Such
transformer-and-convolution models have been quite successful in the speech-transcription literature \cite{li2019jasper}.
Our architecture is also based on transformers and convolutions, but our design of the keys and queries is better suited to missing value imputation. Further, we include a fine-grained context and a second, kernel-regression model to handle correlations across series.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{figures/architecture_illustration.pdf}
\caption{Architecture of \sysname. The model is shown imputing the three circles marked in {\color{red} red} at the top. The temporal transformer convolves with windows of size $w=3$ to create queries, keys, and values for the multi-headed attention. The deconvolution creates three vectors, one for each red circle. These are concatenated with the fine-grained signal and the kernel-regression output to predict the final values.}
\label{alg:trsf-arch}
\end{figure}
\section{\sysname: The Conceptual Model}
\label{sec:ctrain}
We cast the missing value imputation task as solving an objective of the form:
$$
\mathrm{max}_{\hatX{}} \prod_{(\vek{k},t)\in \cI(M)} \Pr(\hatX{\vek{k},t}|X,A;\theta)
$$
where $\theta$ are the parameters of the model and $A$ is the mask denoting available values in $X$.
Here $X$ is the entire set of available values, and any tractable model needs to break down the influence of $X$ at a given $(\vek{k},t)$ into simpler, learnable subparts.
State-of-the-art deep learning methods
such as BRITS simplify the dependence as:
$$
\Pr(\hatX{\vek{k},t}|X,A;\theta) = \Pr(\hatX{\vek{k},t}|\X{\bullet,1\ldots t-1}, \X{\bullet,t+1\ldots T};\theta)
$$
The first part, $\X{\bullet,1\ldots t-1}$, denotes the entire vector of values over all series at times before $t$, and likewise $\X{\bullet,t+1\ldots T}$ for times after $t$. Note that
observed values at time $t$ itself from correlated sequences are ignored. Also, each of these contexts is summarized using RNNs that take as input the values over all series $\X{\bullet,j}$ at each step $j$, which limits scalability in the number of series.
\newcommand{\XA}[1]{[X,A]_{#1}}
In contrast, \sysname\ captures the dependence both along time within series $\vek{k}$ and across other related series at time $t$, simplifying the dependency structure as:
\begin{align}
\label{eq:concept} \Pr(\hatX{\vek{k},t}|X,A;\theta) = \Pr(\hatX{\vek{k},t} |
\X{\vek{k},\bullet},
\X{\text{Sib}(\vek{k}),t}, A, \theta)
\end{align}
The first part, $\X{\vek{k},\bullet}$, is used to capture the long-term dependency within the series $\vek{k}$ as well as fine-grained signals from the immediate temporal neighborhood of $t$. The second part extracts signals
from related series $\text{Sib}(\vek{k})$ at time $t$. The notion of $\text{Sib}(\vek{k})$ is defined using learnable kernels that we discuss in Section~\ref{ssec:kr}. Unlike conventional matrix factorization or statistical methods like Kalman filters that assume fixed functional forms, we depend on the universal approximation power of neural networks to extract signals from the contexts $\X{\vek{k},\bullet}$ and $\X{\text{Sib}(\vek{k}),t}$ and create a distribution over the missing value at $(\vek{k},t)$. The neural network architecture we use for parameterizing this distribution is described in Section~\ref{sec:mviDL}.
The parameters $\theta$ of high-capacity neural networks need to be trained carefully to avoid over-fitting. We do not have a separate labeled dataset for training the model parameters; instead, we create our own labeled dataset from the available values (marked in $A$) of the same data tensor $X$.
We create this dataset from randomly sampled indices $(\vek{k}_i,t_i)$ in the available set $A$. Each index $(\vek{k}_i,t_i)$ defines a training instance with input $\vx_i=(\X{\vek{k}_i,\bullet},\X{\text{Sib}(\vek{k}_i),t_i},A^i)$ and the continuous output $y_i=\X{\vek{k}_i,t_i}$ as label; the task can thus be cast as standard regression. In order for the trained parameters $\theta$ to generalize to the missing indices in $\cI(M)$, the available values in the context of a training index $(\vek{k}_i,t_i)$ need to be distributed identically to those around the indices in $\cI(M)$. We achieve this by creating synthetic missing values around each $(\vek{k}_i,t_i)$: the shape of the missing block is chosen by sampling a shape $B_i$ from anywhere in $M$. Note that the shape $B_i$ is a cuboid characterized by just the {\em number} (and not the position) of missing values along each of the $n+1$ dimensions. We then place $B_i$ randomly around $(\vek{k}_i,t_i)$ to create a new availability tensor $A^i$ by masking out the newly created missing entries around $(\vek{k}_i,t_i)$. Our training objective thereafter is simple likelihood maximization over the training instances:
\begin{equation*}
\theta^*= \mathrm{argmax}_{\theta} \underset{(\vek{k}_i,t_i) \in \cI(A)}{\sum}\log\Pr(\X{\vek{k}_i,t_i}|\X{\vek{k}_i,\bullet},\X{\text{Sib}(\vek{k}_i),t_i}, A^i;\theta)
\end{equation*}
Thus $\theta^*$ is trained to predict the true value of $|A|$ instances, where our method of sampling $A^i$ ensures that these are distributed identically to the missing entries in $\cI(M)$. We further prevent over-fitting on the training instances by using a validation dataset for early stopping. This lets us invoke the standard ML generalization guarantees to claim that our model will generalize well to the unseen indices.
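A minimal sketch of this instance-creation step follows; the helper name and the precomputed list \texttt{M\_shapes} of block shapes observed in $M$ are our own illustrative choices.
\begin{verbatim}
import numpy as np

def make_training_instance(X, A, M_shapes, rng):
    # Pick an available index, then mask a synthetic block whose
    # shape is sampled from the real missing blocks, so training
    # contexts mimic the contexts of the entries in I(M).
    avail = np.argwhere(A == 1)
    k_t = tuple(avail[rng.integers(len(avail))])   # (k_1..k_n, t)
    shape = M_shapes[rng.integers(len(M_shapes))]  # a cuboid B_i
    A_i, sl = A.copy(), []
    for dim, (idx, b) in enumerate(zip(k_t, shape)):
        lo = max(0, idx - rng.integers(b))         # random placement
        sl.append(slice(lo, min(A.shape[dim], lo + b)))
    A_i[tuple(sl)] = 0                             # synthetic misses
    return k_t, A_i, X[k_t]       # input index, new mask, label y_i
\end{verbatim}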
\section{\sysname: The Neural Architecture}
\label{sec:mviDL}
We implement the conditional model of Equation~\ref{eq:concept} as a multi-layered, modular neural network. The first module is the temporal transformer, which takes as input $\X{\vek{k},\bullet}$ and extracts two types of imputation signals along time:
longer-term seasonality, represented as an output vector $\vek{h}^{\mathrm{tt}} = TT_\theta(\X{\vek{k},\bullet}, A_{\vek{k},\bullet})$, and a fine-grained signal from the immediate neighborhood of $(\vek{k},t)$, represented as $\vek{h}^{\mathrm{fg}} = FG_{\theta}(\X{\vek{k},\bullet}, A_{\vek{k},\bullet})$.
The second module is the kernel regression, which extracts information from related series at time $t$, i.e., from $\X{\text{Sib}(\vek{k}),t}$, to output another hidden vector $\vek{h}^{\mathrm{kr}} = KR_{\theta}(\X{\bullet,t}, A_{\bullet,t})$. The last module combines these three outputs through a light-weight linear layer to output the mean of the distribution of $\hatX{\vek{k},t}$ as follows:
\begin{align}
\mu[\hatX{\vek{k},t}] &= \bm{w}_o^T [\vek{h}^{\mathrm{tt}},\vek{h}^{\mathrm{fg}}, \vek{h}^{\mathrm{kr}}] + \bm{b}_o
\end{align}
The above mean is used to model a Gaussian distribution for the conditional probability with a shared variance.
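The combination step is deliberately light-weight; a sketch, with illustrative dimensions:
\begin{verbatim}
import torch
import torch.nn as nn

class OutputLayer(nn.Module):
    # Linear head over the concatenated temporal-transformer,
    # fine-grained, and kernel-regression signals.
    def __init__(self, d_tt, d_fg, d_kr):
        super().__init__()
        self.head = nn.Linear(d_tt + d_fg + d_kr, 1)

    def forward(self, h_tt, h_fg, h_kr):
        return self.head(torch.cat([h_tt, h_fg, h_kr], dim=-1))
\end{verbatim}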
A pictorial depiction of our pipeline appears in Figure~\ref{alg:trsf-arch}. We describe each of these modules in the following sections.
\subsection{Temporal Transformer Module}
\label{subsec:tt}
We build this module to capture temporal dependencies within a series, akin to seasonality, drawing inspiration from the Transformer~\cite{Vaswani2017} architecture.
The parallel processing of the attention module provides a natural way of handling missing values by masking inputs, in contrast to the sequential modeling in RNNs.
However, our initial attempts at using the vanilla Transformer model (described in Section~\ref{sec:Transfomers}) for the MVI task were subject to over-fitting on long time series and inaccurate for block missing values. We therefore designed a new transformer specifically suited to the MVI task, which we call the Temporal Transformer.
With a slight abuse of notation, we override the data and availability tensors $X$ and $A$ to denote one-dimensional data and availability series respectively. Accordingly, $\cI(A)$ and $\cI(M)$ are overridden to conform to series data, and $\cI(A) = \cI - \cI(M)$ denotes all the indices of $X$ that are not missing, with $\cI$ the set of all indices.
We next describe how the Temporal Transformer computes the function $TT_{\theta}(X,A)$.
\myparagraph{Window-based Feature Extraction}
A linear operation on the window $\X{jw:(j+1)w}$ computes a $p$-dimensional vector $Y_j$ as follows:
\begin{align}
\label{eqn:TT_Yj} Y_j = W_f \X{jw:(j+1)w} + b_f
\end{align}
where $W_f \in \RR^{p\times w}$ and $b_f \in \RR^{p}$ are parameters. This operation is also termed a non-overlapping convolution in the deep learning literature.
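Eqn.~\ref{eqn:TT_Yj} is equivalent to a 1-D convolution whose kernel size and stride both equal $w$; a sketch with illustrative sizes:
\begin{verbatim}
import torch
import torch.nn as nn

w, p = 10, 32                 # window size, number of filters
conv = nn.Conv1d(in_channels=1, out_channels=p,
                 kernel_size=w, stride=w)  # non-overlapping windows

X = torch.randn(1, 1, 1000)   # one series of length T = 1000
Y = conv(X)                   # (1, p, T // w): one Y_j per window
\end{verbatim}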
We use self-attention on the $Y_j$ vectors obtained from Eqn.~\ref{eqn:TT_Yj} above. We next describe the computation of the query, key, and value vectors for this self-attention.
\myparagraph{Query, Key and Value functions}
For an index $j$ (corresponding to the vector $Y_j$), we define the query and key functions $Q(\cdot)$ and $K(\cdot)$ as functions of the neighboring vectors $Y_{j-1}$ and $Y_{j+1}$, and the value function $V(\cdot)$ as a function of $Y_j$:
\begin{align}
\label{eqn:TT_query} Q(Y,j) &= ([Y_{j-1},Y_{j+1}]+e_j)W_q + b_q \\
\label{eqn:TT_key} K(Y,j, A) &= (([Y_{j-1},Y_{j+1}]+e_j)W_k + b_k)\cdot \prod_{i=jw}^{(j+1)w}A_{i}\\
\label{eqn:TT_value} V(Y_j) &= Y_{j}W_v + b_v
\end{align}
where $W_q,W_k \in \RR^{2p \times 2p}$, $W_v \in \RR^{p \times p}$, and $e_j$ is the positional encoding of index $j$ defined in Eqn.~\ref{eqn:position_encoding}. The product of the $A_i$'s in Eqn.~\ref{eqn:TT_key} is one only if all values in the window are available; this prevents attention on windows with missing values. Note that keys and values are calculated for the other indices $j' \neq j$ as well.
\myparagraph{Attention Module}
The attention module computes an attention-weighted sum of the vectors $V(Y_{\bullet})$. The attention-weighted sum at index $j$ is calculated as follows:
\begin{align}
\label{eqn:TT_attn}
\mathrm{Attn}(Q(\cdot), K(\cdot), V(\cdot), A, j) &= \frac{\sum_{j'}\langle Q(Y,j), K(Y,j',A) \rangle V(Y_{j'})}{\sum_{j'} \langle Q(Y,j), K(Y,j',A) \rangle}
\end{align}
Note that for any index $j'$ whose window contains missing values, including the index $j$ itself, the key $K(Y, j', A)$ is zero; hence such indices are not considered in the attention.
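A direct transcription of Eqn.~\ref{eqn:TT_attn} for one head (a sketch; the small constant guarding the division is our addition):
\begin{verbatim}
import torch

def window_attention(Q, K, V):
    # Q, K: (c, 2p) queries and keys; V: (c, p) values, c windows.
    # Keys of windows with any missing value are zero vectors, so
    # those windows receive zero weight in the sums below.
    scores = Q @ K.T                            # (c, c) inner products
    denom = scores.sum(dim=1, keepdim=True).clamp_min(1e-8)
    return (scores @ V) / denom                 # weighted sum per j
\end{verbatim}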
\myparagraph{MultiHead Attention}
Instead of using only one attention module, our temporal transformer uses multiple instantiations of the functions $Q(Y, j)$, $K(Y, j, A)$, and $V(Y_j)$. We compute $n_{\mathrm{head}}$ such instantiations and denote them using the index $l=1\ldots n_{\mathrm{head}}$.
We obtain the output of multi-head attention by concatenating the output vectors of all $n_{\mathrm{head}}$ attentions (obtained from Eqn.~\ref{eqn:TT_attn}) into a vector $\vek{h}_j \in \RR^{pn_{\mathrm{head}}}$.
\begin{align}
\vek{h}_j = [\mathrm{Attn}^1(\cdots), \ldots, \mathrm{Attn}^{n_{\mathrm{head}}}(\cdots)]
\end{align}
\myparagraph{Decoding Attention Output}
The vector $\vek{h}_j$ is the output vector for the window $X_{jw:(j+1)w}$. The decoding module first passes $\vek{h}_j$ through a feed-forward network to obtain the vector $\vek{h}_j^{\mathrm{ff}}$, and then transforms this vector to obtain the output vectors for the positions $\{jw,\ldots,t,\ldots,(j+1)w \}$:
\begin{align}
\label{eqn:TT_decodeMLP} \vek{h}_j^{\mathrm{ff}} &= \mathrm{ReLU}(W_{d_2}(\mathrm{ReLU}(W_{d_1}(\mathrm{ReLU}(\vek{h}_j)))))\\
\label{eqn:TT_deconv} \vek{h}_j^{\mathrm{tt}} &= \mathrm{ReLU}(W_d\vek{h}_j^{\mathrm{ff}} + b_d)
\end{align}
where $W_d \in \RR^{w\times p \times p}$. Note that $\vek{h}_j^{\mathrm{tt}} \in \RR^{w \times p}$ consists of the output vectors for all indices in the $j$-th window. The output vector corresponding to index $t$, $\vek{h}^{\mathrm{tt}} = \vek{h}_j^{\mathrm{tt}}[t\%w]$, is the final output of the $TT_{\theta}(X,A)$ module.
\subsubsection{Fine-Grained Attention Module}
\label{ssec:fg}
We use this module to capture local structure from the immediately neighboring time indices, which is especially significant in the case of point missing values. Let the time index $t$ be part of the window $j=\big\lfloor \frac{t}{w} \big\rfloor$, with start and end times $t_s^j$ and $t_e^j$ respectively. Then we define the function $FG_{\theta}(X, A)$ as
\begin{align}
\label{eqn:finegrained}
FG_{\theta}(X, A) = \vek{h}^{\mathrm{fg}} = \frac{\sum_{r \in \cI(A) \cap [t_s^j,\, t_e^j]} \X{r}}{\big|\cI(A) \cap [t_s^j,\, t_e^j]\big|}
\end{align}
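Equivalently, Eqn.~\ref{eqn:finegrained} in code (a sketch over one series $X$ with availability mask $A$; the zero fallback for an all-missing window is our assumption):
\begin{verbatim}
import numpy as np

def fine_grained(X, A, t, w):
    # Mean of the available values in the window containing t.
    j = t // w
    xs = X[j*w:(j+1)*w]
    av = A[j*w:(j+1)*w].astype(bool)
    return xs[av].mean() if av.any() else 0.0
\end{verbatim}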
\subsection{Kernel Regression Module}
\label{ssec:kr}
We build the kernel regression module to exploit information from correlated series along each of the $n$ data dimensions. A series in our case is associated with the $n$ variables $\vek{k}=k_1,\ldots, k_n$.
\myparagraph{Index Embeddings}
First, we embed each dimension member in a space that preserves relatedness. If a member $m_{ij}$ of a dimension $K_i$ is categorical, we learn an embedding vector $E_\theta(m_{ij}) \in \RR^{d_i}$. When the dimension is real-valued, i.e., $m_{ij} \in \RR^p$, we use $E_\theta(m_{ij})$ to denote a feed-forward neural network that transforms the raw vector into a $d_i$-dimensional vector.
We define relatedness among series pairs. We only consider series pairs that differ in exactly one dimension. We call these sibling series:
\myparagraph{Defining Siblings}
We define the siblings of an index $\vek{k}$ along dimension $i$, denoted $\text{Sib}(\vek{k},i)$, as the set of all
indices $\vek{k}'$ such that $\vek{k}$ and $\vek{k}'$ differ only in the $i$-th dimension:
\begin{align}
\text{Sib}(\vek{k},i) = \{ \vek{k}' :k'_j = k_j ~~ \forall j \neq i \land k'_i \neq k_i \}
\label{eqn:siblings}
\end{align}
Here we overload the notation $\text{Sib}(\vek{k})$ (used earlier in Eqn.~\ref{eq:concept}) to identify the siblings along each dimension. For example, in retail sales data containing three items \{$i_0, i_1, i_2$\} and four regions \{$r_0, r_1, r_2, r_3$\}, the siblings of an (item, region) pair $\vek{k}=(i_1, r_2)$ along the product dimension would be
$\text{Sib}(\vek{k},0) = \{(i_0,r_2), (i_2,r_2)\}$
and along the region dimension would be $\text{Sib}(\vek{k},1) = \{(i_1,r_0), (i_1,r_1), (i_1,r_3)\}$.
\myparagraph{Regression along each dimension}
An RBF Kernel computes the similarity score $\mathcal{K}(k_i,k'_i)$ between indices $k_i$ and $k'_i$ in the $i$-th dimension:
\begin{align}
\mathcal{K}(k_i,k'_i) = \exp \Big(-\gamma\, ||E_\theta(k_i) - E_\theta(k'_i)||_2^2 \Big)
\end{align}
Given a series $X$ at index $(\vek{k},t)$, for each dimension $i$, we compute the kernel-weighted sum of measure values as
\begin{align}
\label{eqn:KR_U} U_{(\vek{k},i),t} = \frac{\sum_{\vek{k}' \in \text{Sib}(\vek{k},i)}X_{\vek{k}',t} \mathcal{K}(k_i,k'_i) A_{\vek{k}',t}}{\sum_{\vek{k}' \in \text{Sib}(\vek{k},i)}\mathcal{K}(k_i,k'_i) A_{\vek{k}',t}}
\end{align}
where $A_{\vek{k}',t} = 1$ for non-missing indices and $0$ for missing indices.
When a dimension $i$ is large, we make the above computation efficient by pre-selecting the top $L$ members based on their kernel similarity.
We also compute two other measures: the sum of the kernel weights, and the variance of the $X$ values along each sibling dimension:
\begin{align}
\label{eqn:KR_W} W_{(\vek{k},i),t} &= \sum_{\vek{k}' \in \text{Sib}(\vek{k},i)}\mathcal{K}(k_i,k'_i)A_{\vek{k}',t} \\
\label{eqn:KR_V} V_{(\vek{k},i),t} &= \mathrm{Var}(X_{\text{Sib}(\vek{k}, i),t})
\end{align}
The last layer of the kernel-regression module concatenates the $U$, $V$, and $W$ components over all dimensions $i=1\ldots n$:
\begin{align}
\vek{h}^{\mathrm{kr}} = \mathrm{Concat}\big(\{U_{(\vek{k},i),t}, V_{(\vek{k},i),t}, W_{(\vek{k},i),t}\}_{i=1}^{n}\big)
\end{align}
where $\vek{h}^{\mathrm{kr}} \in \RR^{3n}$.
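A sketch of the per-dimension computation (Eqns.~\ref{eqn:KR_U}--\ref{eqn:KR_V}); computing the variance over the available siblings only is an assumption on our part:
\begin{verbatim}
import numpy as np

def kernel_regression(x_sib, a_sib, E_k, E_sib, gamma=1.0):
    # x_sib, a_sib: values and availability of Sib(k, i) at time t;
    # E_k: embedding of k_i; E_sib: embeddings of sibling members.
    kern = np.exp(-gamma * ((E_sib - E_k) ** 2).sum(axis=1))  # RBF
    wts = kern * a_sib                   # drop missing siblings
    U = (wts * x_sib).sum() / max(wts.sum(), 1e-8)
    W = wts.sum()                        # total kernel weight
    V = x_sib[a_sib > 0].var() if a_sib.any() else 0.0
    return U, W, V                       # concatenated over i = 1..n
\end{verbatim}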
\begin{figure}
\begin{algorithmic}[1]
\Procedure{\sysname}{$X, A, M$}
\State TrainData $\gets \{((\vek{k}_i,t_i), A^i) : (\vek{k}_i,t_i) \in \cI(A),\ A^i = $ synthetic misses around $(\vek{k}_i,t_i)\}$.
\State model $\gets \mathrm{CreateModel}()$ \texttt{/* Sec.~\ref{para:net_default_params} */}
\For{$\mathrm{iter}=0$ to $\mathrm{MaxIter}$}
\For{\textbf{each} $(\vek{k}_i,t_i,A^i)\sim \mathrm{Batch}$(TrainData)}
\State {$\hat{X}_{\vek{k}_i, t_i} \gets \mathrm{ForwardPass}(X, A^{i}, \vek{k}_i, t_i)$}
\EndFor
\State Update model parameters $\Theta$.
\State Evaluate validation data for early stopping.
\EndFor
\State \texttt{/* Impute test-blocks */}
\State{$\hat{X} \gets \mathrm{ForwardPass}(X, A, \bullet, \bullet)$ over all test blocks in $\cI(M)$}. \\
\Return $\hat{X}$
\EndProcedure
\Procedure{ForwardPass}{$X$, $A$, $\vek{k}$, $t$}
\State $\vek{h}^{\mathrm{tt}}, \vek{h}^{\mathrm{fg}} \gets TT(X_{\vek{k},\bullet}, A_{\vek{k},\bullet}, t)$.
\State $\vek{h}^{\mathrm{kr}} \gets KR(X, A, \vek{k}, t)$. \texttt{/* Section~\ref{ssec:kr} */}
\State $\hat{X}_{\vek{k},t} \gets \bm{w}_o^T [\vek{h}^{\mathrm{tt}},\vek{h}^{\mathrm{fg}}, \vek{h}^{\mathrm{kr}}] + \bm{b}_o$ \\
\Return $\hat{X}_{\vek{k},t}$
\EndProcedure
\Procedure{TT}{$X$, $A$, $t$}
\State Index of the window containing time $t$: $j = \lfloor t/w \rfloor$.
\State $Y_j = W_f X_{jw:(j+1)w} + b_f$.
\State \texttt{/*Similarly compute $Y_{j-1}$ and $Y_{j+1}.$*/}
\State Compute Query, Keys, and Values using Equations \ref{eqn:TT_query}, \ref{eqn:TT_key}, \ref{eqn:TT_value}.
\State Calculate $\mathrm{Attn(Q(\cdot), K(\cdot), V(\cdot), A, j)}$ using Eqn.~\ref{eqn:TT_attn}.
\State Calculate multi-head attention:
\begin{align*}
\vek{h}_j = [\mathrm{Attn}^1(\cdots), \ldots, \mathrm{Attn}^{n_{\mathrm{head}}}(\cdots)]
\end{align*}
\State Compute vector $\vek{h}_j^{\mathrm{tt}}$ using Equations \ref{eqn:TT_decodeMLP} and \ref{eqn:TT_deconv}.
\State $\vek{h}^{\mathrm{tt}} = \vek{h}_j^{\mathrm{tt}}[t\%w]$.
\State Compute the fine-grained signal vector:
\begin{align*}
\vek{h}^{\mathrm{fg}} = FG_{\theta}(X, A) \texttt{/* Eqn~\ref{eqn:finegrained} */}
\end{align*}
\Return $\vek{h}^{\mathrm{tt}}$, $\vek{h}^{\mathrm{fg}}$
\EndProcedure
\Procedure{KR}{$X$, $A$, $\vek{k}$, $t$}
\State Compute the vectors $U_{\bullet}$, $W_{\bullet}$, and $V_{\bullet}$ (Equations \ref{eqn:KR_U}, \ref{eqn:KR_W}, \ref{eqn:KR_V}).
\State $\vek{h}^{\mathrm{kr}} = \mathrm{Concat}(U_{(\vek{k},i),t}, V_{(\vek{k},i),t}, W_{(\vek{k},i),t})$. \\
\Return $\vek{h}^{\mathrm{kr}}$
\EndProcedure
\end{algorithmic}
\caption{The \sysname\ training and imputation algorithm}
\label{alg:pcode_new}
\end{figure}
\newcounter{paranumbers}
\newcommand\paranumber{\stepcounter{paranumbers}\arabic{paranumbers}}
\subsection{Network Parameters and Hyper-parameters}
\label{sec:train}
The parameters of the network span the temporal transformer, the embeddings of members of dimensions used in the kernel regression, and the parameters of the output layer. We use $\Theta$ to denote all the trainable parameters in all modules: $$\Theta = \{ W_f, b_f, W_q, b_q, W_k, b_k, W_v, b_v, W_{d_1}, W_{d_2}, W_d, b_d, \bm{w}_o, \bm{b}_o, E[m_{\bullet,\bullet}] \}.$$
These parameters are trained using the training objective described in Section~\ref{sec:ctrain} on the available data. Any off-the-shelf stochastic gradient method can be used to solve this objective; we used Adam with a learning rate of $10^{-3}$.
\label{para:net_default_params}
\paragraph{Network Hyper-parameters:}
Like any deep learning method, our network has hyper-parameters that control the size of the network, which in turn impacts accuracy in non-monotonic ways. Many techniques exist for automatically searching for optimal hyper-parameter values~\cite{citeHyper} based on performance on a validation set. These techniques are applicable to our model too, but we refrained from using them for two reasons: (1) they tend to be computationally expensive, and (2) we obtained strong results compared to almost all existing methods in more than 50 settings without
extensive dataset-specific hyper-parameter tuning. This can be attributed to our network design and robust training procedure. That said, in specific vertical applications,
more extensive tuning of hyper-parameters using any of the available methods~\cite{citeHyper} could be deployed for even larger gains.
The hyper-parameters and their default values in our network are:
the number of filters $p=32$, which controls the size of the first layer of the temporal transformer,
and the window size $w$ of the first convolution layer. The hyper-parameter $w$ also determines the size of the context key used for attention: if $w$ is very small, the context size may be inadequate, and if it is too large relative to the length of each series, we may over-smooth patterns. We use $w=10$ by default; when the average size of a missing block is large ($> 100$) we use $w=20$ to gather a larger context.
The number of attention heads is $n_{\mathrm{head}}=4$ and the embedding size is $d_i=10$ in all our experiments.
\section{Experiments}
We present results of our experiments on ten datasets under four different missing-value scenarios. We compare the imputation accuracy of several methods, spanning both traditional and deep-learning approaches, in Sections~\ref{sec:expt:trad} and~\ref{subsec:expt:point}. We then perform an ablation study to evaluate the various design choices of \sysname\ in Section~\ref{sec:expt:ablation}. In Section~\ref{subsec:runtime} we compare the running times of the different methods. Finally, in Section~\ref{subsec:analysis} we
highlight the importance of accurate imputation algorithms
for downstream analytics.
\subsection{Experiment Setup}
\subsubsection{Datasets}
\begin{table}[]
\centering
\begin{tabular}{|l|r|r|l|l|}
\hline
Dataset & Number & Length & Repetitions & Relatedness \\
& of TS & of TS & within TS & across series \\ \hline \hline
AirQ & 10 & 1k & Moderate & High\\ \hline
Chlorine & 50 & 1k & High & High\\ \hline
Gas & 100 & 1k & High & Moderate\\ \hline
Climate & 10 & 5k & High & Low\\ \hline
Electricity & 20 & 5k & High & Low\\ \hline
Temperature & 50 & 5k & High & High\\ \hline
Meteo & 10 & 10k & Low & Moderate\\ \hline
BAFU & 10 & 50k & Low & Moderate\\ \hline
JanataHack & 76$\times$28 & 134 & Low & High\\ \hline
M5 & 10$\times$106 & 1941 & Low & Low\\ \hline
\end{tabular}
\caption{Datasets: All except the last two have one categorical dimension. Qualitative judgements on the repetitions of patterns along time and across series appear in the last two columns.}
\label{tab:expt:datasets}
\end{table}
We experiment on eight datasets used in earlier papers on missing value imputation~\cite{khayati2020mind}. In addition, owing to the lack of multidimensional datasets in previous works, we introduce two new datasets, ``JanataHack'' and ``M5''. Table~\ref{tab:expt:datasets} presents a summary along with qualitative judgements of their properties.
\noindent{\bf AirQ} contains air-quality measurements collected from 36 monitoring stations in China from 2014 to 2015. The AirQ time series contain both repeating patterns and jumps, as well as strong correlations across series. Replicating the setup of \cite{khayati2020mind}, we filter the dataset to obtain 10 time series of length 1000.\\
\noindent{\bf Chlorine} simulates the concentration of chlorine at 166 junctions of a drinking-water distribution system over 15 days at 5-minute intervals. This dataset contains clusters of similar time series that exhibit repeating trends.\\
\noindent{\bf Gas} contains gas concentration readings collected between 2007 and 2011 from a gas delivery platform at the ChemoSignals Laboratory at UC San Diego.\\
\noindent{\bf Climate} contains monthly climate data from 18 stations over 125 locations in North America between 1990 and 2002. These time series are irregular and contain sporadic spikes.\\
\noindent{\bf Electricity} contains household energy consumption collected every minute between 2006 and 2010 in France.\\
\noindent{\bf Temperature} contains temperature measurements from climate stations in China from 1960 to 2012. These series are highly correlated.\\
\noindent{\bf MeteoSwiss} contains weather measurements from different Swiss cities from 1980 to 2018, with repeating trends and sporadic anomalies.\\
\noindent{\bf BAFU} consists of water discharge data provided by the BundesAmt Für Umwelt (BAFU), collected from Swiss rivers from 1974 to 2015. These time series exhibit synchronized irregular trends.\\
\noindent{\bf JanataHack} is a multidimensional time-series dataset\footnote{\url{https://www.kaggle.com/vin1234/janatahack-demand-forecasting-analytics-vidhya}} consisting of sales data spanning over 130 weeks for 76 stores and 28 products (termed ``SKU'').\\
\noindent{\bf Walmart M5}, made available by Walmart, contains the daily unit sales of 3049 products sold in 10 stores in the USA, spanning 5 years.
Since most of the 3049 items have zero sales, we retain the 106 best-selling items, averaged over stores. This gives us two-dimensional sales data of 106 items across 10 stores.
\subsubsection{Missing Scenarios Description}
\label{subsec:missing_scenarios}
We experiment with four missing scenarios~\cite{khayati2020mind} that are considered the most common missing patterns encountered in real datasets.
Here we consider contiguous chunks of missing values, termed blocks.
We also consider a scenario with point missing values scattered throughout the dataset in Section~\ref{sec:expt:ablation}.
\myparagraph{Missing Completely at Random (MCAR)} Each incomplete time series has 10\% of its data missing. The missing data lies in randomly chosen blocks of constant size 10. We experiment with different percentages of incomplete time series.
\myparagraph{Missing Disjoint (MissDisj)} Here we consider disjoint missing blocks.
The block size is $T/N$, where $T$ is the length of a time series and $N$ is the number of time series. For the $i$-th time series the missing block ranges from time step $\frac{iT}{N}$ to $\frac{(i+1)T}{N}-1$, which ensures that
missing blocks do not overlap across series.
\myparagraph{Missing Overlap (MissOver)} A slight modification of MissDisj, MissOver has a block size of $2T/N$ for all time series except the last one, for which the block size is still $T/N$. For the $i$-th time series the missing block ranges from time step $\frac{iT}{N}$ to $\frac{(i+2)T}{N}-1$, which causes an overlap between the missing blocks of series $i$ and those of series $i-1$ and $i+1$.
\myparagraph{Blackout} considers a scenario where all time series have missing values in the same time range. Given a block size $s$, all time series have values missing from $t$ to $t+s$, where $t$ is fixed at $5\%$ of the series length.
We vary the block size $s$ from 10 to 100.
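For concreteness, the following Python sketch (our own illustration rather than the benchmark's code; the function names, and the simplification that MCAR blocks may overlap, are ours) generates boolean masks for the four scenarios over $N$ series of length $T$; entries set to \texttt{True} are hidden from the imputation methods.
\begin{verbatim}
import numpy as np

def mcar_mask(N, T, x=0.1, block=10, rng=None):
    # x: fraction of series that are incomplete; each such series loses
    # 10% of its values in randomly placed blocks of size `block`.
    # (Blocks may overlap in this simplified sketch.)
    rng = rng or np.random.default_rng(0)
    mask = np.zeros((N, T), dtype=bool)
    incomplete = rng.choice(N, size=max(1, int(x * N)), replace=False)
    for i in incomplete:
        for _ in range(max(1, int(0.10 * T) // block)):
            s = int(rng.integers(0, T - block + 1))
            mask[i, s:s + block] = True
    return mask

def missdisj_mask(N, T):
    # Series i misses the block [iT/N, (i+1)T/N - 1]; blocks are disjoint.
    mask = np.zeros((N, T), dtype=bool)
    b = T // N
    for i in range(N):
        mask[i, i * b:(i + 1) * b] = True
    return mask

def missover_mask(N, T):
    # Like MissDisj, but blocks have length 2T/N (except for the last
    # series), so consecutive series overlap.
    mask = np.zeros((N, T), dtype=bool)
    b = T // N
    for i in range(N):
        width = b if i == N - 1 else 2 * b
        mask[i, i * b:i * b + width] = True
    return mask

def blackout_mask(N, T, s=10):
    # All series miss the same range [t, t+s), with t fixed at 5% of T.
    mask = np.zeros((N, T), dtype=bool)
    t = int(0.05 * T)
    mask[:, t:t + s] = True
    return mask
\end{verbatim}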
\subsubsection{\bf Methods Compared}
We compare with methods from both conventional and deep learning literature.
\noindent{\bf CDRec}\cite{khayati2019scalable} is one of the top-performing recent matrix-factorisation-based techniques; it uses iterative centroid decomposition. \\
\noindent{\bf DynaMMO}\cite{li2009dynammo} is a probabilistic method that uses Kalman filters to model the co-evolution of subsets of similar time series. \\
\noindent{\bf TRMF}~\cite{yu2016temporal} is a matrix factorisation method augmented with an auto-regressive temporal model. \\
\noindent{\bf SVDImp}~\cite{troyanskaya2001missing} is a basic matrix-factorisation-based technique which imputes using the top-$k$ singular vectors of an SVD factorisation.\\
\noindent{\bf BRITS}~\cite{cao2018brits} is a recent deep learning technique that uses a bidirectional RNN taking as input all the series' values at time $t$. \\
{\color{black}
\noindent{\bf GPVAE}~\cite{yoon2018estimating} is a deep learning method that places a Gaussian process on the low-dimensional latent representation of the data and uses a variational autoencoder to generate the imputed values in the original data space. \\
\noindent{\bf Transformer}~\cite{Vaswani2017} is a deep learning method that uses a multi-head self-attention based architecture to impute the missing values in time series.
}
\subsubsection{Other Experiment Details}
\myparagraph{Platforms} Our experiments are run on the Imputation Benchmark\footnote{\url{https://github.com/eXascaleInfolab/bench-vldb20}} for comparisons with conventional methods. The benchmark lacks support for deep-learning-based algorithms; hence we report numbers for those methods outside this framework.
\myparagraph{Evaluation metric} We use Mean Absolute Error (MAE) as our evaluation metric.
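Although not spelled out above, imputation benchmarks conventionally compute the error only over the artificially removed cells; a minimal sketch under that assumption:
\begin{verbatim}
import numpy as np

def masked_mae(X_true, X_imputed, mask):
    # MAE restricted to the hidden (masked) entries only.
    return np.abs(X_true[mask] - X_imputed[mask]).mean()
\end{verbatim}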
\subsection{Visual Comparison of Imputation Quality}
\input{plots/visualise}
We start with a visual illustration of how \sysname's imputations compare with those of two of the best-performing existing methods: CDRec and DynaMMO. In Fig.~\ref{fig:visualise}, we visualize the imputations for different missing blocks on the Electricity dataset. The first row shows the MCAR scenario and the second row the Blackout scenario.
First observe how \sysname\ (blue) correctly captures both the shape and scale of the actual values (black) over a range of missing blocks.
In the MCAR scenario, CDRec gets the shape right only in the first and fourth blocks, and even there it is off in scale.
In the Blackout scenario, CDRec merely interpolates linearly within the missing block, whereas DynaMMO is only slightly inclined towards the ground truth. Both CDRec and DynaMMO miss the trend during Blackout, whereas \sysname\ successfully captures it thanks to careful pattern matching within a series.
\subsection{Comparison on Imputation Accuracy}
\label{sec:expt:trad}
\input{plots/main_plots}
\input{plots/miss_perc}
Given the large number of datasets, methods, missing scenarios and missing-block sizes, we present our numbers in stages. First, in Figure~\ref{fig:main_plots_bar} we compare the MAE of all conventional methods on five datasets under a fixed $x=10\%$ of series with missing values in MCAR, MissDisj and MissOver, and all series in Blackout with a block size of 10. Then, in Figure~\ref{fig:graph} we show more detailed MAE numbers on three datasets (AirQ, Climate and Electricity), where we vary the percentage of series with missing values ($x$) from 10 to 100 for MCAR, MissDisj and MissOver, and the block size from 10 to 100 for Blackout. From these comparisons across eight datasets we make the following observations:
First, observe that \sysname\ is better than or comparable to all other methods under all missing-value scenarios and all datasets.
Our gains are particularly high in the Blackout scenario, seen in the last column of the graphs in Figure~\ref{fig:graph} and in the bottom-right graph of Figure~\ref{fig:barplots}. For accurate imputation in Blackouts, we need to exploit signals from other locations of the same series. Matrix-factorisation-based methods such as SVDImp and TRMF fail to do so and rely heavily on correlation across time series. TRMF's temporal regularisation does not seem to help in capturing long-term temporal correlations. DynaMMO and CDRec
capture within-series dependencies better than pure matrix factorisation methods, but they are still much worse than \sysname, particularly on Gas in Figure~\ref{fig:barplots}, and on Climate and Electricity in Figure~\ref{fig:graph}.
In the MissDisj/MissOver scenarios, where the same time range is not missing across all time series, methods that effectively exploit relatedness across series perform better on datasets with highly correlated series such as Chlorine and Temperature.
Even in these scenarios we provide as much as 50\% error reduction compared to existing methods.
MCAR is the most interesting scenario for our analysis. Most of the baselines are geared towards capturing either inter- or intra-series correlation, but none of them effectively combines and exploits both. MCAR, owing to its small block size and random missing positions, can benefit from both inter- and intra-series correlations, which our model fully exploits. \sysname\ achieves strictly better numbers than all the baselines on all the datasets. For the Climate and Electricity datasets,
we reduce errors by between 20\% and 70\%, as seen in the first column of Figure~\ref{fig:graph}.
\input{plots/point_missing}
\input{plots/ablations}
\subsection{Comparison with Deep Learning Methods}
\label{subsec:expt:point}
{\color{black}
We compare our method with two state-of-the-art deep learning imputation models, along with a vanilla Transformer model. We use the official implementations of BRITS and GPVAE to report these numbers. We present MAE numbers in Table \ref{tab:expt:dl}.
First consider the comparison on the two multi-dimensional datasets: M5 and JanataHack. Both have store and items as the two dimensions in addition to time (Table \ref{tab:expt:datasets}).
We experiment in the MCAR scenario with $x=100\%$ of time series containing a missing block. We find that \sysname\ outperforms all the other imputation models on both these datasets. The decrease in MAE is especially significant for JanataHack, which has high correlations across different stores for given products.
We next present our numbers on Climate, Electricity and Meteo, on MCAR and Blackout. Here too, \sysname\ is either the best or close to the best in all dataset-scenario combinations. In the Blackout scenario our method is significantly better than BRITS, the state of the art. We attribute this to our method of creating artificial blackouts around training indices; in contrast, the BRITS model depends too heavily on the immediate temporal neighborhood during training.
We see that the Transformer model can capture periodic correlations within time series, such as those in Climate MCAR. However, it fails to capture the more subtle non-periodic repeating patterns, which require attention over window feature vectors. Such patterns are prevalent in the Electricity and Meteo datasets.
}
\iffalse
\subsection{Capturing correlation in Index duplicated Time Series}
To show that our model is capable of capturing perfect correlations among time series, we construct a new dataset, say AirQDouble, which is simply AirQ dataset with all its time series duplicated. Ideally for such a dataset barring Blackout scenario, we should be able to achieve perfect reconstruction (with high probability in MCAR).
Our results on the same are shown in the figure below. We have compared our method to all the baselines which are doing significantly worse. Since our method by default uses an embeddings dimension of size 10, we have included SVDImp algorithm retaining top 10 time series (dubbed SVDImp10) instead of top 4 in the default case. Our results are comparable to SVDImp10 and close to 0 MAE which implies that our model is able to capture perfect correlations.
\input{plots/airq_doubled}
\fi
\subsection{Justifying Design Choices of \sysname}
\label{sec:expt:ablation}
\sysname\ introduces a Temporal transformer with an innovative left-right window feature to capture coarse-grained context, a fine-grained local signal, and a kernel regression module that handles multi-dimensional data. Here we perform an ablation study to dissect the role of each of these parts.
\subsubsection{Role of Context Window Features}
We study the role of the query and key used in our temporal transformer module.
Our query/key consists of the concatenated window features of the previous and next blocks, arithmetically added to a positional encoding. The positional encoding captures only relative positions and carries no information about the context of the block where imputation needs to be performed. The question, then, is whether the contextual information around a missing block helps build a better attention mechanism, or whether the attention mechanism simply ignores this contextual information and performs a fixed periodic imputation. Figure~\ref{fig:ablation} shows the ablated method (green). These experiments are on MCAR, and the x-axis is the increasing \% of missing TS.
Comparing the green and blue curves,
we see that our window context features did help on two of the three datasets, with the impact on Electricity being quite significant.
This may be attributed to the periodic nature of the Climate dataset, as opposed to the non-periodic but strongly contextual information in Electricity.
\subsubsection{Role of Temporal Transformer and Kernel Regression}
In Figure~\ref{fig:ablation} we present the error without the temporal transformer module (red) and without the kernel regression module (brown). We see some interesting trends here. On Climate and Electricity, where each series is long (5k) with patterns repeated across series, dropping the temporal transformer causes large jumps in error: on Climate the error jumps from 0.15 to 0.55 with 10\% missing! On AirQ we see little impact. However, on this dataset dropping kernel regression causes a large increase in error, jumping from 0.04 to 0.25 at 10\% missing. Kernel regression does not help much beyond the temporal transformer on Climate and Electricity. These experiments show that \sysname\ is capable of
combining both factors and determining the dominating correlation via the training process.
\subsubsection{Role of Fine-Grained Local Signal}
\label{subsubsec:expts_fine_grained}
We next study the role of the fine-grained local signal (Equation~\ref{eqn:finegrained}). This signal is most useful for small missing blocks. Hence we modify the MCAR missing scenario so that the missing percentage in each time series is still 10\%, but the missing block size is varied from 1 to 10. Figure~\ref{fig:finegrained} shows the results, where we
compare our MAE with and without the fine-grained local signal against the CDRec algorithm on the Climate dataset.
The plot shows that including the fine-grained signal improves accuracy over a model which ignores the local information. Also, the gain in accuracy from the fine-grained local signal diminishes with increasing block size, which is to be expected.
\input{plots/fine_grained}
\input{plots/janta2d}
\subsubsection{Effect of multidimensional kernel regression}
For this task, we run two variants of our model. The first model, dubbed \sysname1D, flattens the multidimensional index of the time series by discarding the store and product identities. The second variant is the proposed model itself, which retains the multi-dimensional structure and applies kernel embeddings in two separate spaces. In \sysname\ each time series is associated with two embeddings of size $k$ each. To keep the comparison fair, \sysname1D uses an embedding of size $2k$. Since other methods have no explicit model for multi-dimensional indices, their input is a flattened matrix, similar to \sysname1D.
Figure~\ref{fig:janta2d} shows the performance of the variants compared to the baselines on MCAR for increasing percentage $x$ of number of series with a missing block.
Observe how in this case too \sysname\ is significantly more accurate than other methods including \sysname1D.
If each series is small and the number of series is large, there is a greater chance of capturing spurious correlations across series. In such cases, the external multidimensional structure that \sysname\ exploits helps to restrict relatedness to the siblings of each dimension. We expect this difference to be magnified as the number of dimensions increases.
\subsection{Running Time}
\label{subsec:runtime}
\input{plots/runtime_factor}
The above experiments have shown that \sysname\ is far superior to existing methods in imputation accuracy. One concern with any deep-learning-based solution is its runtime overhead. We show in this section that while \sysname\ is slower than existing matrix factorization methods, it is more scalable with TS length and much faster than off-the-shelf deep learning methods.
We present running times on the AirQ, Climate, Meteo, BAFU, and JanataHack datasets in Figure~\ref{fig:runtime}. The x-axis shows the datasets ordered by increasing total size,
and the y-axis is running time in log scale. In addition to the methods above, we also show the running time of an off-the-shelf Transformer method.
Matrix-factorisation-based methods like CDRec and SVDImp are much faster than DynaMMO and \sysname. But compared to the vanilla Transformer, our running time is a factor of 2.5 to 7 smaller.
The running time of DynaMMO exceeds that of the other algorithms by a factor of 1000 and increases substantially with increasing series length, which undermines the accuracy gains it achieves. On the JanataHack dataset, DynaMMO took 25 minutes (1.5e9~$\mu s$), compared to just 2.5 minutes for \sysname.
{\color{black}
We next present numbers on the scalability of \sysname\ in Fig.~\ref{fig:scalability}. The x-axis denotes the length of the time series in multiples of 1K. The points correspond to the datasets AirQ, Climate, Meteo and BAFU with lengths 1K, 5K, 10K and 50K respectively. All these datasets have 10 time series. We observe sub-linear growth of running time with series length. An intuition is that our training algorithm learns patterns which, in the case of seasonal time series, can be learnt by seeing a small fraction of the series, abstractly one season's worth of data.
}
\subsection{Impact on downstream analytics}
\label{subsec:analysis}
A major motivation for missing value imputation is more accurate data analytics on time series datasets~\cite{Cambronero2017,milo2020automating,kandel2012profiler,Mayfield2010}. Analytical processing typically involves studying trends of aggregate quantities.
When some detailed data is missing, a default option is to simply exclude the missing cells from the aggregate statistic.
For an MVI method to be of practical significance in data analysis, it should produce more accurate top-level aggregates than just ignoring the missing values.
We present a comparison of the different MVI methods on the average over the first dimension, so the result is an $(n-1)$-dimensional aggregated time series. Except for JanataHack and M5, this results in a single averaged time series.
{\color{black}
Apart from computing the above statistic on the imputation output of the various algorithms, we also compute this statistic with the missing values simply dropped from the average. We call this the \DiscardT\ method.
We consider four datasets: Climate, Electricity, JanataHack, and M5, each in MCAR with $100\%$ of the time series containing missing values. On each of these datasets, we first compute the aggregate statistic using the true values. For Climate and Electricity, this returns a single series whose value at time $t$ is the average of the values of all series at time $t$. For JanataHack, we average over 76 stores, resulting in the average sales of 28 products. Similarly, on M5 we average over 10 stores, giving us the sales of 106 items. Next we compute the aggregate statistic with missing values imputed by five algorithms: CDRec, BRITS, GPVAE, Transformer, and \sysname. We compute the MAE between the aggregate over imputed values and the aggregate over true values.
In Figure~\ref{fig:barplots}, we report the difference between the MAE of \DiscardT\ and the MAE of each algorithm. We see that on the JanataHack dataset, three existing imputation methods, CDRec, GPVAE, and Transformer, give worse results than just dropping the missing values when computing the aggregate statistic. In contrast, \sysname\ provides gains over this default in all cases. It is also better overall than existing methods, particularly on the multidimensional datasets. This illustrates the impact of \sysname\ on downstream data analytics.
}
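A sketch of this evaluation protocol (our own illustration; \DiscardT\ is realized with a NaN-aware mean):
\begin{verbatim}
import numpy as np

def downstream_gain(X_true, X_imp, mask, axis=0):
    # Aggregate = mean over the first dimension.
    agg_true = X_true.mean(axis=axis)
    agg_imp  = X_imp.mean(axis=axis)
    # DiscardT: drop missing cells from the average.
    agg_drop = np.nanmean(np.where(mask, np.nan, X_true), axis=axis)
    mae_imp  = np.abs(agg_imp  - agg_true).mean()
    mae_drop = np.abs(agg_drop - agg_true).mean()
    return mae_drop - mae_imp   # positive: imputing beats dropping
\end{verbatim}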
\input{plots/downstream}
\section{Conclusion and Future Work}
In this paper, we propose \sysname, a deep learning method for missing value imputation in multi-dimensional time-series data. \sysname\ combines within-series signals using a novel temporal transformer, across-series signals using a multidimensional kernel regression, and local fine-grained signals. The network parameters are carefully selected to be trainable across wide ranges of data sizes, data characteristics, and missing-block patterns in the data.
We extensively evaluate \sysname\ on ten datasets, against seven conventional and three deep learning methods, and under five missing-value scenarios. \sysname\ achieves up to 70\% error reduction compared to state-of-the-art methods. Our method is up to 50\% more accurate and six times faster than using off-the-shelf neural sequence models.
We also justify our module choices by comparing \sysname\ with its variants. We show that \sysname's performance on downstream analytics tasks is better than both dropping the cells with missing values and existing methods.
Future work in this area includes applying our neural architecture to other time-series tasks, including forecasting.
\bibliographystyle{ACM-Reference-Format}
\subsection{Related Work}
Missing value imputation in time series is an age-old problem~\cite{little2002single}, with several solutions that we categorize into matrix-completion methods, conventional statistical time-series models, and recent deep learning methods (discussed in Section~\ref{sec:relate:deep}). However, all these prior methods are for single-dimensional series. So, we will assume $n=1$ for the discussions below.
\myparagraph{Matrix completion methods}
These methods~\cite{yu2016temporal,troyanskaya2001missing,khayati2019scalable,mei2017nonnegative,mazumder2010spectral,cai2010singular}, view the time-series dataset as a matrix $X$ with rows corresponding to series and columns corresponding to time. They then apply various dimensional reduction techniques to decompose the matrix as $X\approx UV^T$ where $U$ and $V$ represent low-dimensional embeddings of series and time respectively. The missing entry in a series $i$ and position $t$ is obtained by multiplying the corresponding embeddings.
A common tool is the classical Singular Value Decomposition (SVD), which forms the basis of three earlier techniques: SVDImp~\cite{troyanskaya2001missing}, SoftImpute~\cite{mazumder2010spectral}, and SVT~\cite{cai2010singular}. All these methods are surpassed by a recently proposed centroid decomposition (CD) algorithm called
CDRec~\cite{khayati2019scalable}.
CDRec performs recovery by first using interpolation/extrapolation to initialize the missing values. Second, it computes the CD and keeps only the first $k$ columns of $U$ and $V$, producing $U_k$ and $V_k$, respectively. Lastly, it imputes values using the reconstruction $X \approx U_k V_k^T$. This process iterates until the normalized Frobenius norm between the matrices before and after the update falls below a small threshold.
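The initialize/decompose/truncate/impute loop is the same as in SoftImpute-style methods. The schematic sketch below is our own illustration and substitutes a plain truncated SVD for the centroid decomposition step, which we do not reproduce here:
\begin{verbatim}
import numpy as np

def iterative_lowrank_impute(X, mask, k=3, tol=1e-4, max_iter=100):
    # X: data matrix; mask: True where values are missing.
    Z = X.copy()
    Z[mask] = 0.0                       # crude stand-in for interpolation
    for _ in range(max_iter):
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        Z_new = (U[:, :k] * s[:k]) @ Vt[:k, :]   # rank-k reconstruction
        Z_new[~mask] = X[~mask]         # keep observed entries fixed
        if np.linalg.norm(Z_new - Z) / np.linalg.norm(Z) < tol:
            return Z_new
        Z = Z_new
    return Z
\end{verbatim}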
A limitation of pure matrix decomposition based methods is that they do not capture any dependencies along time.
TRMF\cite{yu2016temporal} addresses this limitation by introducing a regularization on the temporal embeddings $V$ so that they conform to the auto-regressive structures commonly observed in time-series data. STMVL is another algorithm that smooths along time; it is designed to recover missing values in spatio-temporal data using collaborative filtering methods for matrix completion.
\myparagraph{Statistical time-series models}
DynaMMO\cite{li2009dynammo}
is an algorithm that creates groups of a few time series based on similarities that capture co-evolving patterns. It fits a Kalman filter model on each group using the Expectation Maximization (EM) algorithm. The Kalman filter uses the data containing missing blocks, together with a reference time series, to estimate the current state of the missing blocks. The recovery is performed as a multi-step process: at each step, the EM method predicts the value of the current state, and then two estimators refine the predicted values of the given state, maximizing a likelihood function.
{\color{black}
\myparagraph{Pattern Based Methods}
TKCM \cite{wellenzohn2017continuous} identifies and uses repeating patterns (seasonality) in the time series' history. It measures the similarity between a window of measures spanning all time series and the window around the query time index using Pearson's correlation coefficient, and performs one-to-one imputation using the mean value of the matched blocks. Though promising, this method performs poorly compared to other baselines such as CDRec on every dataset \cite{khayati2020mind}; hence we have excluded it from our analysis. Deep learning architectures have been shown to perform better at query-pattern search and the corresponding weighted imputation \cite{Vaswani2017}, which we exploit in our work.
}
We present an empirical comparison with SVDImp (as a representative of pure SVD methods), CDRec, TRMF, STMVL, and DynaMMO and show that our method is significantly more accurate than all of them.
\section{Introduction}
Given mixed-type (categorical and continuous) record data, our objective is to find outlying continuous values in the rows. Further, we also try to find explanations in the data for such outlying values. If we find at least one explanation, the outlier can be \emph{explained away}. If we do not find any explanation, we report it as an outlier.
Suppose the structure of a record $(c_{1:k}, y_{1:n}, t)$ is as follows:
\begin{enumerate}
\item $c_1,\ldots,c_k$ be categorical features.
\item $y_{1},\dots,y_{n}$ be measures or continuous-valued features.
\item $t$ be a special time attribute.
\end{enumerate}
\section{Problem Formulation}
Let us consider a sales dataset in which we have $k$ products denoted as $p_1,\ldots,p_k$ and $r$ shops denoted by $s_1,\ldots,s_r$. The sales for product $i$ at shop $j$ and time $t$ is denoted by $y_{i,j,t}$.
\section{Conditional Predictor}
This model is a feed-forward network with following input and output:
\begin{itemize}
\item Input: $(p_i, s_j, t)$
\item Output: $y_{i,j,t}$
\end{itemize}
\section{Contextualized Conditional Predictor}
Suppose $\delta_{i,j,t,t'} = y_{i,j,t}-y_{i,j,t'}$.
First we define the context as follows:
\todo[inline]{Do not include current product in expectation}
\begin{enumerate}
\item $y_{i,j,t-1}, y_{i,j,t-2}, y_{i,j,t+1}, y_{i,j,t+2}$
\item $\mathbb{E}[\delta_{:,j,t-1,t}],\mathbb{E}[\delta_{:,j,t-2,t}],\mathbb{E}[\delta_{:,j,t+1,t}],\mathbb{E}[\delta_{:,j,t+2,t}],$
$\text{Var}[\delta_{:,j,t-1,t}],\text{Var}[\delta_{:,j,t-2,t}],\text{Var}[\delta_{:,j,t+1,t}],\text{Var}[\delta_{:,j,t+2,t}]$
\item $\mathbb{E}[\delta_{i,:,t-1,t}],\mathbb{E}[\delta_{i,:,t-2,t}],\mathbb{E}[\delta_{i,:,t+1,t}],\mathbb{E}[\delta_{i,:,t+2,t}],$ $\text{Var}[\delta_{i,:,t-1,t}],\text{Var}[\delta_{i,:,t-2,t}],\text{Var}[\delta_{i,:,t+1,t}],\text{Var}[\delta_{i,:,t+2,t}]$
\end{enumerate}
We feed entire context to the model and try to predict the value $y_{i,j,t}$.
\subsection{Further Extensions}
Here we list possible extensions to contextualized model.
\subsubsection{Timeseries+Outlier Model}
We first decouple timeseries and outlier detection model.
\xhdr{time-series model}
The timeseries model will take inputs $\delta_{i,j,[t-t',\ldots,t+t']\setminus t}$ and predict the $\delta_{i,j,t}$.
\xhdr{outlier model}
Another model, called the outlier model, will take the context values at time $t$ and $\hat{\delta}_{i,j,t}$ as input and predict the probability that the value at cell $(i,j,t)$ is an outlying value.
\todo[inline]{How to train the outlier model? Supervised separately or jointly train both models?}
\todo[inline]{Feed discrepancy between true and predicted values and context to outlier model}
\section{Some Ideas}
\subsection{First}
Given a set of training points $x_1,x_2,x_3,...,x_n$, the goal is to learn a set of kernels $k_1(.,.),k_2(.,.),k_3(.,.),...,k_m(.,.)$ s.t.
$$
k_i(x_j,x_k) \approx 1\ \ \ \forall i, \forall j, \forall k
$$
where each $k_i$ is a linear combination of some base kernels $k'_1,k'_2,k'_3,\ldots,k'_d$ corresponding to each possible relaxation on the categorical dimension. Let
$$
k'_i = 0 \text{ if } \ldots
$$
\subsection{Second}
$$\delta_{i,j,t} = \delta_{i,j,t-1,t}$$
Given current point $\hat{\delta}_{i,j,t}$ and contexts $\mathbb{E}[\delta_{i,:,t}]$, $\mathbb{E}[\delta_{:,j,t}]$ we want to predict the probability that $y_{i,j,t}$ is an outlier.
Calculate the following quantities:
\begin{align}
p_{i,:,t} = \mathcal{N}(\mathbb{E}[\delta_{i,:,t}], \text{Var}[\delta_{i,:,t}]) \\
p_{:,j,t} = \mathcal{N}(\mathbb{E}[\delta_{:,j,t}], \text{Var}[\delta_{:,j,t}])
\end{align}
If $p_{\bullet,\bullet,t}$ is outside the $3\sigma$ range for every context, we report the value $y_{i,j,t}$ as an outlier.
The contexts for which $p_{\bullet,\bullet,t}$ lies inside the $3\sigma$ range explain away the value $y_{i,j,t}$.
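A literal reading of this rule, as a small sketch (the function names are ours):
\begin{verbatim}
import numpy as np

def explains_away(delta, mu, var):
    # A context explains the value if delta is within 3 sigma of N(mu, var).
    return abs(delta - mu) <= 3.0 * np.sqrt(var)

def is_outlier(delta_hat, contexts):
    # contexts: list of (mean, variance) pairs, e.g. the shop and the
    # product aggregates at time t; report an outlier only if no
    # context explains the value away.
    return not any(explains_away(delta_hat, m, v) for m, v in contexts)
\end{verbatim}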
\subsection{Third: When number of dimensions is large}
When the number of dimensions is large, number of possible aggregations increase exponentially. Hence, we will select top-$k$ aggregations according to lowest variance and try to explain the values using only those aggregations.
\subsection{Fourth : }
Let $X_{s,t}$ and $X_{p,t}$ be the random variables defining the $\delta$ for the shop and product contexts, respectively, at time $t$. From the above we know that
\begin{align}
X_{s,t} \sim \mathcal{N}(\mathbb{E}[\delta_{i,:,t}], \text{Var}[\delta_{i,:,t}]) \\
X_{p,t} \sim \mathcal{N}( \mathbb{E}[\delta_{:,j,t}], \text{Var}[\delta_{:,j,t}])
\end{align}
For a general test point, the delta might be explainable by a linear combination of $X_{s,t}$ and $X_{p,t}$, such as $\lambda X_{s,t}+(1-\lambda) X_{p,t}$, where $\lambda$ is a learnable parameter learnt through MLE.
\begin{align}
\lambda = \mathrm{argmax}_\lambda \prod_i\prod_j\prod_t \mathcal{N}\big(\delta_{i,j,t};\,\lambda\,\mathbb{E}[\delta_{i,:,t}] + (1-\lambda)\,\mathbb{E}[\delta_{:,j,t}],\;\lambda\,\text{Var}[\delta_{i,:,t}] + (1-\lambda)\,\text{Var}[\delta_{:,j,t}]\big)
\end{align}
Training objective:
\begin{align}
P(\delta_{i,j,t};\sum_{c}\lambda_c \mathbb{E}_c, \sum_{c}\lambda_c \text{Var}_c)
\end{align}
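For a fixed set of contexts, the mixing weight can be fitted by a simple grid-search MLE; a sketch of the two-context case (our own illustration):
\begin{verbatim}
import numpy as np

def fit_lambda(deltas, mu_s, var_s, mu_p, var_p, grid=101):
    # MLE for lambda in N(lam*mu_s+(1-lam)*mu_p, lam*var_s+(1-lam)*var_p),
    # where all arguments are arrays aligned over the training cells.
    best_lam, best_ll = 0.0, -np.inf
    for lam in np.linspace(0.0, 1.0, grid):
        mu  = lam * mu_s  + (1 - lam) * mu_p
        var = lam * var_s + (1 - lam) * var_p
        ll = -0.5 * np.sum(np.log(2 * np.pi * var)
                           + (deltas - mu) ** 2 / var)
        if ll > best_ll:
            best_lam, best_ll = lam, ll
    return best_lam
\end{verbatim}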
\subsection{Fifth: Uncertainty over time-series predictions}
Timeseries output at $(i,j,t)$: $\delta_{i,j,t} \sim \mathcal{N}(\mu_{i,j,t}, \sigma_{i,j,t})$.
\begin{align}
\underset{d_{i,j,t}}{\text{argmax}}~~ \log P(d_{i,j,t} | \mu, \sigma) + \log [\mathcal{N}(d_{i,j,t} | \mathbb{E}[\delta_{i,:,t}], \text{Var}[\delta_{i,:,t}]) + \mathcal{N}(d_{i,j,t} | \mathbb{E}[\delta_{:,j,t}], \text{Var}[\delta_{:,j,t}])]
\end{align}
\subsection{Sixth: Time-series over true values, Context distributions over deltas of true and predicted values}
\begin{align}
\delta_{i,j,t} = y_{i,j,t} - \hat{y}_{i,j,t}
\end{align}
\section{Missing Value Imputation}
The ideas used to \emph{explain away} values that look like outliers can also be used for the missing data imputation task. The idea is that finding the right explanation for a value that looks like an outlier leads us to the right context, which can predict a value that otherwise could not be predicted.
In the missing value imputation problem, several values in the time series could be missing, some of which could be adjacent. Hence, while predicting a missing value at a position $t$ in time, we cannot assume that all the values in the input context $[t-p, t+p]$ are available. Such missing values in the input are generally replaced by zeros, or by some other simple interpolation method. GP-VAE uses zero values because the dependencies are captured through a latent Gaussian process instead of in the observed-value space. BRITS introduced a discrepancy loss term that ensures the predictions from both directions of the Bi-LSTM are consistent. We can use a similar term that ensures predictions from both directions are consistent.
\end{document}
\section{Introduction}
All graphs considered in this paper are finite, simple and undirected. For a graph $G$, we use $|G|$ to denote the number of vertices of $G$, called the \emph{order} of $G$.
The complete graph of order $n$ is denoted by $K_{n}$ and the star graph of order $n$ is denoted by $K_{1, n-1}$.
For a subset $S$ of $V(G)$, let $G[S]$ be the subgraph of $G$ induced by $S$.
For two disjoint subsets $A$ and $B$ of $V(G)$, $E(A, B)=\{ab\in E(G) ~|~ a\in A, b\in B\}$.
Let $G$ be a graph and $H$ a subgraph of $G$. The graph obtained from $G$ by deleting all edges of $H$ is denoted by $G-H$.
$K_{n-1}\sqcup K_{1, s}$ is a graph obtained from $K_{n-1}$ by adding a new vertex $v$ and adding $s$ edges which join $v$ to $s$ vertices of $K_{n-1}$.
For any positive integer $k$, we write $[k]$ for the set $\{1, 2, \cdots, k\}$.
An edge-colored graph is called \emph{monochromatic} if all edges are colored by the same color and \emph{rainbow} if no two edges are colored by the same color.
A \emph{blow-up} of an edge-colored graph $G$ on a graph $H$ is a new graph obtained from $G$ by replacing each vertex of $G$ with $H$ and replacing each edge $e$ of $G$ with a monochromatic complete bipartite graph $(V(H), V(H))$ in the same color as $e$.
The \emph{Ramsey number} $R(G, H)$ is the smallest integer $n$ such that every red-blue edge-colored $K_n$ contains either a red $G$ or a blue $H$.
When $G=H$, we simply denote $R(G, H)$ by $R_{2}(G)$.
For more information on Ramsey number, we refer the readers to a dynamic survey on Ramsey number in \cite{R}.
The definition of Ramsey number implies that there exists a \emph{critical graph}, that is, a red-blue edge-colored $K_{n-1}$ contains neither a red $G$ nor a blue $H$.
It is natural to seek the smallest integer $s$ such that for any critical graph $K_{n-1}$, every red-blue edge-colored $K_{n-1}\sqcup K_{1, s}$ contains either a red $G$ or a blue $H$.
To study this, Hook and Isaak \cite{JG} introduced the definition of the \emph{star-critical Ramsey number} $r_{*}(G, H)$. $r_{*}(G, H)$ is the smallest integer $s$ such that every red-blue edge-colored $K_{n-1}\sqcup K_{1, s}$ contains either a red $G$ or a blue $H$.
Then it is clear that $r_{*}(G, H)$ is the smallest integer $s$ such that every red-blue edge-colored $K_{n-1}\sqcup K_{1, s}$ contains either a red $G$ or a blue $H$ for any critical graph $K_{n-1}$.
When $G=H$, we simply denote $r_{*}(G, H)$ by $r_{*}(G)$.
The star-critical Ramsey number of a graph is closely related to its upper size Ramsey number and lower size Ramsey number~\cite{JG}.
In \cite{ZBC}, Zhang et al. defined the \emph{Ramsey-full} graph pair $(G, H)$.
A graph pair $(G, H)$ is called Ramsey-full if there exists a red-blue edge-colored $K_{n}-e$ that contains neither a red $G$ nor a blue $H$, where $n=R(G, H)$.
So $(G, H)$ is Ramsey-full if and only if $r_{*}(G, H)=n-1$.
If $(G, H)$ is Ramsey-full and $G=H$, then we say that $H$ is Ramsey-full.
Hook and Isaak \cite{J} proved that the complete graph pair $(K_m, K_n)$ is Ramsey-full.
Given a positive integer $k$ and graphs $H_{1}, H_{2}, \cdots, H_{k}$, the \emph{Gallai-Ramsey number} $gr_{k}(K_{3}: H_{1}, H_{2}, \cdots, H_{k})$ is the smallest integer $n$ such that every $k$-edge-colored $K_{n}$ contains either a rainbow $K_3$ or a monochromatic $H_{i}$ in color $i$ for some $i\in [k]$.
Clearly, $gr_{2}(K_{3}: H_{1}, H_{2})=R(H_{1}, H_{2})$.
When $H=H_{1}=\cdots=H_{k}$, we simply denote $gr_{k}(K_{3}: H_{1}, H_{2}, \cdots, H_{k})$ by $gr_{k}(K_{3}: H)$.
More information on Gallai-Ramsey number can be found in \cite{FMC, CP}.
Let $n=gr_{k}(K_3: H_{1}, H_{2}, \cdots, H_{k})$.
The definition of Gallai-Ramsey number implies that there exists a \emph{critical graph}, that is, a $k$-edge-colored $K_{n-1}$ contains neither a rainbow $K_3$ nor a monochromatic $H_{i}$ for any $i\in [k]$.
In this paper, we define the \emph{star-critical Gallai-Ramsey number} $gr_{k}^{*}(K_3: H_{1}, H_{2}, \cdots, H_{k})$ to be the smallest integer $s$ such that every $k$-edge-colored graph $K_{n-1}\sqcup K_{1, s}$ contains either a rainbow $K_3$ or a monochromatic $H_{i}$ in color $i$ for some $i\in [k]$.
Then it is clear that $gr_{k}^{*}(K_3: H_{1}, H_{2}, \cdots, H_{k})$ is the smallest integer $s$ such that for any critical graph $K_{n-1}$, every $k$-edge-colored graph $K_{n-1}\sqcup K_{1, s}$ contains either a rainbow $K_3$ or a monochromatic $H_{i}$ in color $i$ for some $i\in [k]$.
Clearly, $gr_{k}^{*}(K_3: H_{1}, H_{2}, \cdots, H_{k})\leq n-1$ and $gr_{2}^{*}(K_{3}: H_{1}, H_{2})=r_{*}(H_{1}, H_{2})$.
When $H=H_{1}=\cdots=H_{k}$, we simply denote $gr_{k}^{*}(K_{3}: H_{1}, H_{2}, \cdots, H_{k})$ by $gr_{k}^{*}(K_{3}: H)$.
$(H_{1}, H_{2}, \cdots, H_{k})$ is called \emph{Gallai-Ramsey-full} if there exists a $k$-edge-colored graph $K_{n}-e$ that contains neither a rainbow $K_3$ nor a monochromatic $H_{i}$ for any $i\in[k]$.
So $(H_{1}, H_{2}, \cdots, H_{k})$ is Gallai-Ramsey-full if and only if $gr_{k}^{*}(K_3: H_{1}, \cdots, H_{k})=n-1$.
If $(H_{1}, H_{2}, \cdots, H_{k})$ is Gallai-Ramsey-full and $H=H_{1}=\cdots=H_{k}$, then we say that $H$ is Gallai-Ramsey-full.
In this paper, we investigate the star-critical Gallai-Ramsey numbers for some graphs.
In order to study the star-critical Gallai-Ramsey numbers of a graph $H$, we first characterize the critical graphs on $H$ and then use the critical graphs to find its star-critical Gallai-Ramsey number.
In Section 3, we obtain the star-critical Gallai-Ramsey numbers $gr_{k}^{*}(K_{3}: K_{p_1}, K_{p_2}, \cdots, K_{p_k})$ and $gr_{k}^{*}(K_3: C_4)$.
Thus we find that $(K_{p_1}, K_{p_2}, \cdots, K_{p_k})$ and $C_4$ are Gallai-Ramsey-full.
In Section 4, we get the star-critical Gallai-Ramsey numbers of $P_4$ and $K_{1, m}$.
Thus we find that $P_4$ and $K_{1, m}$ are not Gallai-Ramsey-full.
Finally, for general graphs $H$, we prove the general behavior of $gr_{k}^{*}(K_3: H)$ in Section 5.
\section{Preliminary}
In this section, we list some useful lemmas.
\begin{lemma}{\upshape \cite{Gallai, GyarfasSimonyi, CameronEdmonds}}\label{Lem:G-Part}
For any rainbow triangle free edge-colored complete graph $G$, there exists a partition of $V(G)$ into at least two parts such that there are at most two colors on the edges between the parts and only one color on the edges between each pair of parts. The partition is called a Gallai-partition.
\end{lemma}
\begin{lemma}\label{Lem:qneq3}
Let $G$ be a rainbow triangle free edge-colored complete graph and $(V_1, V_2, \ldots, V_q)$ a Gallai-partition of $V(G)$ with the smallest number of parts.
If $q>2$, then for each part $V_i$, there are exactly two colors on the edges in $E(V_i, V(G)-V_i)$ for $i\in [q]$. Thus $q\neq3$.
\end{lemma}
\begin{proof}
By Lemma~\ref{Lem:G-Part}, suppose, to the contrary, that there exists one part (say $V_{1}$) such that all edges joining $V_{1}$ to the other parts are colored by the same color. Then we can find a new Gallai-partition with two parts $(V_{1}, V_{2}\bigcup\cdots \bigcup V_{q})$, which contradicts the minimality of $q$.
Moreover, if $q=3$, then at most two colors appear on the three monochromatic edge sets between the parts, so two of these edge sets share a color; the part incident to both of them then sees only one color on all edges to the other parts, which is impossible by the argument above. It follows that $q\neq 3$.
\end{proof}
\begin{lemma}{\upshape \cite{R}}\label{Lem:2P4}
$
R_2(C_{4})=R_2(K_{1, 3})=6, R_2(P_{4})=5.
$
\end{lemma}
\begin{lemma}{\upshape \cite{RRMC}}\label{Lem:C4}
For any positive integer $k$ with that $k\geq 2$,
$
gr_k(K_{3}: C_{4})=k+4.
$
\end{lemma}
\begin{lemma}{\upshape \cite{RRMC}}\label{Lem:P4}
For any positive integer $k$,
$
gr_k(K_{3}: P_{4})=k+3.
$
\end{lemma}
\begin{lemma}{\upshape \cite{GyarfasSimonyi}}\label{Lem:star}
For any $m\geq 3$ and $k\geq 3$,
$$
gr_k(K_{3}: K_{1, m})= \begin{cases}
\frac{5m-6}{2}, & \text{if $m$ is even,}\\
\frac{5m-3}{2}, & \text{if $m$ is odd.}
\end{cases}
$$
\end{lemma}
\section{Gallai-Ramsey-full graphs}
First, we investigate the star-critical Gallai-Ramsey number for complete graphs.
\begin{theorem}\label{Thm:Kp}
For any positive integers $p_1$, $p_2$, $\ldots$, $p_k$ and $k$, $$gr_{k}^{*}(K_{3}: K_{p_1}, K_{p_2}, \cdots, K_{p_k})=gr_{k}(K_{3}: K_{p_1}, K_{p_2}, \cdots, K_{p_k})-1.$$
\end{theorem}
\begin{proof}
Let $n=gr_{k}(K_{3}: K_{p_1}, K_{p_2}, \cdots, K_{p_k})$.
Clearly, $gr_{k}^{*}(K_{3}: K_{p_1}, K_{p_2}, \cdots, K_{p_k})\leq n-1$.
So we only prove that $gr_{k}^{*}(K_{3}: K_{p_1}, K_{p_2}, \cdots, K_{p_k})\geq n-1$.
Since $gr_k(K_3: K_{p_1}, K_{p_2}, \cdots, K_{p_k})=n$, there exists a $k$-edge-colored critical graph $K_{n-1}$ containing neither a rainbow triangle nor a monochromatic $K_{p_i}$ in color $i$ for any $i\in[k]$.
Let $f$ be the $k$-edge-coloring of the critical graph $K_{n-1}$ and $u\in V(K_{n-1})$.
We construct $K_{n-1}\sqcup K_{1, n-2}$ by adding the edge set $\{vw~|~w\in V(K_{n-1})-\{u\}\}$ to the critical graph $K_{n-1}$, where $v$ is the center vertex of $K_{1, n-2}$.
Let $g$ be a $k$-edge-coloring of $K_{n-1}\sqcup K_{1, n-2}$ such that $$
g(e)= \begin{cases}
f(e), & \text{if $e\in E(K_{n-1})$,}\\
f(uw), & \text{if $e=vw$ and $w\in V(K_{n-1})-\{u\}$.}
\end{cases}
$$
Clearly, the graph $K_{n-1}\sqcup K_{1, n-2}$ colored by $g$ contains neither a rainbow triangle nor a monochromatic $K_{p_i}$ in color $i$ for any $i\in[k]$: the new vertex $v$ is a non-adjacent twin of $u$, so any rainbow triangle or monochromatic clique through $v$ would yield one through $u$ in the critical coloring. Then $gr_{k}^{*}(K_{3}: K_{p_1}, K_{p_2}, \cdots, K_{p_k})\geq n-1$.
Therefore, $gr_{k}^{*}(K_{3}: K_{p_1}, K_{p_2}, \cdots, K_{p_k})=n-1$.
\end{proof}
By Theorem~\ref{Thm:Kp}, we know that $(K_{p_1}, K_{p_2}, \cdots, K_{p_k})$ is Gallai-Ramsey-full.
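The lower-bound construction in the proof above is purely mechanical: the new vertex is a non-adjacent twin of an existing vertex. For readers who wish to experiment, a short Python sketch of this cloning step (our own illustration; colorings are stored as a dictionary over unordered vertex pairs):
\begin{verbatim}
def clone_vertex(V, col, u):
    # V: list of vertices of K_{n-1}; col: edge-coloring on unordered
    # pairs; u: the vertex to clone. The new vertex v is joined to every
    # w != u with the color of uw, yielding K_{n-1} + K_{1, n-2}.
    v = max(V) + 1
    new_col = dict(col)
    for w in V:
        if w != u:
            new_col[frozenset((v, w))] = col[frozenset((u, w))]
    return V + [v], new_col
\end{verbatim}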
In the following, we determine the star-critical Gallai-Ramsey number $gr_{k}^{*}(K_{3}: C_{4})$.
By Lemma~\ref{Lem:C4}, $gr_{k}(K_3: C_4)=k+4\geq6$ for any $k\geq 2$.
First we construct a critical graph on $C_4$.
\begin{definition}\label{Def:C4}
Let $k\geq2$, $n=gr_k(K_{3}: C_{4})$, $V(K_{n-1})=\{v_1, v_2, \ldots, v_{n-1}\}$ and color set $[k]$.
The subgraph induced by $\{v_1, v_2, v_3, v_4, v_5\}$ consists of two edge-disjoint $C_5$, say $v_1v_2v_3v_4v_5v_1$ and $v_1v_4v_2v_5v_3v_1$.
Define a $k$-edge-coloring $f$ of $K_{n-1}$ as follows: (1) $f(e)=1$ if $e$ is an edge of $v_1v_2v_3v_4v_5v_1$ and $f(e)=2$ if $e$ is an edge of $v_1v_4v_2v_5v_3v_1$.
(2) For any $j\in \{6, \ldots, n-1\}$ and $i\in [j-1]$, $f(v_jv_i)=j-3$.
We denote this $k$-edge-colored $K_{n-1}$ by $G^{k}_{n-1}$.
\end{definition}
Clearly, the graph $G^{k}_{n-1}$ in Definition~\ref{Def:C4} contains neither a rainbow triangle nor a monochromatic $C_4$.
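For readers who wish to verify this computationally, the following Python sketch (not part of any proof; the function names are ours) builds $G^{k}_{n-1}$ and checks both properties by brute force:
\begin{verbatim}
from itertools import combinations

def build_GC4(k):
    n = k + 4                        # gr_k(K3 : C4) = k + 4
    col = {}
    for c, cyc in ((1, [1, 2, 3, 4, 5, 1]), (2, [1, 4, 2, 5, 3, 1])):
        for a, b in zip(cyc, cyc[1:]):
            col[frozenset((a, b))] = c
    for j in range(6, n):            # v_j sends color j-3 to all v_i, i<j
        for i in range(1, j):
            col[frozenset((i, j))] = j - 3
    return list(range(1, n)), col

def has_rainbow_triangle(V, col):
    e = lambda x, y: col[frozenset((x, y))]
    return any(len({e(a, b), e(a, c), e(b, c)}) == 3
               for a, b, c in combinations(V, 3))

def has_mono_C4(V, col):
    e = lambda x, y: col[frozenset((x, y))]
    for a, b, c, d in combinations(V, 4):
        # the three distinct 4-cycles on {a, b, c, d}
        for p, q, r, s in ((a, b, c, d), (a, b, d, c), (a, c, b, d)):
            if e(p, q) == e(q, r) == e(r, s) == e(s, p):
                return True
    return False

V, col = build_GC4(5)
assert not has_rainbow_triangle(V, col) and not has_mono_C4(V, col)
\end{verbatim}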
\begin{theorem}\label{Thm:starC4}
For any positive integer $k$ with that $k\geq 2$,
$
gr_{k}^{*}(K_{3}: C_{4})=k+3.
$
\end{theorem}
\begin{proof}
Let $n=gr_k(K_{3}: C_{4})$.
By Lemma~\ref{Lem:C4}, $n=k+4$.
Clearly, $gr_{k}^{*}(K_{3}: C_{4})\leq n-1=k+3$.
So we only prove that $gr_{k}^{*}(K_{3}: C_{4})\geq k+3$.
Let $G^{k}_{n-1}\sqcup K_{1, n-2}$ be a graph obtained from $G^{k}_{n-1}$ by adding the edge set $\{vv_i~|~i\in[n-1]-\{5\}\}$, where $G^{k}_{n-1}$ is described in Definition~\ref{Def:C4} and $v$ is the center vertex of $K_{1, n-2}$.
Let $g$ be a $k$-edge-coloring of $G^{k}_{n-1}\sqcup K_{1, n-2}$ such that
$$
g(e)= \begin{cases}
f(e), & \text{if $e\in E(G^{k}_{n-1})$, where $f$ is the $k$-edge-coloring in Definition~\ref{Def:C4},}\\
1, & \text{if $e\in \{vv_2, vv_3\}$,}\\
2, & \text{if $e\in \{vv_1, vv_4\}$,}\\
i-3, & \text{if $e=vv_i$ for any $6\leq i\leq n-1$.}
\end{cases}
$$
Clearly, the $k$-edge-colored $G^{k}_{n-1}\sqcup K_{1, n-2}$ by $g$ contains neither a rainbow $K_3$ nor a monochromatic $C_4$. So $gr_{k}^{*}(K_{3}: C_{4})\geq n-1=k+3$.
Therefore, $gr_{k}^{*}(K_{3}: C_{4})=k+3$.
\end{proof}
By Lemma~\ref{Lem:2P4} and Lemma~\ref{Lem:C4}, we know that $C_4$ is Gallai-Ramsey-full and Ramsey-full since $gr_{k}^{*}(K_{3}: C_{4})=gr_{k}(K_{3}: C_{4})-1$ and $r_{*}(C_4)=gr_{2}^{*}(K_{3}: C_{4})=R_2(C_4)-1$.
Note that complete graph $K_n$ is also Ramsey-full and Gallai-Ramsey-full.
So we pose a conjecture as following.\\
{\bf Conjecture.}
Let $H$ be a graph with no isolated vertex.
Then $H$ is Ramsey-full if and only if $H$ is Gallai-Ramsey-full.
\section{Gallai-Ramsey non-full graphs}
In this section, we investigate the star-critical Gallai-Ramsey numbers for $P_4$ and $K_{1, m}$.
Thus, we find that $P_4$ and $K_{1, m}$ are not Gallai-Ramsey-full.
In order to determine the star-critical Gallai-Ramsey number $gr_{k}^{*}(K_{3}: P_{4})$, first we study the structure of the critical graphs on $P_4$.
By Lemma~\ref{Lem:P4}, $gr_{k}(K_3: P_4)=k+3$ for any positive integer $k$.
\begin{definition}\label{Def:P4}
For any positive integer $k$, let $n=gr_k(K_{3}: P_{4})$, $V(K_{n-1})=\{v_1, v_2, \ldots, v_{n-1}\}$ and color set $[k]$. Define a $k$-edge-coloring $f$ of $K_{n-1}$ as follows: (1) $f(v_1v_2)=f(v_1v_3)=f(v_2v_3)=1$; (2) $f(v_1v_i)=f(v_2v_i)=f(v_3v_i)=i-2$ for any $4\leq i\leq n-1$; (3) For any $4\leq i<j\leq n-1$, $f(v_iv_j)=i-2$ or $j-2$ such that there is no rainbow triangle in the subgraph induced by $\{v_4, \ldots, v_{n-1}\}$.
The $k$-edge-coloring $f$ is called $k$-critical coloring on $P_4$ and let $\mathcal{G}^{k}_{n-1}=\{$all $k$-critical colored $K_{n-1}$ on $P_4\}$.
\end{definition}
Clearly, $\mathcal{G}^{k}_{n-1}\neq\emptyset$. For example, we set $f(v_iv_j)=j-2$ for any $4\leq i< j\leq n-1$. It is easy to check that $f$ is a $k$-critical coloring on $P_4$.
On the other hand, the graphs in Definition~\ref{Def:P4} are not unique when $k\geq3$.
For example, if $k=4$, we can set $f(v_4v_5)=f(v_4v_6)=2$ and $f(v_5v_6)=3$ or $f(v_4v_6)=f(v_5v_6)=4$ and $f(v_4v_5)=2$.
It is easy to check that the two colorings are $k$-critical.
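The canonical instance ($f(v_iv_j)=j-2$ for all $i<j$ with $j\geq 4$) is easy to generate and check computationally; a Python sketch in the same spirit as before, reusing \texttt{has\_rainbow\_triangle} from the earlier sketch:
\begin{verbatim}
from itertools import permutations

def build_P4_critical(k):
    n = k + 3                        # gr_k(K3 : P4) = k + 3
    col = {frozenset(p): 1 for p in ((1, 2), (1, 3), (2, 3))}
    for j in range(4, n):            # every edge into v_j gets color j-2
        for i in range(1, j):
            col[frozenset((i, j))] = j - 2
    return list(range(1, n)), col

def has_mono_P4(V, col):
    # Brute force over ordered 4-tuples; fine for these small graphs.
    e = lambda x, y: col[frozenset((x, y))]
    return any(e(a, b) == e(b, c) == e(c, d)
               for a, b, c, d in permutations(V, 4) if a < d)

V, col = build_P4_critical(4)
assert not has_mono_P4(V, col)
\end{verbatim}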
\begin{proposition}\label{Pro:P4}
For any positive integer $k$, let $n=gr_k(K_{3}: P_{4})$. Then $H$ is a $k$-edge-colored complete graph $K_{n-1}$ containing neither a rainbow triangle nor a monochromatic $P_4$ if and only if $H\in\mathcal{G}^{k}_{n-1}$, where $\mathcal{G}^{k}_{n-1}$ is described in Definition~\ref{Def:P4}.
\end{proposition}
\begin{proof}
It suffices to prove the 'necessity'.
Let $H$ be a $k$-edge-colored $K_{n-1}$ containing neither a rainbow triangle nor a monochromatic $P_4$.
Then, by Lemma~\ref{Lem:G-Part}, there exists a Gallai-partition of $V(H)$. Choose a Gallai-partition with the smallest number of parts, say $(V_{1}, V_{2}, \cdots, V_{q})$.
Then $q\ge 2$ and $q\neq 3$ by Lemma~\ref{Lem:G-Part} and Lemma~\ref{Lem:qneq3}.
Let $H_{i}=H[V_{i}]$ for each part $V_i$.
Since $R_2(P_4)=5$ by Lemma~\ref{Lem:2P4}, we have that $2\leq q\leq 4$.
If there exist two parts $V_i$, $V_j$ such that $|V_i|\geq 2$ and $|V_j|\geq 2$, then there is a monochromatic $P_4$, a contradiction.
So there is at most one part with at least two vertices, and all other parts are singletons.
W.L.O.G., suppose that $|H_2|=\ldots=|H_q|=1$.
\begin{claim}\label{Cla:q=2}
If $k\geq3$, then $q=2$.
\end{claim}
\noindent{\bf Proof.}
Suppose, to the contrary, that $q=4$.
Then $|H_1|=|H|-3=k-1\geq 2$ since $k\geq 3$.
By the pigeonhole principle, there are two single-vertex parts such that the edges joining them to $V_1$ all have the same color.
So there is a monochromatic $P_4$, a contradiction.
Then $q=2$.
\begin{claim}\label{Cla:mk3}
$H$ contains a monochromatic $K_3$.
\end{claim}
\noindent{\bf Proof.}
We prove this claim by induction on $k$.
When $k=1$, it is trivial.
When $k=2$, $H=K_4$.
It is easy to check that $H$ has a monochromatic $K_3$ since $H$ contains no monochromatic $P_4$.
Suppose that $k\geq 3$ and the claim holds for any $k'$ such that $k'<k$.
By Claim~\ref{Cla:q=2}, $q=2$.
Hence, $|H_1|\geq 4$.
W.L.O.G., suppose that the edges between the two parts are colored by 1.
To avoid a monochromatic $P_4$ in color 1, $H_1$ contains no edge colored by 1: if $xy\in E(H_1)$ had color 1, then for the vertex $u$ of the other part and any third vertex $z\in V_1$, the path $zuxy$ would be a monochromatic $P_4$ in color 1.
By the induction hypothesis, $H_1$ contains a monochromatic $K_3$. So $H$ contains a monochromatic $K_3$.
By Claim~\ref{Cla:mk3}, W.L.O.G., we can assume that the monochromatic $K_3$ is $v_1v_2v_3$ and this $K_3$ is colored by 1.
Let $V(H)-\{v_1, v_2, v_3\}=\{v_4, \ldots, v_{n-1}\}$.
Then, to avoid a monochromatic $P_4$, the colors of edges between $\{v_4, \ldots, v_{n-1}\}$ and $\{v_1, v_2, v_3\}$ are not 1.
So to avoid a rainbow triangle, all edges in $E(v_i, \{v_1, v_2, v_3\})$ are in the same color for any $i\geq4$.
Then, to avoid a monochromatic $P_4$, the edges in $E(v_i, \{v_1, v_2, v_3\})$ and the edges in $E(v_j, \{v_1, v_2, v_3\})$ have different colors for any $4\leq i<j\leq n-1$.
W.L.O.G., we can assume that the edges in $E(v_i, \{v_1, v_2, v_3\})$ have color $i-2$ for any $i\geq4$.
Then to avoid a rainbow triangle, the edge $v_iv_j$ is colored by $i-2$ or $j-2$ for any $4\leq i<j\leq n-1$.
Thus, $H\in\mathcal{G}^{k}_{n-1}$, where $\mathcal{G}^{k}_{n-1}$ is described in Definition~\ref{Def:P4}.
\end{proof}
\begin{theorem}\label{Thm:starP4}
For any positive integer $k$,
$
gr_{k}^{*}(K_{3}: P_{4})=k.
$
\end{theorem}
\begin{proof}
Let $H\in \mathcal{G}^{k}_{n-1}$, where $\mathcal{G}^{k}_{n-1}$ is defined in Definition~\ref{Def:P4} and $n=gr_k(K_{3}: P_{4})=k+3$.
First we show that $gr_{k}^{*}(K_{3}: P_{4})\geq k$.
When $k=1$, $H$ is a monochromatic $K_3$. Then $gr_{1}^{*}(K_{3}: P_{4})\geq 1$.
So we can assume that $k\geq 2$.
By Definition~\ref{Def:P4}, $H$ contains a monochromatic $K_{3}=v_1v_2v_3$.
Then we construct $H\sqcup K_{1, k-1}$ by adding the edge set $\{vv_i~|~4\leq i\leq n-1\}$ to $H$, where $v$ is the center vertex of $K_{1, k-1}$.
Let $c$ be a $k$-edge-coloring of $H\sqcup K_{1, k-1}$ such that $$
c(e)= \begin{cases}
f(e), & \text{if $e\in E(H)$,}\\
f(v_1v_i), & \text{if $e=vv_i$ for any $4\leq i\leq n-1$,}
\end{cases}
$$ where $f$ is the $k$-edge-coloring in Definition~\ref{Def:P4}.
Clearly, $H\sqcup K_{1, k-1}$ colored by $c$ contains neither a rainbow triangle nor a monochromatic $P_4$, and $d(v)=k-1$.
Hence, $gr_{k}^{*}(K_{3}: P_{4})\geq k$.
Now we show that $gr_{k}^{*}(K_{3}: P_{4})\leq k$.
Let $G=H\sqcup K_{1, k}$ be a $k$-edge-colored graph and $v$ be the center vertex of $K_{1, k}$.
Since $d_{G}(v)=k$ and $|H|=k+2$ by Definition~\ref{Def:P4}, there is at least one edge of $G$ between $\{v_1, v_2, v_3\}$ and $v$.
W.L.O.G., suppose that $v_1v\in E(G)$ and $v_1v$ is colored by $i$, where $i\in [k]$.
Then there is a monochromatic $P_4=vv_1v_{i+2}v_2$ in color $i$.
Hence, $gr_{k}^{*}(K_{3}: P_{4})\leq k$.
Therefore, we have that $gr_{k}^{*}(K_{3}: P_{4})=k$.
\end{proof}
In the following, we determine the star-critical Gallai-Ramsey number $gr_{k}^{*}(K_{3}: K_{1, m})$. First we characterize all critical graphs on star $K_{1, m}$.
By Lemma~\ref{Lem:star}, for any $k\geq3$ and any $m\geq3$, $$
gr_k(K_{3}: K_{1, m})= \begin{cases}
\frac{5m-6}{2}, & \text{if $m$ is even,}\\
\frac{5m-3}{2}, & \text{if $m$ is odd.}
\end{cases}
$$
\begin{definition}\label{Def:star}
Let $m\geq 3$, $k\geq 3$, $n=gr_k(K_{3}: K_{1, m})$ and color set $[k]$.
For the complete graph $K_{n-1}$, choose a partition $(V_{1}$, $V_{2}$, $V_3$, $V_4$, $V_{5})$ of $V(K_{n-1})$ such that $|V_i|=\frac{m-1}{2}$ for $i\in [5]$ if $m$ is odd and $|V_1|=\frac{m}{2}$, $|V_i|=\frac{m-2}{2}$ for $i\geq 2$ if $m$ is even.
Let $H_i$ be the induced subgraph of $K_{n-1}$ by $V_i$ and $v_i\in V_i$ for any $i\in [5]$.
The subgraph induced by $\{v_1, v_2, v_3, v_4, v_5\}$ consists of two edge-disjoint $C_5$, say $v_1v_2v_3v_4v_5v_1$ and $v_1v_4v_2v_5v_3v_1$.
Define a $k$-edge-coloring $f$ of $K_{n-1}$ as follows:
(1) $f(e)=1$ if $e$ is an edge of $v_1v_2v_3v_4v_5v_1$ and $f(e)=2$ if $e$ is an edge of $v_1v_4v_2v_5v_3v_1$.
Color all edges between $V_i$ and $V_j$ by the color of $v_iv_j$ for any $1\leq i< j\leq 5$ (see Fig. 1(c)).
(2) Color $H_i$ as follows such that each $H_i$ contains no rainbow triangle.
If $m$ is odd, then we color each $H_i$ by the color set $\{3, \ldots, k\}$.
Let $m$ be even. Then we color $H_1$ by the color set $[k]$ such that the subgraph induced by $E_1=\{e\in E(H_1)~|~f(e)\in\{1, 2\}\}$ is a matching or $E_1=\emptyset$;
We color $H_2$ and $H_5$ by color set $\{2, \ldots, k\}$ such that the subgraph induced by $E_2=\{e\in E(H_2)\cup E(H_5)~|~f(e)=2\}$ is a matching or $E_2=\emptyset$;
We color $H_3$ and $H_4$ by color set $\{1, 3, \ldots, k\}$ such that the subgraph induced by $E_3=\{e\in E(H_3)\cup E(H_4)~|~f(e)=1\}$ is a matching or $E_3=\emptyset$.
The $k$-edge-coloring $f$ is called a $k$-critical coloring on the star $K_{1, m}$, and we let $\mathcal{S}^{k}_{n-1}=\{$all $k$-critically colored $K_{n-1}$ on the star $K_{1, m}\}$.
\end{definition}
Clearly, $\mathcal{S}^{k}_{n-1}\neq\emptyset$ and each graph in $\mathcal{S}^{k}_{n-1}$ in Definition~\ref{Def:star} contains neither a rainbow triangle nor a monochromatic $K_{1, m}$.
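As with the earlier constructions, one admissible member of $\mathcal{S}^{k}_{n-1}$ for odd $m$ can be generated and sanity-checked in a few lines of Python (our own illustration; here every intra-part edge simply receives color 3, which is one valid choice for item (2), and the rainbow-triangle check from the earlier sketch can be reused):
\begin{verbatim}
from itertools import combinations
from collections import Counter

def build_star_critical_odd(m, k=3):
    assert m % 2 == 1 and k >= 3
    half = (m - 1) // 2              # five parts of size (m-1)/2
    part = lambda v: v // half
    pentagon = {frozenset(p)
                for p in ((0, 1), (1, 2), (2, 3), (3, 4), (4, 0))}
    V = list(range(5 * half))
    col = {}
    for a, b in combinations(V, 2):
        pa, pb = part(a), part(b)
        if pa == pb:
            col[frozenset((a, b))] = 3           # intra-part edges
        elif frozenset((pa, pb)) in pentagon:
            col[frozenset((a, b))] = 1           # pentagon: color 1
        else:
            col[frozenset((a, b))] = 2           # pentagram: color 2
    return V, col

def max_mono_star(V, col):
    # Size of the largest monochromatic star in the coloring.
    return max(max(Counter(col[frozenset((v, u))]
                           for u in V if u != v).values()) for v in V)

V, col = build_star_critical_odd(7)
assert max_mono_star(V, col) < 7     # no monochromatic K_{1,7}
\end{verbatim}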
\begin{proposition}\label{Pro:star}
Let $k\geq 3$, $n=gr_k(K_{3}: K_{1, m})$, $m\geq3$ if $m$ is odd and $m\geq12$ otherwise.
Then $H$ is a $k$-edge-colored $K_{n-1}$ containing neither a rainbow triangle nor a monochromatic $K_{1, m}$ if and only if $H\in\mathcal{S}^{k}_{n-1}$, where $\mathcal{S}^{k}_{n-1}$ is described in Definition~\ref{Def:star}.
\end{proposition}
\begin{proof}
It suffices to prove the 'necessity'.
Let $H$ be a $k$-edge-colored $K_{n-1}$ containing neither a rainbow triangle nor a monochromatic $K_{1, m}$. Then, by Lemma~\ref{Lem:G-Part}, there exists a Gallai-partition of $V(H)$.
Choose a Gallai-partition with the smallest number of parts, say $(V_{1}, V_{2}, \cdots, V_{q})$, where $|V_1|\geq |V_2|\geq\ldots\geq |V_q|$. Let $H_{i}=H[V_{i}]$ and $v_i\in V_i$ for each $i\in [q]$.
Then $q\ge 2$ and $q\neq 3$ by Lemma~\ref{Lem:G-Part} and Lemma~\ref{Lem:qneq3}.
W.L.O.G., let 1 and 2 be the colors of edges between parts of the partition.
Suppose that $|V_q|\leq\frac{m-3}{2}$ if $m$ is odd and $|V_q|\leq\frac{m-6}{2}$ if $m$ is even.
Then $|V(H)-V_q|\geq 2m-1$.
Let $v\in V_q$. By the pigeonhole principle, there are at least $m$ edges in $E(v, V(H)-V_q)$ with the same color.
It implies a monochromatic $K_{1, m}$, a contradiction.
Then $|V_i|\geq \frac{m-1}{2}$ for any $i\in [q]$ if $m$ is odd and $|V_i|\geq \frac{m-4}{2}$ for any $i\in [q]$ if $m$ is even.
\begin{claim}\label{Cla:K13}
The subgraph induced by $\{v_1, \ldots, v_q\}$ contains no monochromatic $K_{1, 3}$.
\end{claim}
\noindent{\bf Proof.} Suppose, to the contrary, that the subgraph induced by $\{v_1, \ldots, v_q\}$ contains a monochromatic $K_{1, 3}$.
It follows that there exist four parts, say $V_i$, $V_{j_1}$, $V_{j_2}$ and $V_{j_3}$, such that all edges in $E(V_i, V_{j_1}\cup V_{j_2}\cup V_{j_3})$ are the same color.
Since $3\times\frac{m-1}{2}\geq m$ if $m\geq3$ is odd and $3\times\frac{m-4}{2}\geq m$ if $m\geq12$ is even, it follows that there is a monochromatic $K_{1, m}$, a contradiction.
\begin{claim}\label{Cla:q=5}
q=5.
\end{claim}
\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{01.eps}\\
\caption{}
\label{Fig:1}
\end{center}
\end{figure}
\noindent{\bf Proof.}
If $q=2$, then $|V_1|\geq\lceil\frac{|H|}{2}\rceil\geq m$, which implies that $H$ has a monochromatic $K_{1, m}$, a contradiction.
Now suppose that $q=4$.
By Lemma~\ref{Lem:qneq3}, for each part $V_i$, there are exactly two colors on the edges in $E(V_i, V(H)-V_i)$ for $i\in [4]$.
Thus, the subgraph induced by $\{v_1, v_2, v_3, v_4\}$ must be one of the graphs shown in Fig.~1(a) and Fig.~1(b).
By the minimality of $q$, the subgraph induced by $\{v_1, v_2, v_3, v_4\}$ consists of two edge-disjoint $P_4$ such that one is colored by 1 and the other is colored by 2.
W.L.O.G., let $v_1v_2v_3v_4$ be colored by 1 and $v_3v_1v_4v_2$ colored by 2 (see Fig.1 (b)).
Since $|V_1|+|V_2|\geq\lceil\frac{|H|}{2}\rceil\geq m$, it follows that the subgraph induced by $E(V_1\cup V_2, V_4)$ contains a monochromatic star $K_{1, m}$ in color 2, a contradiction.
So $q\geq 5$.
By Claim~\ref{Cla:K13}, the subgraph induced by $\{v_1, \ldots, v_q\}$ contains no monochromatic $K_{1, 3}$.
Since $R_2(K_{1, 3})=6$ by Lemma~\ref{Lem:2P4}, we have that $q\leq 5$.
So $q=5$.
By Claim~\ref{Cla:K13}, the subgraph induced by $\{v_1, \ldots, v_5\}$ contains no monochromatic $K_{1, 3}$.
So this subgraph consists of two edge-disjoint $C_5$ such that one is colored by 1 and the other is colored by 2.
W.L.O.G., let $v_1v_2v_3v_4v_5v_1$ be colored by 1 and $v_1v_4v_2v_5v_3v_1$ colored by 2 (see Fig.1 (c)).
When $m\geq3$ is odd, since $\frac{|H|}{5}=\frac{m-1}{2}$ and every part has order at least $\frac{m-1}{2}$, it follows that each part contains exactly $\frac{m-1}{2}$ vertices.
To avoid a monochromatic $K_{1, m}$, $H_i$ contains no edge with color 1 or 2 for any $i\in[5]$.
Hence, for any $i\in [5]$, $H_i$ is colored by color set $\{3, \ldots, k\}$.
When $m\geq 12$ is even, suppose some part has exactly $\frac{m-4}{2}$ vertices. The remaining four parts then contain $2m-2$ vertices in total, and since $\frac{2m-2}{4}=\frac{m-1}{2}$ is not an integer, the two largest of them, say $V_1$ and $V_2$, satisfy $|V_1|+|V_2|\geq m$. In the structure of Fig.~1(c), any two parts are joined in a single color to some common third part (in $C_5$, any two vertices have a common neighbor in the pentagon or in the pentagram), so there exists a monochromatic $K_{1, m}$, a contradiction.
Hence, every part has at least $\frac{m-2}{2}$ vertices.
Since $|H|=\frac{5m-8}{2}$, we have that $|V_1|=\frac{m}{2}$ and $|V_2|=\ldots=|V_5|=\frac{m-2}{2}$.
To avoid a monochromatic $K_{1, m}$, if $H_1$ contains edges colored by 1 or 2, then $\{e\in E(H_1)~|~ f(e)\in\{1, 2\}\}$ is a matching.
To avoid a monochromatic $K_{1, m}$, $H_2$ and $H_5$ contain no edges colored by 1. If there exist edges colored by 2 in $H_2$ or $H_5$, then $\{e\in E(H_2)\cup E(H_5)~|~f(e)=2\}$ is a matching.
Similarly, $H_3$ and $H_4$ contain no edges colored by 2. If there exist edges colored by 1 in $H_3$ or $H_4$, then $\{e\in E(H_3)\cup E(H_4)~|~f(e)=1\}$ is a matching.
Thus, the graph $H\in\mathcal{S}^{k}_{n-1}$, where $\mathcal{S}^{k}_{n-1}$ is described in Definition~\ref{Def:star}.
\end{proof}
\begin{theorem}\label{Thm:star}
For any $k\geq 3$,
$
gr_k^{*}(K_{3}: K_{1, m})= \begin{cases}
2m-2, & \text{if $m\geq 12$ is even,}\\
m, & \text{if $m\geq 3$ is odd.}
\end{cases}
$
\end{theorem}
\begin{proof}
Let $n=gr_k(K_{3}: K_{1, m})$.
Now we consider the following cases.
\begin{case}\label{case:modd}
$m\geq3$ is odd.
\end{case}
Let $H\in \mathcal{S}^{k}_{n-1}$, where $\mathcal{S}^{k}_{n-1}$ is defined in Definition~\ref{Def:star}.
First we show that $gr_{k}^{*}(K_{3}: K_{1, m})\geq m$.
We construct $H\sqcup K_{1, m-1}$ by adding the edge set $\{vv_i~|~v_i\in V_1\cup V_2\}$ to $H$, where $v$ is the center vertex of $K_{1, m-1}$.
Let $c$ be a $k$-edge-coloring of $H\sqcup K_{1, m-1}$ such that $$
c(e)= \begin{cases}
f(e), & \text{if $e\in E(H)$, where $f$ is the $k$-edge-coloring in Definition~\ref{Def:star},}\\
3, & \text{if $e\in E(v, V_1\cup V_2)$.}
\end{cases}
$$ Clearly, $H\sqcup K_{1, m-1}$ colored by $c$ contains neither a rainbow triangle nor a monochromatic $K_{1, m}$, and $d(v)=m-1$.
Hence, $gr_{k}^{*}(K_{3}: K_{1, m})\geq m$.
Now we show that $gr_{k}^{*}(K_{3}: K_{1, m})\leq m$.
Let $G=H\sqcup K_{1, m}$ be a $k$-edge-colored graph and $v$ the center vertex of $K_{1, m}$.
Suppose that $G$ contains no rainbow triangle.
Then we prove that $G$ contains a monochromatic $K_{1, m}$.
By the definition of $H$, if there exists an edge $e\in E(v, V(H))$ in color 1 or 2, then there is a monochromatic $K_{1, m}$ in color 1 or 2.
So we can assume that all edges in $E(v, V(H))$ have color in $\{3, \ldots, k\}$.
Since $H$ consists of 5 parts such that every part has exactly $\frac{m-1}{2}$ vertices and $|E(v, V(H))|=m$, we have that there are at least 3 parts of $H$, say $V_1$, $V_2$, $V_3$, such that $E(v, V_i)\neq\emptyset$ for any $i\in [3]$.
Then to avoid a rainbow triangle, all edges in $E(v, V(H))$ have the same color.
So there is a monochromatic $K_{1, m}$ with the center vertex $v$.
Hence, $gr_{k}^{*}(K_{3}: K_{1, m})\leq m$.
Therefore, we have that $gr_{k}^{*}(K_{3}: K_{1, m})=m$.
\begin{case}\label{case:meven}
$m\geq12$ is even.
\end{case}
First we show that $gr_{k}^{*}(K_{3}: K_{1, m})\geq 2m-2$.
Let $H\in \mathcal{S}^{k}_{n-1}$ such that each subgraph $H_i$ of $H$ is colored by the color set $\{3, \ldots, k\}$, where $\mathcal{S}^{k}_{n-1}$ is defined in Definition~\ref{Def:star}.
We construct $H\sqcup K_{1, 2m-3}$ by adding the edge set $\{vv_i~|~v_i\in V_1\cup V_2\cup V_3\cup V_5\}$ to $H$, where $v$ is the center vertex of $K_{1, 2m-3}$.
Let $c$ be a $k$-edge-coloring of $H\sqcup K_{1, 2m-3}$ such that $$
c(e)= \begin{cases}
f(e), & \text{if $e\in E(H)$, where $f$ is the $k$-edge-coloring in Definition~\ref{Def:star},}\\
1, & \text{if $e\in E(v, V_1\cup V_3)$,}\\
2, & \text{if $e\in E(v, V_2\cup V_5)$.}\\
\end{cases}
$$ Clearly, the $k$-edge-colored graph $H\sqcup K_{1, 2m-3}$ under $c$ contains neither a rainbow triangle nor a monochromatic $K_{1, m}$, and $d(v)=2m-3$.
Hence, $gr_{k}^{*}(K_{3}: K_{1, m})\geq 2m-2$.
Now we show that $gr_{k}^{*}(K_{3}: K_{1, m})\leq 2m-2$.
Let $H\in \mathcal{S}^{k}_{n-1}$, $G=H\sqcup K_{1, 2m-2}$ be a $k$-edge-colored graph and $v$ be the center vertex of $K_{1, 2m-2}$.
Suppose that $G$ contains no rainbow triangle.
We prove that $G$ has a monochromatic $K_{1, m}$ in the following.
First, suppose that there exists an edge $e\in E(v, V(H))$ with color in $\{3, \ldots, k\}$, say color 3.
W.L.O.G., let $e\in E(v, V_1)$ or $e\in E(v, V_4)$ by the symmetry of $V_2$, $V_3$, $V_4$ and $V_5$ in $H$ (see Fig.~1(c)).
First we assume that $e\in E(v, V_1)$. Then to avoid a rainbow triangle, all edges in $E(v, V_2\cup V_5)$ are colored by 1 or 3.
If there exists an edge in $E(v, V_2\cup V_5)$ colored by 1, then there is a monochromatic $K_{1, m}$ in color 1.
So we can assume that all edges in $E(v, V_2\cup V_5)$ are colored by 3.
Then to avoid a rainbow triangle, all edges in $E(v, V_3\cup V_4)$ are colored by 3.
So there is a monochromatic $K_{1, m}$ in color 3.
Now we assume that $e\in E(v, V_4)$.
Then to avoid a rainbow triangle, all edges in $E(v, V_5)$ are colored by 1 or 3.
If there exists an edge in $E(v, V_5)$ colored by 1, then there is a monochromatic $K_{1, m}$ in color 1.
So we can assume that all edges in $E(v, V_5)$ are colored by 3.
Then to avoid a rainbow triangle, all edges in $E(v, V_3)$ must be colored by 3.
It follows that all edges in $E(v, V_1\cup V_2)$ are colored by 3 since $G$ contains no rainbow triangle.
Hence, there is a monochromatic $K_{1, m}$ in color 3.
Now we can assume that all edges in $E(v, V(H))$ are colored by 1 or 2.
By the definition of $H$ in Definition~\ref{Def:star}, for any part $V_i$, $E(v, V_i)\neq\emptyset$ since $|E(v, V(H))|=2m-2$.
If there exists an edge in $E(v, V_2\cup V_5)$ colored by 1, then there is a monochromatic $K_{1, m}$ in color 1.
Similarly, if there exists an edge in $E(v, V_3\cup V_4)$ colored by 2, then there is a monochromatic $K_{1, m}$ in color 2.
Thus, suppose that all edges in $E(v, V_2\cup V_5)$ are colored by 2 and all edges in $E(v, V_3\cup V_4)$ are colored by 1.
Suppose that there exist $a, b\in V_1$ such that $va$ is colored by 1 and $vb$ is colored by 2.
To avoid a rainbow triangle, the edge $ab$ must be colored by 1 or 2.
Then there is a monochromatic $K_{1, m}$ in color 1 or 2.
Hence, we may assume that all edges in $E(v, V_1)$ have the same color, which is either 1 or 2.
Thus, there are at least $m$ edges in $E(v, V_1\cup V_3\cup V_4)$ colored by 1 or at least $m$ edges in $E(v, V_1\cup V_2\cup V_5)$ colored by 2, which yields a monochromatic $K_{1, m}$.
Hence, $gr_{k}^{*}(K_{3}: K_{1, m})\leq 2m-2$.
Therefore, $gr_{k}^{*}(K_{3}: K_{1, m})=2m-2$.
\end{proof}
\noindent{\bf Remark.}
For any odd $m\geq3$ and any even $m\geq12$, $gr_{k}^{*}(K_3: K_{1, m})$ is determined by Theorem~\ref{Thm:star}.
When $m=4$, $gr_{k}(K_3: K_{1, 4})=7$ by Lemma~\ref{Lem:star}.
We can verify that $gr_{k}^{*}(K_3: K_{1, 4})=6$ (the proof is omitted).
When $m=6, 8, 10$, the problem of determining $gr_{k}^{*}(K_3: K_{1, m})$ remains open.
\section{Star-critical Gallai-Ramsey numbers for general graphs}
In 2010, Gy\'{a}rf\'{a}s et al. \cite{AGAS} determined the general behavior of $gr_{k}(K_3: H)$.
It turns out that for some graphs $H$, the order of magnitude of $gr_{k}(K_3: H)$ seems hopelessly difficult to determine.
So finding the exact value of $gr_{k}^{*}(K_3: H)$ is far from trivial.
Thus, in this section we investigate the general behavior of star-critical Gallai-Ramsey numbers for general graphs.
For a general bipartite graph $H$, we need only consider the case in which $H$ is not a star, since the exact value of $gr_{k}^{*}(K_3: K_{1, m})$ is determined in Theorem~\ref{Thm:star}.
We first give the following lemmas and definitions.
For a connected bipartite graph $H$, define $s(H)$ to be the order of the smaller part of $H$ and $l(H)$ to be the order of the larger part.
For a connected non-bipartite graph $H$, call a graph $H^{'}$ a \emph{merge} of $H$ if $H^{'}$ can be obtained from $H$ by identifying some independent sets of $H$ (and removing any resulting repeated edges).
Let $\mathscr{H}$ be the set of all possible merges of $H$ and $R_{2}(\mathscr{H})$ the minimum integer $n$ such that every 2-edge-colored $K_n$ contains a monochromatic graph in $\mathscr{H}$.
Then there exists a 2-edge-colored critical graph $K_{n-1}$ containing no monochromatic graph in $\mathscr{H}$.
Let $m(H)=R_{2}(\mathscr{H})$ and $r_{*}(\mathscr{H})$ be the smallest integer $r$ such that every 2-edge-colored $K_{m(H)-1}\sqcup K_{1, r}$ contains a monochromatic graph in $\mathscr{H}$.
Then there exists a 2-edge-colored critical graph $K_{m(H)-1}\sqcup K_{1, r-1}$ containing no monochromatic graph in $\mathscr{H}$.
The \emph{chromatic number} of a graph $G$, denoted by $\chi(G)$, is the smallest number of colors needed to color the vertices of $G$ so that no two adjacent vertices share the same color.
\begin{lemma}{\upshape \cite{WMSX}}\label{Lem:bipartite}
Let $H$ be a connected bipartite graph and $k$ an integer such that $k\geq2$. Then
$$gr_{k}(K_3: H)\geq R_{2}(H)+(k-2)(s(H)-1).$$
\end{lemma}
To prove the above lower bound, Wu et al. in \cite{WMSX} gave the following definition.
\begin{definition}\cite{WMSX}\label{Def:bipartite}
Let $k\geq2$ and $n=R_{2}(H)+(k-2)(s(H)-1)$.
For the complete graph $K_{n-1}$, choose a partition $(V_1, \ldots, V_{k-1})$ of $V(K_{n-1})$ such that $|V_1|=R_2(H)-1$ and $|V_i|=s(H)-1$ for $2\leq i\leq k-1$ and let $H_i$ be the subgraph of $K_{n-1}$ induced by $V_i$ for $i\in[k-1]$.
Define a $k$-edge-coloring $f$ of $K_{n-1}$ as follows:
(1) Color $H_1$ by colors 1 and 2 such that $H_1$ is a 2-edge-colored critical graph containing no monochromatic $H$.
(2) Color $H_i$ by $i+1$ for any $i\in \{2, \ldots, k-1\}$.
(3) For any $j\in \{2, \ldots, k-1\}$ and $i\in[j-1]$, $f(v_iv_j)=j+1$, where $v_i\in V_i$ and $v_j\in V_j$.
We denote this $k$-edge-colored $K_{n-1}$ by $B_{n-1}^{k}$.
\end{definition}
Clearly, $B_{n-1}^{k}$ in Definition~\ref{Def:bipartite} contains no rainbow triangle and no monochromatic $H$.
\begin{lemma}{\upshape \cite{M}}\label{Lem:nonbipartite}
Let $H$ be a connected non-bipartite graph and $k$ an integer such that $k\geq2$. Then
$$gr_{k}(K_3: H)\geq \begin{cases}
(R_2(H)-1)\cdot (m(H)-1)^{(k-2)/2}+1, & \text{if $k$ is even,}\\
(\chi(H)-1)\cdot (R_2(H)-1)\cdot (m(H)-1)^{(k - 3)/2}+1, & \text{if $k$ is odd.}
\end{cases}$$
\end{lemma}
To prove the above lower bound, Magnant in \cite{M} gave the following definition. Recall that a \emph{blow-up} of an edge-colored graph $G$ on a graph $H$ is a new graph obtained from $G$ by replacing each vertex of $G$ with $H$ and replacing each edge $e$ of $G$ with a monochromatic complete bipartite graph $(V(H), V(H))$ in the same color as $e$.
\begin{definition}\cite{M}\label{Def:nonbipartite}
Let $k\geq2$ and $$n_k=\begin{cases}
(R_2(H)-1)\cdot (m(H)-1)^{(k-2)/2}+1, & \text{if $k$ is even,}\\
(\chi(H)-1)\cdot (R_2(H)-1)\cdot (m(H)-1)^{(k - 3)/2}+1, & \text{if $k$ is odd.}
\end{cases}$$
Define a $k$-edge-colored $K_{n_k-1}$ by induction on $k$, which is denoted by $N_{n_{k}-1}^{k}$.
(1) $N_{n_{2}-1}^{2}$ is a 2-edge-colored critical graph (using colors 1 and 2) with $R_{2}(H)-1$ vertices containing no monochromatic $H$.
Suppose that for any $2i<k$, $N_{n_{2i}-1}^{2i}$ has been constructed which contains no rainbow triangle and no monochromatic $H$.
(2) If $2i+2\leq k$, then let $D$ be a 2-edge-colored critical graph $K_{m(H)-1}$ (using colors $2i+1$ and $2i+2$) containing no monochromatic $H'$ in $\mathscr{H}$.
Then construct $N_{n_{2i+2}-1}^{2i+2}$ by making a blow-up of $D$ on $N_{n_{2i}-1}^{2i}$.
(3) If $2i+1=k$, then construct $N_{n_{2i+1}-1}^{2i+1}$ by making a blow-up of $K_{\chi(H)-1}$ on $N_{n_{2i}-1}^{2i}$, where $K_{\chi(H)-1}$ is colored by $2i+1$.
\end{definition}
Clearly, $N_{n_{k}-1}^{k}$ in Definition~\ref{Def:nonbipartite} contains no rainbow triangle and no monochromatic $H$.
\begin{lemma}{\upshape \cite{AGAS}}\label{Lem:graph}
Let $k$ be a positive integer. Then for any connected bipartite graph $H$,
$gr_{k}(K_3: H)\leq (R_{2}(H)-1)\cdot[(l(H)-1)k+2]\cdot(l(H)-1),$ and for any connected
non-bipartite graph $H$, $gr_{k}(K_3: H)\leq (R_{2}(H)-1)^{k\cdot(|H|-1)+1}.$
\end{lemma}
\begin{theorem}\label{Thm:graph}
Let $H$ be a graph with no isolated vertex. If $H$ is bipartite and not a star, then $gr_k^{*}(K_3: H)$ is linear in $k$.
If $H$ is not bipartite, then $gr_k^{*}(K_3: H)$ is exponential in $k$.
\end{theorem}
\begin{proof}
First, suppose that $H$ is a bipartite graph but not a star.
Now we prove that $gr_k^{*}(K_3: H)\geq r_{*}(H)+(k-2)(s(H)-1)$, which is linear in $k$.
Let $t=r_{*}(H)-1+(k-2)(s(H)-1)$ and $B_{n-1}^{k}$ be the graph defined in Definition~\ref{Def:bipartite}, where $n=R_{2}(H)+(k-2)(s(H)-1)$.
Then $H_1$ is the subgraph of $B_{n-1}^{k}$ such that $|H_1|=R_2(H)-1$ and $H_1$ is a 2-edge-colored critical graph containing no monochromatic $H$.
Hence, we can find a 2-edge-colored critical graph $H_{1}\sqcup K_{1, r_{*}(H)-1}$ (using colors 1 and 2) containing no monochromatic $H$.
Denote this 2-edge-coloring by $h$ and the center vertex of $K_{1, r_{*}(H)-1}$ by $v$.
Let $\{x_1, \ldots, x_{r_{*}(H)-1}\}\subseteq V_1$ and $vx_i\in E(H_{1}\sqcup K_{1, r_{*}(H)-1})$ for any $1\leq i\leq r_{*}(H)-1$.
We construct $B_{n-1}^{k}\sqcup K_{1, t}$ by adding the edge set $E^{*}=\bigcup\limits_{i=2}^{k-1}\{vu~|~u\in V_i\}\cup \{vx_1, \ldots, vx_{r_{*}(H)-1}\}$ to $B_{n-1}^{k}$, where $V_2$, $\ldots$, $V_{k-1}$ are the parts of $V(B_{n-1}^{k})$ in Definition~\ref{Def:bipartite}.
Let $g$ be a $k$-edge-coloring of $B_{n-1}^{k}\sqcup K_{1, t}$ such that
$$g(e)= \begin{cases}
f(e), & \text{if $e\in E(B_{n-1}^{k})$, where $f$ is defined in Definition~\ref{Def:bipartite}, }\\
h(e), & \text{if $e\in E(v, V_1)$,}\\
i+1, & \text{if $e\in E(v, V_i)$ for any $i\in \{2, \ldots, k-1\}$.}\\
\end{cases}
$$
Thus, $d(v)=r_{*}(H)-1+(k-2)(s(H)-1)$ and the graph $B_{n-1}^{k}\sqcup K_{1, t}$ colored by $g$ contains neither a rainbow triangle nor a monochromatic $H$.
Hence, $gr_{k}^{*}(K_{3}: H)\geq r_{*}(H)+(k-2)(s(H)-1)$.
For the upper bound, by Lemma~\ref{Lem:graph}, we know that $gr_{k}(K_3: H)\leq (R_2(H)-1)\cdot [(l(H)-1)k+2]\cdot(l(H)-1)$. Clearly, $gr_{k}^{*}(K_{3}: H)\leq (R_2(H)-1)\cdot [(l(H)-1)k+2]\cdot(l(H)-1)-1$.
Thus, for a bipartite graph $H$, both the lower and the upper bound on $gr^{*}_{k}(K_3: H)$ are linear in $k$.
It follows that the statement in Theorem~\ref{Thm:graph} holds when $H$ is bipartite.
Now, suppose that $H$ is a connected non-bipartite graph.
We now give a lower bound on $gr_k^{*}(K_3: H)$ that is exponential in $k$ for $k\geq3$.
Let
$$r_k=\begin{cases}
r_{*}(H)-1, & \text{if $k=2$,}\\
(r_{*}(\mathscr{H})-1)\cdot(R_2(H)-1)\cdot (m(H)-1)^{(k-4)/2}, & \text{if $k\geq4$ is even,}\\
(\chi(H)-2)\cdot (R_2(H)-1)\cdot (m(H)-1)^{(k - 3)/2}, & \text{if $k$ is odd,}
\end{cases}$$
and $N_{n_{k}-1}^{k}$ be the graph defined in Definition~\ref{Def:nonbipartite}, where $m(H)=R_{2}(\mathscr{H})$ and
$$n_k=\begin{cases}
(R_2(H)-1)\cdot (m(H)-1)^{(k-2)/2}+1, & \text{if $k$ is even,}\\
(\chi(H)-1)\cdot (R_2(H)-1)\cdot (m(H)-1)^{(k - 3)/2}+1, & \text{if $k$ is odd.}
\end{cases}$$
Now we construct a $k$-edge-colored $N_{n_{k}-1}^{k}\sqcup K_{1, r_{k}}$ containing neither a rainbow triangle nor a monochromatic $H$ by induction on $k$.
Let $v$ be the center vertex of $K_{1, r_k}$.
When $k=2$, we have that $n_2=R_{2}(H)$ and $r_2=r_{*}(H)-1$. Then there exists a 2-edge-colored critical graph $N_{n_{2}-1}^{2}\sqcup K_{1, r_2}$ containing no monochromatic $H$.
Suppose that $k\geq 3$ and for any $2i<k$, we have constructed $N_{n_{2i}-1}^{2i}\sqcup K_{1, r_{2i}}$ which contains neither a rainbow triangle nor a monochromatic $H$.
When $2i+2\leq k$, $N_{n_{2i+2}-1}^{2i+2}$ is a blow-up of $D$ on $N_{n_{2i}-1}^{2i}$ by Definition~\ref{Def:nonbipartite}.
Denote each copy of $N_{n_{2i}-1}^{2i}$ in $N_{n_{2i+2}-1}^{2i+2}$ by $G_1$, $\ldots$, $G_{m(H)-1}$.
Let $v_j\in V(G_j)$ for each $1\leq j\leq m(H)-1$.
Then the subgraph induced by $\{v_1, \ldots, v_{m(H)-1}\}$ is isomorphic to $D$.
Choose a critical 2-edge-coloring $g$ of $D\sqcup K_{1, r_{*}(\mathscr{H})-1}$ (using $2i+1$ and $2i+2$) such that $D\sqcup K_{1, r_{*}(\mathscr{H})-1}$ contains no monochromatic graph $H^{'}$ in $\mathscr{H}$.
Let $v^{'}$ be the center vertex of $K_{1, r_{*}(\mathscr{H})-1}$.
Now we construct $N_{n_{2i+2}-1}^{2i+2}\sqcup K_{1, r_{2i+2}}$ according to the graph $D\sqcup K_{1, r_{*}(\mathscr{H})-1}$.
For any $v_j\in V(D)$ with $1\leq j\leq m(H)-1$, if $v^{'}v_j\in E(D\sqcup K_{1, r_{*}(\mathscr{H})-1})$, then we add the edge set $\{vu~|~u\in V(G_j)\}$.
The resulting graph is $N_{n_{2i+2}-1}^{2i+2}\sqcup K_{1, r_{2i+2}}$.
Thus, $r_{2i+2}=(r_{*}(\mathscr{H})-1)\cdot(n_{2i}-1)$.
Let $c$ be a $(2i+2)$-edge-coloring of $N_{n_{2i+2}-1}^{2i+2}\sqcup K_{1, r_{2i+2}}$ such that
$$c(e)=\begin{cases}
f(e), & \text{if $e\in E(N_{n_{2i+2}-1}^{2i+2})$, where $f$ is defined in Definition~\ref{Def:nonbipartite},}\\
g(v^{'}v_j), & \text{if $e\in E(v, V(G_j))$ for $1\leq j\leq m(H)-1$.}
\end{cases}$$
Clearly, this $(2i+2)$-edge-colored $N_{n_{2i+2}-1}^{2i+2}\sqcup K_{1, r_{2i+2}}$ by $c$ contains no rainbow triangle, where $d(v)=r_{2i+2}=(r_{*}(\mathscr{H})-1)\cdot(R_2(H)-1)\cdot (m(H)-1)^{(2i-2)/2}$.
Suppose, to the contrary, that $N_{n_{2i+2}-1}^{2i+2}\sqcup K_{1, r_{2i+2}}$ contains a monochromatic $H$.
Then the monochromatic $H$ is colored by $2i+1$ or $2i+2$.
Hence for each copy of $N_{n_{2i}-1}^{2i}$, $V(H)\cap V(N_{n_{2i}-1}^{2i})$ is an independent set of $H$.
It follows that $D\sqcup K_{1, r_{*}(\mathscr{H})-1}$ contains a monochromatic merge of $H$, a contradiction.
When $2i+1=k$, $N_{n_{2i+1}-1}^{2i+1}$ is a blow-up of $K_{\chi(H)-1}$ on $N_{n_{2i}-1}^{2i}$ by Definition~\ref{Def:nonbipartite}.
Denote each copy of $N_{n_{2i}-1}^{2i}$ in $N_{n_{2i+1}-1}^{2i+1}$ by $G_1$, $\ldots$, $G_{\chi(H)-1}$.
We construct $N_{n_{2i+1}-1}^{2i+1}\sqcup K_{1, r_{2i+1}}$ by adding the edge set $\{vu~|~u\in \bigcup_{j=1}^{\chi(H)-2}V(G_j)\}$ to $N_{n_{2i+1}-1}^{2i+1}$.
Thus, $r_{2i+1}=(\chi(H)-2)\cdot(n_{2i}-1)$.
Let $c$ be a $(2i+1)$-edge-coloring of $N_{n_{2i+1}-1}^{2i+1}\sqcup K_{1, r_{2i+1}}$ such that
$$c(e)=\begin{cases}
f(e), & \text{if $e\in E(N_{n_{2i+1}-1}^{2i+1})$, where $f$ is defined in Definition~\ref{Def:nonbipartite},}\\
2i+1, & \text{if $e\in E(v, V(G_j))$ for $1\leq j\leq \chi(H)-2$.}
\end{cases}$$
Clearly, this is a $(2i+1)$-edge-colored $N_{n_{2i+1}-1}^{2i+1}\sqcup K_{1, r_{2i+1}}$ containing neither a rainbow triangle nor a monochromatic $H$, where $d(v)=r_{2i+1}=(\chi(H)-2)\cdot (R_2(H)-1)\cdot (m(H)-1)^{(2i - 2)/2}$.
Hence, we have that
$gr_{k}^{*}(K_3: H)\geq r_{k}+1
$, where $r_{k}+1$ is exponential in $k$ for $k\geq 3$.
For the upper bound, by Lemma~\ref{Lem:graph}, we know that $gr_{k}(K_3: H)\leq (R_2(H)-1)^{k(|H|-1)+1}$. Clearly, $gr_{k}^{*}(K_{3}: H)\leq (R_2(H)-1)^{k(|H|-1)+1}-1$. Thus, for a non-bipartite graph $H$, both the lower and the upper bound are exponential in $k$.
This completes the proof of Theorem~\ref{Thm:graph}.
\end{proof}
Plankton blooms in the ocean represent some of the most massive and rapid biomass growth events in nature. Planktonic organisms are responsible for more than 50\% of the earth’s oxygen production, are the base of the marine food chain, contribute to the cycling of carbon, and preserve ocean biodiversity \cite{falkowski1998biogeochemical,ptacnik2008diversity}. However, phytoplankton blooms are not uniformly distributed across the seascape. The large spatio-temporal scales of phytoplankton distribution are set by seasons and basin-wide circulation. On a smaller scale, eddies \cite{mcgillicuddy2007eddy} and fronts \cite{levy2018role} contort these patterns, and localized injections of nutrients into the sunlit layer allow for the formation of frequent and ephemeral blooms (e.g. as seen in satellite observations, Fig. \ref{fig:bloom}). Such pulses of resources could be caused, for instance, by upwelling of nutrient-rich water \cite{mcgillicuddy2007eddy}, a burst of important micro-nutrients from dust deposition \cite{hamme2010volcanic}, the wake of islands \cite{signorini1999mixing}, or by deliberate fertilization experiments as have been carried out in several locations in the ocean \cite{de2005synthesis,boyd2007mesoscale}. The rich structure in observed chlorophyll at those scales demands tools for interpretation. How do such bloom events evolve as a result of the local bio-physical environment?
Once favorable conditions for growth are set, the fate of a plankton ecosystem is indeed tightly linked to the physical evolution of the patch of water that contains it. The interplay of strain and diffusion generated by oceanic currents can strongly deform, dilute and mix a water patch, and such processes could affect the associated ecosystem in various ways \cite{garrett1983initial,ledwell1998mixing,martin2000filament,abraham2000importance,iudicone2008water}. Dilution has been proposed as a prominent driver of plankton productivity by modulating concentrations of nutrient and biomass within a patch of water \cite{hannon2001modeling,boyd2007mesoscale,lehahn2017dispersion,paparella2020stirring}. This has been associated either with a condition of stoichiometric imbalance where a nutrient inside a bloom becomes suddenly limiting \cite{fujii2005simulated,hannon2001modeling,boyd2007mesoscale} or with the effect of diluting grazer concentrations in a region where they are higher than average \cite{lehahn2017dispersion}. On the other hand, the high level of spatial heterogeneity – i.e. patchiness – generated by ocean turbulence across a wide range of scales can also potentially affect biomass production \cite{abraham1998generation,martin2002plankton,martin2003phytoplankton,kuhn2019temporal}. Indeed, due to the often non-linear response of plankton to nutrients, biomass growth can depend not only on average concentrations of resources but also on their spatial patchiness. Thus, a mechanistic understanding of the evolution of plankton ecosystems in their entirety requires a Lagrangian approach - that is, following the water patch within which plankton live. At the same time, the role of spatial heterogeneity inside such dynamic ecosystems should be carefully addressed.
However, the combined impact of the Lagrangian evolution of a water patch and the associated patchiness as aspects of the same evolving system has not yet been addressed. Lagrangian models to date always assume a water parcel is well-mixed, i.e. with no spatial heterogeneity \cite{abraham2000importance,lehahn2017dispersion,paparella2020stirring}, and patchiness of quantities such as nutrients or biomass has never been implemented in a Lagrangian frame of reference \cite{wallhead2008spatially,levy2013influence,mandal2014observations,priyadarshi2019micro}. By combining the two, we will be better able to disentangle the physical from the biological drivers of the generation, maintenance and decay of plankton blooms in the ocean.
Here we introduce a new framework to track, from first principles, a generic plankton ecosystem from a Lagrangian perspective. We define a Lagrangian patch as a physical body of water of arbitrary shape and size containing such an ecosystem. We study the physical dynamics, the evolution of spatial heterogeneity within the patch as well as the biochemical interactions between nutrients and their consumers. Though the theoretical approach we develop could be used in many applications, we concentrate on the ecological response to pulses of resources within such Lagrangian ecosystems while they are subjected to dilution with their resource-poorer surroundings. As a first application, we model the biophysical evolution of the artificially fertilized bloom during the SOIREE campaign, obtaining predictions consistent with the observed data \cite{abraham2000importance,boyd2000mesoscale}. More generally, we then demonstrate that dilution, driven by strain and diffusion, is responsible for the initial generation of patchiness in plankton ecosystems. Finally, we show that such heterogeneity can in turn significantly enhance plankton growth, highlighting the existence of optimal dilution rates that maximize the patch-integrated biomass.
\begin{figure}
\includegraphics[width=10cm]{fig_bloom3.pdf}
\caption{A plankton bloom offshore of Namibia on 6/11/2007 captured by the MERIS (Medium Resolution Imaging Spectrometer) instrument aboard ESA’s Envisat satellite. The red dashed elliptical line indicates the water patch where the bloom is occurring. Coupled biophysical processes lead to a marked spatial heterogeneity - i.e. patchiness - within the region.} \label{fig:bloom}
\end{figure}
\section{Results}
\subsection{Lagrangian ecosystems theory}
We develop a theoretical framework to study a generic plankton ecosystem inhabiting a Lagrangian patch of water at the ocean surface. Such a Lagrangian perspective - that is, tracking in space and time the same physical water mass - allows us to naturally address the ecological responses to favorable (or unfavorable) environmental conditions occurring in the patch itself (Fig. \ref{fig:bloom}). In this section we lay out all the essential concepts and quantities to describe our approach; the mathematical developments are extensively illustrated in the Methods. Model variables are listed in Table \ref{tab:var}. We first focus on the physical evolution of the patch and then we describe the associated tracer dynamics.
\textbf{\emph{Physical dynamics}.} Any Lagrangian water patch in the ocean undergoes continuous changes in position, size and shape due to the effect of ocean motions (Fig. \ref{fig:patchvar}). To model the physical transformations of the patch, we approximate it by an ellipse containing the majority of its surface \cite{ledwell1998mixing,sundermeyer1998lateral}. The patch shape is thus described, at time $t$, by the length $L(t)$ and the width $W(t)$ of such an ellipse. Its characteristic size is defined as $S(t)=L(t)+W(t)$ while its area is $A(t)=\pi W(t)L(t)$ (Methods). From a Lagrangian perspective all rigid-like movements associated with the patch, such as translation and rotation, are ignored because they are implicitly incorporated in the displacement of the frame of reference \cite{landau1976mechanics,ranz1979applications,batchelor2000introduction}. Previous studies have shown that a water patch in the open ocean is primarily affected by horizontal strain and diffusion \cite{ledwell1998mixing,abraham2000importance,sundermeyer1998lateral}. The strain rate $\gamma(t)$ is responsible for the elongation of the patch, augmenting its aspect ratio. Diffusion $\kappa(t)$ describes the small-scale processes that cause the entrainment of surrounding waters within the patch. With the addition of water into the patch, its area increases (Fig. \ref{fig:straindiff}). Solving a Lagrangian advection-diffusion equation \cite{townsend1951diffusion,ranz1979applications,garrett1983initial,ledwell1998mixing,sundermeyer1998lateral,martin2003phytoplankton,neufeld2009chemical} we obtain analytical expressions for the evolution of $W(t)$ and $L(t)$ and from them we derive the patch area increase rate as a function of $\gamma(t)$ and $\kappa(t)$ (Methods):
\begin{equation}\label{eq:area_deriv_patch}
\frac{dA(t)}{dt} = \pi \kappa(t) \bigg[ \frac{W^2(t) + L^2(t)}{W(t)L(t)} \bigg] .
\end{equation}
From Eq. (\ref{eq:area_deriv_patch}) we see that diffusion has a stronger proportional effect on the area increase when the perimeter-to-area ratio of the patch is larger, and the strain rate controls how fast this ratio increases. Indeed, the quantity $W^2(t) + L^2(t)$ is proportional to the square of the perimeter of the ellipse encompassing the patch. Therefore, even though strain does not directly contribute to mixing the patch with the surrounding, it makes diffusion more efficient by increasing the patch perimeter. Thus, both strain and diffusion are responsible, in a non-trivial way, for the increase of the patch area that in turn controls the dilution rate and the entrainment of outer waters.
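For readers wishing to reproduce the patch kinematics numerically, the sketch below integrates the semi-axis equations $\dot{L}=\gamma L + \kappa/L$ and $\dot{W}=-\gamma W + \kappa/W$, a standard strain-diffusion balance (up to $O(1)$ geometric constants) from which Eq. (\ref{eq:area_deriv_patch}) follows via $A=\pi W L$. Strain and diffusion are held constant here for simplicity; their size dependence is introduced next. Function names are ours, and the numerical values are those used later for the SOIREE simulation.
\begin{verbatim}
import numpy as np

def evolve_patch(W0, L0, gamma, kappa, dt=1e-3, T=30.0):
    """Forward-Euler integration of dL/dt = gamma*L + kappa/L and
    dW/dt = -gamma*W + kappa/W; yields the area growth rate of the
    main text through A = pi*W*L."""
    n = int(T / dt)
    W, L = np.empty(n + 1), np.empty(n + 1)
    W[0], L[0] = W0, L0
    for i in range(n):
        L[i + 1] = L[i] + dt * (gamma * L[i] + kappa / L[i])
        W[i + 1] = W[i] + dt * (-gamma * W[i] + kappa / W[i])
    return W, L, np.pi * W * L

# Initially circular 10 km patch (semi-axes of 5 km);
# gamma in 1/day, kappa in km^2/day.
W, L, A = evolve_patch(5.0, 5.0, gamma=0.12, kappa=0.1)
print(f"day 30: W={W[-1]:.2f} km, L={L[-1]:.2f} km, A/A0={A[-1]/A[0]:.1f}")
\end{verbatim}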
\begin{figure}
\includegraphics[width=10cm]{fig_patch_variables.pdf}
\caption{The evolution of the shape of a 2-dimensional Lagrangian patch (salmon color) is sketched for three consecutive times $t_0<t_1<t_2$. The patch is modeled as an ellipse (black line) with a co-moving center of mass $\mathbf{X}(t)$ and changing characteristic length $L(t)$ and width $W(t)$. The characteristic size of the patch is $S(t)=L(t)+W(t)$ while its area is $A(t)=\pi W(t)L(t)$ (Methods). }\label{fig:patchvar}
\end{figure}
Experimental measurements have shown that, due to the complexity of ocean turbulence, strain and diffusion values change depending on the spatial scale considered \cite{okubo1971oceanic,garrett1983initial,ledwell1998mixing,sundermeyer1998lateral,falkovich2001particles,corrado2017general,sundermeyer2020dispersion}. Hence, while a Lagrangian patch is growing in size, it can be subject to a range of different values of strain and diffusion. To describe this effect while the patch is expanding, we make the strain and diffusion rates depend on patch size (Methods):
\begin{align}
\gamma(t) =& f_{\gamma}\big[S(t)\big] , \label{eq:sizedepmaingamma} \\
\kappa(t) =& f_{\kappa}\big[S(t)\big] . \label{eq:sizedepmainkappa}
\end{align}
This allows us to describe the patch evolution across any dynamical regime in the ocean, from sub-meso to gyre scales, matching the corresponding strain and diffusion ranges.
Indeed, the explicit functional forms of $f_{\gamma}$ and $f_{\kappa}$ can change qualitatively across different spatial scales \cite{garrett1983initial,corrado2017general} (e.g. from constant values to power-laws). This approach permits us to reproduce the evolution of patch size and shape observed in the real ocean, such as the decrease and subsequent increase of the patch width $W$, which cannot be modeled assuming fixed strain and diffusion values.
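The exact functional forms adopted in our simulations are specified in the Methods and are not reproduced here; purely as an illustration of Eqs. (\ref{eq:sizedepmaingamma})-(\ref{eq:sizedepmainkappa}), the sketch below encodes a Richardson-Okubo-like growth of diffusivity with patch size together with a strain rate that weakens at larger scales. The reference values and exponents are assumptions for illustration only.
\begin{verbatim}
def f_kappa(S, kappa0=0.1, S0=10.0, expo=4.0 / 3.0):
    """Illustrative scale-dependent diffusivity [km^2/day]
    for a patch of size S [km]; exponent is an assumption."""
    return kappa0 * (S / S0) ** expo

def f_gamma(S, gamma0=0.12, S0=10.0, expo=0.5):
    """Illustrative strain rate [1/day] weakening with scale."""
    return gamma0 * (S0 / S) ** expo
\end{verbatim}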
\begin{figure}
\includegraphics[width=10cm]{fig_strain_diffusion.pdf}
\caption{Strain and diffusion effects (blue lines) on an initially circular Lagrangian patch (salmon color). From top to bottom, the evolution in time of the patch under these processes is sketched. Diffusion (left side) isotropically dilutes the patch through small scale mixing occurring at its boundary. Strain (right side) stretches the patch conserving its area by increasing its length and compressing its width. Considered together, strain and diffusion generate a wide range of possible combinations of patch shapes and sizes.} \label{fig:straindiff}
\end{figure}
\textbf{\emph{Tracer dynamics}.} To characterize the plankton ecosystem associated with a Lagrangian patch, we need to describe its drifting components (i.e. resources and organisms) - generally referred to as tracers - in terms of their spatial distributions. Due to diffusive processes at the patch boundaries and its consequent increase in area and dilution, tracers inside the patch will interact and mingle with tracers at the patch surrounding. To model such dynamics explicitly, the inside and outside distributions of tracers have to be described separately. Formally, for a generic tracer $i$, its distribution fields (in terms of, for instance, abundance or mass) within the patch and at its surrounding are denoted as $p_{i}(\mathbf{x},t)$ and $s_{i}(\mathbf{x},t)$, respectively. Since the tracer fields are not uniform across the ocean, we use the Reynolds decomposition to account for spatial heterogeneity \cite{law2003population,wallhead2008spatially,levy2013influence,mandal2014observations,priyadarshi2019micro}:
\begin{equation}\label{eq:reynoldsdeco}
p_{i}(\mathbf{x},t) = \langle p_i(\mathbf{x},t) \rangle + p'_{i}(\mathbf{x},t) \qquad ; \qquad s_{i}(\mathbf{x},t) = \langle s_i(\mathbf{x},t) \rangle + s'_{i}(\mathbf{x},t)
\end{equation}
where $\langle p_i(\mathbf{x},t) \rangle$ and $\langle s_i(\mathbf{x},t) \rangle$ are spatial means while $p'_{i}(\mathbf{x},t)$ and $s'_{i}(\mathbf{x},t)$ are fluctuations. Thus, second moments - that is, spatial variances and covariances - are denoted as $\langle p'_{i}(\mathbf{x},t)^2 \rangle$, $\langle s'_{i}(\mathbf{x},t)^2 \rangle$ and $\langle p'_{i}(\mathbf{x},t) p'_{j}(\mathbf{x},t) \rangle$, $\langle s'_{i}(\mathbf{x},t) s'_{j}(\mathbf{x},t) \rangle$ for any tracers $i$ and $j$ (Fig. \ref{fig:surround}); a minimal numerical illustration of these moments is given after the list below. We identify the three main determinants of evolution of tracer fields inside the patch as:
\begin{itemize}
\item Entrainment of surrounding waters
\item Internal mixing
\item Biochemical interactions
\end{itemize}
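Before detailing these three processes, we note that the moments entering Eq. (\ref{eq:reynoldsdeco}) are ordinary spatial statistics; given tracer fields sampled on a grid inside the patch, they could be estimated as in the following minimal sketch (helper name and array layout are ours).
\begin{verbatim}
import numpy as np

def patch_moments(field_i, field_j):
    """Spatial means, variances and covariance of two tracer
    fields (2-d arrays sampled inside the patch)."""
    mi, mj = field_i.mean(), field_j.mean()
    fi, fj = field_i - mi, field_j - mj  # fluctuations p'_i, p'_j
    return mi, mj, (fi * fi).mean(), (fj * fj).mean(), (fi * fj).mean()
\end{verbatim}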
\begin{figure}
\includegraphics[width=10cm]{fig_patchy_patch2.png}
\caption{We assess tracer distributions in the patch (salmon color) and at its surrounding waters (blue color). Under the assumption of well-mixed concentrations (top panel), for a given tracer $i$, we need to specify only its mean concentration in the patch $\langle p_i(\mathbf{x},t) \rangle$ and at the surrounding $\langle s_i(\mathbf{x},t) \rangle$. Instead, if we account for spatial heterogeneity (bottom panel), second moments of tracer distributions have to be considered, both in the patch $\langle p'_{i}(\mathbf{x},t) p'_{j}(\mathbf{x},t) \rangle$ and at the surrounding $\langle s'_{i}(\mathbf{x},t) s'_{j}(\mathbf{x},t) \rangle$, for any $i$ and $j$. } \label{fig:surround}
\end{figure}
Entrainment is intimately related to the patch dilution, which, in turn, can be modeled in terms of the patch area increase. We derive general analytical expressions to quantify the effect of such processes on the derivative of first (spatial means) and second (spatial variances and covariances) moments of the tracer distributions (Methods and Supplementary Fig. \ref{fig:immigration}):
\begin{align}
\frac{d \langle p_{i}(\mathbf{x},t) \rangle}{dt} &= \frac{d A(t)}{dt} \, \frac{\langle s_{i}(\mathbf{x},t) \rangle - \langle p_{i}(\mathbf{x},t) \rangle}{A(t)} , \label{eq:entrain1} \\
\frac{d \langle p'_{i}(\mathbf{x},t) p'_{j}(\mathbf{x},t) \rangle}{dt} &= \frac{d A(t)}{dt} \, \frac{ \big[ \langle s'_{i}(\mathbf{x},t) s'_{j}(\mathbf{x},t) \rangle - \langle p'_{i}(\mathbf{x},t) p'_{j}(\mathbf{x},t) \rangle \big] + \big[ \langle s_{i}(\mathbf{x},t) \rangle - \langle p_{i}(\mathbf{x},t) \rangle \big] \big[ \langle s_{j}(\mathbf{x},t) \rangle - \langle p_{j}(\mathbf{x},t) \rangle \big] }{A(t)} . \label{eq:entrain2}
\end{align}
Note that, while the evolution of first moments depends only on the difference between means, the derivatives of second moments are functions of means, variances and covariances.
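As a computational transcription of Eqs. (\ref{eq:entrain1})-(\ref{eq:entrain2}), the sketch below returns the entrainment tendencies for the vector of tracer means and for the matrix of second moments (function and variable names are ours).
\begin{verbatim}
import numpy as np

def entrainment_tendencies(A, dAdt, p_mean, s_mean, p_cov, s_cov):
    """d<p_i>/dt and d<p'_i p'_j>/dt due to entrainment of
    surrounding water; p_* inside the patch, s_* outside."""
    rate = dAdt / A
    delta = s_mean - p_mean
    d_mean = rate * delta
    d_cov = rate * ((s_cov - p_cov) + np.outer(delta, delta))
    return d_mean, d_cov
\end{verbatim}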
Internal mixing, on the other hand, is driven by the diffusion of tracers within the patch. This process reduces the spatial variances and covariances of the tracer fields, but leaves the spatial means unchanged. We model it assuming an exponential decay for variances and covariances \cite{artale1997dispersion,haynes2005controls,thiffeault2008scalar,neufeld2009chemical}:
\begin{equation}\label{eq:intmixing}
\langle p'_{i}(\mathbf{x},t) p'_{j}(\mathbf{x},t) \rangle \sim e^{-\frac{\kappa(t)}{S(t)^2}t},
\end{equation}
where $\frac{\kappa(t)}{S(t)^2}$ is the effective decay rate associated with the diffusion $\kappa(t)$ at the spatial scale $S(t)$ (Methods).
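The corresponding internal-mixing tendency implied by Eq. (\ref{eq:intmixing}) is a linear damping of all second moments at the rate $\kappa(t)/S(t)^2$, leaving the means untouched; as a one-function sketch (names are ours):
\begin{verbatim}
def mixing_tendency(p_cov, kappa, S):
    """d<p'_i p'_j>/dt = -(kappa / S^2) * <p'_i p'_j>, i.e. the
    exponential homogenization of all second moments."""
    return -(kappa / S ** 2) * p_cov
\end{verbatim}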
Biochemical interactions among tracers within the patch can be addressed using first and second moments of the associated distributions $p_i(\mathbf{x},t)$'s. Due to the modularity of the approach proposed here, different models involving any number of tracers can be implemented, from resource-consumer to niche or neutral models \cite{dutkiewicz2020dimensions,follows2007emergent,ser2018ubiquitous,villa2020ocean, azaele2016statistical, gravel2006reconciling, grilli2020macroecological,ward2021selective}. External factors that do not depend explicitly on patch evolution (such as temperature or light) can also be directly included as modulators of biochemical dynamics.
\textbf{\emph{General master equation}.} Entrainment, mixing, and interactions are thus the three fundamental actors that shape the spatial distribution of tracers within a Lagrangian plankton ecosystem. Synthesizing the above developments, we can write a master equation for the time evolution of a generic tracer distribution $p_i(\mathbf{x},t)$ encompassing such physical and biochemical processes:
\begin{equation}\label{eq:master_eq}
\frac{d p_{i}(\mathbf{x},t)}{dt} = \mathcal{E} \big[ p_{i}(\mathbf{x},t) ; s_{i}(\mathbf{x},t) \big] +
\mathcal{M} \big[ p_{i}(\mathbf{x},t) \big] +
\sum_{j} \mathcal{I}_{ij} \big[ p_{i}(\mathbf{x},t) ; p_{j}(\mathbf{x},t) \big] ,
\end{equation}
where $\mathcal{E}$ includes the contribution of entrainment from Eqs. (\ref{eq:entrain1})-(\ref{eq:entrain2}), $\mathcal{M}$ the effect of internal mixing from Eq. (\ref{eq:intmixing}) and $\mathcal{I}_{ij}$ the interactions between tracers $i$ and $j$, which can have different functional forms depending on the dynamics considered. By virtue of the generality of Eq. (\ref{eq:master_eq}), our framework can be adapted to any spatio-temporal scale while focusing on any physical and biochemical dynamics.
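At the level of first and second moments, the master equation therefore assembles into a single right-hand side. The sketch below composes the two physical tendencies given above with a user-supplied biochemical term; the function names are ours and the interaction closure is left abstract.
\begin{verbatim}
import numpy as np

def moment_rhs(p_mean, p_cov, s_mean, s_cov, A, dAdt, kappa, S, interact):
    """Moment-space right-hand side of the master equation.
    `interact(p_mean, p_cov)` returns the biochemical tendencies
    (d_mean, d_cov); entrainment and mixing supply the physics."""
    rate = dAdt / A
    delta = s_mean - p_mean
    bio_mean, bio_cov = interact(p_mean, p_cov)
    d_mean = rate * delta + bio_mean
    d_cov = (rate * ((s_cov - p_cov) + np.outer(delta, delta))
             - (kappa / S ** 2) * p_cov
             + bio_cov)
    return d_mean, d_cov
\end{verbatim}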
\begin{table}[ht]%
\begingroup
\setlength{\tabcolsep}{10pt}
\renewcommand{\arraystretch}{1.5}
\centering
\begin{tabular}{c|c|c}
\bf{Variable} & \bf{Name} & \bf{Units} \\
\hline
$L,W,S$ & Length, width and size of the patch & $km$ \\
\hline
$\gamma$ & Strain rate & $1/day$ \\
\hline
$\kappa$ & Diffusion coefficient & $km^2 / day$ \\
\hline
$p_i$ & Patch concentration of $i$-tracer & $\mu mol / m^{3}$ \\
\hline
$s_i$ & Surrounding concentration of $i$-tracer & $\mu mol / m^{3}$ \\
\hline
$\nu$ & Maximum growth rate & $day^{-1}$ \\
\hline
$\alpha$ & Remineralization fraction & $-$ \\
\hline
$m$ & Mortality rate & $day^{-1}$ \\
\hline
$k$ & Half-saturation constant & $\mu mol / m^{3}$ \\
\hline
$\tau$ & Integration time & $day$ \\
\hline
LBA & Lagrangian biomass anomaly & $Mg C$ \\
\hline
\end{tabular}%
\endgroup
\caption{Symbols, names and units of the model variables.} \label{tab:var}
\end{table}
\subsection{Modeling a fertilized patch: setup and ensemble simulations}
\textbf{\emph{Realistic model setting}.} We simulate the dynamics of a Lagrangian plankton ecosystem by integrating Eq. (\ref{eq:master_eq}). As a prototypical approach, we model for 30 days an ecosystem initially residing in a 10 km wide and 10 meters thick circular patch (model sensitivity shown in Supplementary Fig. \ref{fig:physics_sens}). This setting encompasses relevant spatio-temporal scales typical of natural as well as artificially fertilized blooms \cite{oschlies1998eddy,de2005synthesis,boyd2007mesoscale,lehahn2017dispersion,kuhn2019temporal}. Functional forms $f_{\gamma}$ and $f_{\kappa}$ for the scaling laws of strain and diffusion are chosen to match their experimentally measured values at the specific spatial scales spanned by the patch size evolution, which are of the order of 10-100 km (Methods).
To address the response of a Lagrangian ecosystem to localized conditions favoring population growth, we focus on the biochemical interactions between two ideal tracers: an inorganic resource $p_r \equiv p_r(\mathbf{x},t)$ and a planktonic consumer $p_b \equiv p_b(\mathbf{x},t)$ nourished by the resource. We assume Monod kinetics and a linear mortality rate for the consumer \cite{follows2007emergent,lehahn2017dispersion,dutkiewicz2020dimensions}. Hence, the general term $\mathcal{I}_{ij} \big[ p_{i}(\mathbf{x},t) ; p_{j}(\mathbf{x},t) \big]$ of Eq. (\ref{eq:master_eq}) can be made explicit:
\begin{align}
\frac{d p_r}{dt} &= - \nu \frac{p_r}{p_r + k}p_b + \alpha m p_b, \label{eq:michmenten1}\\
\frac{d p_b}{dt} &= + \nu \frac{p_r}{p_r + k}p_b - m p_b , \label{eq:michmenten2}
\end{align}
where $\nu$ is the maximum growth rate, $k$ is the half-saturation constant, $m$ the linear mortality rate and $\alpha$ is the fraction of dead biomass that is recycled into the resource pool. Accordingly, the biomass ``export'' rate out of the patch corresponds to $m(1-\alpha)p_b$. Following Eq. (\ref{eq:reynoldsdeco}), we can use the Reynolds decomposition \cite{mandal2014observations,priyadarshi2019micro} to evaluate the contribution of first and second moments to Eqs. (\ref{eq:michmenten1}) and (\ref{eq:michmenten2}) (Methods).
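The decomposed moment equations referenced above are given in the Methods and not reproduced here. To illustrate how second moments enter the mean dynamics, the sketch below applies a standard second-order Taylor (moment) closure to the mean uptake term $\langle \nu \frac{p_r}{p_r + k} p_b \rangle$; this is one common closure choice and may differ in detail from the one adopted in the Methods. Note that the first-derivative term makes a positive resource-consumer covariance enhance the mean growth, while resource variance reduces it.
\begin{verbatim}
def mean_uptake(R, B, var_r, cov_rb, nu, k):
    """<nu * p_r/(p_r+k) * p_b> under a second-order Taylor
    closure about the mean resource R and mean biomass B."""
    g = R / (R + k)               # Monod factor at the mean
    g1 = k / (R + k) ** 2         # g'(R) > 0: covariance helps
    g2 = -2.0 * k / (R + k) ** 3  # g''(R) < 0: variance hurts
    return nu * (g * B + g1 * cov_rb + 0.5 * g2 * var_r * B)
\end{verbatim}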
We stimulate Lagrangian blooms of consumer $p_b$ by fertilizing the patch with a pulse of resource that mimics, for instance, processes like nutrient upheaval, dust deposition, fertilization experiments or, more generally, any perturbation to the average ocean state that causes a local inhomogeneity of resource concentrations. To this aim, we initially (at $t=0$) fix the second moments to zero and the first moments at steady state, internally to the patch and at its surrounding. This corresponds to an initial state where everything is well-mixed and spatial means of tracer distributions are stationary. Then, we initialize each numerical experiment by increasing the resource mean in the patch $\langle p_r \rangle$ and tracking the ecosystem response. To provide a concrete and realistic interpretation of the model outputs, we set the parameters in Eqs. (\ref{eq:michmenten1}) and (\ref{eq:michmenten2}) to recreate an idealized iron-phytoplankton dynamics (Methods), adopting values for the biological parameters derived from literature \cite{boyd2000mesoscale,hannon2001modeling,tsuda2005responses,dutkiewicz2020dimensions,lancelot2000modeling,timmermans2001growth} (model sensitivity shown in Supplementary Fig. \ref{fig:physics_sens}, \ref{fig:bio_sens}, \ref{fig:detrit} and \ref{fig:quadratic}).
\begin{figure}
\includegraphics[width=15cm]{fig_soiree_2.pdf}
\caption{We use our Lagrangian ecosystem model to qualitatively reproduce the bio-physical dynamics of the iron-fertilized bloom during the SOIREE experiment. Initial values of iron and biomass concentrations as well as biological model parameters are set to resemble the ones measured during the campaign (Methods). Solid lines represent the time evolution of model variables: patch width and length (top-left), iron and biomass spatial means (top-right), iron and biomass spatial variances (bottom-left) and iron-biomass covariance (bottom-right). Stars correspond to measured values for the corresponding variables from in-situ sampling or remote sensing. Biomass is expressed in iron currency for visualization convenience.} \label{fig:soiree}
\end{figure}
\textbf{\emph{Simulating an iron fertilization experiment}.} As a first application, we simulate the iron-fertilized bloom during the SOIREE experiment \cite{boyd2000mesoscale} in the Southern Ocean (see Fig. \ref{fig:soiree}). We focus on this campaign since it is, to our knowledge, the one for which we have the most detailed description of the physical evolution of the water patch hosting the bloom. For this specific simulation we use initial values for strain and diffusion of $\gamma=0.12$ day$^{-1}$ and $\kappa=0.1$ km$^2$ day$^{-1}$. In this way the width and length of the modeled patch match satellite observations of the SOIREE bloom taken at 9 and 42 days \cite{abraham2000importance}. In accord with experimental data and dedicated models, the simulated bloom that we recreate peaks after 14 days and reaches values $\sim$15 times higher than the surrounding biomass concentration \cite{boyd2000mesoscale, fujii2005simulated}. The mean biomass curve predicted by the model follows the in-situ measurements taken during the first 15 days of the campaign. We also conducted sensitivity experiments to show that other strain and diffusion combinations do not capture the observations (Supplementary Fig. \ref{fig:soiree_ses}), suggesting a non-trivial relation between the physical and ecological evolution of the SOIREE bloom. Thus, despite the simplicity of the biochemical dynamics considered, our model is able to reproduce the main bio-physical patterns of a plankton bloom and demonstrate the key role of dilution.
The model also provides the time evolution of second moments of tracer distributions, even though there are no observations to be compared with our predictions. Biomass variance peaks about 10 days after the iron variance and it reaches higher values. This is consistent with observations showing that the plankton distributions are more patchy than the nutrient ones \cite{abraham1998generation,martin2002plankton}. The covariance curve unveils how, for high-resource concentrations, biomass and iron are spatially correlated while, when the resource starts to be depleted, the correlation becomes negative. This inversion almost coincides with the mean biomass peak. This suggests the existence of a dynamical relationship between covariance, and more generally of tracer heterogeneity, and biomass growth. However, the relative contribution of physical versus intrinsic biochemical factors in generating spatial heterogeneity still remains implicit.
\textbf{\emph{Ensembles of simulations and Lagrangian biomass anomaly}.} To unveil the interrelation between physical forcings and bloom dynamics we produce several ensembles of simulations (Methods). Within each ensemble, we explore wide ranges of strain and diffusion values maintaining the same initial input of the nutrient. Such combinations of parameters allow us to explore a wide spectrum of patch dilution rates. In order to contrast the responses of well-mixed and heterogeneous patches, we compare ensembles where second moments are switched off with ones where they are fully considered. In well-mixed ensembles variances and covariances are thus always set to zero, while in the heterogeneous ensembles they are free to vary. We also perform independent simulations exploring the model sensitivity and robustness (Supplementary Figs. \ref{fig:physics_sens}, \ref{fig:bio_sens}, \ref{fig:detrit}, \ref{fig:desert} and \ref{fig:quadratic}).
We then introduce a synthetic metric to be able to compare different simulations within ensembles. In particular, we aim at characterizing the overall response of a Lagrangian ecosystem to a resource perturbation with respect to its steady state. We measure such deviation by defining a quantity called Lagrangian biomass anomaly (LBA):
\begin{equation}
\text{LBA} = \frac{1}{\tau}\int_{0}^{\tau} A(t) \Big(\langle p_b \rangle - \langle s_b \rangle \Big)dt . \label{eq:lbadefinition}
\end{equation}
The above expression is the average over the time window $\tau$ of the anomaly of biomass residing in the patch with respect to the surrounding value $\langle s_b \rangle$. Indeed, for any time $t$, the term $A(t)\big(\langle p_b \rangle - \langle s_b \rangle \big)$ is the difference between the absolute biomass in the patch and the biomass of a region of the surrounding of the same area $A(t)$. If $\text{LBA}>0$, the patch biomass has been on average higher than the surrounding and the opposite if $\text{LBA}<0$. Hence, the LBA is based on biomass or standing stock (potentially evaluated with Chlorophyll) and so provides a useful real-world metric which could be based on remote-sensing. In the model the LBA is also a proxy for the biomass export; combining Eqs. (\ref{eq:michmenten1}), (\ref{eq:michmenten2}) and (\ref{eq:lbadefinition}), the temporal mean of the export rate anomaly turns out to be $m(1-\alpha)\text{LBA}$.
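Given a simulated (or observed) trajectory sampled on a common time grid, the LBA of Eq. (\ref{eq:lbadefinition}) reduces to a single quadrature; a minimal sketch (function name is ours):
\begin{verbatim}
import numpy as np

def lagrangian_biomass_anomaly(t, A, pb_mean, sb_mean):
    """Time-averaged biomass anomaly of the patch relative to its
    surrounding: (1/tau) * integral of A(t) * (<p_b> - <s_b>) dt."""
    tau = t[-1] - t[0]
    return np.trapz(A * (pb_mean - sb_mean), t) / tau
\end{verbatim}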
\subsection{Dilution and spatial heterogeneity trade-offs in enhancing Lagrangian ecosystem biomass}
\textbf{\emph{Well-mixed versus heterogeneous ensembles}.} We start by considering the ensemble where patches are forced to be well-mixed, as this is the usual assumption in Lagrangian studies as well as inside grid cells of Eulerian models. We first note that the LBA is always positive, meaning that any fertilized patch produced more biomass than the surrounding (see Fig. \ref{fig:iron_srf}). However, higher dilution - driven by stronger strain and diffusion - leads to a lower LBA with respect to low-dilution regimes. This might be what we intuitively expect: in a well-mixed Lagrangian ecosystem, the modification of patch mean concentrations described by Eq. (\ref{eq:entrain1}) due to the entrainment of resource-poorer water always reduces biomass production with respect to the case of a ``closed patch'' with no exchanges with the surroundings. In other words, any intrusion of surrounding water into a well-mixed patch leads to a smaller increase of biomass than if no external water were entrained.
\begin{figure}
\includegraphics[width=15cm]{fig_iron4-3_surface.pdf}
\caption{Lagrangian biomass anomaly (LBA) measured in tonnes of carbon [Mg C] for an ensemble of well-mixed (panel a) and spatially-heterogeneous (panel b) patches across realistic ranges of strain [$\text{day}^{-1}$] and diffusion [$\text{km}^2 \text{day}^{-1}$]. The values reported for these two parameters refer to the initial time of the simulations; as the patch size increases, strain and diffusion change according to the respective scaling laws. All the other model parameters are kept constant (Methods). Each point of the plotted surface corresponds to the LBA attained by a single simulated patch under the effect of a specific combination of strain and diffusion. In panel a) spatial heterogeneity is neglected by switching off the second moments of the tracer distributions. The maximum LBA values are reached for the minima of both strain and diffusion, i.e. for the smallest dilution rates. Instead, in panel b), spatial heterogeneity is explicitly considered by modeling the second moments of the tracer distributions. The LBA in this case reaches values 45\% higher than in the well-mixed case. Such maxima populate an extended ridge in the LBA surface, highlighting the fact that the associated optimal dilution values can be obtained from various combinations of strain and diffusion. } \label{fig:iron_srf}
\end{figure}
If instead we consider the more realistic case where the patch is spatially heterogeneous, Eq. (\ref{eq:entrain2}) shows that dilution by itself, associated with the intrusion of external water with different tracer concentrations, can generate spatial heterogeneity. In this scenario, our ensemble simulations reveal the existence of a region in the strain-diffusion parameter space in which the LBA is maximized and is up to 45\% larger than in the case of a closed patch (see Fig. \ref{fig:iron_srf}). We conclude that dilution-driven spatial heterogeneity could greatly enhance the biomass of a plankton ecosystem. To further support this, in Fig. \ref{fig:iron3-4dilution} we plot, for the two ensembles, the LBA versus the average dilution factor - that is, the ratio between the temporal mean of the patch area and its initial value. In the well-mixed case, the LBA decreases monotonically with dilution and its values can be up to 6 times smaller than in the heterogeneous ensemble, which instead presents a more complex pattern with a marked LBA peak at intermediate dilutions.
\textbf{\emph{The origin and the role of spatial heterogeneity}.} Figs. \ref{fig:iron_srf} and \ref{fig:iron3-4dilution} show that the LBA of the heterogeneous ensemble is very similar to the well-mixed one for small dilution values. However, for the heterogeneous case, after touching a minimum valley, the LBA surface rises steadily until reaching a maximum ridge. To investigate such behavior, we calculate the contribution of spatial heterogeneity to biomass production by subtracting all first-moment terms from Eq. (\ref{eq:mean2decomposed}) and integrating in time. We find that this quantity is below (or close to) zero in the decreasing part of the LBA surface and becomes positive when the LBA begins to rise after reaching its minimum values (Supplementary Fig. \ref{fig:minimumvalley}). We conclude that there is a dilution threshold that has to be passed to generate a level of spatial heterogeneity sufficient to abruptly enhance growth with respect to the well-mixed scenario. Above such threshold, the detrimental effect of the smearing of resource concentrations is overcompensated by spatial heterogeneity, allowing the LBA to rise for increasing dilution values.
The key point to understand the enhancement of LBA driven by spatial heterogeneity is the positive contribution of spatial variances and covariance to the consumer growth rate \cite{mandal2014observations,priyadarshi2019micro}. In general, Eqs. (\ref{eq:mean1decomposed}) - (\ref{eq:covdecomposed}) highlight how the non-linear contribution of second moments can affect the derivatives of the means, which can thus deviate importantly from the ones estimated using only first moments. In particular, as already shown for the SOIREE simulation (Fig. \ref{fig:soiree}), the role of a positive spatial covariance seems to be crucial in enhancing biomass. We first note that, in a fertilized and growing patch, covariance is mostly positive due to the fact that the water inside the patch is rich in both resource and consumer while the recently entrained water presents low concentrations of the two tracers. This configuration results in a positive spatial correlation between resource and biomass - i.e. positive covariance. Then, considering a simplified analytical model, it can be shown that the biomass growth rate, when calculated including spatial heterogeneity with positive covariance, is higher than the growth rate calculated only with the mean biomass and resource (Methods). This finally provides a heuristic explanation of why a positive covariance generated by dilution can increase the growth of the consumer.
Another aspect to consider when interpreting the LBA patterns is that dilution also increases the total patch volume. Indeed, large patches that underwent strong dilution, even if they present a low biomass concentration, can attain larger LBA values relative to small patches with higher mean biomass concentrations (see Eq. (\ref{eq:lbadefinition})). This underlines the importance of a Lagrangian perspective to avoid misleading interpretations based only on Eulerian concentration fields, i.e. focusing only on mean values without considering the volume associated with them.
\textbf{\emph{Sensitivity analysis}.} As a confirmation of the robustness of our results, we find that the two distinct qualitative patterns of Fig. \ref{fig:iron_srf}a versus Fig. \ref{fig:iron_srf}b, i.e. a monotonic decrease versus the existence of a maximum ridge, are conserved in additional series of ensembles when varying the patch size, integration time, the parameters $\nu$ and $k$ and when considering a perfect recycling of resource by setting $\alpha=1$ (Supplementary Figs. \ref{fig:physics_sens}, \ref{fig:bio_sens} and \ref{fig:detrit}). For these simulations, when necessary, initial tracer concentrations are also changed consistently to ensure a steady-state surrounding. In the case where $\nu$ and $k$ are altered, the optimal LBA occurs at different strain/diffusion values (Supplementary Fig. \ref{fig:bio_sens}): this dependence is consistent with the hypothesis that different organisms can be better adapted to different degrees of turbulence and therefore to different values of strain and diffusion \cite{margalef1978life,martin2000filament,lehahn2017dispersion, freilich2022diversity}. Regarding the recycled fraction of the nutrients, we find that it can play a relevant role in the LBA budget, especially in the late period of the bloom when the initial resource pulse is already depleted. Moreover, in an ensemble where we assume the extreme case of a ``desert'' surrounding - that is, setting all surrounding tracer concentrations to zero - we observe the same difference in patterns between the heterogeneous and well-mixed ensembles (Supplementary Fig. \ref{fig:desert}). Finally, we also produce an ensemble using a quadratic mortality rate $m'$ in Eq. (\ref{eq:michmenten2}), substituting $m p_b$ with $m' p^2_b$. This allows us to implicitly account for some level of grazing on the planktonic consumer \cite{dutkiewicz2020dimensions,follows2007emergent}. Again, the qualitative difference between well-mixed and heterogeneous ensembles remains (Supplementary Fig. \ref{fig:quadratic}). We also note that, for this configuration, the well-mixed ensemble presents a tiny LBA peak at low but non-null dilution rates, breaking the monotonic decrease observed in all the other model setups. This is consistent with a positive effect of dilution on phytoplankton growth observed in models that explicitly consider grazing dynamics, even in well-mixed conditions \cite{lehahn2017dispersion}.
\section{Discussion and conclusions}
Previous Eulerian-based research has demonstrated that spatial heterogeneity can increase productivity relative to a well-mixed environment \cite{wallhead2008spatially,priyadarshi2019micro,law2003population,mandal2014observations,levy2013influence}. However, this has only been studied from the perspective of biological interactions and not of the drivers that create and modulate patchiness. On the other hand, Lagrangian approaches have been shown to be the most effective way to model and observe the mechanisms driving local bio-physical dynamics in the ocean since they focus on the real `landscape' where a drifting ecosystem is evolving \cite{abraham2000importance,abraham1998generation,martin2000filament,paparella2020stirring,ledwell1998mixing, iudicone2008water}. Our work establishes a novel theoretical connection between the study of ecological spatial heterogeneity and the Lagrangian perspective of fluid flows and provides a theory to describe plankton ecosystems in the ocean.
The passage of weather systems, dust deposition events, and internal (sub)meso-scale physical processes continuously stimulate changes in the resource environment throughout the oceans. Our model reveals that such localized, transient enhancement of resources can lead to very different subsequent signatures in biomass depending upon the local strain and diffusion and surrounding tracer concentrations. Thus the measurable response (e.g. Chlorophyll concentration) of two resource injections of similar magnitude can be very different depending on the dilution rate. This suggests that relationships between remotely sensed Chlorophyll and produced biomass may be more complex than first intuition suggests. Nevertheless, it may be possible to account for aspects of this influence by interpreting the nature of the bio-physical environment.
Dilution has already been proposed in the past as a positive factor for plankton growth due to its effects of supplying nutrients or removing grazers \cite{hannon2001modeling,fujii2005simulated,lehahn2017dispersion,boyd2007mesoscale}. Consistently, our model is able to reproduce such dynamics, in particular emulating the decrease of grazer pressure using a quadratic mortality (see Supplementary Fig. \ref{fig:quadratic}). However, here we show that dilution can enhance biomass growth through only the physical mechanism of creating heterogeneity, without invoking other biologically driven mechanisms. We also foresee that, due to an increase of trophic efficiency caused by spatial heterogeneity \cite{priyadarshi2019micro,mandal2014observations}, the Lagrangian biomass anomaly increment can be transferred to higher trophic levels (e.g. grazers). A more diluted and thus heterogeneous ecosystem would also be expected to have a reinforced stability that would ultimately boost the level of biodiversity that it can sustain \cite{law2003population,priyadarshi2019micro,mandal2014observations,ward2021selective}. From a community ecology perspective, entrainment can be quantitatively related to the rate at which organisms from outside the community migrate towards it \cite{ser2018ubiquitous}. This brings a key input that could not previously be addressed in the oceanic environment - that is, the dispersal rate - to community assembly theories, allowing predictions of macro-ecological features such as diversity, Species-Abundance Distributions (SADs), Species-Area Relationships (SARs) and Taylor’s law \cite{ser2018ubiquitous,villa2020ocean,azaele2016statistical,grilli2020macroecological,ward2021selective}.
Our theoretical approach provides a bottom-up general framework to assess plankton ecology in the ocean from first principles. Indeed, a Lagrangian ecosystem can be regarded as the fundamental building block of more complex assemblages. Here, as a proof of concept, we showed that our model can reproduce the features of the artificially fertilized bloom SOIREE \cite{boyd2000mesoscale}, highlighting the key role of dilution and heterogeneity in modulating biomass growth. However, our model can be applied to any Lagrangian ecosystem such as, for instance, the one illustrated in Fig. \ref{fig:bloom}. Vertical dynamics can be included to describe exchanges across different depths \cite{freilich2022diversity}, and the complexity of the biochemical interactions can be increased by adding more tracers and new trophic layers \cite{dutkiewicz2020dimensions,follows2007emergent}. Moreover, instead of assuming `mean-field' surrounding distributions, implementing multi-patch simulations would allow us to model how Lagrangian ecosystems interact with one another through the exchange of tracers while mixing and diluting. Though here we focused on a particular spatio-temporal scale, our approach can be re-scaled across wide ranges of physical and biochemical scales. This would permit us to explore how much a plankton ecosystem conserves the memory of its Lagrangian past, unveiling its `lifetime', i.e. for how long it can be considered significantly different from the surrounding. More generally, this could ultimately reveal the effective spatio-temporal scale of an ecological perturbation across the seascape \cite{kuhn2019temporal}.
In summary, we present a formalism that addresses the role of dilution and spatial heterogeneity (i.e. patchiness) in the integrated response of plankton biomass to a local resource pulse. Nutrient injections are ubiquitous in the oceans and the interpretation of their biomass signatures contributes to our integrated evaluations of ocean productivity. Perhaps unintuitively, we find that lateral dilution of such a feature can enhance the integrated biomass anomaly by up to a factor of two due to the local generation of patchiness. We also find that neglecting patchiness leads to a several-fold underestimate of the integrated biomass response to a resource injection. Hence we believe that accounting for dilution and unresolved patchiness is an important goal for biogeochemical sampling strategies and future modeling approaches.
\begin{figure}
\centering
\includegraphics[width=12cm]{fig_iron3-4_dilution.pdf}
\caption{Lagrangian biomass anomaly (LBA) measured in tonnes of carbon [Mg C] versus average dilution factor for ensembles of heterogeneous (red) and well-mixed (blue) patches. Each dot corresponds to a single simulation. For the well-mixed ensemble, the LBA decreases monotonically with dilution. In the heterogeneous ensemble the LBA presents a sharp, shallow minimum at low dilutions before increasing steadily to its maximum at intermediate dilutions. In the higher range of dilutions, the LBA of the heterogeneous ensemble is up to 600\% larger than in the well-mixed one.} \label{fig:iron3-4dilution}
\end{figure}
\section{Methods}
\subsection{Geometric description of a Lagrangian patch}
We describe a Lagrangian patch as a two-dimensional evolving ellipse encompassing the majority of its surface \cite{ledwell1998mixing,sundermeyer1998lateral}. To this aim we model the concentration of the water mass identified with the patch as a two-dimensional Gaussian distribution \cite{townsend1951diffusion,garrett1983initial,martin2003phytoplankton}:
\begin{equation}
\theta(\mathbf{x},t) = e^{-\frac{x^2}{2W(t)^2} -\frac{y^2}{2L(t)^2}},
\end{equation}
where isolines of such a distribution describe elliptic areas. Note that $\theta(\mathbf{x},t)$ is the concentration of the physical water mass associated with the patch and should not be confused with the various distributions of tracers contained in it. $L(t)$ and $W(t)$ are the standard deviations of the distribution $\theta(\mathbf{x},t)$ and, following the definition of the Gaussian distribution, the ellipse with axes $2W(t)$ and $2L(t)$ contains 68.27\% of the patch mass. We identify $2W(t)$ as the patch width (the minor axis of the ellipse) and $2L(t)$ as the patch length (the major axis of the ellipse), while the patch center of mass is denoted as $\mathbf{X}(t)$ (see Fig. \ref{fig:patchvar}). The characteristic patch size is the average of length and width: $S(t) \equiv W(t) + L(t)$. The area of the ocean surface corresponding to 68.27\% of the patch is:
\begin{equation}\label{eq:patch_area_def}
A(t) = \pi W(t)L(t)
\end{equation}
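For concreteness, these geometric definitions translate directly into code. The following Python sketch (function and variable names are our own illustrations, not from any released implementation) computes the patch size and the 68.27\% area:
\begin{verbatim}
import numpy as np

def patch_size(W, L):
    """Characteristic size S = W + L (average of the axes 2W and 2L)."""
    return W + L

def patch_area(W, L):
    """Area of the ellipse holding 68.27% of the patch mass: A = pi*W*L."""
    return np.pi * W * L

# A circular patch of 10 km diameter has W = L = 5 km
W0 = L0 = 5.0                    # km
print(patch_size(W0, L0))        # 10.0 km
print(patch_area(W0, L0))        # ~78.5 km^2
\end{verbatim}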
\subsection{Lagrangian advection-diffusion equation and patch physical evolution}
We take a Lagrangian perspective focusing on the trajectory and the modification of the water patch. To this aim we choose a reference frame that translates and rotates with the patch. In this way all rigid-body movements - those that do not change the relative positions of the fluid elements in the patch - are ignored \cite{ranz1979applications}. Since we consider incompressible flows, we set the divergent component to zero and the velocity field can be locally associated with an elliptically symmetrical stagnation flow. The associated stirring effect on the patch, at the spatial scale $S(t)$, can be described by a strain rate coefficient $\gamma(\mathbf{X}(t), S(t), t) \equiv \gamma(t)$ (see Fig. \ref{fig:straindiff}). Advection, rotation and stirring are not responsible for the dilution of the patch into the surrounding since they are not directly related to mixing. Indeed, under their action the area associated with the patch remains constant in time. On top of such deterministic stirring dynamics we superimpose the effect of diffusion related to unresolved scales of the velocity field smaller than the typical patch size $S(t)$. We denote the size-dependent diffusion as $\kappa(\mathbf{X}(t), S(t), t) \equiv \kappa(t)$ (see Fig. \ref{fig:straindiff}).
We derive then the advection-diffusion equation for the distribution $\theta(\mathbf{x},t)$ \cite{townsend1951diffusion,garrett1983initial,sundermeyer1998lateral,martin2003phytoplankton}:
\begin{equation}\label{eq:adv_diff_1}
\frac{\partial \theta(\mathbf{x},t)}{\partial t} + \mathbf{v}(\mathbf{x},t) \cdot \nabla \theta(\mathbf{x},t) = \kappa(t) \nabla^2 \theta(\mathbf{x},t),
\end{equation}
where $ \mathbf{v}(\mathbf{x},t)$ is the velocity. In a proper Lagrangian frame of reference \cite{ranz1979applications,martin2003phytoplankton} when the contracting direction is aligned with the x-axis and the expanding one with the y-axis, Eq. (\ref{eq:adv_diff_1}) becomes:
\begin{equation}\label{eq:adv_diff_2}
\frac{\partial \theta}{\partial t} - \gamma(t) x \frac{\partial \theta}{\partial x} + \gamma(t) y \frac{\partial \theta}{\partial y} = \kappa(t) \nabla^2 \theta,
\end{equation}
where, for brevity, we omit the temporal and spatial dependence of $\theta(\mathbf{x},t)$.
The zeroth and second order spatial integrals of the tracer distribution $\theta$ are:
\begin{align}
M_0^x(t) = \int \theta(\mathbf{x},t)\Big\rvert_{y=0} dx \quad \quad &; \quad \quad M_2^x(t) = \int x^2 \theta(\mathbf{x},t)\Big\rvert_{y=0} dx , \\
M_0^y(t) = \int \theta(\mathbf{x},t)\Big\rvert_{x=0} dy \quad \quad &; \quad \quad M_2^y(t) = \int y^2 \theta(\mathbf{x},t)\Big\rvert_{x=0} dy .
\end{align}
Hence, the squares of the width and length of the patch can be expressed as:
\begin{align}
W^2(t) =& \frac{M_2^x(t)}{M_0^x(t)} , \\
L^2(t) =& \frac{M_2^y(t)}{M_0^y(t)} .
\end{align}
Differentiating the above expressions in time and integrating Eq. (\ref{eq:adv_diff_2}) in space, we obtain the time evolution of the patch width and length \cite{townsend1951diffusion,ledwell1998mixing,martin2003phytoplankton}:
\begin{align}
\frac{\partial W^2(t)}{\partial t} =& + 2 \kappa(t) - 2 \gamma(t) W^2(t) , \label{eq:deriv_lenwidth1_met} \\
\frac{\partial L^2(t)}{\partial t} =& + 2 \kappa(t) + 2 \gamma(t) L^2(t) . \label{eq:deriv_lenwidth2_met}
\end{align}
Combining the above equations with Eq. (\ref{eq:patch_area_def}) we finally obtain the increase rate for the patch area:
\begin{equation}\label{eq:area_deriv_patch_met}
\frac{dA(t)}{dt} = \pi \kappa(t) \bigg[ \frac{W^2(t) + L^2(t)}{W(t)L(t)} \bigg] .
\end{equation}
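As an illustration, Eqs. (\ref{eq:deriv_lenwidth1_met})-(\ref{eq:deriv_lenwidth2_met}) can be integrated with a simple forward-Euler scheme. The sketch below assumes, purely for illustration, constant $\gamma$ and $\kappa$ (the full model updates them with the patch size at every step):
\begin{verbatim}
import numpy as np

def step_patch(W2, L2, gamma, kappa, dt):
    """One Euler step of the squared width/length evolution equations."""
    W2_new = W2 + dt * (2.0 * kappa - 2.0 * gamma * W2)
    L2_new = L2 + dt * (2.0 * kappa + 2.0 * gamma * L2)
    return W2_new, L2_new

# Illustrative values: 10 km circular patch, SOIREE-like gamma and kappa
W2, L2 = 5.0 ** 2, 5.0 ** 2      # km^2
gamma, kappa = 0.12, 0.1         # 1/day, km^2/day
dt = 0.01                        # days
for _ in range(int(30 / dt)):    # 30-day integration window
    W2, L2 = step_patch(W2, L2, gamma, kappa, dt)
W, L = np.sqrt(W2), np.sqrt(L2)
print(W, L, np.pi * W * L)       # contracted width, stretched length, area
\end{verbatim}
Note that the width relaxes towards the equilibrium value $\sqrt{\kappa/\gamma}$ while the length grows exponentially, so the area in Eq. (\ref{eq:area_deriv_patch_met}) increases without bound.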
\subsection{Entrainment effects on tracer distributions}
Diffusion at the patch boundaries causes entrainment of surrounding waters into the patch. The rate at which this process happens can be estimated from the rate at which the patch area grows, i.e. from $dA(t)/dt$. We derive here the contribution of such processes to the evolution of the first and second moments of tracers inside the patch. In this section the patch is explicitly indicated with $\texttt{pat}$ while its surrounding is indicated with $\texttt{sur}$. Over an interval of time $\Delta t$, the area of the patch at time $t$ increases from $A$ to $A + \Delta A$. By mass conservation, the surface intruding into the patch, bringing waters of different composition, corresponds exactly to $\Delta A$. In the following we derive the equations describing how means, variances and covariances change when we merge the two regions $\texttt{pat}$ and $\Delta\texttt{pat}$, of surface $A$ and $\Delta A$ respectively, with different tracer compositions (see Fig. \ref{fig:surround} and Supplementary Fig. \ref{fig:immigration}).
Let's derive the equation for the mean values first. By definition we can write:
\begin{align}
\langle p_{i}(\mathbf{x},t) \rangle_{\texttt{pat}} &= \frac{1}{A} \int_{\texttt{pat}} p_{i}(\mathbf{x},t) ds , \\
\langle p_{i}(\mathbf{x},t) \rangle_{\Delta\texttt{pat}} &= \frac{1}{\Delta A} \int_{\Delta \texttt{pat}} s_{i}(\mathbf{x},t) ds ,
\end{align}
where in the second equation the integrand is $s_{i}(\mathbf{x},t)$ because $\Delta\texttt{pat}$ is intruding from the surrounding of the patch. Considering the mean value of both areas merged at time $t+\Delta t$ we have:
\begin{equation}
\langle p_{i}(\mathbf{x},t+\Delta t) \rangle_{\texttt{pat} \cup \Delta\texttt{pat}} = \frac{1}{A + \Delta A} \bigg( A \langle p_{i}(\mathbf{x},t) \rangle_{\texttt{pat}} + \Delta A \langle s_{i}(\mathbf{x},t) \rangle_{\texttt{sur}} \bigg). \label{eq:averageintermediatemean}
\end{equation}
Using the definition of derivative $\frac{df(t)}{dt} = \frac{f(t+dt)-f(t)}{dt}$, taking the limits $\Delta A \rightarrow dA \rightarrow 0$ and $\Delta t \rightarrow dt \rightarrow 0$, we obtain:
\begin{equation}
\frac{d \langle p_{i}(\mathbf{x},t) \rangle_{\texttt{pat}}}{dt} = \frac{1}{A(t)} \bigg( \frac{dA(t)}{dt} \bigg) \bigg( \langle s_{i}(\mathbf{x},t) \rangle_{\texttt{sur}}
- \langle p_{i}(\mathbf{x},t) \rangle_{\texttt{pat}} \bigg)
\end{equation}
With a similar approach, and using the definition of spatial variance, we can derive the equation for the derivative of the variances. The variance of both areas merged at time $t+\Delta t$ is:
\begin{align}
\langle p'_{i}(\mathbf{x},t+\Delta t)^2 \rangle_{\texttt{pat} \cup \Delta\texttt{pat}} &= \frac{1}{A+\Delta A} \Bigg( \int_{\texttt{pat}} p_{i}(\mathbf{x},t)^{2} ds + \int_{\Delta \texttt{pat}} s_{i}(\mathbf{x},t)^{2} ds \Bigg) - \Big( \langle p_{i}(\mathbf{x},t+\Delta t) \rangle_{\texttt{pat} \cup \Delta\texttt{pat}} \Big)^2
\end{align}
Developing the integral terms and using Eq. (\ref{eq:averageintermediatemean}):
\begin{align}
\langle p'_{i}(\mathbf{x},t+\Delta t)^2 \rangle_{\texttt{pat} \cup \Delta\texttt{pat}} &= \bigg( \frac{1}{A+\Delta A} \bigg) \bigg( A \Big( \langle p'_{i}(\mathbf{x},t)^2 \rangle_{\texttt{pat}} + \langle p_{i}(\mathbf{x},t) \rangle_{\texttt{pat}}^2 \Big) + \Delta A \Big( \langle s'_{i}(\mathbf{x},t)^2 \rangle_{\texttt{sur}} + \langle s_{i}(\mathbf{x},t) \rangle_{\texttt{sur}}^2 \Big) \bigg) - \nonumber \\
&- \bigg( \frac{1}{A+\Delta A} \bigg)^2 \bigg( A \langle p_{i}(\mathbf{x},t) \rangle_{\texttt{pat}} + \Delta A \langle s_{i}(\mathbf{x},t) \rangle_{\texttt{sur}} \bigg)^2
\end{align}
Developing all terms, taking the limits $\Delta A \rightarrow dA \rightarrow 0$ and $\Delta t \rightarrow dt \rightarrow 0$ and using the definition of derivative we obtain:
\begin{align} \label{eq:immigration_eq_variances}
\frac{d \langle p'_{i}(\mathbf{x},t)^2 \rangle_{\texttt{pat}}}{dt} &=
\frac{1}{A(t)} \bigg( \frac{d A(t)}{dt} \bigg) \bigg( \langle s'_{i}(\mathbf{x},t)^2 \rangle_{\texttt{sur}} - \langle p'_{i}(\mathbf{x},t)^2 \rangle_{\texttt{pat}} +
\Big( \langle s_{i}(\mathbf{x},t) \rangle_{\texttt{sur}} - \langle p_{i}(\mathbf{x},t) \rangle_{\texttt{pat}} \Big)^2 \bigg)
\end{align}
where the first two terms compare the second moments of the surrounding and of the patch, while the quadratic term accounts for the mismatch between the surrounding and patch means.
Finally, generalizing Eq. (\ref{eq:immigration_eq_variances}), we have an expression for the derivative of the covariance between tracer $i$ and $j$:
\begin{align}
\frac{d \langle p'_{i}(\mathbf{x},t)p'_{j}(\mathbf{x},t) \rangle_{\texttt{pat}}}{dt} =
\frac{1}{A(t)} \bigg( \frac{d A(t)}{dt} \bigg) \bigg( &\langle s'_{i}(\mathbf{x},t)s'_{j}(\mathbf{x},t) \rangle_{\texttt{sur}} - \langle p'_{i}(\mathbf{x},t)p'_{j}(\mathbf{x},t) \rangle_{\texttt{pat}} + \nonumber \\
&+ \Big( \langle s_{i}(\mathbf{x},t) \rangle_{\texttt{sur}} - \langle p_{i}(\mathbf{x},t) \rangle_{\texttt{pat}} \Big)\Big( \langle s_{j}(\mathbf{x},t) \rangle_{\texttt{sur}} - \langle p_{j}(\mathbf{x},t) \rangle_{\texttt{pat}} \Big) \bigg) .
\end{align}
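In code, the three entrainment tendencies share the same structure: a relaxation towards the surrounding moments at the relative rate $\frac{1}{A}\frac{dA}{dt}$, plus a production term from the mean mismatch. A minimal Python sketch (names are ours) might read:
\begin{verbatim}
def entrainment_tendencies(A, dA_dt, mp_i, ms_i, mp_j, ms_j,
                           var_p_i, var_s_i, cov_p_ij, cov_s_ij):
    """Entrainment contributions to the patch moments of tracers i and j.

    mp_*/ms_* are patch/surrounding means; var_* / cov_* the second
    moments. Returns d<mean_i>/dt, d<var_i>/dt, d<cov_ij>/dt.
    """
    rate = dA_dt / A                 # relative area growth rate
    d_mean_i = rate * (ms_i - mp_i)
    d_var_i = rate * (var_s_i - var_p_i + (ms_i - mp_i) ** 2)
    d_cov_ij = rate * (cov_s_ij - cov_p_ij
                       + (ms_i - mp_i) * (ms_j - mp_j))
    return d_mean_i, d_var_i, d_cov_ij
\end{verbatim}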
\subsection{Internal mixing within the patch}
A passive tracer in a turbulent flow is subjected to mixing processes that tend to homogenize its concentration in time. Several approaches have been developed to theoretically model the decay of the moments of a tracer distribution \cite{artale1997dispersion,haynes2005controls,thiffeault2008scalar,neufeld2009chemical}. From a patch perspective, internal mixing does not affect spatial means but it contributes to smoothing variances and covariances. In particular, the decay rate of tracer second moments can be related to the diffusion acting at the corresponding spatial scale \cite{artale1997dispersion,thiffeault2008scalar}. Noting that the diffusion coefficient $\kappa(t)$ represents the effective diffusion at the scale of the patch size $S(t)$, we conclude that the decay of tracer variances and covariances in the patch follows \cite{haynes2005controls,neufeld2009chemical}:
\begin{equation}
\langle p'_{i}(\mathbf{x},t) p'_{j}(\mathbf{x},t) \rangle \sim e^{-\frac{\kappa(t)}{S(t)^2}t}.
\end{equation}
From the above functional dependence we finally derive the expression for the internal mixing contribution to the time derivative of second moments:
\begin{equation}
\frac{d \langle p'_{i}(\mathbf{x},t) p'_{j}(\mathbf{x},t) \rangle}{dt} = - \frac{\kappa(t)}{S(t)^2} \langle p'_{i}(\mathbf{x},t) p'_{j}(\mathbf{x},t) \rangle.
\end{equation}
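In the numerical model this term simply enters the right-hand side of each second-moment equation; a one-line sketch (our naming):
\begin{verbatim}
def mixing_tendency(second_moment, kappa, S):
    """Internal-mixing decay of a variance or covariance at patch scale S."""
    return -(kappa / S ** 2) * second_moment
\end{verbatim}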
\subsection{First and second moments contributions to biological dynamics}
Based on the Reynolds decomposition for tracer distributions of Eq. (\ref{eq:reynoldsdeco}), we can derive the contribution of spatial means, variances and covariances to Eqs. (\ref{eq:michmenten1}) and (\ref{eq:michmenten2}). To this aim, we use a closure method to provide analytical expressions for the time derivatives of first and second moments in the patch \cite{law2003population,wallhead2008spatially,levy2013influence,mandal2014observations,priyadarshi2019micro}. In the following, to simplify notation, we omit the dependence of tracer distributions on $t$ and $\mathbf{x}$.
The equations for the evolution of the means are:
\begin{equation}
\frac{d \langle p_r \rangle}{dt} = -\nu \frac{ \langle p_r \rangle \langle p_b \rangle }{( \langle p_r \rangle + k)} + \nu k \frac{ \langle p_b \rangle \langle p'^{2}_{r} \rangle }{( \langle p_r \rangle + k )^3} - \nu k \frac{ \langle p'_{r} p'_{b} \rangle }{( \langle p_r \rangle + k )^2} + \alpha m \langle p_b \rangle , \label{eq:mean1decomposed}
\end{equation}
\begin{equation}
\frac{d \langle p_b \rangle}{dt} = +\nu \frac{ \langle p_r \rangle \langle p_b \rangle }{( \langle p_r \rangle + k)} - \nu k \frac{ \langle p_b \rangle \langle p'^{2}_{r} \rangle }{( \langle p_r \rangle + k )^3} + \nu k \frac{ \langle p'_{r} p'_{b} \rangle }{( \langle p_r \rangle + k )^2} - m \langle p_b \rangle . \label{eq:mean2decomposed}
\end{equation}
The evolution of the variances are:
\begin{equation}
\frac{d \langle p'^{2}_{r} \rangle}{dt} = -2 \nu k \frac{ \langle p_b \rangle \langle p'^{2}_{r} \rangle }{( \langle p_r \rangle + k)^2} -2 \nu \frac{ \langle p_r \rangle \langle p'_{r} p'_{b} \rangle }{( \langle p_r \rangle + k)} + 2 \alpha m \langle p'_{r} p'_{b} \rangle, \label{eq:var1decomposed}
\end{equation}
\begin{equation}
\frac{d \langle p'^{2}_{b} \rangle}{dt} = +2 \nu\frac{ \langle p_r \rangle \langle p'^{2}_{b} \rangle }{( \langle p_r \rangle + k)} +2 \nu k \frac{ \langle p_b \rangle \langle p'_{r} p'_{b} \rangle }{( \langle p_r \rangle + k)^2} - 2 m \langle p'^{2}_{b} \rangle. \label{eq:var2decomposed}
\end{equation}
Similarly, we can obtain the evolution of the covariance:
\begin{align}
\frac{d \langle p'_{r} p'_{b} \rangle}{dt} &= \nu\frac{ \langle p_r \rangle}{( \langle p_r \rangle + k)} \Big( \langle p'_{r} p'_{b} \rangle - \langle p'^{2}_{b} \rangle \Big) + \nu k \frac{ \langle p_b \rangle}{( \langle p_r \rangle + k)^2} \Big( \langle p'^{2}_{r} \rangle - \langle p'_{r} p'_{b} \rangle \Big) +m \Big(\alpha \langle p'^{2}_{b} \rangle -\langle p'_{r} p'_{b} \rangle \Big) . \label{eq:covdecomposed}
\end{align}
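For reference, the right-hand sides of Eqs. (\ref{eq:mean1decomposed})-(\ref{eq:covdecomposed}) are straightforward to implement. A minimal Python sketch of the biological tendencies (variable names are our own) is:
\begin{verbatim}
def bio_tendencies(mr, mb, vr, vb, crb, nu, k, m, alpha):
    """Moment-closure tendencies for resource/biomass means (mr, mb),
    variances (vr, vb) and their covariance (crb)."""
    d1 = mr + k
    grazing = nu * mr * mb / d1
    corr = nu * k * (mb * vr / d1 ** 3 - crb / d1 ** 2)  # heterogeneity
    d_mr = -grazing + corr + alpha * m * mb
    d_mb = +grazing - corr - m * mb
    d_vr = (-2 * nu * k * mb * vr / d1 ** 2
            - 2 * nu * mr * crb / d1 + 2 * alpha * m * crb)
    d_vb = (+2 * nu * mr * vb / d1
            + 2 * nu * k * mb * crb / d1 ** 2 - 2 * m * vb)
    d_crb = (nu * mr / d1 * (crb - vb)
             + nu * k * mb / d1 ** 2 * (vr - crb)
             + m * (alpha * vb - crb))
    return d_mr, d_mb, d_vr, d_vb, d_crb

# With zero second moments the means follow the plain Michaelis-Menten terms
print(bio_tendencies(1.0, 0.0249, 0.0, 0.0, 0.0,
                     nu=1.05, k=2.0, m=0.05, alpha=0.0))
\end{verbatim}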
\subsection{Bio-physical parameters setting for ensemble simulations}
We detail below the setting of physical and biological parameters used for the main ensemble simulations (Figs. \ref{fig:iron_srf} and \ref{fig:iron3-4dilution}). Other ensemble simulations to address the model sensitivity using different sets of parameters are presented in the Supplementary Information (Supplementary Figs. \ref{fig:physics_sens}, \ref{fig:bio_sens}, \ref{fig:detrit}, \ref{fig:desert}, \ref{fig:quadratic}).
We set up our Lagrangian ecosystem model to study the evolution of a horizontal circular patch with an initial diameter of $S(0)=10$ km and a constant thickness of 10 m. We track its evolution over a time window (i.e. the integration time) of $\tau=30$ days with a time-step of $\sim$14 minutes. The ranges of realistic values of initial strain and diffusion used are based on in-situ observations \cite{okubo1971oceanic,ledwell1998mixing,sundermeyer1998lateral,corrado2017general,sundermeyer2020dispersion}. They correspond to $0.01<\gamma<0.6$ $\text{day}^{-1}$ and $0.01<\kappa<0.6$ $\text{km}^2 \text{day}^{-1}$, respectively. We then implement specific scaling laws of $\gamma$ and $\kappa$ for the spatial scales of 10-100 km spanned by our ensemble simulations:
\begin{align}
\gamma(t) =& f_{\gamma}\big[S(t)\big] = \alpha S(t)^{-\frac{2}{3}}, \label{eq:sizedepgamma} \\
\kappa(t) =& f_{\kappa}\big[S(t)\big] = \beta S(t), \label{eq:sizedepkappa}
\end{align}
where $\alpha$ and $\beta$ are chosen such that $\gamma(t=0)$ and $\kappa(t=0)$ match realistic values at the scale of the initial patch size $S(t=0)$. For the study of the SOIREE experiment we use $\gamma=0.12$ $\text{day}^{-1}$ and $\kappa=0.1$ $\text{km}^2 \text{day}^{-1}$ as initial values of strain and diffusion, respectively.
We identify the resource $p_r$ with iron and the consumer $p_b$ with phytoplankton. We do not model resource recycling ($\alpha=0$), with the exception of the sensitivity analysis reported in Supplementary Fig. \ref{fig:detrit} in which we instead use a complete remineralization rate ($\alpha=1$). The Fe:C ratio used is $10^{-5}$ \cite{hannon2001modeling,dutkiewicz2020dimensions}. The initial iron concentration is 1 $\mu\text{mol} / \text{m}^3$ in the patch and 0.1 $\mu\text{mol} / \text{m}^3$ in the surrounding, while the initial phytoplankton concentration in iron currency, both in the patch and in the surrounding, is 0.0249 $\mu\text{mol} / \text{m}^3$ \cite{boyd2000mesoscale,abraham2000importance}. Initial variances and covariance are set to zero. The maximum phytoplankton growth rate and the linear mortality rate are $\nu=1.05$ $\text{day}^{-1}$ and $m=0.05$ $\text{day}^{-1}$ \cite{boyd2000mesoscale,hannon2001modeling,tsuda2005responses,dutkiewicz2020dimensions}. The half-saturation constant for iron is $k=2$ $\mu\text{mol} / \text{m}^3$ \cite{lancelot2000modeling,timmermans2001growth}.
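A compact way to encode this configuration is to fix the scaling-law prefactors of Eqs. (\ref{eq:sizedepgamma})-(\ref{eq:sizedepkappa}) from the initial conditions. The sketch below does exactly that; we rename the prefactors \texttt{alpha\_s} and \texttt{beta\_s} to avoid clashing with the biological $\alpha$:
\begin{verbatim}
# Physical scaling laws: prefactors fixed by the initial strain/diffusion
S0 = 10.0                        # initial patch size [km]
gamma0, kappa0 = 0.12, 0.1       # SOIREE-like initial values
alpha_s = gamma0 * S0 ** (2.0 / 3.0)
beta_s = kappa0 / S0

def gamma_of_S(S):
    return alpha_s * S ** (-2.0 / 3.0)   # strain [1/day]

def kappa_of_S(S):
    return beta_s * S                    # diffusion [km^2/day]

# Biological parameters (iron currency) from the main-text setting
nu, m, k = 1.05, 0.05, 2.0       # growth, mortality [1/day]; half-saturation
p_r0, s_r0 = 1.0, 0.1            # patch / surrounding iron [umol/m^3]
p_b0 = s_b0 = 0.0249             # initial phytoplankton [umol/m^3]
\end{verbatim}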
\subsection{Simplified analytical model of a heterogeneous patch}
Here we introduce a simplified model to investigate the role of positive covariance in biomass growth. Consider an analytical model of a patch composed of just two sub-regions of equal size. In sub-region 1 the resource concentration is $r_1$ and the biomass concentration is $b_1$; correspondingly, sub-region 2 has $r_2$ and $b_2$. If we identify the total growth rate of the patch as the average of the growth rates of the two sub-regions, we have:
\begin{equation}\label{eq:postaverage}
\frac{\nu}{2} \bigg[ \frac{r_1 b_1}{r_1+k} + \frac{r_2 b_2}{r_2+k} \bigg]
\end{equation}
If we instead do the opposite, i.e. first average the concentrations of the two sub-regions and only then compute a single growth rate for the entire patch, we have:
\begin{equation}\label{eq:preaverage}
\frac{\nu}{4} \bigg[ \frac{(r_1+r_2) (b_1+b_2)}{\frac{r_1+r_2}{2}+k} \bigg]
\end{equation}
Then, we can consider the case in which we have a positive spatial covariance in the patch by setting:
\begin{align}
&r_1 = r+ \delta r \qquad ; \qquad b_1 = b +\delta b \\
&r_2 = r - \delta r \qquad ; \qquad b_2= b - \delta b
\end{align}
The difference between the two growth rates above, i.e. Eq. (\ref{eq:postaverage}) minus Eq. (\ref{eq:preaverage}), becomes:
\begin{equation}
\Delta = \nu \frac{(k+r)\delta b - b \delta r}{(k+r)(k+r-\delta r)(\delta r +k+r)} k \delta r
\end{equation}
Assuming that $k+r>\delta r$ and $\delta r>0$, $\Delta$ is positive if and only if $(k+r)\delta b > b \delta r$.
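A quick numerical check confirms the closed form: computing Eq. (\ref{eq:postaverage}) minus Eq. (\ref{eq:preaverage}) directly and through the expression for $\Delta$ gives identical values. The parameter values below are arbitrary illustrations:
\begin{verbatim}
nu, k = 1.05, 2.0
r, b = 1.0, 0.5
dr, db = 0.3, 0.2   # positively covarying perturbations

post = 0.5 * nu * ((r + dr) * (b + db) / (r + dr + k)
                   + (r - dr) * (b - db) / (r - dr + k))
pre = nu * r * b / (r + k)
delta = nu * k * dr * ((k + r) * db - b * dr) / (
    (k + r) * (k + r - dr) * (k + r + dr))
print(post - pre, delta)   # both ~0.0106; positive since (k+r)*db > b*dr
\end{verbatim}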
\section{Introduction}\label{sec:marketplace}
We consider a marketplace with trading agents that play the roles of buyers and/or sellers of goods or services. The trading agents are of different types based on the frequency at which they trade, the size of their trading orders, whether they consistently buy or sell or both, and the strategy that they use for trading. The seller agents can range from manufacturers that offer large volumes of goods for sale, to individual and retail sellers that sell much less frequently. The buyer agents use the platform to compare prices available from different seller agents, and to buy goods from them. The \emph{marketplace} agent is one that facilitates trading between buyer and seller agents by providing them access to marketplace communication and an order matching engine. It also charges trading agents in the form of fees for the facilities it provides. These fees serve as profits to the marketplace agent. Further, the profits made by trading agents through goods exchange are offset by the fees paid to the marketplace.
\emph{Wholesale} agents are both buyers and sellers of the goods that are traded in the marketplace. They are characterized by their large trading volumes and high trading frequencies. Wholesale agents can improve liquidity in the marketplace by frequent buying and selling of large volumes of goods. It is for this reason that the marketplace agent often offers fee rebates to wholesale agents for their function of ensuring the presence of orders on the opposite side of every arriving buy or sell order. There are \emph{other} agent groups in the marketplace ecosystem that trade goods based on their perception of the long term value of a good, or market momentum signals. \emph{Consumer} agents are characterized by trading on demand. They arrive to the marketplace at random times and trade smaller volumes without using informed trading strategies. They can therefore potentially trade at inferior prices, raising questions about marketplace equitability for consumer agents. Consider an example where a consumer agent needs to trade during a period of marketplace distress when there is little supply of goods offered by wholesale agents. Under such conditions, the consumer agent might trade at an inferior price, resulting in an execution that may be perceived as inequitable as compared to those of other agents who were luckier to trade in a more stable marketplace.
Examples of marketplace ecosystems that can be described by the above framework include e-commerce trading platforms as well as stock exchange markets. For instance, wholesale agents such as publishing houses as well as other small seller agents (such as bookstores) can list books for a fee on e-commerce trading marketplaces, which allows them to sell books to individual consumer agents and other buyers. In stock exchange markets, market makers provide liquidity to the exchange on both buy and sell sides of the market for exchange incentives. This action enables market makers to profit, and other market agents to trade assets with each other. Consumer agents such as individuals who trade on demand without sophisticated trading strategies and technology can be vulnerable to rapid price changes and volatility.
Simulations have previously been used to answer questions that are of interest to participants of financial markets, hedge funds, banks and stock exchanges. In \cite{nasdaq_tick_size}, the authors investigated the use of an intelligent tick structure that replaces the constant tick size currently applied to all stocks with different tick sizes for different stocks. Fairness and equitability in markets have become increasingly important as described in \cite{sec_options,sec_securities}. In this paper, we investigate the impact of a reduction in marketplace fees charged to wholesale agents on equitability outcomes to consumer agents in a marketplace simulator. We show that such fee reductions incentivise wholesale agents to enable equitable provision of goods to consumer agents in the marketplace (see Figure \ref{fig:marketplace}). Specifically, we demonstrate that an equitable marketplace mechanism can be enabled by a dynamic marketplace fee policy derived by reinforcement learning in a simulated stock exchange market.
\section{Background and related work}
\subsection{Equitability Metric}
Equitability has conventionally been studied in political philosophy and ethics \cite{johnrawls}, economics \cite{hmoulin}, and public resource distribution \cite{peytonyoung}. In recent times, there has been a renewed interest in quantifying equitability in classification tasks \cite{dwork2012fairness}. Literature on fairness in machine learning studies two main notions of equitability: group fairness and individual fairness. Group fairness ensures some form of statistical parity (e.g. between positive outcomes, or errors) for members of different groups. On the other hand, individual fairness ensures that individuals who are `similar' with respect to the task at hand receive similar outcomes \cite{binns2020apparentfairness}. In \cite{dwarakanath2021profit}, the authors studied the effect a market maker can have on individual fairness for consumer traders by adjusting its parameters. A negative correlation was observed between the profits of the market maker and equitability. Hence, the market maker incurs a cost while enabling individual fairness to consumer traders. This motivates the idea of designing a marketplace in which the market maker can be compensated by the exchange for making equitable markets.
\begin{figure}[tb]
\centering
\includegraphics[width=\linewidth]{marketplace.jpg}
\caption{Schematic of marketplace ecosystem}
\label{fig:marketplace}
\end{figure}
In this paper, we are interested in understanding the effects of marketplace fees on equitability to trading agents. We draw from the entropy metric used in \cite{dwarakanath2021profit} to measure individual fairness within a single group of (consumer) traders. We seek an equitability metric that can capture equitability both within each group as well as that across groups. The authors of \cite{cowell1981gei,algo_unfairness} describe the family of generalized entropy indices (GEI) that satisfy the property of subgroup-decomposability, i.e. the inequity measure over an entire population can be decomposed into the sum of a between-group inequity component (similar to group fairness metrics in \cite{dwork2012fairness,aif360}) and a within-group inequity component (similar to the individual fairness entropy metric used in \cite{dwarakanath2021profit}). Given observations $Y=\lbrace y_1,y_2,\cdots,y_n\rbrace$ of outcomes of $n$ agents, the generalized entropy index is defined for parameter $\kappa\neq0,1$ as
\begin{align}
\textnormal{GE}_{\kappa}(Y)=\frac{1}{n\kappa(\kappa-1)}\sum_{i=1}^n\left(\left(\frac{y_i}{\mu}\right)^\kappa-1\right)\label{eq:gei_defn}
\end{align}
where $\mu:=\frac{1}{n}\sum_{i=1}^{n}y_i$ is the average outcome. Note that $\textnormal{GE}_{\kappa}(Y)$ is a measure of inequity with the most equitable scenario resulting from $y_i=c$ for all $i=1,2,\cdots,n$. If we think of $y_i$ to denote the profit of trading agent $i$, then the most equitable scenario corresponds to having equal profits for all agents.
With the population divided into $G$ groups of agents, one can decompose the right hand side of equation (\ref{eq:gei_defn}) as \begin{align}
\textnormal{GE}_\kappa(Y)&=\sum_{g=1}^{G}\frac{n_g}{n}\left(\frac{\mu_g}{\mu}\right)^\kappa \textnormal{GE}_\kappa(Y_g)\label{eq:gei1}\\
&+\sum_{g=1}^{G}\frac{n_g}{n\kappa(\kappa-1)}\left(\left(\frac{\mu_g}{\mu}\right)^\kappa-1\right)\label{eq:gei2}
\end{align}
where $Y_g=\lbrace y_i:i\in g\rbrace$ is the set of outcomes of agents in group $g$, $n_g$ is the number of agents in group $g$, and $\mu_g:=\frac{1}{n_g}\sum_{i\in g}y_i$ is the average outcome in group $g$. Then, the term on the right of (\ref{eq:gei1}) captures the within-group inequity (similar to the entropy metric for individual fairness in \cite{dwarakanath2021profit}) for group $g$ weighted by its population. And, the term in (\ref{eq:gei2}) captures the between-group inequity by comparing the average outcome in the entire population against that in group $g$.
We propose a weighted version of (\ref{eq:gei1})-(\ref{eq:gei2}) with weight $w=\lbrace w_g:g=1,2,\cdots,G\rbrace$ where $w_g\geq0$ for all $g=1,2,\cdots,G$ and $\sum_{g=1}^{G}w_g=1$ as
\begin{equation}
\begin{aligned}
\textnormal{GE}^w_\kappa(Y)&=\sum_{g=1}^Gw_g\cdot\frac{n_g}{n}\left(\frac{\mu_g}{\mu}\right)^\kappa \textnormal{GE}_\kappa(Y_g)\\
&+\sum_{g=1}^Gw_g\cdot\frac{n_g}{n\kappa(\kappa-1)}\left(\left(\frac{\mu_g}{\mu}\right)^\kappa-1\right)
\end{aligned}\label{eq:w_gei}
\end{equation}
Note that the equitability metric defined in (\ref{eq:w_gei}) provides extended flexibility to the original definition (\ref{eq:gei_defn}) by enabling the user to focus on a specific agent group $l$ by setting $w_l=1$ and $w_g=0$ for all $g\neq l$. For ease of notation, we establish the following group correspondence for the three types of trading agents in our marketplace described in section \ref{sec:marketplace}. Let $g=1$ correspond to wholesale agents, $g=2$ to consumer agents and $g=3$ to other agents. We use the negative of (\ref{eq:w_gei}) as the metric for equitability going ahead.
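To make the metric concrete, the following Python sketch implements both the plain index (\ref{eq:gei_defn}) and its weighted decomposition (\ref{eq:w_gei}); function names and the toy profit vectors are our own illustrations:
\begin{verbatim}
import numpy as np

def gei(y, kappa=2):
    """Generalized entropy index GE_kappa over an outcome vector."""
    y = np.asarray(y, dtype=float)
    mu = y.mean()
    return np.sum((y / mu) ** kappa - 1) / (len(y) * kappa * (kappa - 1))

def weighted_gei(groups, w, kappa=2):
    """Weighted within/between-group decomposition over group outcomes."""
    n = sum(len(g) for g in groups)
    mu = sum(np.sum(g) for g in groups) / n
    total = 0.0
    for Y_g, w_g in zip(groups, w):
        n_g, mu_g = len(Y_g), np.mean(Y_g)
        total += w_g * (n_g / n) * (mu_g / mu) ** kappa * gei(Y_g, kappa)
        total += w_g * n_g * ((mu_g / mu) ** kappa - 1) / (
            n * kappa * (kappa - 1))
    return total

# Toy profits for (MM, consumer, other) groups; w = [beta, 1-beta, 0]
profits = [np.array([3.0]), np.array([1.0, 1.2, 0.8]), np.array([2.0, 2.5])]
print(weighted_gei(profits, [0.5, 0.5, 0.0]))
\end{verbatim}
Note that a single-member group has zero within-group inequity, consistent with the single-MM observation made later in the text.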
\subsection{Reinforcement Learning}
Our marketplace ecosystem consists of multiple interacting trading agents as in Figure \ref{fig:marketplace}.
Such an ecosystem is well modeled as a multi-agent system -- a system comprised of multiple autonomous agents interacting with each other in a common environment which they each observe and act upon. The behaviours of these agents can be defined beforehand using certain rules or expert knowledge, or learnt on the go. Reinforcement learning (RL) has become a popular approach to learn agent behavior given certain objectives that are to be improved upon \cite{sutton2018reinforcement,kaelbling1996reinforcement}. An RL agent seeks to modify its behaviour based on rewards received upon its interaction with its dynamic environment.
There exist well-understood algorithms with their convergence and consistency properties well studied for the single-agent RL task.
An environment with multiple learning agents is modeled in the form of Markov Games (MGs) or stochastic games \cite{zhang2021marl,shapley1953stochastic}. An MG is a tuple $(\mathcal{N},\mathcal{S},\lbrace\mathcal{A}_i:i=1,2,\cdots,n\rbrace,\mathcal{T},\lbrace R_i:i=1,2,\cdots,n\rbrace,\gamma,T)$ comprising the set $\mathcal{N}=\lbrace1,2,\cdots,n\rbrace$ of all agents, the joint state space, the action spaces of all agents, a model of the environment giving the probability of transitioning from one joint state to another given the actions of all agents, the reward functions of all agents, the discount factor and the time horizon respectively \cite{zhang2021marl}. The goal of each agent is to maximize the expected sum of its own discounted rewards, which now depend on the actions of other agents as well. While it is tempting to use RL for multi-agent systems, it comes with a set of challenges. The main challenge is the presence of multiple agents that are learning to act in the presence of one another \cite{busoniu2008comprehensivemarl}. We deal with this by adopting an iterative learning framework where our learning agents take turns updating their value functions while the other learning agents keep their policies fixed. Such an iterative learning framework was used in \cite{zheng2020aieconomist} to simultaneously learn economic actors and a tax planner in an economic system.
The general framework of using RL for mechanism design was previously considered in \cite{rl_mechanismdesign}. Mechanism design using RL for e-commerce applications was studied in \cite{mechanism_ecommerce}. In this paper, we show how to use RL for equitable marketplace mechanism design with our discussion focused on financial markets.
\subsection{Stock exchange markets}
We now concentrate on stock exchange markets (such as Nasdaq or New York Stock Exchange) which can be viewed as instances of our generic trading marketplace with stocks being the goods traded between agents.
Stock trading agents can belong to many categories: market makers, consumer investors, fundamental investors, momentum investors, etc. Market makers are investors that are obliged to continuously provide liquidity to both buyers and sellers regardless of market conditions.
They act as both buyers and sellers of the stock and have more frequent trades with larger order volumes as compared to the other categories of investors.
Fundamental investors and momentum investors use the exogenous stock value or long term averages of the stock to trade unilaterally (buy or sell, not both) at different times in the trading day; they also have more frequent trades (albeit unilateral) than the category of consumer investors, who trade purely based on demand without any other considerations \cite{kyle1985continuous}.
Irrespective of type, the objective of all market investors is to make profits from their trading actions.
The aforementioned investor categories can be mapped to our marketplace ecosystem as follows: exchange (marketplace agent), market makers (wholesale agents), consumer investors (consumer agents) and value and momentum investors (other agents) - see Figure~\ref{fig:marketplace} for agent categories.
The exchange charges investors fees for the facilities it provides on its platform. These fees typically differ based on investor category, and serve as profits for the exchange. Direct (regular) stock exchanges such as NYSE, Nasdaq and Cboe provide incentives to market makers for liquidity provision \cite{lightspeed_inverted}. On the contrary, inverted stock exchanges such as NYSE National, Nasdaq BX, Cboe EDGA and Cboe BYX charge market makers for providing liquidity. The reasons for such fee structures range from ensuring market efficiency to faster order fills in different exchanges \cite{nasdaq_inverted,cboe_inverted}.
\begin{figure}[tb]
\centering
\includegraphics[width=\linewidth]{lob.jpg}
\caption{Snapshot of order queue maintained by the exchange}
\label{fig:lob}
\end{figure}
\subsection{Simulator}\label{subsec:simulator}
In order to play out the interactions between agents in a stock exchange market, we employ a multi-agent exchange market simulator called ABIDES \cite{abides,amrouni2021abides}. ABIDES provides a selection of background trading agents with different trading behaviors and incentives. The simulation engine manages the flow of time and handles all inter-agent communication. The first category of simulated trading agents is that of {\it{market makers}} (denoted MMs henceforth) that continuously quote prices on both the buy and sell sides of the market, and earn the difference between the best buy and sell prices if orders execute on both sides (see Figure \ref{fig:lob}). MMs act as intermediaries and essentially eliminate `air pockets' between existing buyers and sellers. In this work, we define a MM by the stylized parameters that follow from its regulatory definition \cite{wah2017welfare,chakraborty2011market}. At every time $t$, the MM places new price quotes of constant order size $I$ at $d$ price increments around the current stock price $p_t$ in cents i.e., it places buy orders at prices $p_t-h-d, \ldots, p_t-h$ and sell orders at prices $p_t + h, \ldots, p_t + h +d$, where $d$ is the depth of placement and $h$ is the half-spread chosen by the MM at time $t$. Figure \ref{fig:lob} shows an example snapshot of orders collected at the exchange with $p_t=10000.5$ cents, $h=0.5$ cents displaying $d=2$ levels on either side of the current stock price. Each blue/orange rectangle represents a block of sell/buy orders placed in the order queue.
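The quote ladder of the stylized MM can be written down in a few lines. The sketch below (the order size is an arbitrary illustration) reproduces the price levels of the snapshot in Figure \ref{fig:lob}:
\begin{verbatim}
def mm_quotes(p_t, h, d, size):
    """Buy levels p_t-h-d..p_t-h and sell levels p_t+h..p_t+h+d,
    in 1-cent increments, each quoting a constant order size."""
    buys = [(p_t - h - i, size) for i in range(int(d) + 1)]
    sells = [(p_t + h + i, size) for i in range(int(d) + 1)]
    return buys, sells

# Figure snapshot: p_t = 10000.5 cents, h = 0.5 cents, d = 2
buys, sells = mm_quotes(10000.5, 0.5, 2, size=100)
print([p for p, _ in buys])    # [10000.0, 9999.0, 9998.0]
print([p for p, _ in sells])   # [10001.0, 10002.0, 10003.0]
\end{verbatim}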
ABIDES also contains other strategy-based investors such as fundamental investors and momentum investors. The fundamental investors trade in line with their belief of the exogenous stock value (which we call fundamental price), without any view of the market microstructure \cite{kyle1985continuous}.
In this paper, we model the fundamental price of an asset by its historical price series. Each fundamental investor arrives to the market according to a Poisson process, and chooses to buy or sell a stock depending on whether it is cheap or expensive relative to its noisy observation of the fundamental. On the other hand, the momentum investors follow a simple momentum strategy of comparing a long-term average of the price with a short-term average. If the short-term average is higher than the long-term average, the investor buys since the price is seen to be rising. And, vice-versa for selling.
Further, ABIDES is equipped with {\it{consumer investors}} that are designed to emulate consumer agents who trade on demand. Each consumer investor trades once a day by placing an order of a random size in a random direction (buy or sell).
\section{Problem Setup}\label{sec:problem_setup}
In this paper, we take a mechanism design based approach that uses RL to derive a dynamic fee schedule optimizing for equitability of investors as well as for profits of the exchange. The dynamic nature of the fee schedule is inspired by the idea that exchange fees at any time during a trading day must be contingent on the current market conditions. An important point regarding the use of dynamic exchange fee schedules is that other investors could change their trading strategies in response to varying fees. Therefore, we consider an RL setup with two learning agents interacting with each other. We use ABIDES as a stock exchange simulation platform with an exchange agent that learns to update its fee schedule, and a MM agent that learns to update its trading strategy (see Figure \ref{fig:marketplace} for schematic). The remaining investors have static rule-based policies that do not use learning. We formulate this learning scenario as an RL problem by representing the exchange with investors as a Markov Game (MG).
\subsection{State of Markov Game}
The state for our MG captures the {\bf{shared states}} of the learning MM, learning exchange and the market comprising other (non-learning) investors as \begin{align}
s=\begin{bmatrix}inventory & fee & incentive & market\ signals\end{bmatrix}\label{eq:state}
\end{align}
where $inventory$ is the number of shares of stock held in the MM's inventory, $fee$ refers to exchange trading fees per unit of stock charged to liquidity consumers such as consumer investors, and $incentive$ refers to exchange incentives per unit of stock given out to liquidity providers for their services. By convention, negative values for $fee$ and $incentive$ imply providing rebates to liquidity consumers, and charging fees to liquidity providers respectively. $market\ signals$ contains signals such as
\begin{align}
imbalance=\frac{total\ buy\ volume}{total\ buy\ volume +total\ sell\ volume}\nonumber
\end{align}
which is the volume imbalance in buy and sell orders in the market, $spread$ which is the difference between best sell and best buy prices, and $midprice$ which is the current mid-point of the best buy and sell prices of the stock (also called the stock price). Although the exchange may have access to the $inventory$ state for the MM, we design its policy to only depend on the latter three states.
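Assembling the shared state from raw order-book quantities is then immediate; a minimal sketch (input values are illustrative):
\begin{verbatim}
import numpy as np

def market_state(inventory, fee, incentive, buy_vol, sell_vol,
                 best_bid, best_ask):
    """Shared state: MM inventory, exchange fee/incentive and the
    market signals imbalance, spread and midprice."""
    imbalance = buy_vol / (buy_vol + sell_vol)
    spread = best_ask - best_bid
    midprice = 0.5 * (best_bid + best_ask)
    return np.array([inventory, fee, incentive,
                     imbalance, spread, midprice])

print(market_state(50, 0.30, 0.25, 1200, 800, 10000.0, 10001.0))
\end{verbatim}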
\subsection{Actions and rewards of the learning MM}
The \textbf{actions} of the learning MM are comprised of the stylized parameters of $half-spread$ and $depth$ that define the price levels at which the MM places buy and sell orders (as described in section \ref{subsec:simulator}), and are denoted by\begin{align}
a_{\textnormal{MM}}=\begin{bmatrix}half-spread&depth\end{bmatrix}\nonumber
\end{align}
While the MM profits from its trading actions, it also receives incentives from the exchange for all units of liquidity provided (negative values for which correspond to paying out fees to the exchange). Therefore, we define the \textbf{reward} $R_{\textnormal{MM}}$, that captures all MM profits and losses, by\begin{equation}
\begin{aligned}
R_{\textnormal{MM}}&=\textnormal{Trading profits}\\
&+\lambda\cdot \left(incentive\times\textnormal{units of liquidity provided by MM}\right)
\end{aligned}\label{eq:r_mm}
\end{equation}
where $\lambda\geq0$ is a weighting parameter for the importance given by the MM to exchange incentives. Note that although it makes monetary sense to have $\lambda=1$, one can theoretically examine the effects of varying $\lambda$ across other values, since the reward function in RL does not need to exactly map to monetary profits. The objective of a reinforcement learning MM is to find a policy by maximizing the expected sum of discounted rewards (\ref{eq:r_mm}).
\subsection{Actions and rewards of the learning exchange}
The \textbf{actions} for the exchange involve specifying fees and incentives per unit of stock placed by liquidity consumers and providers respectively denoted by \begin{align}
a_{\textnormal{Ex}}=\begin{bmatrix}fee & incentive \end{bmatrix}\nonumber
\end{align} to entirely specify the next states of $fee$ and $incentive$ in (\ref{eq:state}).
In order to write down the {\bf{rewards}} for the exchange, we need a way to numerically quantify equitability alongside its profits. We use the negative of the weighted generalized entropy index defined in (\ref{eq:w_gei}), with the outcome $y_i$ for each investor $i\in\lbrace1,2,\cdots,n\rbrace$ being its profits at the end of the trading day. Since investors can also make losses, $y_i$ and hence $\mu$ and $\mu_g$ can take on negative values in (\ref{eq:w_gei}). This restricts the choice of $\kappa$ to even values. We choose $\kappa=2$ as in \cite{algo_unfairness} since higher values give spiky values for $\textnormal{GE}^w_\kappa$, hindering learning. For this work, we are interested in weights of the form $w=\begin{bmatrix}\beta&1-\beta&0\end{bmatrix}$ that look at equitability only to MMs and consumer investors for ease of understanding, with $\beta$ called the GEI weight.
Although the trading agents arrive at random times during a trading day, the equitability reward (\ref{eq:w_gei}) computed at the end of a trading day can be distributed throughout the day as follows. Define the equitability reward computed at every time step $t\in[T]$ to be the change in (\ref{eq:w_gei}) from $t-1$ to $t$ as
\begin{align}
R_\mathrm{Equitability}^t=
-\textnormal{GE}^\beta_2\left(Y^{t}\right)+\textnormal{GE}^\beta_2\left(Y^{t-1}\right)\label{eq:r_ex_equitability}
\end{align}
where $Y^t$ is the vector of profits for all investors $\lbrace1,2,\cdots,n\rbrace$ up to time $t$ and $\textnormal{GE}^\beta_2\left(Y^{0}\right):=0$.
The profits made by the exchange are given by the difference between the fees received from liquidity consumers and the incentives given out to liquidity providers over all traded units of stock as
\begin{align}
R_{\textnormal{Profits for Ex}}&=\textnormal{Fees}-\textnormal{Incentives}\label{eq:r_ex_profits}\\
&=fee\times\textnormal{units of liquidity consumed}\nonumber\\
&-incentive\times\textnormal{units of liquidity provided}\nonumber
\end{align}
Having quantified the two rewards of profits and equitability for the exchange, we use a weighted combination of the two as the {\bf{rewards}} for our learning exchange\begin{align}
R_{\textnormal{Ex}}=R_{\textnormal{Profits for Ex}}+\eta\cdot R_\mathrm{Equitability}\label{eq:r_ex}
\end{align}
where $\eta\geq0$ is a parameter called the equitability-weight, which has the interpretation of monetary benefits in \$ per unit of equitability. Equation (\ref{eq:r_ex}) is also motivated from a constrained optimization perspective as the objective in the unconstrained relaxation \cite{boyd2004convex} of the problem $\max R_{\textnormal{Profits for Ex}}\textnormal{ s.t. }R_\mathrm{Equitability}\geq c$. With the rewards defined in equations (\ref{eq:r_ex_equitability})-(\ref{eq:r_ex}) and discount factor $\gamma=1$, the RL objective for the exchange can be written as
\begin{align}
\mathbb{E}\left[\sum_{t=1}^T\left(\textnormal{Fees}-\textnormal{Incentives}\right)\right]-\eta\cdot\mathbb{E}\left[\textnormal{GE}^\beta_2\left(Y^{T}\right)\right]\nonumber
\end{align}
where $\mathbb{E}[X]$ denotes the expected value of a random variable $X$. Hence, the objective of the equitable exchange is to learn a fee schedule that maximizes its profits over a given time horizon, while minimizing inequity to investors.
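Putting the pieces together, the per-step exchange reward combines fee income, incentive payout and the $\eta$-weighted change in the weighted GEI of investor profits. A minimal sketch (which would consume the output of a weighted-GEI routine such as the one sketched earlier):
\begin{verbatim}
def exchange_reward_step(fee, incentive, units_consumed, units_provided,
                         gei_prev, gei_now, eta):
    """Per-step exchange reward: profits plus eta-weighted equitability."""
    profit = fee * units_consumed - incentive * units_provided
    r_equity = -(gei_now - gei_prev)   # -GE(Y^t) + GE(Y^{t-1})
    return profit + eta * r_equity

# Illustrative step: fee/incentive in cents, 1000/900 traded units
print(exchange_reward_step(0.30, 0.25, 1000, 900, 0.020, 0.015, eta=10.0))
\end{verbatim}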
Having outlined our MG, we estimate the optimal policy using both tabular Q Learning (QL) \cite{watkins1992q} as well as the policy gradient method called Proximal Policy Optimization (PPO) from the library RLlib \cite{rllib}. Tabular QL estimates the optimal Q functions for the exchange and MM that are subsequently used to compute policies determining the dynamic fee schedule and MM trading strategy respectively.
\section{Experiments}
Given the MG formulated in the previous section, we try using tabular QL with discretized states as well as the policy gradient method called Proximal Policy Optimization (PPO) from the RLlib package \cite{ppo,rllib} with continuous states to estimate policies for the learning MM and Exchange (denoted Ex/EX).
\subsection{Numerics}
The time horizon of interest is a single trading day from 9:30am until 4:00pm. Therefore, we set $\gamma=0.9999$ to ensure that traders do not undervalue money at the end of the trading day as compared to that at the beginning. Both the learning MM and Exchange take an action every minute giving $T=390$ steps per episode. We also normalize the states and rewards to lie within the range $[-1,1]$. The precise numerics of our learning experiments that are common to both tabular QL and PPO algorithms are given in Table \ref{tab:expt_numerics}. The values for every $(fee,incentive)$ pair are to be read as follows\footnote{The fees charged and incentives given out by real exchanges are of the order of $0.10$ cents per share \cite{nyse_prices} informing our choice of exchange actions listed in Table \ref{tab:expt_numerics}.}. If $(fee,incentive)=(0.30,0.25)$ cents, the exchange would charge 0.30 cents per trade executed by a liquidity consumer, and provide 0.25 cents of incentives per trade executed by a liquidity provider. Similarly, if $(fee,incentive)=(-0.30,-0.25)$ cents, the exchange would provide 0.30 cents of rebate per trade executed by a liquidity consumer, and charge 0.25 cents per trade executed by a liquidity provider.
\subsection{Training and convergence}
To make our problem suited to the use of tabular QL, we discretize our states by binning them. An important requirement for the functioning of the tabular QL algorithm is that there be enough visitations of each (state, action) pair. Accordingly, we pick our state discretization bins by observing the range of values taken in a sample experiment. We use an $\epsilon$ - greedy approach to balance exploration and exploitation. Additionally, since convergence of tabular QL relies on having \emph{adequate} visitation of each (state, action) pair, training is divided into three phases - pure exploration, pure exploitation and convergence. During the pure exploration phase, $\alpha_n$ and $\epsilon_n$ are both held constant at high values to facilitate the visitation of as many state-action discretization bins as possible. During the pure exploitation phase, $\epsilon_n$ is decayed to an intermediate value while $\alpha_n$ is held constant at its exploration value so that the Q Table is updated to reflect the one step optimal actions. After the pure exploration and pure exploitation phases, we have the convergence phase where both $\alpha_n$ and $\epsilon_n$ are decayed to facilitate convergence of the QL algorithm. The precise numerics specific to our tabular QL experiments are given in Table \ref{tab:tab_expt_numerics}.
\begin{table}[tb]
\allowdisplaybreaks
\centering
\begin{tabular}{|p{0.4\linewidth}|p{0.55\linewidth}|}\hline
Total \# of training episodes & 2000\\\hline
$half-spread$ & $\lbrace0.5,1.0,1.5,2.0,2.5\rbrace$ cents\\\hline
$depth$ & $\lbrace1,2,3\rbrace$ cents\\\hline
\multirow{3}{0.4\linewidth}{$(fee,incentive)$} &
$\lbrace(0.30,0.30),(0.30,0.25),$\\
&$(0.25,0.30),(-0.30,-0.30),$\\
&$(-0.30,-0.25),(-0.25,-0.30)\rbrace$ cents\\\hline
$\gamma$ & $0.9999$\\\hline
$T$ & $390$\\\hline
$G$ & $3$\\\hline
$\kappa$ & $2$\\\hline
$\lambda$ & $1.0$\\\hline
$\beta$ & $\lbrace0.0,0.3,0.5,0.6,1.0\rbrace$\\\hline
$\eta$ & $\lbrace0,1,10,100,1000,10000\rbrace$\\\hline
\end{tabular}
\caption{Numerics common to tabular QL and PPO experiments}
\label{tab:expt_numerics}
\end{table}
\begin{table}[tb]
\centering
\begin{tabular}{|p{0.5\linewidth}|p{0.45\linewidth}|}\hline
\# of pure exploration episodes & 800 \\\hline
\# of pure exploitation episodes & 400 \\\hline
\# of convergence episodes & 800 \\\hline
$\alpha_0=\cdots=\alpha_{399}=\cdots=\alpha_{599}$ & $0.9$\\\hline
$\alpha_{999}$ & $10^{-5}$\\\hline
$\epsilon_0=\epsilon_1=\cdots=\epsilon_{399}$ & $0.9$\\\hline
$\epsilon_{599}$ & $0.1$ \\\hline
$\epsilon_{999}$ & $10^{-5}$\\\hline
\end{tabular}
\caption{Numerics specific to tabular QL experiments}
\label{tab:tab_expt_numerics}
\end{table}
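To fix ideas, the three-phase schedule can be written as a single function of the episode index. The log-linear decay between the anchor values of Table \ref{tab:tab_expt_numerics} is our own illustrative choice, since the exact decay shape is not pinned down above:
\begin{verbatim}
import numpy as np

def decayed(v_start, v_end, frac):
    """Log-linear interpolation between two schedule anchor values."""
    return 10 ** ((1 - frac) * np.log10(v_start) + frac * np.log10(v_end))

def ql_schedule(ep, n1=800, n2=400, n3=800):
    """Return (alpha, epsilon) for episode ep across the three phases."""
    if ep < n1:                        # pure exploration: both held high
        return 0.9, 0.9
    if ep < n1 + n2:                   # pure exploitation: decay eps only
        f = (ep - n1) / n2
        return 0.9, decayed(0.9, 0.1, f)
    f = (ep - n1 - n2) / n3            # convergence: decay both
    return decayed(0.9, 1e-5, f), decayed(0.1, 1e-5, f)
\end{verbatim}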
\begin{figure*}[tb]
\centering
\includegraphics[width=\linewidth]{deep_vs_tab.eps}
\caption{Comparison of training rewards for tabular QL and PPO}
\label{fig:deep_v_tab}
\end{figure*}
\begin{figure*}[tb]
\centering
\includegraphics[width=\linewidth]{Reward.eps}
\caption{Training rewards with PPO}
\label{fig:training_rewards}
\end{figure*}
We additionally estimate the optimal policies for both learning agents using PPO with the default parameters in RLlib. We observe convergence in cumulative training rewards per episode for both methods for the range of values of equitability weight $\eta$ and the GEI weight $\beta$ given in Table \ref{tab:expt_numerics}. Figure \ref{fig:deep_v_tab} is a plot comparing the cumulative training rewards for the learning MM and learning exchange for $\beta\in\lbrace0.0,0.5\rbrace$ and $\eta=1.0$. We see that PPO is able to achieve higher cumulative training rewards for both learning agents. Figure \ref{fig:training_rewards} is a plot of cumulative training rewards achieved using PPO alone for a wide range of values of $(\beta,\eta)$. We see that the cumulative training rewards converge enabling us to estimate optimal exchange fee schedules and MM actions simultaneously for the range of weights $(\eta,\beta)$ considered.
\subsection{Explaining learnt policies}
\begin{figure*}[tb]
\centering
\includegraphics[width=\linewidth]{avg_quantities.eps}
\caption{Average policies and profits of MM and exchange with varying $(\eta,\beta)$ for deep Q learning}
\label{fig:avg_policy}
\end{figure*}
We now try to intuitively explain the effects of the parameters $(\eta,\beta)$ on the learnt exchange and MM policies. Increasing $\beta$ from 0 to 1 corresponds to increasing the weighting of GEI given to MM compared to consumer investors in (\ref{eq:w_gei}), with $\textnormal{GE}_2(Y_{\textnormal{MM}})=0$ with a single MM. While $\beta=0$ accounts for equitability to only the consumer investor group, $\beta=1$ corresponds to the case where the GEI metric captures (between group) equitability to only MMs. On the other hand, increasing $\eta$ corresponds to increasing the equitability weight in the exchange reward (\ref{eq:r_ex}).
Figure \ref{fig:avg_policy} is a plot of average policies and resulting profits for the EX and MM for various $(\eta,\beta)$ pairs\footnote{All profits are normalized, and hence unitless. EX fee and incentive are in cents.}. The average policies are obtained by averaging the learnt policy (which maps the current state to the estimated (optimal) action to be taken in that state) using a uniform distribution on the states. By convention, we are looking at fees charged to liquidity consumers and incentives provided to liquidity providers as in direct stock exchanges. Negative fees correspond to rebates to consumers, and negative incentives correspond to fees charged to providers. Thus, negative fee and negative incentive reflect inverted stock exchanges. We observe the following trends from Figure \ref{fig:avg_policy}.
\paragraph{Exchange fees and incentives}
As $\eta$ increases for a given $\beta$, we see that the exchange starts charging more fees from liquidity consumers. For some $\beta$, we see that the exchange moves from initially providing incentives to liquidity consumers to charging them. When $\beta$ increases given high values of $\eta$, going from considering equitability to consumer investors to that for MMs, the fees to consumers increase. We see similar trends in the exchange incentives for liquidity providers. As $\eta$ increases for a given $\beta$, we see that the exchange starts providing more incentives to liquidity providers. For some $\beta$, we see that the exchange moves from initially charging fees to liquidity providers to giving them incentives. When $\beta$ increases given high values of $\eta$, going from considering equitability to consumer investors to that for MMs, the incentives to providers increase.
The above two points say that when the exchange is looking at equitability to only consumer investors, increasing the equitability metric makes it switch from an inverted exchange to a direct exchange. This is in line with popular opinion about direct exchanges being more equitable to consumer investors than inverted exchanges.
\paragraph{Exchange and MM profits}
As $\eta$ increases for fixed $\beta$, we see exchange profits decreasing as it strives to be more equitable. For fixed $\eta$, as $\beta$ is increased to consider equitability to the MM, the exchange profit increases in line with MM profits. Similarly, we see that MM profits increase as $\beta$ is increased to favour equitability to the MM group.
\paragraph{Consumer profits and equitability}
As the equitability weight $\eta$ increases for a given $\beta$, we see consumer profits increase. For a fixed high value of the equitability weight $\eta$, when $\beta$ increases, going from considering equitability to consumer investors to that for MMs, we interestingly see that consumer profits increase. This is to say that the MMs are incentivized to provide liquidity to consumers in an equitable fashion. For the equitability reward (\ref{eq:r_ex_equitability}), we see that it increases as the weight to the MM group is increased. This again indicates that focusing solely on equitability to the MM group helps equitability in the entire marketplace, since the MM is then incentivized to provide liquidity in an equitable fashion (at the cost of low exchange profits).
\section{Discussion and Conclusion}
In this paper, we used reinforcement learning to design a dynamic fee schedule for a marketplace agent that makes the marketplace equitable while ensuring profitability for trading agents.
We see that the choice of equitability parameters define the nature of learnt policies for strategic marketplace agents. The learnt policies start favoring the agent group with the highest equitability weight. We observe that such a setup can be used to design marketplace incentives to wholesale agents to influence them to make marketplaces more equitable.
\begin{acks}
This paper was prepared for informational purposes by the Artificial Intelligence Research group of JPMorgan Chase \& Co. and its affiliates (``JP Morgan''), and is not a product of the Research Department of JP Morgan. JP Morgan makes no representation and warranty whatsoever and disclaims all liability, for the completeness, accuracy or reliability of the information contained herein. This document is not intended as investment research or investment advice, or a recommendation, offer or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction, and shall not constitute a solicitation under any jurisdiction or to any person, if such solicitation under such jurisdiction or to such person would be unlawful.
\end{acks}
\printbibliography
\section{Introduction}\label{sec:marketplace}
We consider a marketplace with trading agents that play the roles of buyers and/or sellers of goods or services. The trading agents are of different types based on the frequency at which they trade, the size of their trading orders, whether they consistently buy or sell or both, and the strategy that they use for trading. The seller agents can range from manufacturers that offer goods for sale in high volumes, to individual and retail sellers that sell much less frequently. The buyer agents use the platform to compare prices available from different seller agents, and to buy goods from them. The \emph{marketplace} agent is one that facilitates trading between buyer and seller agents by providing them access to marketplace communication and an order matching engine. It also charges trading agents fees for the facilities it provides; these fees serve as profits for the marketplace agent. Further, the profits made by trading agents through goods exchange are offset by the fees paid to the marketplace.
\emph{Wholesale} agents are both buyers and sellers of the goods that are traded in the marketplace. They are characterized by their large trading volumes and high trading frequencies. Wholesale agents can improve liquidity in the marketplace by frequent buying and selling of large volumes of goods. It is for this reason that the marketplace agent often offers fee rebates to wholesale agents for their function of ensuring the presence of orders on the opposite side of every arriving buy or sell order. There are \emph{other} agent groups in the marketplace ecosystem that trade goods based on their perception of the long term value of a good, or market momentum signals. \emph{Consumer} agents are characterized by trading on demand. They arrive at the marketplace at random times and trade smaller volumes without using informed trading strategies. They can therefore potentially trade at inferior prices, raising questions about marketplace equitability to consumer agents. Consider an example where a consumer agent needs to trade during a period of marketplace distress when there is little supply of goods offered by wholesale agents. Under such conditions, the consumer agent might trade at an inferior price, resulting in an execution that may be perceived as inequitable compared to those of other agents who happened to trade in a more stable marketplace.
Examples of marketplace ecosystems that can be described by the above framework include e-commerce trading platforms as well as stock exchange markets. For instance, wholesale agents such as publishing houses as well as other small seller agents (such as bookstores) can list books for a fee on e-commerce trading marketplaces, which allows them to sell books to individual consumer agents and other buyers. In stock exchange markets, market makers provide liquidity to the exchange on both buy and sell sides of the market for exchange incentives. This action enables market makers to profit, and other market agents to trade assets with each other. Consumer agents such as individuals who trade on demand without sophisticated trading strategies and technology can be vulnerable to rapid price changes and volatility.
Simulations have previously been used to answer questions that are of interest to participants of financial markets, hedge funds, banks and stock exchanges. In \cite{nasdaq_tick_size}, the authors investigated the use of an intelligent tick structure that replaces the currently constant tick size for all stocks with different tick sizes for different stocks. Fairness and equitability in markets have become increasingly important as described in \cite{sec_options,sec_securities}. In this paper, we investigate the impact of a reduction in marketplace fees charged to wholesale agents on equitability outcomes for consumer agents in a marketplace simulator. We show that such fee reductions incentivize wholesale agents to enable equitable provision of goods to consumer agents in the marketplace (see Figure \ref{fig:marketplace}). Specifically, we demonstrate that an equitable marketplace mechanism can be enabled by a dynamic marketplace fee policy derived by reinforcement learning in a simulated stock exchange market.
\section{Background and related work}
\subsection{Equitability Metric}
Equitability has conventionally been studied in political philosophy and ethics \cite{johnrawls}, economics \cite{hmoulin}, and public resource distribution \cite{peytonyoung}. In recent times, there has been a renewed interest in quantifying equitability in classification tasks \cite{dwork2012fairness}. Literature on fairness in machine learning studies two main notions of equitability: group fairness and individual fairness. Group fairness ensures some form of statistical parity (e.g. between positive outcomes, or errors) for members of different groups. On the other hand, individual fairness ensures that individuals who are `similar' with respect to the task at hand receive similar outcomes \cite{binns2020apparentfairness}. In \cite{dwarakanath2021profit}, the authors studied the effect a market maker can have on individual fairness for consumer traders by adjusting its parameters. A negative correlation was observed between the profits of the market maker and equitability. Hence, the market maker incurs a cost while enabling individual fairness to consumer traders. This motivates the idea of designing a marketplace in which the market maker can be compensated by the exchange for making equitable markets.
\begin{figure}[tb]
\centering
\includegraphics[width=\linewidth]{marketplace.jpg}
\caption{Schematic of marketplace ecosystem}
\label{fig:marketplace}
\end{figure}
In this paper, we are interested in understanding the effects of marketplace fees on equitability to trading agents. We draw from the entropy metric used in \cite{dwarakanath2021profit} to measure individual fairness within a single group of (consumer) traders. We seek an equitability metric that can capture equitability both within each group as well as that across groups. The authors of \cite{cowell1981gei,algo_unfairness} describe the family of generalized entropy indices (GEI) that satisfy the property of subgroup-decomposability, i.e. the inequity measure over an entire population can be decomposed into the sum of a between-group inequity component (similar to group fairness metrics in \cite{dwork2012fairness,aif360}) and a within-group inequity component (similar to the individual fairness entropy metric used in \cite{dwarakanath2021profit}). Given observations $Y=\lbrace y_1,y_2,\cdots,y_n\rbrace$ of outcomes of $n$ agents, the generalized entropy index is defined for parameter $\kappa\neq0,1$ as
\begin{align}
\textnormal{GE}_{\kappa}(Y)=\frac{1}{n\kappa(\kappa-1)}\sum_{i=1}^n\left(\left(\frac{y_i}{\mu}\right)^\kappa-1\right)\label{eq:gei_defn}
\end{align}
where $\mu:=\frac{1}{n}\sum_{i=1}^{n}y_i$ is the average outcome. Note that $\textnormal{GE}_{\kappa}(Y)$ is a measure of inequity, with the most equitable scenario resulting from $y_i=c$ for all $i=1,2,\cdots,n$ and some constant $c$. If we take $y_i$ to be the profit of trading agent $i$, then the most equitable scenario corresponds to having equal profits for all agents.
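To fix ideas, the following minimal R sketch (the function name \texttt{gei} is ours, for illustration only) computes (\ref{eq:gei_defn}) for a vector of outcomes.
\begin{lstlisting}
# Generalized entropy index GE_kappa(Y) for kappa not in {0,1}.
gei <- function(y, kappa = 2) {
  n  <- length(y)
  mu <- mean(y)
  sum((y / mu)^kappa - 1) / (n * kappa * (kappa - 1))
}
gei(rep(5, 10))      # equal outcomes: index is 0
gei(c(1, 2, 3, 10))  # unequal outcomes: index is positive
\end{lstlisting}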
With the population divided into $G$ groups of agents, one can decompose the right hand side of equation (\ref{eq:gei_defn}) as \begin{align}
\textnormal{GE}_\kappa(Y)&=\sum_{g=1}^{G}\frac{n_g}{n}\left(\frac{\mu_g}{\mu}\right)^\kappa \textnormal{GE}_\kappa(Y_g)\label{eq:gei1}\\
&+\sum_{g=1}^{G}\frac{n_g}{n\kappa(\kappa-1)}\left(\left(\frac{\mu_g}{\mu}\right)^\kappa-1\right)\label{eq:gei2}
\end{align}
where $Y_g=\lbrace y_i:i\in g\rbrace$ is the set of outcomes of agents in group $g$, $n_g$ is the number of agents in group $g$, and $\mu_g:=\frac{1}{n_g}\sum_{i\in g}y_i$ is the average outcome in group $g$. The term in (\ref{eq:gei1}) captures the within-group inequity (similar to the entropy metric for individual fairness in \cite{dwarakanath2021profit}) for each group $g$, weighted by its population. The term in (\ref{eq:gei2}) captures the between-group inequity by comparing the average outcome in the entire population against that in group $g$.
We propose a weighted version of (\ref{eq:gei1})-(\ref{eq:gei2}) with weight $w=\lbrace w_g:g=1,2,\cdots,G\rbrace$ where $w_g\geq0$ for all $g=1,2,\cdots,G$ and $\sum_{g=1}^{G}w_g=1$ as
\begin{equation}
\begin{aligned}
\textnormal{GE}^w_\kappa(Y)&=\sum_{g=1}^Gw_g\cdot\frac{n_g}{n}\left(\frac{\mu_g}{\mu}\right)^\kappa \textnormal{GE}_\kappa(Y_g)\\
&+\sum_{g=1}^Gw_g\cdot\frac{n_g}{n\kappa(\kappa-1)}\left(\left(\frac{\mu_g}{\mu}\right)^\kappa-1\right)
\end{aligned}\label{eq:w_gei}
\end{equation}
Note that the equitability metric defined in (\ref{eq:w_gei}) provides extended flexibility over the original definition (\ref{eq:gei_defn}) by enabling the user to focus on a specific agent group $l$ by setting $w_l=1$ and $w_g=0$ for all $g\neq l$. For ease of notation, we establish the following group correspondence for the three types of trading agents in our marketplace described in section \ref{sec:marketplace}. Let $g=1$ correspond to wholesale agents, $g=2$ to consumer agents and $g=3$ to other agents. We use the negative of (\ref{eq:w_gei}) as the metric for equitability in what follows.
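Under the same caveat, the weighted index (\ref{eq:w_gei}) can be computed group by group, reusing the hypothetical helper \texttt{gei} above.
\begin{lstlisting}
# Weighted GEI of (eq:w_gei): y = outcomes, grp = factor of
# group labels, w = group weights summing to one (ordered as
# levels(grp)), kappa even so negative outcomes are allowed.
weighted_gei <- function(y, grp, w, kappa = 2) {
  n <- length(y); mu <- mean(y); total <- 0
  for (g in seq_along(levels(grp))) {
    yg <- y[grp == levels(grp)[g]]
    ng <- length(yg); mug <- mean(yg)
    within  <- (ng / n) * (mug / mu)^kappa * gei(yg, kappa)
    between <- ng * ((mug / mu)^kappa - 1) / (n * kappa * (kappa - 1))
    total   <- total + w[g] * (within + between)
  }
  total
}
\end{lstlisting}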
\subsection{Reinforcement Learning}
Our marketplace ecosystem consists of multiple interacting trading agents as in Figure \ref{fig:marketplace}.
Such an ecosystem is well modeled as a multi-agent system -- a system comprised of multiple autonomous agents interacting with each other in a common environment which they each observe and act upon. The behaviours of these agents can be defined beforehand using certain rules or expert knowledge, or learnt on the go. Reinforcement learning (RL) has become a popular approach to learn agent behavior given certain objectives that are to be improved upon \cite{sutton2018reinforcement,kaelbling1996reinforcement}. An RL agent seeks to modify its behaviour based on rewards received upon its interaction with its dynamic environment.
For the single-agent RL task, there exist well-understood algorithms whose convergence and consistency properties have been studied extensively.
An environment with multiple learning agents is modeled in the form of Markov Games (MGs) or stochastic games \cite{zhang2021marl,shapley1953stochastic}. An MG is a tuple $(\mathcal{N},\mathcal{S},\lbrace\mathcal{A}_i:i=1,2,\cdots,n\rbrace,\mathcal{T},\lbrace R_i:i=1,2,\cdots,n\rbrace,\gamma,T)$ comprising the set $\mathcal{N}=\lbrace1,2,\cdots,n\rbrace$ of all agents, the joint state space, the action spaces of all agents, a model of the environment giving the probability of transitioning from one joint state to another given the actions of all agents, the reward functions of all agents, the discount factor and the time horizon respectively \cite{zhang2021marl}. The goal of each agent is to maximize the expected sum of its own discounted rewards, which now depend on the actions of other agents as well. While it is tempting to use RL for multi-agent systems, it comes with a set of challenges, the main one being the presence of multiple agents that are learning to act in the presence of one another \cite{busoniu2008comprehensivemarl}. We deal with this by adopting an iterative learning framework where our learning agents take turns updating their value functions while the other learning agents keep their policies fixed. Such an iterative learning framework was used in \cite{zheng2020aieconomist} to simultaneously learn economic actors and a tax planner in an economic system.
The general framework of using RL for mechanism design was previously considered in \cite{rl_mechanismdesign}. Mechanism design using RL for e-commerce applications was studied in \cite{mechanism_ecommerce}. In this paper, we show how to use RL for equitable marketplace mechanism design with our discussion focused on financial markets.
\subsection{Stock exchange markets}
We now concentrate on stock exchange markets (such as Nasdaq or New York Stock Exchange) which can be viewed as instances of our generic trading marketplace with stocks being the goods traded between agents.
Stock trading agents can belong to many categories: market makers, consumer investors, fundamental investors, momentum investors, etc. Market makers are investors that are obliged to continuously provide liquidity to both buyers and sellers regardless of market conditions.
They act as both buyers and sellers of the stock and have more frequent trades with larger order volumes as compared to the other categories of investors.
Fundamental investors and momentum investors use the exogenous stock value or long term averages of the stock to trade unilaterally (buy or sell, not both) at different times in the trading day; they also have more frequent trades (albeit unilateral) than the category of consumer investors, who trade purely based on demand without any other considerations \cite{kyle1985continuous}.
Irrespective of type, the objective of all market investors is to make profits from their trading actions.
The aforementioned investor categories can be mapped to our marketplace ecosystem as follows: exchange (marketplace agent), market makers (wholesale agents), consumer investors (consumer agents) and fundamental and momentum investors (other agents); see Figure~\ref{fig:marketplace} for the agent categories.
The exchange charges investors fees for the facilities it provides on its platform. These fees typically differ based on investor category, and serve as profits for the exchange. Direct (regular) stock exchanges such as NYSE, Nasdaq and Cboe provide incentives to market makers for liquidity provision \cite{lightspeed_inverted}. On the contrary, inverted stock exchanges such as NYSE National, Nasdaq BX, Cboe EDGA and Cboe BYX charge market makers for providing liquidity. The reasons for such fee structures range from ensuring market efficiency to faster order fills in different exchanges \cite{nasdaq_inverted,cboe_inverted}.
\begin{figure}[tb]
\centering
\includegraphics[width=\linewidth]{lob.jpg}
\caption{Snapshot of order queue maintained by the exchange}
\label{fig:lob}
\end{figure}
\subsection{Simulator}\label{subsec:simulator}
In order to play out the interactions between agents in a stock exchange market, we employ a multi-agent exchange market simulator called ABIDES \cite{abides,amrouni2021abides}. ABIDES provides a selection of background trading agents with different trading behaviors and incentives. The simulation engine manages the flow of time and handles all inter-agent communication. The first category of simulated trading agents is that of {\it{market makers}} (denoted MMs henceforth) that continuously quote prices on both the buy and sell sides of the market, and earn the difference between the best buy and sell prices if orders execute on both sides (see Figure \ref{fig:lob}). MMs act as intermediaries and essentially eliminate `air pockets' between existing buyers and sellers. In this work, we define a MM by the stylized parameters that follow from its regulatory definition \cite{wah2017welfare,chakraborty2011market}. At every time $t$, the MM places new price quotes of constant order size $I$ at $d$ price increments around the current stock price $p_t$ in cents, i.e., it places buy orders at prices $p_t-h-d, \ldots, p_t-h$ and sell orders at prices $p_t + h, \ldots, p_t + h +d$, where $d$ is the depth of placement and $h$ is the half-spread chosen by the MM at time $t$. Figure \ref{fig:lob} shows an example snapshot of orders collected at the exchange with $p_t=10000.5$ cents and $h=0.5$ cents, displaying $d=2$ levels on either side of the current stock price. Each blue/orange rectangle represents a block of sell/buy orders placed in the order queue.
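As a small illustration of this quoting rule (our sketch, with one-cent price increments assumed and the parameter values of the Figure \ref{fig:lob} example):
\begin{lstlisting}
# Price levels quoted by the stylized MM: buy orders at
# p_t - h - d, ..., p_t - h and sell orders at
# p_t + h, ..., p_t + h + d.
mm_quotes <- function(p_t, h, d) {
  list(buy  = seq(p_t - h - d, p_t - h, by = 1),
       sell = seq(p_t + h, p_t + h + d, by = 1))
}
mm_quotes(10000.5, h = 0.5, d = 2)
# $buy:  9998  9999 10000    $sell: 10001 10002 10003
\end{lstlisting}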
ABIDES also contains other strategy-based investors such as fundamental investors and momentum investors. The fundamental investors trade in line with their belief of the exogenous stock value (which we call fundamental price), without any view of the market microstructure \cite{kyle1985continuous}.
In this paper, we model the fundamental price of an asset by its historical price series. Each fundamental investor arrives at the market according to a Poisson process, and chooses to buy or sell a stock depending on whether it is cheap or expensive relative to its noisy observation of the fundamental. On the other hand, the momentum investors follow a simple momentum strategy of comparing a long-term average of the price with a short-term average. If the short-term average is higher than the long-term average, the investor buys since the price is seen to be rising, and vice versa for selling.
Further, ABIDES is equipped with {\it{consumer investors}} that are designed to emulate consumer agents who trade on demand. Each consumer investor trades once a day by placing an order of a random size in a random direction (buy or sell).
\section{Problem Setup}\label{sec:problem_setup}
In this paper, we take a mechanism-design-based approach that uses RL to derive a dynamic fee schedule optimizing for equitability of investors as well as for profits of the exchange. The dynamic nature of the fee schedule is inspired by the idea that exchange fees at any time during a trading day must be contingent on the current market conditions. An important point regarding the use of dynamic exchange fee schedules is that other investors could change their trading strategies in response to varying fees. Therefore, we consider an RL setup with two learning agents interacting with each other. We use ABIDES as a stock exchange simulation platform with an exchange agent that learns to update its fee schedule, and a MM agent that learns to update its trading strategy (see Figure \ref{fig:marketplace} for a schematic). The remaining investors have static rule-based policies that do not use learning. We formulate this learning scenario as an RL problem by representing the exchange with investors as a Markov Game (MG).
\subsection{State of Markov Game}
The state for our MG captures the {\bf{shared states}} of the learning MM, learning exchange and the market comprising other (non-learning) investors as \begin{align}
s=\begin{bmatrix}inventory & fee & incentive & market\ signals\end{bmatrix}\label{eq:state}
\end{align}
where $inventory$ is the number of shares of stock held in the MM's inventory, $fee$ refers to exchange trading fees per unit of stock charged to liquidity consumers such as consumer investors, and $incentive$ refers to exchange incentives per unit of stock given out to liquidity providers for their services. By convention, negative values for $fee$ and $incentive$ imply providing rebates to liquidity consumers, and charging fees to liquidity providers, respectively. $market\ signals$ contains signals such as
\begin{align}
imbalance=\frac{total\ buy\ volume}{total\ buy\ volume +total\ sell\ volume}\nonumber
\end{align}
which is the volume imbalance in buy and sell orders in the market, $spread$ which is the difference between best sell and best buy prices, and $midprice$ which is the current mid-point of the best buy and sell prices of the stock (also called the stock price). Although the exchange may have access to the $inventory$ state for the MM, we design its policy to only depend on the latter three states.
\subsection{Actions and rewards of the learning MM}
The \textbf{actions} of the learning MM comprise the stylized parameters $half-spread$ and $depth$, which define the price levels at which the MM places buy and sell orders (as described in section \ref{subsec:simulator}), and are denoted by\begin{align}
a_{\textnormal{MM}}=\begin{bmatrix}half-spread&depth\end{bmatrix}\nonumber
\end{align}
While the MM profits from its trading actions, it also receives incentives from the exchange for all units of liquidity provided (negative values for which correspond to paying out fees to the exchange). Therefore, we define the \textbf{reward} $R_{\textnormal{MM}}$, that captures all MM profits and losses, by\begin{equation}
\begin{aligned}
R_{\textnormal{MM}}&=\textnormal{Trading profits}\\
&+\lambda\cdot \left(incentive\times\textnormal{units of liquidity provided by MM}\right)
\end{aligned}\label{eq:r_mm}
\end{equation}
where $\lambda\geq0$ is a weighting parameter for the importance given by the MM to exchange incentives. Note that although it makes monetary sense to have $\lambda=1$, one can theoretically examine the effects of varying $\lambda$ across other values, since the reward function in RL does not need to exactly map to monetary profits. The objective of a reinforcement learning MM is to find a policy that maximizes the expected sum of discounted rewards (\ref{eq:r_mm}).
\subsection{Actions and rewards of the learning exchange}
The \textbf{actions} for the exchange involve specifying fees and incentives per unit of stock placed by liquidity consumers and providers respectively denoted by \begin{align}
a_{\textnormal{Ex}}=\begin{bmatrix}fee & incentive \end{bmatrix}\nonumber
\end{align} to entirely specify the next states of $fee$ and $incentive$ in (\ref{eq:state}).
In order to write down the {\bf{rewards}} for the exchange, we need a way to numerically quantify equitability alongside its profits. We use the negative of the weighted generalized entropy index defined in (\ref{eq:w_gei}), with the outcome $y_i$ for each investor $i\in\lbrace1,2,\cdots,n\rbrace$ being its profits at the end of the trading day. Since investors can also make losses, $y_i$ and hence $\mu$ and $\mu_g$ can take on negative values in (\ref{eq:w_gei}). This restricts the choice of $\kappa$ to even values. We choose $\kappa=2$ as in \cite{algo_unfairness}, since higher values give spiky values for $\textnormal{GE}^w_\kappa$, hindering learning. For this work, we are interested in weights of the form $w=\begin{bmatrix}\beta&1-\beta&0\end{bmatrix}$ that, for ease of understanding, consider equitability only to MMs and consumer investors, with $\beta$ called the GEI weight.
Although the trading agents arrive at random times during a trading day, the equitability reward (\ref{eq:w_gei}) computed at the end of a trading day can be distributed throughout the day as follows. Define the equitability reward computed at every time step $t\in[T]$ to be the change in (\ref{eq:w_gei}) from $t-1$ to $t$ as
\begin{align}
R_\mathrm{Equitability}^t=
-\textnormal{GE}^\beta_2\left(Y^{t}\right)+\textnormal{GE}^\beta_2\left(Y^{t-1}\right)\label{eq:r_ex_equitability}
\end{align}
where $Y^t$ is the vector of profits for all investors $\lbrace1,2,\cdots,n\rbrace$ up to time $t$ and $\textnormal{GE}^\beta_2\left(Y^{0}\right):=0$.
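A rough sketch of this telescoping reward, reusing the hypothetical \texttt{weighted\_gei} helper from before, is given below; summed over $t=1,\ldots,T$, the terms collapse to $-\textnormal{GE}^\beta_2(Y^T)$.
\begin{lstlisting}
# Per-step equitability reward: change in the negative weighted
# GEI of cumulative profits between t-1 and t.
equitability_reward <- function(y_prev, y_curr, grp, beta) {
  w <- c(beta, 1 - beta, 0)  # weights for (MM, consumer, other)
  -weighted_gei(y_curr, grp, w) + weighted_gei(y_prev, grp, w)
}
\end{lstlisting}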
The profits made by the exchange are given by the difference between the fees received from liquidity consumers and the incentives given out to liquidity providers over all traded units of stock as
\begin{align}
R_{\textnormal{Profits for Ex}}&=\textnormal{Fees}-\textnormal{Incentives}\label{eq:r_ex_profits}\\
&=fee\times\textnormal{units of liquidity consumed}\nonumber\\
&-incentive\times\textnormal{units of liquidity provided}\nonumber
\end{align}
Having quantified the two rewards of profits and equitability for the exchange, we use a weighted combination of the two as the {\bf{rewards}} for our learning exchange\begin{align}
R_{\textnormal{Ex}}=R_{\textnormal{Profits for Ex}}+\eta\cdot R_\mathrm{Equitability}\label{eq:r_ex}
\end{align}
where $\eta\geq0$ is a parameter called the equitability weight, which has the interpretation of monetary benefits in \$ per unit of equitability. Equation (\ref{eq:r_ex}) is also motivated from a constrained optimization perspective as being the objective in the unconstrained relaxation \cite{boyd2004convex} of the problem $\max R_{\textnormal{Profits for Ex}}\textnormal{ s.t. }R_\mathrm{Equitability}\geq c$. With the rewards defined in equations (\ref{eq:r_ex_equitability})-(\ref{eq:r_ex}) and discount factor $\gamma=1$, the RL objective for the exchange can be written as
\begin{align}
\mathbb{E}\left[\sum_{t=1}^T\left(\textnormal{Fees}-\textnormal{Incentives}\right)\right]-\eta\cdot\mathbb{E}\left[\textnormal{GE}^\beta_2\left(Y^{T}\right)\right]\nonumber
\end{align}
where $\mathbb{E}[X]$ denotes the expected value of a random variable $X$. Hence, the objective of the equitable exchange is to learn a fee schedule that maximizes its profits over a given time horizon, while minimizing inequity to investors.
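Putting the pieces together, a per-step exchange reward of the form (\ref{eq:r_ex}) could be computed as in the sketch below (again reusing the hypothetical helpers above).
\begin{lstlisting}
# Per-step exchange reward: trading profits plus the
# equitability term weighted by eta.
exchange_reward <- function(fee, incentive, vol_consumed,
                            vol_provided, y_prev, y_curr,
                            grp, beta, eta) {
  profit <- fee * vol_consumed - incentive * vol_provided
  profit + eta * equitability_reward(y_prev, y_curr, grp, beta)
}
\end{lstlisting}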
Having outlined our MG, we estimate the optimal policy using both tabular Q-learning (QL) \cite{watkins1992q} and the policy gradient method called Proximal Policy Optimization (PPO) from the library RLlib \cite{rllib}. Tabular QL estimates the optimal Q functions for the exchange and MM, which are subsequently used to compute policies determining the dynamic fee schedule and MM trading strategy, respectively.
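For reference, the one-step tabular Q update underlying this scheme takes the standard form sketched below (a generic illustration over discretized state and action bins, not our actual training code).
\begin{lstlisting}
# One Q-learning update: Q is a (#states x #actions) matrix,
# (s, a, r, s_next) a transition, alpha the learning rate and
# gamma the discount factor.
q_update <- function(Q, s, a, r, s_next, alpha, gamma) {
  target <- r + gamma * max(Q[s_next, ])
  Q[s, a] <- Q[s, a] + alpha * (target - Q[s, a])
  Q
}
\end{lstlisting}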
\section{Experiments}
Given the MG formulated in the previous section, we use tabular QL with discretized states, as well as the policy gradient method called Proximal Policy Optimization (PPO) from the RLlib package \cite{ppo,rllib} with continuous states, to estimate policies for the learning MM and exchange (denoted Ex or EX).
\subsection{Numerics}
The time horizon of interest is a single trading day from 9:30am until 4:00pm. Therefore, we set $\gamma=0.9999$ to ensure that traders do not undervalue money at the end of the trading day as compared to that at the beginning. Both the learning MM and exchange take an action every minute, giving $T=390$ steps per episode. We also normalize the states and rewards to lie within the range $[-1,1]$. The precise numerics of our learning experiments that are common to both tabular QL and PPO algorithms are given in Table \ref{tab:expt_numerics}. The values for every $(fee,incentive)$ pair are to be read as follows\footnote{The fees charged and incentives given out by real exchanges are of the order of $0.10$ cents per share \cite{nyse_prices}, informing our choice of exchange actions listed in Table \ref{tab:expt_numerics}.}. If $(fee,incentive)=(0.30,0.25)$ cents, the exchange would charge 0.30 cents per trade executed by a liquidity consumer, and provide 0.25 cents of incentives per trade executed by a liquidity provider. Similarly, if $(fee,incentive)=(-0.30,-0.25)$ cents, the exchange would provide 0.30 cents of rebate per trade executed by a liquidity consumer, and charge 0.25 cents per trade executed by a liquidity provider.
\subsection{Training and convergence}
To make our problem suited to the use of tabular QL, we discretize our states by binning them. An important requirement for the functioning of the tabular QL algorithm is that there be enough visitations of each (state, action) pair. Accordingly, we pick our state discretization bins by observing the range of values taken in a sample experiment. We use an $\epsilon$-greedy approach to balance exploration and exploitation. Additionally, since convergence of tabular QL relies on \emph{adequate} visitation of each (state, action) pair, training is divided into three phases: pure exploration, pure exploitation and convergence. During the pure exploration phase, $\alpha_n$ and $\epsilon_n$ are both held constant at high values to facilitate the visitation of as many state-action discretization bins as possible. During the pure exploitation phase, $\epsilon_n$ is decayed to an intermediate value while $\alpha_n$ is held constant at its exploration value so that the Q table is updated to reflect the one-step optimal actions. After the pure exploration and pure exploitation phases, we have the convergence phase where both $\alpha_n$ and $\epsilon_n$ are decayed to facilitate convergence of the QL algorithm. The precise numerics specific to our tabular QL experiments are given in Table \ref{tab:tab_expt_numerics}.
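A minimal sketch of such a three-phase $\epsilon_n$ schedule follows; the endpoint values come from Table \ref{tab:tab_expt_numerics}, while the geometric interpolation between them is our assumption.
\begin{lstlisting}
# Epsilon over episodes 0..999: flat at 0.9 through episode 399,
# decayed to 0.1 by episode 599, then to 1e-5 by episode 999
# (geometric interpolation between tabled endpoints assumed).
epsilon_schedule <- function(ep) {
  if (ep <= 399)      0.9
  else if (ep <= 599) 0.9 * (0.1 / 0.9)^((ep - 399) / 200)
  else                0.1 * (1e-5 / 0.1)^((ep - 599) / 400)
}
\end{lstlisting}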
\begin{table}[tb]
\allowdisplaybreaks
\centering
\begin{tabular}{|p{0.4\linewidth}|p{0.55\linewidth}|}\hline
Total \# of training episodes & 2000\\\hline
$half-spread$ & $\lbrace0.5,1.0,1.5,2.0,2.5\rbrace$ cents\\\hline
$depth$ & $\lbrace1,2,3\rbrace$ cents\\\hline
\multirow{3}{0.4\linewidth}{$(fee,incentive)$} &
$\lbrace(0.30,0.30),(0.30,0.25),$\\
&$(0.25,0.30),(-0.30,-0.30),$\\
&$(-0.30,-0.25),(-0.25,-0.30)\rbrace$ cents\\\hline
$\gamma$ & $0.9999$\\\hline
$T$ & $390$\\\hline
$G$ & $3$\\\hline
$\kappa$ & $2$\\\hline
$\lambda$ & $1.0$\\\hline
$\beta$ & $\lbrace0.0,0.3,0.5,0.6,1.0\rbrace$\\\hline
$\eta$ & $\lbrace0,1,10,100,1000,10000\rbrace$\\\hline
\end{tabular}
\caption{Numerics common to tabular QL and PPO experiments}
\label{tab:expt_numerics}
\end{table}
\begin{table}[tb]
\centering
\begin{tabular}{|p{0.5\linewidth}|p{0.45\linewidth}|}\hline
\# of pure exploration episodes & 800 \\\hline
\# of pure exploitation episodes & 400 \\\hline
\# of convergence episodes & 800 \\\hline
$\alpha_0=\cdots=\alpha_{399}=\cdots=\alpha_{599}$ & $0.9$\\\hline
$\alpha_{999}$ & $10^{-5}$\\\hline
$\epsilon_0=\epsilon_1=\cdots=\epsilon_{399}$ & $0.9$\\\hline
$\epsilon_{599}$ & $0.1$ \\\hline
$\epsilon_{999}$ & $10^{-5}$\\\hline
\end{tabular}
\caption{Numerics specific to tabular QL experiments}
\label{tab:tab_expt_numerics}
\end{table}
\begin{figure*}[tb]
\centering
\includegraphics[width=\linewidth]{deep_vs_tab.eps}
\caption{Comparison of training rewards for tabular QL and PPO}
\label{fig:deep_v_tab}
\end{figure*}
\begin{figure*}[tb]
\centering
\includegraphics[width=\linewidth]{Reward.eps}
\caption{Training rewards with PPO}
\label{fig:training_rewards}
\end{figure*}
We additionally estimate the optimal policies for both learning agents using PPO with the default parameters in RLlib. We observe convergence in cumulative training rewards per episode for both methods for the range of values of the equitability weight $\eta$ and the GEI weight $\beta$ given in Table \ref{tab:expt_numerics}. Figure \ref{fig:deep_v_tab} is a plot comparing the cumulative training rewards for the learning MM and learning exchange for $\beta\in\lbrace0.0,0.5\rbrace$ and $\eta=1.0$. We see that PPO is able to achieve higher cumulative training rewards for both learning agents. Figure \ref{fig:training_rewards} is a plot of cumulative training rewards achieved using PPO alone for a wide range of values of $(\beta,\eta)$. We see that the cumulative training rewards converge, enabling us to estimate optimal exchange fee schedules and MM actions simultaneously for the range of weights $(\eta,\beta)$ considered.
\subsection{Explaining learnt policies}
\begin{figure*}[tb]
\centering
\includegraphics[width=\linewidth]{avg_quantities.eps}
\caption{Average policies and profits of MM and exchange with varying $(\eta,\beta)$ for deep Q learning}
\label{fig:avg_policy}
\end{figure*}
We now explain intuitively the effects of the parameters $(\eta,\beta)$ on the learnt exchange and MM policies. Increasing $\beta$ from 0 to 1 increases the GEI weight given to MMs relative to consumer investors in (\ref{eq:w_gei}); note that the within-group term $\textnormal{GE}_2(Y_{\textnormal{MM}})=0$ since there is a single MM. While $\beta=0$ accounts for equitability to only the consumer investor group, $\beta=1$ corresponds to the case where the GEI metric captures (between-group) equitability to only MMs. On the other hand, increasing $\eta$ increases the weight of equitability in the exchange reward (\ref{eq:r_ex}).
Figure \ref{fig:avg_policy} plots average policies and resulting profits for the EX and MM for various $(\eta,\beta)$ pairs\footnote{All profits are normalized, and hence unitless. EX fees and incentives are in cents.}. The average policies are obtained by averaging the learnt policy (which maps the current state to the estimated (optimal) action to be taken in that state) under a uniform distribution on the states. By convention, we report fees charged to liquidity consumers and incentives provided to liquidity providers, as in direct stock exchanges. Negative fees correspond to rebates to consumers, and negative incentives correspond to fees charged to providers. Thus, a negative fee and a negative incentive reflect inverted stock exchanges. We observe the following trends from Figure \ref{fig:avg_policy}.
\paragraph{Exchange fees and incentives}
As $\eta$ increases for a given $\beta$, the exchange charges higher fees to liquidity consumers. For some $\beta$, the exchange moves from initially providing incentives to liquidity consumers to charging them. When $\beta$ increases at high values of $\eta$, going from considering equitability to consumer investors to that for MMs, the fees to consumers increase. We see similar trends in the exchange incentives for liquidity providers. As $\eta$ increases for a given $\beta$, the exchange provides larger incentives to liquidity providers. For some $\beta$, the exchange moves from initially charging fees to liquidity providers to giving them incentives. When $\beta$ increases at high values of $\eta$, the incentives to providers likewise increase.
Taken together, these trends indicate that when the exchange considers equitability to only consumer investors, increasing the equitability weight makes it switch from an inverted exchange to a direct exchange. This is in line with the popular opinion that direct exchanges are more equitable to consumer investors than inverted exchanges.
\paragraph{Exchange and MM profits}
As $\eta$ increases for fixed $\beta$, we see exchange profits decreasing as it strives to be more equitable. For fixed $\eta$, as $\beta$ is increased to consider equitability to the MM, the exchange profit increases in line with MM profits. Similarly, we see that MM profits increase as $\beta$ is increased to favour equitability to the MM group.
\paragraph{Consumer profits and equitability}
As the equitability weight $\eta$ increases for a given $\beta$, consumer profits increase. For a fixed high value of $\eta$, when $\beta$ increases, going from considering equitability to consumer investors to that for MMs, we interestingly see that consumer profits increase. That is, the MMs are incentivized to provide liquidity to consumers in an equitable fashion. The equitability reward (\ref{eq:r_ex_equitability}) also increases as the weight given to the MM group is increased. As noted above, this suggests that focusing solely on equitability to the MM group helps equitability in the entire marketplace, since the MM is then incentivized to provide liquidity in an equitable fashion (at the cost of lower exchange profits).
\section{Discussion and Conclusion}
In this paper, we used reinforcement learning to design a dynamic fee schedule for a marketplace agent that makes the marketplace equitable while ensuring profitability for trading agents.
We see that the choice of equitability parameters defines the nature of the learnt policies for strategic marketplace agents. The learnt policies increasingly favor the agent group with the highest equitability weight. We observe that such a setup can be used to design marketplace incentives that influence wholesale agents to make marketplaces more equitable.
\begin{acks}
This paper was prepared for informational purposes by the Artificial Intelligence Research group of JPMorgan Chase \& Co. and its affiliates (``JP Morgan''), and is not a product of the Research Department of JP Morgan. JP Morgan makes no representation and warranty whatsoever and disclaims all liability, for the completeness, accuracy or reliability of the information contained herein. This document is not intended as investment research or investment advice, or a recommendation, offer or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction, and shall not constitute a solicitation under any jurisdiction or to any person, if such solicitation under such jurisdiction or to such person would be unlawful.
\end{acks}
\printbibliography
\section{Introduction}
\label{intro}
Skewness of a random variable $X$ satisfying $E\left( \left\vert
X\right\vert ^{3}\right) <+\infty $ is often measured by its third
standardized cumulant
\begin{equation}
\gamma _{1}\left( X\right) =\frac{E\left[ \left( X-\mu\right) ^{3}\right] }{\sigma ^{3}},
\end{equation}
where $\mu $ and $\sigma
$ are the mean and the standard deviation of $X$. The squared third
standardized cumulant $\beta _{1}\left( X\right) =\gamma _{1}^{2}\left(
X\right) $, known as Pearson's skewness, is also used. The numerator of
$\gamma _{1}\left( X\right) $, that is
\begin{equation}
\kappa_{3}\left( X\right)=E\left[ \left( X-\mu \right) ^{3}\right] ,
\end{equation}
is the third cumulant (i.e. the third central moment) of $X$.
Similarly, the third moment (cumulant) of a random vector is a matrix
containing all moments (cumulants) of order three which can be obtained
from the random vector itself.\\
Statistical applications of the third moment
include, but are not limited to: factor analysis (\citealp{BonhommeRobin2009}; \citealp{Mooijaart1985}), density approximation
(\citealp{ ChristiansenLoperfido2014}; \citealp{ Loperfido2019}; \citealp{VanHulle2005}), independent
component analysis \citep{PaajarviLeblanc2004}, financial
econometrics (\citealp{DeLucaLoperfido2015}; \citealp{ElyasianiMansur2017}), cluster analysis (\citealp{KabanGirolami2000}; \citealp{Loperfido2013}; \citealp{Loperfido2015a};
\citealp{Loperfido2019}; \citealp{TarpeyLoperfido2015}), Edgeworth expansions (\citealp{KolloRosen2005},
page 189), portfolio theory
(\citealp{JondeauRockinger2006}), linear models
(\citealp{Mardia1971}; \citealp{YinCook2003}), likelihood inference (\citealp{McCullaghCox1986}), projection pursuit
(\citealp{Loperfido2018}), time series (\citealp{DeLucaLoperfido2015}; \citealp{Fiorentinietal2016}), spatial statistics (\citealp{GentonHeLiu2001}; \citealp{KimMallick2003}; \citealp{Lark2015}). \\
The third cumulant of a $d-$dimensional random vector is a $d^{2}\times d$
matrix with at most $d\left( d+1\right) \left( d+2\right) /6$ distinct
elements. Since their number grows very quickly with the vector's dimension,
it is convenient to summarize the skewness of the random vector itself with
a scalar function of the third standardized cumulant, as for example
Mardia's skewness ({\citealp{Mardia1970}}), partial skewness (\citealp{Davis1980}; \citealp{Isogai1983}; \citealp{Morietal1993}) or directional skewness (\citealp{MalkovichAfifi1973}). These measures have been
mainly used for testing multivariate normality, are invariant with respect
to one-to-one affine transformations and reduce to Pearson's skewness in the
univariate case. {\citet{Loperfido2015b}} reviews their main properties and
investigates their mutual connections.
Skewness might hamper the performance of several multivariate statistical
methods, as for example Hotelling's one-sample test (\citealp{Mardia1970}; \citealp{Everitt1979}; \citealp{Davis1982}). Symmetry is usually pursued
by means of power transformations, which unfortunately suffer from some
serious drawbacks: the
transformed variables are neither affine invariant nor robust to outliers
(\citealp{HubertVeeken2008}; \citealp{LinLin2010}). Moreover,
they might not be easily interpretable nor jointly normal. \cite{Loperfido2014,Loperfido2019} addressed these problems with symmetrizing linear
transformations.\\
The \proglang{R} packages \pkg{MaxSkew} and \pkg{MultiSkew} provide a
unified treatment of multivariate skewness by
detecting, measuring and alleviating skewness from multivariate data. Symmetry is
assessed by either visual inspection or formal testing. Skewness is measured
by the third multivariate cumulant and its scalar functions. Skewness is removed or at least alleviated
by projecting the data onto appropriate linear subspaces. To the best of our knowledge, no other statistical packages
compute bootstrap estimates of these skewness measures, the third cumulant, or
linear projections alleviating skewness.
The remainder of the paper is organized as follows. Section 2 reviews
the basic concepts of multivariate skewness within the frameworks of third
moments and projection pursuit. It also describes some skewness-related features of the Iris dataset. Section 3
illustrates the package \pkg{MaxSkew}. Section 4 describes the functions of \pkg{MultiSkew} related to symmetrization, third moments and skewness measures. Section 5 contains some concluding remarks and hints for improving the packages.
\section{Third moment}
\label{sec:1}
The third multivariate moment, that is the third moment of a random vector,
naturally generalizes to the multivariate case the third moment $E\left(
X^{3}\right) $ of a random variable $X$ whose third absolute moment is finite. It is defined as follows, for a $d-$dimensional random vector $\mathbf{x}=\left( X_{1},...,X_{d}\right) ^\top$ satisfying $E\left( \left\vert X_{i}^{3}\right\vert \right) <+\infty $, for $i=1,...,d$. The third moment of $\mathbf{x}$ is the $d^{2}\times d$ matrix
\begin{equation}
\mathbf{M}_{3,x}=E\left( \mathbf{x}\otimes \mathbf{x}^\top\otimes \mathbf{x}\right),
\end{equation}
where ``$\otimes $" denotes the Kronecker product (see, for
example, \citealp{Loperfido2015b}). In the following, when referring to the
third moment of a random vector, we shall implicitly assume that all
appropriate moments exist.
The third moment $\mathbf{M}_{3,x}$ of $\mathbf{x}=\left(
X_{1},...,X_{d}\right) ^\top$ contains $d^{3}$ elements of the form $\mu
_{ijh}=E\left( X_{i}X_{j}X_{h}\right) $, where $i,j,h=1$, $...$, $d$.
Many elements are equal to each other, due to the identities
\begin{equation}
\mu
_{ijh}=\mu _{ihj}=\mu _{jih}=\mu _{jhi}=\mu _{hij}=\mu _{hji}.
\end{equation}
First, there are
at most $d$ distinct elements $\mu _{ijh}$
where the three indices are equal
to each other: $\mu _{iii}=E\left( X_{i}^{3}\right) $, for $i=1$, $...$, $d$. Second, there are at most $d(d-1)$ distinct elements $\mu _{ijh}
$ where only two indices are equal to each other: $\mu _{iij}=E\left(
X_{i}^{2}X_{j}\right) $, for $i,j=1$, $...$, $d$ and $i\neq j$. Third, there
are at most $d\left( d-1\right) \left( d-2\right) /6$ distinct elements $\mu
_{ijh}$ where the three indices differ from each other: $\mu _{ijh}=E\left(
X_{i}X_{j}X_{h}\right) $, for $i,j,h=1$, $...$, $d$ and $
i\neq j \neq h $. Hence $\mathbf{M}_{3,x}$ contains at most $d\left(
d+1\right) \left( d+2\right) /6$ distinct elements.
Invariance of $\mu _{ijh}=E\left( X_{i}X_{j}X_{h}\right) $ with respect to
indices permutations implies several symmetries in the structure of $\mathbf{M}_{3,x}$. First, $\mathbf{M}_{3,x}$ might be regarded as $d$ matrices
$\mathbf{B}_{i}=E\left( X_{i}\mathbf{xx}^\top\right) $, ($i=1,...,d$) stacked
on top of each other. Hence $\mu_{ijh}$ is the element in the $j$-th row and in the $h$-th
column of the $i$-th block $\mathbf{B}_{i}$ of $\mathbf{M}_{3,x}$.
Similarly, $\mathbf{M}_{3,x}$ might be regarded as $d$ vectorized,
symmetric matrices lined side by side: $\mathbf{M}_{3,x}=\left[ vec\left(
\mathbf{B}_{1}\right) ,...,vec\left( \mathbf{B}_{d}\right) \right] $. Also, left
singular vectors corresponding to positive singular values of the third
multivariate moment are vectorized, symmetric matrices (\citealp{Loperfido2015b}).
Finally, $\mathbf{M}_{3,x}$ might be decomposed into the sum of
at most $d$ Kronecker products of symmetric matrices and vectors (\citealp{Loperfido2015b}).\\
Many useful properties of multivariate moments are related to the linear
transformation $\mathbf{y}=\mathbf{Ax}$, where $\mathbf{A}$ is a $k\times d$
real matrix. The first moment (that is the mean) of $\mathbf{y}$ is
evaluated via matrix multiplication only: $E(\mathbf{y})=\mathbf{A\bm{\mu}}$.
The second moment of $\mathbf{y}$ is evaluated using both the matrix multiplication
and transposition: $\mathbf{M}_{2,y}=\mathbf{AM}_{2,x}\mathbf{A}^\top$ where $\mathbf{M}_{2,x}=E\left( \mathbf{xx}^\top\right) $ denotes the second
moment of $\mathbf{x}$. The
third moment of $\mathbf{y}$ is evaluated using the matrix multiplication,
transposition and the tensor product: $\mathbf{M}_{3,y}=\left( \mathbf{A}\otimes \mathbf{A}\right) \mathbf{M}_{3,x}\mathbf{A}^\top$ (\citealp{ChristiansenLoperfido2014}). In particular, the third moment of the linear projection
$\mathbf{v}^\top\mathbf{x}$, where $\mathbf{v}=\left( v_{1},...,v_{d}\right)
^\top$ is a $d-$dimensional real vector, is $\left( \mathbf{v}^\top\otimes
\mathbf{v}^\top\right) \mathbf{M}_{3,x}\mathbf{v}$ and is
a third-order polynomial in the variables $v_{1}$, ..., $v_{d}$.
The third central moment of $\mathbf{x}$, also known as its third cumulant,
is the third moment of $\mathbf{x}-\bm{\mu}$, where $\bm{\mu}$ is
the mean of $\mathbf{x}$:
\begin{equation}
\mathbf{K}_{3,x}=E\left[ \left( \mathbf{x-\bm{\mu} }\right) \otimes \left(
\mathbf{x-\bm{\mu} }\right) ^\top\otimes \left( \mathbf{x-\bm{\mu} }\right) \right].
\end{equation}
It is related to the third moment via the identity
\begin{equation}
\mathbf{K}_{3,x}=\mathbf{M}_{3,x}-\mathbf{M}_{2,x}\otimes \bm{\mu} -\bm{\mu}\otimes \mathbf{M}_{2,x}-vec\left( \mathbf{M}_{2,x}\right) \bm{\mu}^\top+2\bm{\mu}\otimes \bm{\mu}^\top\otimes \bm{\mu}.
\end{equation}
The third cumulant allows for a better assessment of
skewness by removing the effect of location on third-order moments. It becomes a null matrix under central symmetry,
that is when $\mathbf{x}-\bm{\mu}$ and $\bm{\mu}-\mathbf{x}$ are
identically distributed.
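A small \textsf{R} sketch of the sample analogues (ours, for illustration; the packages described below provide their own routines) builds $\mathbf{M}_{3,x}$ by averaging Kronecker products over the rows of a data matrix, and obtains $\mathbf{K}_{3,x}$ by centering first.
\begin{lstlisting}
# Sample third moment (d^2 x d) of an n x d data matrix X:
# the average of x %x% t(x) %x% x over the rows of X.
third_moment <- function(X) {
  d  <- ncol(X)
  M3 <- matrix(0, d^2, d)
  for (i in seq_len(nrow(X))) {
    x  <- matrix(X[i, ], ncol = 1)  # d x 1 column vector
    M3 <- M3 + x %x% t(x) %x% x     # Kronecker products
  }
  M3 / nrow(X)
}
# Sample third cumulant: third moment of the centered data.
third_cumulant <- function(X) third_moment(scale(X, scale = FALSE))
\end{lstlisting}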
The third standardized moment (or cumulant) of the random vector $\mathbf{x}$
is the third moment of $\mathbf{z}=(Z_{1},...,Z_{d})^\top=\mathbf{\Sigma}^{-1/2}\left( \mathbf{x}-\bm{\mu}\right) $, where $\mathbf{\Sigma}^{-1/2}$ is the inverse of the positive definite square root $\mathbf{\Sigma}^{1/2}$ of $\mathbf{\Sigma }=cov\left( \mathbf{x}\right) $, which is
assumed to be positive definite:
\begin{equation}
\mathbf{\Sigma }^{1/2}=\left( \mathbf{\Sigma }^{1/2}\right) ^\top, \quad \mathbf{\Sigma }^{1/2}>0, \quad \text{and } \mathbf{\Sigma }^{1/2}\mathbf{\Sigma }^{1/2}=\mathbf{\Sigma }.
\end{equation}
It is often denoted by
$\mathbf{K}_{3,z}$ and is related to $\mathbf{K}_{3,x}$ via the identity
\begin{equation}
\mathbf{K}_{3,z}=\left( \mathbf{\Sigma }^{-1/2}\otimes \mathbf{\Sigma}^{-1/2}\right) \mathbf{K}_{3,x}\mathbf{\Sigma }^{-1/2}.
\end{equation}
The third standardized cumulant is particularly useful for removing the effects of
location, scale and correlations on third order moments. The mean and the variance of $\mathbf{z}$ are invariant with respect to
orthogonal transformations, but the same does not hold for third
moments: $\mathbf{M}_{3,z}$ and $\mathbf{M}_{3,w}$ will in general differ,
if $\mathbf{w}=\mathbf{Uz}$ and $\mathbf{U}$ is a $d\times d$ orthogonal
matrix.
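Continuing the sketch above, the sample standardized cumulant can be obtained by standardizing the data with the symmetric inverse square root of the maximum likelihood covariance estimate, computed from its spectral decomposition.
\begin{lstlisting}
# Sample third standardized cumulant of an n x d data matrix X,
# reusing third_moment() from the previous sketch.
third_std_cumulant <- function(X) {
  S  <- cov(X) * (nrow(X) - 1) / nrow(X)  # ML covariance
  e  <- eigen(S, symmetric = TRUE)
  Si <- e$vectors %*% diag(1 / sqrt(e$values)) %*% t(e$vectors)
  Z  <- scale(X, scale = FALSE) %*% Si    # standardized data
  third_moment(Z)
}
\end{lstlisting}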
Projection pursuit is a multivariate statistical technique aimed at finding
interesting low-dimensional data projections. It looks for
the data projections which maximize the projection pursuit index, that is a
measure of interestingness. \cite{Loperfido2018} reviews the merits of skewness (i.e. the third standardized cumulant) as a projection pursuit index. Skewness-based projection pursuit is based on the multivariate skewness measure in \citet{MalkovichAfifi1973}. They defined the directional skewness of a random
vector $\mathbf{x}$ as the maximum value $\beta _{1,d}^{D}\left( \mathbf{x}\right) $ attainable by $\beta _{1}\left( \mathbf{c}^\top\mathbf{x}\right) $,
where $\mathbf{c}$ is a nonnull, $d-$dimensional real vector and $\beta
_{1}\left( Y\right) $ is the Pearson's skewness of the random
variable $Y$
\begin{equation}
\beta _{1,d}^{D}\left( \mathbf{x}\right) = \max_{\mathbf{c}\in\mathbb{S}^{d-1}} \frac{E^{2}\left[ \left( \mathbf{c}^\top\mathbf{x}-\mathbf{c}^\top\bm{\mu }\right) ^{3}\right] }{\left( \mathbf{c}^\top\mathbf{\Sigma c}\right) ^{3}},
\end{equation}
with $\mathbb{S}^{d-1}$ being the set of $d-$dimensional real vectors of unit length.
The name directional skewness reminds us that $\beta _{1,d}^{D}\left( \mathbf{x}\right) $ is the maximum skewness attainable by a projection of the random
vector $\mathbf{x}$ onto a direction. It admits a simple representation in terms of the
third standardized cumulant
\begin{equation}
\max_{\mathbf{c}\in\mathbb{S}^{d-1}}\left[ \left( \mathbf{c}^\top\otimes \mathbf{c}^\top\right)
\mathbf{K}_{3,z}\mathbf{c}\right] ^{2}=\max_{\mathbf{c}\in\mathbb{S}^{d-1}}\;\underset{f,g,i,j,h,k}{\sum }c_{f}c_{g}c_{i}c_{j}c_{h}c_{k}\kappa _{fgi}\kappa _{jhk}.
\end{equation}
Statistical applications of directional skewness include normality testing
(\citealp{MalkovichAfifi1973}), point estimation (\citealp{Loperfido2010}), independent component analysis (\citealp{Loperfido2015b}; \citealp{PaajarviLeblanc2004})
and cluster analysis (\citealp{KabanGirolami2000}; \citealp{Loperfido2013}; \citealp{Loperfido2015a}; \citealp{Loperfido2018}; \citealp{Loperfido2019}; \citealp{TarpeyLoperfido2015}).
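A naive way to approximate $\beta_{1,d}^{D}$ is a random search over unit-norm directions, as sketched below for illustration; \pkg{MaxSkew} relies on a proper optimization algorithm rather than this crude device.
\begin{lstlisting}
# Monte Carlo approximation of directional skewness: maximal
# squared sample skewness over random unit projections.
directional_skewness <- function(X, n_dir = 10000) {
  d <- ncol(X); best <- 0
  for (k in seq_len(n_dir)) {
    u  <- rnorm(d); u <- u / sqrt(sum(u^2))  # random direction
    y  <- as.vector(X %*% u)
    m2 <- mean((y - mean(y))^2)
    g1 <- mean((y - mean(y))^3) / m2^1.5
    best <- max(best, g1^2)
  }
  best
}
\end{lstlisting}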
There is a general consensus that an interesting feature, once found, should
be removed (\citealp{Huber1985}; \citealp{Sun2006}). In skewness-based projection pursuit, this means removing
skewness from the data using appropriate linear transformations. A
random vector whose third cumulant is a null matrix is said to be weakly
symmetric. Weak symmetry might be achieved by linear transformations, when the third cumulant of $\mathbf{x}$ is not
of full rank, and its rows belong to the linear space generated by the right
singular vectors associated with its null singular values. More formally,
let $\mathbf{x}$ be a $d-$dimensional random vector whose third cumulant
\mathbf{K}_{3,x}$ has rank $d-k$, with $0<k<d$. Also, let $\mathbf{A}$ be a
$k\times d$ matrix whose rows span the null space of $\mathbf{K}_{3,x}^\top\mathbf{K}_{3,x}$. Then the third cumulant of $\mathbf{Ax}$ is a null
matrix (\citealp{Loperfido2014}). Weak symmetry might be achieved even when this assumption is not
satisfied: any random vector with finite third-order moments and at
least two components admits a projection which is weakly symmetric (\citealp{Loperfido2014}).
The appropriateness of the linear transformation purported to remove or
alleviate skewness might be assessed with measures of multivariate
skewness, which should be significantly smaller in the transformed data than
in the original ones. \citet{Mardia1970} summarized the multivariate skewness of the
random vector $\mathbf{x}$ with the scalar measure
\begin{equation}
\beta _{1,d}^{M}\left( \mathbf{x}\right) =E\left\{ \left[ \left( \mathbf{x}-\bm{\mu }\right) ^\top\mathbf{\Sigma }^{-1}\left( \mathbf{y}-\bm{\mu }\right) \right] ^{3}\right\} ,
\end{equation}
where $\mathbf{x}$ and $\mathbf{y}$ are two $d-$dimensional, independent and
identically distributed random vectors with mean $\bm{\mu }$ and
covariance $\mathbf{\Sigma }$. It might be represented as the squared norm of
the third standardized cumulant:
\begin{equation}
\beta _{1,d}^{M}\left( \mathbf{x}\right) =tr\left( \mathbf{K}_{3,z}^\top\mathbf{K}_{3,z}\right) =\underset{i,j,h}{\sum }\kappa _{ijh}^{2}.
\end{equation}
It is invariant with respect to one-to-one affine transformations:
\begin{equation}
\beta_{1,d}^{M}\left( \mathbf{x}\right) =\beta _{1,d}^{M}\left( \mathbf{Ax+b}\right) , \text{ where } \mathbf{b}\in \mathbb{R}^{d},\ \mathbf{A}\in \mathbb{R}^{d}\times \mathbb{R}^{d},\ \det \left( \mathbf{A}\right) \neq 0.
\end{equation}
Mardia's skewness is by far the most popular measure of multivariate skewness. Its
statistical applications include multivariate normality testing (\citealp{Mardia1970}) and assessment of robustness of MANOVA statistics (\citealp{Davis1980}).
Another scalar measure of multivariate skewness is
\begin{equation}
\beta _{1,d}^{P}\left( \mathbf{x}\right) =E\left[ \left( \mathbf{x}-\bm{\mu }\right) ^\top\mathbf{\Sigma }^{-1}\left( \mathbf{x}-\bm{\mu }\right)
\left( \mathbf{x}-\bm{\mu }\right) ^\top\mathbf{\Sigma }^{-1}\left(
\mathbf{y}-\bm{\mu }\right) \left( \mathbf{y}-\bm{\mu }\right) ^\top\mathbf{\Sigma }^{-1}\left( \mathbf{y}-\bm{\mu }\right) \right] ,
\end{equation}
where $\mathbf{x}$ and $\mathbf{y}$ are the same as above. It has been
independently proposed by several authors (\citealp{Davis1980}; \citealp{Isogai1983}; \citealp{Morietal1993}). \cite{Loperfido2015b} named it partial skewness to remind that $\beta
_{1,d}^{P}\left( \mathbf{x}\right) $ does not depend on moments of the form
$E\left( Z_{i}Z_{j}Z_{h}\right) $ when $i$, $j$, $h$ differ from each other,
as it becomes apparent when representing it as a function of the third
standardized cumulant:
\begin{equation}
\beta _{1,d}^{P}\left( \mathbf{x}\right) =vec^\top\left( \mathbf{I}_{d}\right) \mathbf{K}_{3,z}\mathbf{K}_{3,z}^\top vec\left( \mathbf{I}_{d}\right) =\underset{i,j,h}{\sum }\kappa _{iij}\kappa _{hhj}.
\end{equation}
Partial skewness is by far less popular than Mardia's skewness. Like the latter
measure, however, it has been applied to multivariate normality testing
(\citealp{Henze1997a}; \citealp{Henze1997b}; \citealp{HenzeKlarMeintanis2003}) and to the assessment of the robustness of MANOVA statistics
(\citealp{Davis1980}).
\citet{Morietal1993} proposed to measure the
skewness of the $d-$dimensional random vector $\mathbf{x}$ with the vector
$\mathbf{\gamma }_{1,d}(\mathbf{x})=E\left( \mathbf{z}^\top\mathbf{zz}\right) $, where
$\mathbf{z}$ is the standardized version of $\mathbf{x}$. This vector-valued
measure of skewness might be regarded as a weighted average of the
standardized vector $\mathbf{z}$, with more weight placed on the outcomes
furthest away from the sample mean. It is location invariant, admits the
representation $\mathbf{\gamma }_{1,d}(\mathbf{x})=\mathbf{K}_{3,z}^\top vec\left( \mathbf{I}_{d}\right) $ and its squared norm is the partial skewness of $\mathbf{x}$. It coincides with the third standardized cumulant of a
random variable in the univariate case, and with the null $d-$dimensional
vector when the underlying distribution is centrally symmetric. \cite{Loperfido2015a} applied it to model-based clustering.
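Given the standardized cumulant from the earlier sketch, the three summaries above reduce to a few lines (our illustration, with \texttt{X} an $n\times d$ data matrix as before; \pkg{MultiSkew} provides dedicated functions).
\begin{lstlisting}
# Scalar and vector skewness summaries from K3z (d^2 x d).
K3z <- third_std_cumulant(X)
d   <- ncol(K3z)
mardia  <- sum(K3z^2)          # tr(K3z' K3z): Mardia's skewness
vI      <- as.vector(diag(d))  # vec(I_d)
gamma1d <- t(K3z) %*% vI       # skewness vector of Mori et al.
partial <- sum(gamma1d^2)      # its squared norm
\end{lstlisting}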
We shall illustrate the role of skewness with the
Iris dataset contained in the \textsf{R} package \pkg{datasets}: \code{iris\{datasets\}}. We shall first load the data:
\code{R> data(iris)}.
Then use the help command \code{R> help(iris)}
and the structure command \code{R> str(iris)}
to obtain the following information about this dataset.
It contains the measurements in centimeters of four variables on 150 iris flowers: sepal length, sepal width, petal length and petal width. There is also a factor variable (Species) with three levels: setosa, versicolor and virginica. There are 50 rows for each species.
The output of the previous code shows that the dataset is a data frame object containing 150 units and 5 variables:\\
\begin{lstlisting}
'data.frame': 150 obs. of 5 variables:
$ Sepal.Length: num 5.1 4.9 4.7 4.6 5 5.4 4.6 5 4.4 ...
$ Sepal.Width : num 3.5 3 3.2 3.1 3.6 3.9 3.4 3.4 2.9 ...
$ Petal.Length: num 1.4 1.4 1.3 1.5 1.4 1.7 1.4 1.5 1.4 ...
$ Petal.Width : num 0.2 0.2 0.2 0.2 0.2 0.4 0.3 0.2 0.2 ...
$ Species : Factor w/ 3 levels "setosa","versicolor",..: 1 1...
\end{lstlisting}
With the commands\\
\code{R> attach(iris)}\\
\code{R> pairs(iris[,1:4],col=c("red","green","blue")[as.numeric(Species)])}\\
\code{R> detach(iris)}\\
we obtain the multiple scatterplot of the Iris dataset (Figure~\ref{Fig1}).
\begin{figure}[tbph]
\centering
\includegraphics[scale=0.50]{scattermulti_iris_orig.color.eps}
\caption{Multiple scatterplot of the Iris dataset.}
\label{Fig1}
\end{figure}
The groups ``versicolor" and ``virginica" are quite close
to each other, and well separated from ``setosa". The data are markedly
skewed, but skewness reduces within each group, as exemplified by the
histograms of petal length in all flowers (Figure~\ref{fig1bis}) and in those
belonging to the ``setosa" group (Figure~\ref{fig1ter}). Both facts motivated
researchers to model the dataset with mixtures of three normal components
(see, for example \citealp{FrSchnatter2006}, Subsection 6.4.3). However, \citet{KorkmazGoksulukZararsiz2014}
showed that the normality hypothesis should be
rejected at the $0.05$ level in the ``setosa" group, while
nonnormality is undetected by Mardia's skewness.
\begin{figure}[tbph]
\centering
\subfigure[\protect\url{}\label{fig1bis}]
{\includegraphics[scale=0.33]{hist.iris.petal_length.eps}}\qquad\qquad
\subfigure[\protect\url{}\label{fig1ter}]
{\includegraphics[scale=0.33]{hist.setosa.petal_length.eps}}\qquad\qquad
\caption{(a) Histogram of petal length in the Iris dataset, (b) Histogram of petal length in the ``setosa" group.\label{fig:sottofigure}}
\end{figure}
We shall use \pkg{MaxSkew} and \pkg{MultiSkew} to answer the
following questions about the Iris dataset.
\begin{enumerate}
\item Is skewness really inept at detecting nonnormality of the variables
recorded in the ``setosa" group?
\item Does skewness help in recovering the cluster structure, when information about the group
memberships is removed?
\item Can skewness be removed via linear projections, and are the
projected data meaningful?
\end{enumerate}
The above questions will be addressed within the frameworks of projection
pursuit, normality testing, cluster analysis and data exploration.
\section{MaxSkew}
\label{sec:2}
The package \pkg{MaxSkew} \citep{FranceschiniLoperfido2017a} is written in the \proglang{R}
programming language \citep{rcore}. It is available for download on the Comprehensive \proglang{R} Archive Network (CRAN) at \\
\url{https://CRAN.R-project.org/package=MaxSkew}.\\
The package \pkg{MaxSkew} uses several \proglang{R} functions. The first one is $\code{eigen}\left \{\code{base}\right \}$ \citep{rcore}. It computes eigenvalues and eigenvectors of real or complex matrices. Its usage, as described by the command \code{help(eigen)} is\\
\code{R> eigen(x, symmetric, only.values = FALSE, EISPACK = FALSE)}.\\
\linebreak
The outputs of the function are \code{values} (a vector containing the eigenvalues of $x$, sorted in decreasing order according to their modulus) and \code{vectors} (either a matrix whose columns contain the normalized eigenvectors of $x$, or a null matrix if \code{only.values} is set equal to TRUE). A second \proglang{R} function used in \pkg{MaxSkew} is $\code{polyroot}\left \{\code{base}\right \}$ \citep{rcore}, which finds the zeros of a real or complex polynomial. Its usage is \code{polyroot(z)}, where $z$ is the vector of polynomial coefficients arranged in increasing order. The \code{Value} in output is a complex vector of length $n - 1$, where $n$ is the position of the largest non-zero element of $z$. \pkg{MaxSkew} also uses the \proglang{R} function $\code{svd}\left \{\code{base}\right \}$ \citep{rcore}, which computes the singular value decomposition of a rectangular matrix. Its usage is\\
\code{svd(x, nu = min(n, p), nv = min(n, p), LINPACK = FALSE)}.\\
\linebreak
The last \proglang{R} function used in \pkg{MaxSkew} is $\code{kronecker}\left \{\code{base}\right \}$ \citep{rcore} which computes the generalised Kronecker product of two arrays, X and Y. The \code{Value} in output is an array with dimensions \code{dim(X) * dim(Y)}.
The \pkg{MaxSkew} package finds orthogonal data projections with maximal skewness. The first data projection in the output is the most skewed among all data projections. The second data projection in the output is the most skewed among all data projections orthogonal to the first one, and so on. \cite{Loperfido2019} motivates this method within the framework of model-based clustering. The package implements the algorithm described in \cite{Loperfido2018} and may be downloaded with the command \code{R> install.packages("MaxSkew")}.
The package is attached with the command \code{R > library(MaxSkew)}.\\
The packages \pkg{MaxSkew} and \pkg{MultiSkew} require the dataset to be a data matrix object, so we transform the \code{iris} data frame accordingly:\\
\code{R > iris.m<-data.matrix(iris)}. We check that we have a data matrix object with the command:\\
\begin{lstlisting}
R > str(iris.m)
num [1:150, 1:5] 5.1 4.9 4.7 4.6 5 5.4 4.6 5 4.4 ...
- attr(*, "dimnames")=List of 2
..$ : NULL
..$ : chr [1:5] "Sepal.Length" "Sepal.Width" "Petal.Length"
"Petal.Width" ...
\end{lstlisting}
The \code{help} command shows the basic information about the package and the functions it contains: \code{R > help(MaxSkew)}.\\
The package \pkg{MaxSkew} has three functions, two of which are internal.
The usage of the main function is\\
\code{R> MaxSkew(data, iterations, components, plot)},\\
where
\code{data} is a data matrix object, \code{iterations} (the number of required iterations) is a positive integer, \code{components} (the number of orthogonal projections maximizing skewness) is a positive integer smaller than the number of variables, and \code{plot} is a logical variable (TRUE/FALSE). If \code{plot} is set equal to TRUE (FALSE), the scatterplot of the projections maximizing skewness appears (does not appear) in the output.
The output includes
a matrix of projected data, whose $i$-th row represents the $i$-th unit, while the $j$-th column represents the $j$-th projection. The output also includes
the multiple scatterplot of the projections maximizing skewness. As an example, we call the function\\
\linebreak
\code{R > MaxSkew(iris.m[,1:4], 50, 2, TRUE)}.\\
\linebreak
We have used only the first four columns of the \code{iris.m} data matrix object, because the last column is a label.
As a result, we obtain a matrix with 150 rows and 2 columns containing the projected data and a multiple scatterplot. The structure of the resulting matrix is\\
\begin{lstlisting}
R> str(MaxSkew(iris.m[,1:4], 50, 2, TRUE))
num [1:150, 1:2] -2.63 -2.62 -2.38 -2.58 -2.57 ...
\end{lstlisting}
For the sake of brevity we only show the first three rows in the matrix of projected data:\\
\begin{lstlisting}
R > iris.projections<-MaxSkew(iris.m[,1:4], 50,2,TRUE)
R > iris.projections[1:3,]
[,1] [,2]
[1,] -2.631244186 -0.817635353
[2,] -2.620071890 -1.033692782
[3,] -2.376652037 -1.311616693
\end{lstlisting}
\begin{figure}[tbph]
\centering
\subfigure[\protect\url{}\label{Fig2}]
{\includegraphics[scale=0.33]{iris.m.projections.2.eps}}\qquad\qquad
\subfigure[\protect\url{}\label{Fig3}]
{\includegraphics[scale=0.33]{iris_MaxSkew_2projections_3gruppi.color.eps}}\qquad\qquad
\caption{(a) Multiple scatterplot of the first two most skewed, mutually orthogonal projections, (b) Scatterplot of the first two most skewed, mutually orthogonal projections of Iris data, with different colors to denote different group memberships.\label{fig:sottofigure2}}
\end{figure}
Figure~\ref{Fig2} shows the scatterplot of the projected data.
The connections between skewness-based projection
pursuit and cluster analysis, as implemented in \pkg{MaxSkew}, have been
investigated by several authors (\citealp{KabanGirolami2000}; \citealp{Loperfido2013}; \citealp{Loperfido2015a}; \citealp{Loperfido2018}; \citealp{Loperfido2019}; \citealp{TarpeyLoperfido2015}). For the Iris
dataset, it is well illustrated by the scatterplot of the two most skewed,
mutually orthogonal projections, with different colors to denote the group
memberships (Figure~\ref{Fig3}). The plot is obtained with the following commands:\\
\begin{lstlisting}
R> attach(iris)
R> iris.projections<-MaxSkew(iris.m[,1:4], 50,2,TRUE)
R> iris.projections.species<-cbind(iris.projections,
iris$Species)
R> pairs(iris.projections.species[,1:2],
col=c("red","green","blue")[as.numeric(Species)])
R>detach(iris)
\end{lstlisting}
The scatterplot clearly shows the separation of ``setosa" from
``virginica" and ``versicolor", whose overlap is much less marked than in
the original variables. The scatterplot is very similar, up to rotation and
scaling, to those obtained from the same data by \cite{FriedmanTukey1974}
and \cite{HuiLindsay2010}.
Mardia's skewness is unable to
detect nonnormality in the ``setosa" group, thus raising the question of
whether any skewness measure is capable of detecting such nonnormality (\citealp{KorkmazGoksulukZararsiz2014}). We shall
address this question using skewness-based projection pursuit as a
visualization tool. Figure~\ref{Fig4} contains the scatterplot of the two most
skewed, mutually orthogonal projections obtained from the four variables
recorded from setosa flowers only. It clearly shows the presence of a dense,
elongated cluster, which is inconsistent with the normality assumption.
Formal testing for excess skewness in the ``setosa" group will be discussed
in the following sections.\\
\begin{figure}[tbph]
\centering
\includegraphics[scale=0.750]{setosa.scatter.MaxSkew.2projections.eps}
\caption{Scatterplot of the two most skewed, mutually orthogonal projections computed from data in the ``setosa" group.}
\label{Fig4}
\end{figure}
\section{MultiSkew}
\label{sec:3}
The package \pkg{MultiSkew} (\citealt{FranceschiniLoperfido2017b}) is written in the \proglang{R}
programming language \citep{rcore} and depends on the recommended
package \pkg{MaxSkew} (\citealt{FranceschiniLoperfido2017a}).
It is available for download on the Comprehensive \proglang{R} Archive Network (CRAN) at {\url{https://CRAN.R-project.org/package=MultiSkew}}.
The \pkg{MultiSkew} package computes the third multivariate cumulant of either the raw, centered or standardized data. It also computes the main measures of multivariate skewness, together with their bootstrap distributions. Finally, it computes the least skewed linear projections of the data. The \pkg{MultiSkew} package contains six different functions. First install it with the command\\
\linebreak
\code{R> install.packages("MultiSkew")}
\linebreak
\\
and then use the command
\code{R > library(MultiSkew)} to attach the package.
Since the package \pkg{MultiSkew} depends on the package \pkg{MaxSkew}, the latter is loaded together with the former.\\
\subsection{MinSkew}
\label{sec:4}
The function \code{R > MinSkew(data, dimension)}
alleviates sample skewness by projecting the data onto appropriate linear subspaces and implements the method in \cite{Loperfido2014}.
It requires two input arguments: \code{data} (a data matrix object) and \code{dimension} (the number of required projections),
which must be an integer between 2 and the number of variables in the data matrix.
The output has two values: \code{Linear} (the linear function of the variables) and \code{Projections} (the projected data).
\code{Linear} is a matrix whose numbers of rows and columns equal the number of variables and the number of projections, respectively. \code{Projections} is a matrix whose numbers of rows and columns equal the number of observations and the number of projections, respectively.
We call the function using our data matrix object: \code{R > MinSkew(iris.m[,1:4],2)}.\\
The first output is the matrix \code{Linear}; the second is the matrix \code{Projections}, which we use below.\\
With the commands
\begin{lstlisting}
R> attach(iris)
R> projections.species<-cbind(Projections,iris$Species)
R> pairs(projections.species[,1:2],col=c("red","green",
"blue")[as.numeric(Species)])
R> detach(iris)
\end{lstlisting}
we obtain the multiple scatterplot of the two projections (Figure~\ref{Fig5}). The points are clearly reminiscent of bivariate normality, and the groups markedly overlap with each other. Hence the projection removes the group structure as well as skewness. This result might be useful for a researcher interested in the features which are common to the three species.
\begin{figure}[tbph]
\centering
\includegraphics[scale=0.750]{ScatterMulti_iris_MinSkew_2projections.eps}
\caption{Scatterplot of two projections obtained with the function \code{Minskew}.}
\label{Fig5}
\end{figure}
The histograms of the \code{MinSkew} projections are reminiscent of univariate normality, as can be seen from Figure~\ref{Fig6} and Figure~\ref{Fig7}, obtained with the code
\begin{lstlisting}
R> hist(Projections[,1],freq=FALSE)
R> curve(dnorm, col = 2, add = TRUE)
R> hist(Projections[,2],freq=FALSE)
R> curve(dnorm, col = 2, add = TRUE)
\end{lstlisting}
\begin{figure}[tbph]
\centering
\subfigure[\protect\url{}\label{Fig6}]
{\includegraphics[scale=0.33]{iris.Projections_,1_.MinSkew.Normal.eps}}\qquad\qquad
\subfigure[\protect\url{}\label{Fig7}]
{\includegraphics[scale=0.33]{iris.Projections_,2_.MinSkew.Normal.eps}}\qquad\qquad
\caption{(a) Histogram of the first projection of the Iris dataset obtained with \code{MinSkew}, (b) Histogram of the second projection of the Iris dataset obtained with \code{MinSkew}.\label{fig:sottofigure3}}
\end{figure}
\subsection{Third}
\label{sec:5}
This section describes the function which computes the third multivariate moment of a data matrix. Some general information about the third multivariate moment of both theoretical and empirical distributions is reviewed in \cite{Loperfido2015b}. The name of the function is \code{Third}, and its usage is\\
\linebreak
\code{R > Third(data,type)}.\\
As before, \code{data} is a data matrix object while \code{type} may be:
``raw" (the third raw moment), ``central" (the third central moment) and ``standardized" (the third standardized moment).
The output of the function, called \code{ThirdMoment}, is a matrix containing all moments of order three which can be obtained from the variables in \code{data}.
We compute the third raw moments of the iris variables with the command \code{R > Third(iris.m[,1:4], "raw")}.\\
The matrix \code{ThirdMoment} is:\\
\begin{lstlisting}
[1] "Third Moment"
[,1] [,2] [,3] [,4]
[1,] 211.6333 106.0231 145.8113 47.7868
[2,] 106.0231 55.4270 69.1059 22.3144
[3,] 145.8113 69.1059 109.9328 37.1797
[4,] 47.7868 22.3144 37.1797 12.9610
[5,] 106.0231 55.4270 69.1059 22.3144
[6,] 55.4270 30.3345 33.7011 10.6390
[7,] 69.1059 33.7011 50.7745 17.1259
[8,] 22.3144 10.6390 17.1259 5.9822
[9,] 145.8113 69.1059 109.9328 37.1797
[10,] 69.1059 33.7011 50.7745 17.1259
[11,] 109.9328 50.7745 86.4892 29.6938
[12,] 37.1797 17.1259 29.6938 10.4469
[13,] 47.7868 22.3144 37.1797 12.9610
[14,] 22.3144 10.6390 17.1259 5.9822
[15,] 37.1797 17.1259 29.6938 10.4469
[16,] 12.9610 5.9822 10.4469 3.7570
\end{lstlisting}
\begin{lstlisting}
R> str(ThirdMoment)
num [1:16, 1:4] 211.6 106 145.8 47.8 106 ...
\end{lstlisting}
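The layout of \code{ThirdMoment} can be checked by hand: judging from the printed output, the entry in row $(i-1)d+j$ and column $k$ holds the sample version of $E\left(X_iX_jX_k\right)$, so that the whole matrix estimates $E\left\{ (\mathbf{x}\otimes \mathbf{x})\,\mathbf{x}^\top\right\}$. A minimal sketch using the \proglang{R} function \code{kronecker} described above (the block convention is our reading of the output, not taken from the package documentation):
\begin{lstlisting}
R> X <- iris.m[, 1:4]
R> n <- nrow(X); d <- ncol(X)
R> M3 <- matrix(0, d^2, d)
R> for (r in 1:n) M3 <- M3 + kronecker(X[r, ], X[r, ]) %*% t(X[r, ]) / n
R> round(M3[1:4, ], 4)   # reproduces the first four rows of the raw output
\end{lstlisting}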
Similarly, we use ``central" instead of ``raw" with the command\\
\code{R > Third(iris.m[,1:4], "central")}.\\
\linebreak
The output which appears in the console is\\
\begin{lstlisting}
[1] "Third Moment"
[,1] [,2] [,3] [,4]
[1,] 0.1752 0.0420 0.1432 0.0259
[2,] 0.0420 -0.0373 0.1710 0.0770
[3,] 0.1432 0.1710 -0.1920 -0.1223
[4,] 0.0259 0.0770 -0.1223 -0.0466
[5,] 0.0420 -0.0373 0.1710 0.0770
[6,] -0.0373 0.0259 -0.1329 -0.0591
[7,] 0.1710 -0.1329 0.5943 0.2583
[8,] 0.0770 -0.0591 0.2583 0.1099
[9,] 0.1432 0.1710 -0.1920 -0.1223
[10,] 0.1710 -0.1329 0.5943 0.2583
[11,] -0.1920 0.5943 -1.4821 -0.6292
[12,] -0.1223 0.2583 -0.6292 -0.2145
[13,] 0.0259 0.0770 -0.1223 -0.0466
[14,] 0.0770 -0.0591 0.2583 0.1099
[15,] -0.1223 0.2583 -0.6292 -0.2145
[16,] -0.0466 0.1099 -0.2145 -0.0447
\end{lstlisting}
and the structure of the object \code{ThirdMoment} is\\
\begin{lstlisting}
R> str(ThirdMoment)
num [1:16, 1:4] 0.1752 0.042 0.1432 0.0259 0.042 ...
\end{lstlisting}
Finally, we set \code{type} equal to \code{standardized}:\\
\code{R > Third(iris.m[,1:4], "standardized")},\\
\linebreak
and obtain the output\\
\begin{lstlisting}
[1] "Third Moment"
[,1] [,2] [,3] [,4]
[1,] 0.2988 -0.0484 0.3257 0.0034
[2,] -0.0484 0.0927 -0.0358 -0.0444
[3,] 0.3257 -0.0358 0.0788 -0.2221
[4,] 0.0034 -0.0444 -0.2221 0.0598
[5,] -0.0484 0.0927 -0.0358 -0.0444
[6,] 0.0927 -0.0331 -0.1166 -0.0844
[7,] -0.0358 -0.1166 0.2894 0.1572
[8,] -0.0444 -0.0844 0.1572 0.2276
[9,] 0.3257 -0.0358 0.0788 -0.2221
[10,] -0.0358 -0.1166 0.2894 0.1572
[11,] 0.0788 0.2894 -0.0995 -0.3317
[12,] -0.2221 0.1572 -0.3317 0.3009
[13,] 0.0034 -0.0444 -0.2221 0.0598
[14,] -0.0444 -0.0844 0.1572 0.2276
[15,] -0.2221 0.1572 -0.3317 0.3009
[16,] 0.0598 0.2276 0.3009 0.8259
\end{lstlisting}
We show the structure of the matrix \code{ThirdMoment} with \\
\begin{lstlisting}
R> str(ThirdMoment)
num [1:16, 1:4] 0.2988 -0.0484 0.3257 0.0034 -0.0484 ...
\end{lstlisting}
Third moments and cumulants might give some insights into
the data structure. As a first example, use the command\\
\code{R> Third(MaxSkew(iris.m[1:50,1:4],50,2,TRUE),"standardized")}\\
to compute the third standardized cumulant of the two most skewed, mutually
orthogonal projections obtained from the four variables recorded from setosa
flowers only. The resulting matrix is\\
\begin{lstlisting}
[1] "Third Moment"
[,1] [,2]
[1,] 1.2345 0.0918
[2,] 0.0918 -0.0746
[3,] 0.0918 -0.0746
[4,] -0.0746 0.5936
\end{lstlisting}
The largest entries in the matrix are the first element of the first row and
the last element in the last row. This pattern is typical of outcomes from
random vectors with skewed, independent components (see, for example,
\citealt{Loperfido2015b}). Hence the two most skewed projections may well be mutually independent.
As a second example, use the commands\\
\code{R> MinSkew(iris.m[,1:4],2)} and \code{R> Third(Projections,"standardized")},\\
where \code{Projections} is a value in output of the function \code{MinSkew},
to compute the third standardized cumulant of the two least skewed
projections obtained from the four variables of the Iris dataset.
The resulting matrix is\\
\begin{lstlisting}
[,1] [,2]
[1,] -0.0219 0.0334
[2,] 0.0334 -0.0151
[3,] 0.0334 -0.0151
[4,] -0.0151 -0.0963
\end{lstlisting}
All elements in the matrix are very close to zero, as might be better
appreciated by comparing them with those in the third standardized cumulant
of the original variables. This pattern is typical of outcomes from
weakly symmetric random vectors (\citealp{Loperfido2014}).
\subsection{Skewness measures}
\label{sec:6}
The package \pkg{MultiSkew} has four other functions. All of them compute skewness measures. The first one is
\code{R > FisherSkew(data)} and computes Fisher's measure of skewness, that is, the third standardized moment of each variable in the dataset.
The usage of the function shows that there is only one input argument: \code{data} (a data matrix object). The output of the function is a data frame, named \code{tab}, containing Fisher's measure of skewness of each variable of the dataset. To illustrate the function, we use the four numerical variables in the Iris dataset:\\
\code{R > FisherSkew(iris.m[,1:4])}\\
and obtain the output\\
\begin{lstlisting}
R > tab
X1 X2 X3 X4
Variables 1.0000 2.0000 3.0000 4.0000
Fisher Skewness 0.3118 0.3158 -0.2721 -0.1019
\end{lstlisting}
in which \code{X1} is the variable Sepal.Length, \code{X2} is the variable Sepal.Width, \code{X3} is the variable Petal.Length, and \code{X4} is the variable Petal.Width.
Another function is \code{PartialSkew}, with usage \code{R > PartialSkew(data)}.
It computes the multivariate skewness measure as defined in \cite{Morietal1993}.
The input is still a data matrix, while the output consists of three objects:
\code{Vector}, \code{Scalar} and \code{pvalue}.
The first is the skewness vector and has as many elements as there are variables in the dataset used as input.
The second is the squared norm of \code{Vector}. The last is the probability of observing a value of \code{Scalar} greater than the observed one, when the data are normally distributed and the sample size is large enough. We apply this function to our dataset: \code{R > PartialSkew(iris.m[,1:4])} and obtain\\
\begin{lstlisting}
R > Vector
[,1]
[1,] 0.5301
[2,] 0.4355
[3,] 0.4105
[4,] 0.4131
R > Scalar
[,1]
[1,] 0.8098
R > pvalue
[,1]
[1,] 0.0384
\end{lstlisting}
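As a quick consistency check, the squared norm of \code{Vector} reproduces \code{Scalar} (up to rounding of the printed values):
\begin{lstlisting}
R> sum(Vector^2)   # equals Scalar, i.e. 0.8098
\end{lstlisting}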
The function \code{R > SkewMardia(data)} computes the multivariate skewness introduced in \cite{Mardia1970}, that is the sum of squared elements in the third standardized cumulant of the data matrix. The output of the function is the squared norm of the third cumulant of the standardized data (\code{MardiaSkewness}) and the probability of observing a value of \code{MardiaSkewness} greater than the observed one, when data are normally distributed and the sample size is large enough (\code{pvalue}).\\
With the command \code{R > SkewMardia(iris.m[,1:4])} we obtain\\
\begin{lstlisting}
R > MardiaSkewness
[1] 2.69722
R > pvalue
[1] 4.757998e-07
\end{lstlisting}
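The $p$-value returned by \code{SkewMardia} can be reproduced from the standard asymptotic result that, under normality, $n\,b_{1,d}/6$ is approximately chi-squared with $d(d+1)(d+2)/6$ degrees of freedom; the following check is our assumption about the implementation, consistent with the output above:
\begin{lstlisting}
R> n <- 150; d <- 4
R> stat <- n * 2.69722 / 6                 # chi-squared statistic
R> df <- d * (d + 1) * (d + 2) / 6         # 20 degrees of freedom
R> pchisq(stat, df, lower.tail = FALSE)    # approximately 4.758e-07
\end{lstlisting}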
The function \code{SkewBoot} performs bootstrap inference for multivariate skewness measures.
It computes the bootstrap distribution, its histogram and the corresponding $p$-value of the chosen measure of multivariate skewness using a given number of bootstrap replicates. The function calls the function \code{MaxSkew} contained in \pkg{MaxSkew} package. Here, the number of iterations required by the function \code{MaxSkew} is set equal to 5.
The function's usage is
\linebreak
\code{R > SkewBoot(data, replicates, units, type)}. It requires four inputs:
\code{data} (the usual data matrix object),
\code{replicates} (the number of bootstrap replicates),
\code{units} (the number of rows in the data matrices sampled from the original data matrix object) and
\code{type}. The latter may be ``Directional", ``Partial" or ``Mardia" (three different measures of multivariate skewness). If \code{type} is set equal to ``Directional" or ``Mardia", \code{units} is an integer greater than the number of variables. If \code{type} is set equal to ``Partial", \code{units} is an integer greater than the number of variables augmented by one.
The output consists of three values:
\code{histogram} (a plot of the above mentioned bootstrap distribution),
\code{Pvalue} (the $p$-value of the chosen skewness measure) and
\code{Vector} (the vector containing the bootstrap replicates of the chosen skewness measure).
For the reproducibility of the results, before calling the function \code{SkewBoot} we first type
\code{R> set.seed(101)}
and then\\
\code{R > SkewBoot(iris.m[,1:4],10,11,"Directional")}.\\
We obtain the output
\begin{lstlisting}
[1] "Vector"
[1] 2.0898 1.4443 1.0730 0.7690 0.6914 0.3617 0.2375 0.0241
[9] -0.1033 0.6092
[1] "Pvalue"
[1] 0.7272727
\end{lstlisting}
and also the histogram of bootstrapped directional skewness (Figure~\ref{Fig8}).
\begin{figure}[tbph]
\centering
\includegraphics[scale=0.50]{hist1.eps}
\caption{Bootstrapped directional skewness of Iris dataset.}
\label{Fig8}
\end{figure}
We call the function \code{SkewBoot}, first setting \code{type} equal to \code{Mardia} and then equal to \code{Partial}:\\
\begin{lstlisting}
R> set.seed(101)
R > SkewBoot(iris.m[,1:4],10,11,"Mardia")
\end{lstlisting}
We obtain the output \\
\begin{lstlisting}
[1] "Vector"
[1] 1.4768 1.1260 0.8008 0.6164 0.4554 0.1550 0.0856
-0.1394 -0.1857 0.4018
[1] "Pvalue"
[1] 0.6363636
R> set.seed(101)
R > SkewBoot(iris.m[,1:4],10,11,"Partial")
[1] "Vector"
[1] 1.5435 1.0110 0.6338 0.2858 -0.0053 -0.3235 -0.6701
-1.1134 -1.7075 -0.9563
[1] "Pvalue"
[1] 0.3636364
\end{lstlisting}
Figure~\ref{Fig9} and Figure~\ref{Fig10} contain the histograms of bootstrapped Mardia's skewness and bootstrapped partial skewness, respectively.\\
\begin{figure}[tbph]
\centering
\subfigure[\protect\url{}\label{Fig9}]
{\includegraphics[scale=0.33]{hist2.eps}}\qquad\qquad
\subfigure[\protect\url{}\label{Fig10}]
{\includegraphics[scale=0.33]{hist3.eps}}\qquad\qquad
\caption{(a) Bootstrapped Mardia's skewness of Iris dataset, (b) Bootstrapped Partial skewness of Iris dataset.\label{fig:sottofigure4}}
\end{figure}
We shall now compute Fisher's skewness of
the four variables in the ``setosa'' group with\\
\code{R > FisherSkew(iris.m[1:50,1:4])}.
\linebreak
The output is the dataframe\\
\begin{lstlisting}
R> tab
X1 X2 X3 X4
Variables 1.0000 2.0000 3.0000 4.0000
Fisher Skewness 0.1165 0.0399 0.1032 1.2159
\end{lstlisting}
\citet{KorkmazGoksulukZararsiz2014} showed that nonnormality of variables in the
``setosa" group went undetected by Mardia's skewness calculated on all four
of them (the corresponding $p$-value is 0.1772). Here, we shall compute
Mardia's skewness of the two most skewed variables (sepal length and petal
width), with the commands\\
\linebreak
\code{R>iris.m.mardia<-cbind(iris.m[1:50,1],iris.m[1:50,4])}\\
\code{R>SkewMardia(iris.m.mardia)}.\\
\linebreak
We obtain the output\\
\begin{lstlisting}
R> MardiaSkewness
[1] 1.641217
R> pvalue
[1] 0.008401288
\end{lstlisting}
The $p$-value is highly significant, clearly suggesting the presence of
skewness and hence of nonnormality. We conjecture that the normality test
based on Mardia's skewness is less powerful when skewness is present only in
a few variables, either original or projected, while the remaining variables
might be regarded as background noise. We hope to either prove or disprove
this conjecture by both theoretical results and numerical experiments in
future work.
\section{Conclusions}
\label{S:8}
\pkg{MaxSkew} and \pkg{MultiSkew} are two \proglang{R} packages aimed at
detecting, measuring and removing multivariate skewness. They also compute the three
main skewness measures. The function \code{SkewBoot} computes the
bootstrap $p$-value corresponding to the chosen skewness measure.
Skewness removal might be achieved with the function \code{MinSkew}. The function \code{Third}, which computes the third moment,
plays a role whenever the researcher compares the third sample moment with
the expected third moment under a given model, in order to get a better
insight into the model's fit.
The major drawback of both \pkg{MaxSkew} and \pkg{MultiSkew} is that
they address skewness by means of third-order moments only. In the first
place, third-order moments may not exist even if the distribution is skewed, as happens
for the skew-Cauchy distribution. In the second
place, the third moment of a random vector may be a null matrix even when
the random vector itself is asymmetric. In the third place, third-order
moments are not robust to outliers. We are currently investigating these
problems.
\section{Replica action for random $O(M)$ rotor model}
\label{saddle1}
In this subsection, we provide some additional details for the large $N,~M$ saddle point treatment for the rotor model. To begin, recall that the commutation relations for the rotor fields and the angular momenta are given by,
\begin{eqnarray}
[L_{i\mu\nu},n_{j\sigma}] = i\delta_{ij} (\delta_{\mu\sigma}n_{j\nu} - \delta_{\nu\sigma}n_{j\mu}).
\end{eqnarray}
In the $N\rightarrow \infty$ limit, the saddle point action can be obtained upon integrating over disorder configurations. The replicated partition function in Euclidean time is then given by,
\begin{eqnarray*}
Z_n&=&\int \mathcal{D}Q^{aa}_{mm}(\tau,\tau')~\mathcal{D}Q^{aa}_{mn}(\tau,\tau')~\mathcal{D}P^{ab}_{mn}(\tau,\tau')~\mathcal{D}\lambda~\mathcal{D}n^a(\tau)\\
&&\textnormal{exp}\bigg\{\int_0^\beta d\tau \int_0^\beta d\tau'\frac{NJ^2}{2}\bigg[-\sum_{a,m}\frac{1}{2}(Q^{aa}_{mm})^2-\sum_{a,m<n}(Q^{aa}_{mn})^2-\sum_{a<b,m,n}(P^{ab}_{mn})^2\bigg]\\
&&+\frac{MJ^2}{2}\bigg[\sum_{a,m}Q^{aa}_{mm}(\tau,\tau') \sum_i n_i^{ma}(\tau)n_i^{ma}(\tau')+2\sum_{a,m<n}Q^{aa}_{mn} \sum_i n_i^{ma}n_i^{na}+2\sum_{a<b,m,n}P^{ab}_{mn}\sum_i n_i^{ma}n_i^{nb}\bigg]\\
&&-\frac{M}{2g}\int_0^\beta d\tau\sum_{i,a}[(\partial_{\tau}n_i^a(\tau))^2+\lambda (n_i^a(\tau))^2]\bigg\}
\end{eqnarray*}
where $a,b\in\{1,...,n\}$ are replica indices, $i,j\in\{1,...,N\}$ are site indices, and $m,n\in\{1,...,M\}$ are vector indices. $Q_{mn}^{aa}$ and $P_{mn}^{ab}$ correspond to quadrupolar and spin glass order, respectively. Since we are only interested in the paramagnetic regimes in this work, both of the above quantities have zero expectation value.
Assuming the saddle point to be $O(M)$ invariant, the replica action can be simplified such that it describes the quantum mechanical action for a single rotor with multiple replica indices,
\begin{eqnarray}
Z=\int \mathcal{D}n^a(\tau)~\mathcal{D}\lambda~\mathcal{D}Q^{ab}~ \textnormal{exp}\bigg[-\frac{M}{2g}\int d\tau[(\partial_\tau n^a)^2+\lambda((n^a)^2-1)]\nonumber\\
+\frac{MJ^2}{2}\int d\tau\int d\tau'[Q^{ab}(\tau-\tau')n^a(\tau)\cdot n^b(\tau')-\frac{1}{2}Q^{ab}(\tau-\tau')^2]\bigg].
\end{eqnarray}
In the large $M$ limit, we can define the imaginary time ($\tau$) auto-correlation functions $Q(\tau)=\langle {\mathbf{n}}(\tau)\cdot {\mathbf{n}}(0)\rangle$, where we are only interested in regimes where $Q(\tau)$ is replica diagonal. We can solve for $Q(i\omega_n)$ (which leads to the self-consistency condition, Eq.~\ref{SC}, of the main text) and obtain,
\begin{equation}
Q(i\omega_n)=\frac{\omega_n^2}{2gJ^2}+\frac{\lambda}{2gJ^2}-\frac{1}{2gJ^2}\sqrt{(\omega_n^2+\lambda-2gJ)(\omega_n^2+\lambda+2gJ)}.
\label{q}
\end{equation}
At $T=0$, the spin gap vanishes at the critical point between a paramagnet and a spin-glass phase when $\lambda=2gJ$ (see Fig.~\ref{phasediag}); the actual phase boundary in the $g-T$ plane can be obtained by solving the constraint equation $Q(\tau=0)=1$ with the above value of $\lambda$, which can be recast as,
\begin{eqnarray}
\int_0^\infty \frac{d\omega}{\pi} A(\omega) \coth\bigg(\frac{\beta\omega}{2} \bigg) = 1.
\label{cons}
\end{eqnarray}
Near the $T=0$ quantum critical point, this is given by,
\begin{eqnarray}
gJ = \frac{9\pi^2J^2}{16} - 3T^2.
\end{eqnarray}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.25]{phasediag.pdf}
\end{center}
\caption{The phase-diagram for the random rotor model as a function of temperature and $g$.}
\label{phasediag}
\end{figure}
\section{$F(t_1,t_2)$ in the $N,~M\rightarrow\infty$ limit}
\label{ft}
In this section, we obtain the time dependence of $F(t_1,t_2)$ in the large $N,~M$ limit, where there is no exponential growth. Recall that,
\begin{equation}
F(\omega_1,\omega_2)=\frac{N}{M}\frac{Q_R(\omega_1)Q_R(\omega_2)}{1-J^2Q_R(\omega_1)Q_R(\omega_2)}=\frac{N}{MJ^2}\bigg(-1+\frac{1}{1-J^2Q_R(\omega_1)Q_R(\omega_2)}\bigg).
\end{equation}
The quantum critical region of interest to us here is when $\Delta_s^2\ll gJ$. Let us introduce the variables,
\begin{eqnarray}
x=\sqrt{\frac{-\omega_1^2+\Delta_s^2}{2gJ}},~~y=\sqrt{\frac{-\omega_2^2+\Delta_s^2}{2gJ}},
\end{eqnarray}
such that the denominator can be simplified to,
\begin{eqnarray}
1-J^2Q_R(\omega_1)Q_R(\omega_2)&\approx&1-(x^2+1-x\sqrt{2})(y^2+1-y\sqrt{2})\\
&=& - (x+y)(x+y-\sqrt{2}),
\end{eqnarray}
where we have assumed $x,y\ll1$ in the first line above and ignored terms that are $O(xy^2,~x^2y)$ and higher.
By making this approximation, we focus only on the long-time behavior, with a characteristic time scale $1/\sqrt{2gJ}$.
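For the reader's convenience, the intermediate expansion behind the second line above is
\begin{eqnarray}
(1-\sqrt{2}x+x^2)(1-\sqrt{2}y+y^2) = 1 - \sqrt{2}(x+y) + (x+y)^2 + O(x^2y,~xy^2),
\end{eqnarray}
so that $1-J^2Q_R(\omega_1)Q_R(\omega_2) \approx \sqrt{2}(x+y) - (x+y)^2 = -(x+y)(x+y-\sqrt{2})$.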
We can then approximate $F(x,y)$ as
\begin{eqnarray}
F(x,y)&\approx&\frac{N}{MJ^2}\bigg[-1+\frac{1}{\sqrt{2}}\bigg(\frac{1}{x+y}+\frac{1}{\sqrt{2}-x-y}\bigg)\bigg]\\
&\approx&\frac{N}{MJ^2}\bigg[-\frac{1}{2}+\frac{1}{\sqrt{2}}\frac{1}{x+y}+O(x,y)\bigg].
\end{eqnarray}
The leading time dependence of $F(t_1,t_2)$ comes from the second term above, which can be simplified as,
\begin{eqnarray}
\frac{1}{x+y}&=&\frac{\sqrt{2gJ}}{\Delta_s}\bigg[\frac{\sqrt{1-(\omega_1^2/\Delta_s^2)}-\sqrt{1-(\omega_2^2/\Delta_s^2)}}{-(\omega_1^2/\Delta_s^2)+(\omega_2^2/\Delta_s^2)}\bigg].
\label{1byxy}
\end{eqnarray}
Rescaling frequencies and time as $\tilde\omega_{1,2}=\omega_{1,2}/\Delta_s,~ \tilde{t}_{1,2}=\Delta_s t_{1,2}$, the inverse Fourier transform of Eqn.~\ref{1byxy} is given by
\begin{eqnarray}
I(\tilde{t}_1,\tilde{t}_2)\equiv\sqrt{2gJ}\Delta_s\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\frac{\sqrt{1-\tilde\omega_1^2}-\sqrt{1-\tilde\omega_2^2}}{(\tilde\omega_2+\tilde\omega_1)(\tilde\omega_2-\tilde\omega_1)}e^{-i\tilde\omega_1\tilde{t}_1-i\tilde\omega_2\tilde{t}_2}\frac{d\tilde\omega_1}{2\pi}\frac{d\tilde\omega_2}{2\pi}
\label{int_I}
\end{eqnarray}
We can redefine $\tilde{t} = \frac{\tilde{t}_1-\tilde{t}_2}{2}$ and $\tilde{T} = \frac{\tilde{t}_1+\tilde{t}_2}{2}$, so that $I(\tilde{t}_1,\tilde{t}_2) = I(\tilde{T},\tilde{t})$. Taking the partial derivative with respect to $\tilde{t}$ in Eqn.~\ref{int_I} yields,
\begin{equation}
\frac{\partial I(\tilde{T},\tilde{t})}{\partial \tilde{t}} = i \sqrt{2gJ}\Delta_s\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\frac{\sqrt{1-\tilde\omega_1^2}-\sqrt{1-\tilde\omega_2^2}}{\tilde\omega_2+\tilde\omega_1}e^{-i\tilde\omega_1\tilde{t}_1-i\tilde\omega_2\tilde{t}_2}\frac{d\tilde\omega_1}{2\pi}\frac{d\tilde\omega_2}{2\pi}\label{deriv_I_t}
\end{equation}
We can integrate over $\tilde{\omega}_2$ in the first term in Eqn.~\ref{deriv_I_t}, and integrate over $\tilde{\omega}_1$ in the second term in Eqn.~\ref{deriv_I_t}.
\begin{eqnarray}
\frac{\partial I(\tilde{T},\tilde{t})}{\partial \tilde{t}}= \frac{\sqrt{2gJ}\Delta_s}{2} \Big(sgn(\tilde{t}_2)\int_{-\infty}^{\infty}\sqrt{1-\tilde\omega_1^2}e^{-2i\tilde\omega_1\tilde{t}}\frac{d\tilde\omega_1}{2\pi}-sgn(\tilde{t}_1)\int_{-\infty}^{\infty}\sqrt{1-\tilde\omega_2^2}e^{2i\tilde\omega_2\tilde{t}}\frac{d\tilde\omega_2}{2\pi}\Big)=\sqrt{2gJ}\Delta_s sgn(\tilde{t}) \frac{J_1(2\tilde{t})}{4\tilde{t}},\label{partial_t_tilde}
\end{eqnarray}
where $J_1$ is the Bessel function and we always let $\tilde{t}_{1,2} \geq 0$.
We now explain in more detail how to perform the square-root integral,
\begin{eqnarray}
\int_{-\infty}^{\infty} \sqrt{1-\omega^2} e^{-i \omega t} \frac{d\omega}{2\pi} &=& \int_{-1}^{1} \sqrt{1-\omega^2} e^{-i \omega t} \frac{d\omega}{2\pi} -i\int_{1}^{\infty} \sqrt{\omega^2-1} e^{-i \omega t} \frac{d\omega}{2\pi}+i\int_{-\infty}^{-1} \sqrt{\omega^2-1} e^{-i \omega t} \frac{d\omega}{2\pi}\\
&=& \frac{J_1(t)}{2t} - \frac{K_1(i t)+K_1(-i t)}{2 \pi t}\\
&=& \theta(t)\frac{J_1(t)}{t},\label{bessel}
\end{eqnarray}
where $\theta(t)$ is the Heaviside function and $K_1$ is the Bessel function of the second kind.
Using Eqn.~\ref{bessel}, we obtain the result in Eqn.~\ref{partial_t_tilde}. Integrating over $\tilde{t}$ in Eqn.~\ref{partial_t_tilde}, we get,
\begin{eqnarray}
I(\tilde{T},\tilde{t}) = I(\tilde{T},0) + \frac{\sqrt{2gJ}\Delta_s}{4} |\tilde{t}| [_1F_2](\frac{1}{2};\frac{3}{2},2;-\tilde{t}^2),
\end{eqnarray}
where $[_1F_2]$ is the hypergeometric function.
Next, we want to calculate $I(\tilde{T},0)$.
\begin{eqnarray}
\frac{d I(\tilde{T},0)}{d \tilde{T}} &=& -i \sqrt{2gJ}\Delta_s \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\frac{\sqrt{1-\tilde\omega_1^2}-\sqrt{1-\tilde\omega_2^2}}{\tilde\omega_2-\tilde\omega_1}e^{-i(\tilde\omega_1+\tilde\omega_2)\tilde{T}}\frac{d\tilde\omega_1}{2\pi}\frac{d\tilde\omega_2}{2\pi}\\
&=&-i 2\sqrt{2gJ}\Delta_s \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\frac{\sqrt{1-\tilde\omega_1^2}}{\tilde\omega_2-\tilde\omega_1}e^{-i(\tilde\omega_1+\tilde\omega_2)\tilde{T}}\frac{d\tilde\omega_1}{2\pi}\frac{d\tilde\omega_2}{2\pi}\\
&=&-\sqrt{2gJ}\Delta_s sgn(\tilde{T})\int_{-\infty}^{\infty}\sqrt{1-\tilde\omega_1^2}e^{-2i\tilde\omega_1\tilde{T}}\frac{d\tilde\omega_1}{2\pi}\\
&=&-\sqrt{2gJ}\Delta_s sgn(\tilde{T})\frac{J_1(2\tilde{T})}{2\tilde{T}}
\end{eqnarray}
Integrating over $\tilde{T}$, one obtains,
\begin{eqnarray}
I(\tilde{T},0) &=& I(\infty,0) - \frac{\sqrt{2gJ}\Delta_s}{2} \tilde{T} [_1F_2](\frac{1}{2};\frac{3}{2},2;-\frac{(2\tilde{T})^2}{4})+\frac{\sqrt{2gJ}\Delta_s}{2}\\
&=&-\frac{\sqrt{2gJ}\Delta_s}{4} (\tilde{t}_1+\tilde{t}_2) [_1F_2](\frac{1}{2};\frac{3}{2},2;-\frac{(\tilde{t}_1+\tilde{t}_2)^2}{4})+\frac{\sqrt{2gJ}\Delta_s}{2}
\end{eqnarray}
Note that $\lim_{t_1\to \infty, t_2\to\infty}F(t_1,t_2) =0$ by the Riemann--Lebesgue lemma. Here, we therefore set $I(\infty,0)\approx 0$ in order to match the long time behavior of $F(t_1,t_2)$.
Finally, we obtain the approximate expression for the squared-commutator,
\begin{eqnarray}
F(t_1,t_2) \approx \frac{N\sqrt{gJ}\Delta_s}{MJ^2}\Big[\frac{1}{2}-\frac{1}{4}(\tilde{t}_1+\tilde{t}_2) [_1F_2](\frac{1}{2};\frac{3}{2},2;-(\tilde{t}_1+\tilde{t}_2)^2/4)+\frac{1}{8}|\tilde{t}_1-\tilde{t}_2| [_1F_2](\frac{1}{2};\frac{3}{2},2;-(\tilde{t}_1-\tilde{t}_2)^2/4)\Big].
\end{eqnarray}
\section{Self-energy at $O(1/M)$ and decay rate for OTOC}
\label{SEsup}
As discussed in the main text, the leading correction to the self-energy at $O(u^2/M)$ from the quartic interaction term (Fig.~\ref{SEu}) is given by,
\begin{eqnarray}
\frac{\Sigma(i\omega_n)}{g} &=& u^2\left(\frac{2\sqrt{gJ}}{gJ^2}\right)^3\int d\tau~ \frac{1}{\tau^6}~e^{-i\omega_n \tau}
\nonumber\\
&=&-u^2\left(\frac{\sqrt{gJ}}{gJ^2}\right)^3\frac{1}{15}\pi |\omega_n|^5.
\label{se}
\end{eqnarray}
The above self-energy correction gives rise to an exponential decay of the Green's function and of the OTOC. The decay rate can be estimated in a straightforward fashion by looking for the complex solutions of the following equation,
\begin{equation}
-\omega^2+\Delta_s^2-\frac{\Sigma(\omega)}{M}=0.
\end{equation}
This yields, $\omega=\pm\Delta_s-i\Gamma$, where
\begin{eqnarray}
\Gamma= \frac{u^2}{M}\frac{g}{30}\bigg(\frac{\sqrt{gJ}}{gJ^2}\bigg)^3\Delta_s^4,
\end{eqnarray}
as quoted in the main text. The negative imaginary part gives rise to an exponential decay of $F(t_1,t_2)$.
\section{Numerical analysis of $1/M$ corrections}
In this section, we describe the details of our numerical evaluation of the ladder sum for $F(t,t)$. The Bethe-Salpeter equation for the $1/M$ corrected squared-commutator $F_u(t_1,t_2)$ can be written as,
\begin{equation}
F_u(t_1,t_2)=F_d(t_1,t_2)+\frac{2u^2}{M}\int_0^{t_1} dt_3 \int_0^{t_2} dt_4~ F_d(t_1-t_3,t_2-t_4)~P(t_3-t_4)~F_u(t_3,t_4),\label{int_num}
\end{equation}
where $F_d(t_1,t_2)$ represents the contribution to the ladder sum without the $O(u^2/M)$ correction to the rung, but includes the self-energy at $O(u^2/M)$ in the dressed propagators. As noted earlier, $F(t_1,t_2)$ in the large $N,~M$ limit does not have any exponential growth; an explicit analytical form for $F(t_1,t_2)$ was given above. Taking the $O(u^2/M)$ self-energy correction into account, $F_d(t_1,t_2) =F(t_1,t_2)e^{-\Gamma (t_1+t_2)}$, where we evaluated $\Gamma \sim (u^2/M) \Delta_s^4$. Finally, we have defined $P(t)=[Q_W(t)]^2$.
The above self-consistent integral equation can be viewed as solving a matrix inversion problem once we rewrite it as,
\begin{equation}
\int dt_3 \int dt_4~ \bigg[\delta(t_{13})~\delta(t_{24})-\frac{2u^2}{M} ~F_d(t_{13},t_{24})~P(t_3-t_4)\bigg]~F_u(t_3,t_4)=F_d(t_1,t_2),
\end{equation}
where $t_{13}=t_1-t_3$ and $t_{24}=t_2-t_4$.
In more explicit terms, the integral on the left hand side can be discretized as a summation on a fine (time) grid, $[A_{t_1t_2;t_3t_4} F_{u;t_3t_4}] = F_{d;t_1t_2}$, such that computing the inverse $A_{t_1t_2;t_3t_4}^{-1}$ will lead us to the required form of $F_u$.
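For concreteness, the matrix-inversion scheme just described can be sketched in a few lines of \proglang{R} (a schematic illustration only, assuming \code{Fd(t1, t2)} and \code{P(t)} are supplied as functions and using a coarse hypothetical grid; this is not the code used to produce Fig.~\ref{num1}):
\begin{lstlisting}
solveFu <- function(Fd, P, u2M, tmax, n) {
  ts <- seq(0, tmax, length.out = n)
  h  <- ts[2] - ts[1]
  g  <- expand.grid(t1 = ts, t2 = ts)   # flattened (t1, t2) grid
  N  <- nrow(g)
  A  <- diag(N)
  for (r in 1:N) for (c in 1:N)
    if (g$t1[c] <= g$t1[r] && g$t2[c] <= g$t2[r])   # causal domain
      A[r, c] <- A[r, c] - 2 * u2M *
        Fd(g$t1[r] - g$t1[c], g$t2[r] - g$t2[c]) *
        P(g$t1[c] - g$t2[c]) * h^2
  b <- mapply(Fd, g$t1, g$t2)
  matrix(solve(A, b), n, n)             # F_u on the (t1, t2) grid
}
\end{lstlisting}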
In our numerical calculations, we focus exclusively on the exponential piece of $F_u(t,t)=F_0(t)~e^{\lambda_L t}$, where $F_0(t)$ is an undetermined function of time and $\lambda_L>0$ is the Lyapunov exponent. At late times, our numerical analysis is consistent with an exponent $\lambda_L=a\frac{T^4}{\sqrt{\log(1/T)}}$, where $a$ is a temperature-independent `fitting' parameter. However, there is some uncertainty associated with observing the $1/\sqrt{\log(1/T)}$ piece in $\lambda_L$ in our numerical analysis.
The numerical results for $F_u(t,t)$ are summarized in Fig.~\ref{num1} for different temperatures.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=.6]{ftt.pdf}
\end{center}
\caption{A plot of $\log[F_u(t,t)/F(t,t)]$ as a function of time for different temperatures shows a clear exponential growth. The lines from top to bottom represent temperature from $0.05\Lambda$ to $0.025\Lambda$ with interval $0.025\Lambda$.}
\label{num1}
\end{figure}
\section{$1/M$ corrections to the random matrix model}
The quartic interaction between the rotor fields can be written as,
\begin{eqnarray}
H_{\textnormal{int}} = uM \sum_i \bigg[\vec{n}_i^2(\tau)\bigg]^2= uM \sum_i \sum_{\{p_i\}} \varphi_i^{p_1}\varphi_i^{p_2}\varphi_i^{p_3}\varphi_i^{p_4} (\vec{V}_{p_1}\cdot\vec{V}_{p_2})(\vec{V}_{p_3}\cdot\vec{V}_{p_4}),
\end{eqnarray}
where $\varphi_i^p$ represents an orthogonal matrix.
Let us consider the $1/M$ correction to the Green's function for the $\vec{V}_p$ field due to the above interaction term. At one-loop order, this leads to a contribution $O(u/M)$ which renormalizes the `mass' with no frequency dependence and can be absorbed into a redefinition of $\lambda$. At two-loop order, the contribution of the interaction to the self-energy is,
\begin{eqnarray}
\Sigma_{p_1p_1'} = u^2 \sum_{i,j}\sum_{p_2p_3p_4} \varphi_i^{p_1}\varphi_i^{p_2}\varphi_i^{p_3}\varphi_i^{p_4}\varphi_j^{p_1'}\varphi_j^{p_2}\varphi_j^{p_3}\varphi_j^{p_4}~ \widetilde{Q}(\lambda_{p_2})~\widetilde{Q}(\lambda_{p_3})~\widetilde{Q}(\lambda_{p_4})
\label{sermt}
\end{eqnarray}
where $\widetilde{Q}(\lambda_p)$ is the Green's function for $\vec{V}_p$, as introduced earlier, and we have suppressed the frequency dependence above.
In order to carry out an averaging over the disorder distribution, we now need to know the distribution of both the eigenvalues, $\lambda_p$, and the eigenvectors, $\varphi_i^p$; for the GOE of random matrices, these distributions are known to be {\it independent} \cite{RMTbook}. This allows us to integrate over the eigenvalues first. (Note that the bare Green's function for one component of $\vec{V}_p$ has a factor $1/M$ such that the leading $1/M$ correction has a factor $1/M^2$.) Including the outer legs, we have,
\begin{eqnarray}
\frac{1}{M^2}\widetilde{Q}(\lambda_{p_1})~\Sigma_{p_1p_1'}~\widetilde{Q}(\lambda_{p_1'}) = \frac{u^2}{M^2}~\sum_{i,j}\sum_{p_2p_3p_4} \varphi_i^{p_1}\varphi_i^{p_2}\varphi_i^{p_3}\varphi_i^{p_4}\varphi_j^{p_1'}\varphi_j^{p_2}\varphi_j^{p_3}\varphi_j^{p_4}~ \widetilde{Q}(\lambda_{p_1})~\widetilde{Q}(\lambda_{p_1'})~\widetilde{Q}(\lambda_{p_2})~\widetilde{Q}(\lambda_{p_3})~\widetilde{Q}(\lambda_{p_4})
.
\end{eqnarray}
Averaging over the eigenvalue distributions, the $p$ dependence of the eigenvalues drops out and one can perform the summation over eigenvectors,
\begin{eqnarray}
&&\frac{u^2}{M^2} ~\sum_{i,j}\sum_{p_2p_3p_4} \varphi_i^{p_1}\varphi_i^{p_2}\varphi_i^{p_3}\varphi_i^{p_4}\varphi_j^{p_1'}\varphi_j^{p_2}\varphi_j^{p_3}\varphi_j^{p_4}~\prod_{\{\lambda_{p_i}\}}\int d\lambda_{p_i}~ \widetilde{Q}(\lambda_{p_1})~ \widetilde{Q}(\lambda_{p_2})~\widetilde{Q}(\lambda_{p_3})~\widetilde{Q}(\lambda_{p_4})~\widetilde{Q}(\lambda_{p_5})~R_5(\lambda_{p_1},\lambda_{p_2},\lambda_{p_3},\lambda_{p_4},\lambda_{p_5})\\
&=&\frac{u^2}{M^2} \delta_{p_1,p_1'}~\prod_{\{\lambda_{p_i}\}}\int d\lambda_{p_i}~ \widetilde{Q}(\lambda_{p_1})~ \widetilde{Q}(\lambda_{p_2})~\widetilde{Q}(\lambda_{p_3})~\widetilde{Q}(\lambda_{p_4})~\widetilde{Q}(\lambda_{p_5})~R_5(\lambda_{p_1},\lambda_{p_2},\lambda_{p_3},\lambda_{p_4},\lambda_{p_5})
\end{eqnarray}
where $R_5(...)$ is the 5-level correlation function in GOE. We can express the 5-level correlation function more generally as,
\begin{eqnarray}
R_5(x_1,x_2,x_3,x_4,x_5) &=& \sigma_J(x_1)~\sigma_J(x_2)~\sigma_J(x_3)~\sigma_J(x_4)~\sigma_J(x_5)-\sum_P \sigma_J(x_{p_1})~\sigma_J(x_{p_2})~\sigma_J(x_{p_3})~T_2(x_{p_4},x_{p_5})\nonumber\\
&+&\sum_P \sigma_J(x_{p_1})~\sigma_J(x_{p_2})~T_3(x_{p_3},x_{p_4},x_{p_5})+\sum_P \sigma_J(x_{p_1})~T_2(x_{p_2},x_{p_3})T_2(x_{p_4},x_{p_5})\nonumber\\
&-&\sum_P \sigma_J(x_{p_1})~T_4(x_{p_2},x_{p_3},x_{p_4},x_{p_5})-\sum_P T_2(x_{p_1},x_{p_2})~T_3(x_{p_3},x_{p_4},x_{p_5})+T_5(x_{1},x_{2},x_{3},x_{4},x_{5}).
\label{5level}
\end{eqnarray}
The first term above is a simple (independent) product of the density of states for the five eigenvalues. The correlation among the different eigenvalues is contained in the $2,~3,~4$ and $5-$level ``cluster functions", $T_2,~T_3,~T_4,~T_5$, respectively \cite{RMTbook}. The summation over $P$ runs over the permutations of $x_1$, $x_2$, $x_3$, $x_4$ and $x_5$. The first term in $R_5(x_1,...,x_5)$ above leads to the $\sim u^2|\omega|^5$ singular structure in the imaginary part of the self-energy, as in the main text, since each integration $[\int~ d\lambda_i~ \sigma_J(\lambda_i) \widetilde{Q}(\lambda_i)]$ gives a factor of $Q(i\omega_n)$ (the rest follows the discussion in the main text). The remaining terms in Eqn.~\ref{5level}, which take into account correlations among eigenvalues of the random matrix, are not included in the replica treatment of the large-$N,~M$ saddle-point action for the rotor theory.
Within the random-matrix picture, the higher-order corrections to the ``free" theory can be studied systematically by introducing the higher level correlation functions (i.e. $R_n(x_1,...,x_n)$) for GOE \cite{RMTbook}. As is clear from the above discussion, there is a piece for all $R_n(x_1,...,x_n)$ which corresponds simply to an independent product of the $\sigma_J(x_i)$; these are the terms that are also included in the replica action for the rotor theory.
|
proofpile-arXiv_065-4216 |
\section{Construction of $\mathbf{M}^R$} \label{App:rank}
We give a brief outline of the construction of $\mathbf{M}^R$ and demonstrate that its maximal eigenvalue coincides with $R_0^r$ given in \cite{Ball_etal}, Section 3.1.3.
We begin by computing the transition probabilities for a household epidemic in state $(a,b)$, that is, with $a$ susceptibles and $b$ infectives. By considering the total amount of infection, $I_b$, generated by the $b$ infectives and the fact that the infectious periods, $T$, are independent and identically distributed, it was shown in \cite{Pellis_etal}, Appendix A, that the number of new infections, $X_{(a,b)}$, satisfies $X_{(a,b)} | I_b \sim {\rm Bin} (a, 1- \exp(-\lambda_L I_b))$ with
\begin{eqnarray} \label{eq:rank:3} \pz (X_{(a,b)} = c ) &=& \ez[ \pz (X_{(a,b)} = c | I_b)] \nonumber \\
&=& \binom{a}{c} \ez \left[ \{1 -\exp(-\lambda_L I_b) \}^c \exp(-\lambda_L I_b)^{a-c} \right] \nonumber \\
&=& \binom{a}{c} \sum_{j=0}^c \binom{c}{j} (-1)^j \ez [ \exp(- \lambda_L (a+j-c) I_b] \nonumber \\
&=& \binom{a}{c} \sum_{j=0}^c \binom{c}{j} (-1)^j \phi_T (\lambda_L (a+j-c))^b, \end{eqnarray} where $\phi_T (\theta) = \ez [ \exp(-\theta T)]$ is the Laplace transform of the infectious period distribution.
Note that if a household epidemic transitions from state $(a,b)$ to state $(a-c,c)$, then the mean number of infections due to any given infective is simply $c/b$.
For the rank generational representation of the epidemic, we can again subsume all states $(0,b)$ into $(0,1)^\ast$. We note that in contrast to Section \ref{S:example:house}, epidemic states $(a,1)$ $(1 \leq a \leq h-2)$ can arise from the epidemic process whilst states $(h-b,b)$ $(b>1)$ will not occur.
For all $\{(a,b); b >0, a+b \leq h\}$, we have that $M_{(a,b),(h-1,1)}^R = \mu_G= \lambda_G \ez[T]$ (the mean number of global infectious contacts made by an individual) and for $(d,c) \neq (h-1,1)$,
\begin{eqnarray} \label{eq:rank:4} M_{(a,b),(d,c)}^R = \left\{ \begin{array}{ll} \frac{c}{b} \binom{a}{c} \sum_{j=0}^c \binom{c}{j} (-1)^j \phi_T (\lambda_L (a+j-c))^b & \mbox{if } d = a-c >0 \\
\frac{a}{b} \sum_{j=0}^a \binom{a}{j} (-1)^j \phi_T (\lambda_L j)^b & \mbox{if } (d,c)=(0,1)^\ast \\
0 & \mbox{otherwise}. \end{array} \right. \end{eqnarray}
We can proceed along identical lines to \eqref{eq:house:E:6} in decomposing $\mathbf{M}^R$ into
\begin{eqnarray} \label{eq:rank:4a} \mathbf{M}^R = \mathbf{G} + \mathbf{U}^R, \end{eqnarray} where $\mathbf{G}$ is the $K \times K$ matrix ($K$ denotes the total number of infectious states) with $G_{k1} = \mu_G$ $(1 \leq k \leq K)$ and $G_{kj}=0$ otherwise. For $i=0,1, \ldots, h-1$, let $\mu_i^R$ denote the mean number of individuals in the $i^{th}$ generation of the rank construction of the epidemic; then $\mu_i^R = \sum_{j=1}^K \left[ (\mathbf{U}^R)^i \right]_{1j}$, the sum of the first row of $(\mathbf{U}^R)^i$. We can follow essentially identical arguments to those given in Section \ref{S:example:house} to show that $R_0^r$ is the maximal eigenvalue of $\mathbf{M}^R$.
To illustrate this approach, we consider households of size $h=3$. The possible infectious units are $(2,1), (1,1)$ and $(0,1)^\ast$ with mean reproductive matrix
\begin{eqnarray} \label{eq:rank:5} \mathbf{M}^R = \begin{pmatrix} \mu_G & 2 \{ \phi_T (\lambda_L) - \phi_T (2 \lambda_L) \} & 2 \{ 1- 2 \phi_T (\lambda_L) + \phi_T (2 \lambda_L) \} \\
\mu_G & 0& 1 - \phi_T (\lambda_L) \\
\mu_G & 0 & 0 \end{pmatrix}. \end{eqnarray}
The eigenvalues of $\mathbf{M}^R$ are solutions of the cubic equation
\begin{eqnarray} \label{eq:rank:6} s^3 - \mu_G s^2 - 2 \left\{1 - \phi_T ( \lambda_L) \right\} \mu_G s - 2\mu_G \{1 - \phi_T (\lambda_L)\}\{ \phi_T (\lambda_L) - \phi_T (2 \lambda_L) \} &=&0 \nonumber \\
s^3 - \mu_G \mu_0^R s^2 - \mu_G \mu_1^R s - \mu_G \mu_2^R &=&0, \end{eqnarray}
where $\mu_0^R =1$, $\mu_1^R= 2 \{ 1 - \phi_T (\lambda_L) \}$ and
$\mu_2^R = 2 \{\phi_T (\lambda_L) - \phi_T (2 \lambda_L) \} \{1 - \phi_T (\lambda_L) \}$ are the mean number of infectives in rank generations 0, 1 and 2, respectively, of the household epidemic model. Given that \eqref{eq:rank:6} is equivalent to \cite{Pellis_etal}, (3.3), it follows that the maximal eigenvalue of $\mathbf{M}^R$ is $R_0^r$.
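As a quick numerical illustration, $R_0^r$ can be obtained directly as the maximal eigenvalue of $\mathbf{M}^R$ in \eqref{eq:rank:5}. The following sketch assumes an exponential infectious period, so that $\phi_T(\theta)=\gamma/(\gamma+\theta)$ and $\mu_G=\lambda_G/\gamma$, with illustrative parameter values:
\begin{lstlisting}
R> lambdaL <- 1; lambdaG <- 0.8; gam <- 1      # illustrative rates
R> phi <- function(theta) gam / (gam + theta)  # Laplace transform of T
R> muG <- lambdaG / gam                        # mean global contacts
R> MR <- rbind(c(muG, 2 * (phi(lambdaL) - phi(2 * lambdaL)),
+                2 * (1 - 2 * phi(lambdaL) + phi(2 * lambdaL))),
+              c(muG, 0, 1 - phi(lambdaL)),
+              c(muG, 0, 0))
R> max(Re(eigen(MR)$values))                   # R_0^r
\end{lstlisting}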
\subsection{Comments on $\mathbf{M}$}
We make a couple of observations concerning the construction of $\mathbf{M}$.
In the $SIR$ household epidemic model, we can reduce the number of states to $h(h+1)/2 - (h-1)$ by noting that in households with 0 susceptibles, no local infections can occur and thus infectives can only make global infections acting independently. Therefore we can subsume the states $(0,1), (0,2), \ldots, (0,h)$ into a single state, $(0,1)^\ast$ say, with a local infection in households with 1 susceptible resulting in the household moving to state $(0,1)^\ast$, see for, example \cite{Neal16}.
For $SIR$ epidemics, there will typically be a natural ordering of infectious unit types such that only transitions of an infectious unit from type $i$ to type $j$ ($i < j$) are possible. For example, with household epidemics we can order the types such that type $(a,b)$ is said to be less than type $(c,d)$ if $a >c$, or if $a=c$ and $b>d$. In such cases $\mathbf{P}$ is an upper triangular matrix, and if the main diagonal of $\mathbf{P}$ is $\mathbf{0}$ then there exists $n_0 \leq K$ such that for all $n > n_0$, $\mathbf{P}^n = \mathbf{0}$. Then
\begin{eqnarray} \label{eq:M:3} \mathbf{M} = (\mathbf{I} - \mathbf{P})^{-1} \Phi =\left(\sum_{n=0}^{n_0} \mathbf{P}^n \right) \Phi, \end{eqnarray}
and we can compute $\mathbf{M}$ without requiring matrix inversion.
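The computational shortcut in \eqref{eq:M:3} is immediate to implement; a minimal sketch for generic matrices \code{P} (strictly upper triangular) and \code{Phi} supplied by the user:
\begin{lstlisting}
R> MfromPPhi <- function(P, Phi) {
+    K <- nrow(P); S <- diag(K); Pn <- diag(K)
+    for (n in 1:K) { Pn <- Pn %*% P; S <- S + Pn }  # P^n = 0 for n >= K
+    S %*% Phi
+  }
\end{lstlisting}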
\section{Conclusions} \label{S:conc}
In this paper we have given a simple definition and derivation of $R_0$ for structured populations by considering the population to consist of different types of infectious units. This multi-type approach to constructing $R_0$, via the mean offspring matrix of the different types of infectious units, follows a widely established mechanism introduced in \cite{Diekmann_etal}. Moreover, we have demonstrated that for $SIR$ household epidemic models that $R_0$ coincides with $R_0^g$, the generational basic reproduction number defined in \cite{Ball_etal}. In \cite{Ball_etal}, the rank generational basic reproduction number, $R_0^r$, is also considered and is taken to be the default choice of $R_0$ in that paper.
For the household $SIR$ epidemic model it is straightforward to define and construct a rank generational mean reproduction matrix $\mathbf{M}^R$ for a general infectious period distribution, $T$. The approach is to represent the evolution of the epidemic as a discrete time process generation-by-generation. This approach ignores the time dynamics of the epidemic but leaves the final size unaffected and dates back to \cite{Ludwig75}.
The initial infective in the household (infected by a global infection) forms generation 0. Then, for $i=1,2,\ldots,h-1$, the infectious contacts made by the infectives in generation $i-1$ are considered and any susceptible individual contacted by an infective in generation $i-1$ becomes an infective in generation $i$. We define an infective to be a type $(a,b)$ individual if the generation of the household epidemic in which they are an infective has $b$ infectives and $a$ susceptibles. In the construction of $\mathbf{M}^R$, we again look at the mean number of infections attributable to a given infective, with details provided in Appendix \ref{App:rank}. Let $\mu_i^R$ denote the mean number of infectives in generation $i$ of the household epidemic; then it is shown in \cite{Ball_etal}, Section 3.1.3, that for all $k \geq 1$, $\sum_{i=0}^k \mu_i^R \geq \sum_{i=0}^k \mu_i$, which in turn implies $R_0^r \geq R_0^g$. The construction of $\mathbf{M}^R$ is straightforward using \cite{Pellis_etal}, Appendix A, and we provide a brief outline in Appendix \ref{App:rank} of how similar arguments to those used in Section \ref{S:example:house} can be used to show that $R_0^r$ is the maximal eigenvalue of $\mathbf{M}^R$.
The rank generational construction is natural for the $SIR$ epidemic model and allows us to move beyond $T \sim {\rm Exp} (\gamma)$, but does not readily apply to $SIS$ epidemic models. Extensions of the $SIS$ epidemic model are possible by using the method of stages, see \cite{Barbour76}, where $T$ can be expressed as a sum or mixture of exponential distributions and by extending the number of infectious units to allow for individuals in different stages of the infectious period. In principle $\mathbf{P}$ and $\Phi$ can be constructed as above, but the number of possible infectious units grows rapidly, making the calculations more cumbersome.
\subsection{Construction of $\mathbf{M}$}
Consider an infective belonging to an infectious unit of state $i$ ($i=1,2,\ldots,K$). Suppose that there are $n_i$ events which can occur to an infectious unit in state $i$. Let $q_{il}$ $(i=1,2,\ldots,K;l=1,2,\ldots,n_i)$ denote the probability that a type $l$ event occurs in the infectious unit. Let $a_{il}$ $(i=1,2,\ldots,K;l=1,2,\ldots,n_i)$ denote the state of the infectious unit following the type $l$ event, with $a_{il}= 0$ if the infective recovers and so is no longer infectious. Let $Y_{ilj}$ $(i,j=1,2,\ldots,K;l=1,2,\ldots,n_i)$ denote the total number of type $j$ infectious units generated by an infective who undergoes a type $l$ event
with $\mu_{ilj} = E[Y_{ilj}]$ and $\mathbf{Y}_{il} = (Y_{il1}, Y_{il2}, \ldots, Y_{ilK})$.
For $i,j=1,2,\ldots,K$, let
\begin{eqnarray} \label{eq:M:prep:1} p_{ij} = \sum_{l=1}^{n_i} 1_{\{a_{il} =j \}} q_{il}, \end{eqnarray} the probability that an infective belonging to a state $i$ infectious unit moves to a state $j$ infectious unit. Note that typically $\sum_{j=1}^K p_{ij} <1$ as there is the possibility of the infective recovering from the disease, and we let $p_{i0} = 1- \sum_{j=1}^K p_{ij}$, the probability that a type $i$ infective recovers from the disease. For $i,j=1,2,\ldots,K$, let
\begin{eqnarray} \label{eq:M:prep:2} \phi_{ij} = \sum_{l=1}^{n_i} q_{il} \mu_{ilj}, \end{eqnarray} the mean number of state $j$ infectious units generated by an event directly involving an infective in a state $i$ infectious unit. It follows using the theorem of total probability and the linearity of expectation that
\begin{eqnarray} \label{eq:M:prep:3}
m_{ij} &=& \sum_{l=1}^{n_i} q_{il} \ez[\mbox{State $j$ infectious units generated} | \mbox{Type $l$ event}] \nonumber \\
&=& \sum_{l=1}^{n_i} q_{il} \left\{ \mu_{ilj} + m_{a_{il} j} \right\} \nonumber \\
&=& \sum_{l=1}^{n_i} q_{il} \mu_{ilj} + \sum_{l=1}^{n_i} q_{il} m_{a_{il} j} \nonumber \\
&=& \phi_{ij} + \sum_{l=1}^{n_i} q_{il} \left\{ \sum_{k=1}^{K} 1_{\{a_{il} = k \}} m_{kj} \right\} \nonumber \\
&=& \phi_{ij} + \sum_{k=1}^{K} \left\{ \sum_{l=1}^{n_i} q_{il} 1_{\{a_{il} = k \}} \right\} m_{kj} \nonumber \\
&=& \phi_{ij} + \sum_{k=1}^{K} p_{ik} m_{kj}, \end{eqnarray}
where for $j=1,2,\ldots,K$, let $m_{0j} =0$, that is, a recovered individual makes no infections.
Letting $\Phi = (\phi_{ij})$ and $\mathbf{P} = (p_{ij})$ be $K \times K$ matrices, we can express \eqref{eq:M:prep:3} in matrix notation as
\begin{eqnarray} \label{eq:M:1} \mathbf{M} = \Phi + \mathbf{P} \mathbf{M}. \end{eqnarray}
Rearranging \eqref{eq:M:1}, with $\mathbf{I}$ denote the $K \times K$ identity matrix, we have that
\begin{eqnarray} \label{eq:M:2} \mathbf{M} = \left(\sum_{n=0}^\infty \mathbf{P}^n \right) \Phi = (\mathbf{I} - \mathbf{P})^{-1} \Phi. \end{eqnarray}
Since every infective eventually recovers from the disease, $\mathbf{P}$ is a substochastic matrix, with at least some rows summing to less than 1, whose spectral radius is strictly less than 1. Hence $\mathbf{P}^n \rightarrow \mathbf{0}$ as $n \rightarrow \infty$ and the geometric series in \eqref{Eq:M:2} converges.
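Equation \eqref{Eq:M:2} is straightforward to evaluate numerically. The following Python sketch (function names are illustrative only) computes $\mathbf{M}$ and its maximal eigenvalue for a given pair $(\mathbf{P},\Phi)$:
\begin{verbatim}
import numpy as np

def reproduction_matrix(P, Phi):
    # Solve M = Phi + P M, i.e. M = (I - P)^{-1} Phi
    K = P.shape[0]
    return np.linalg.solve(np.eye(K) - P, Phi)

def basic_reproduction_number(P, Phi):
    # Maximal (Perron-Frobenius) eigenvalue of M; for a non-negative
    # matrix this eigenvalue is real and of largest modulus
    M = reproduction_matrix(P, Phi)
    return max(np.linalg.eigvals(M), key=abs).real
\end{verbatim}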
The definition of an event, and hence, $\mathbf{Y}_{il}$ can be accomplished in a variety of ways. In this paper, we typically take an event to coincide with a change in the type of infectious unit to which an infective belongs and we take account of the (mean) number of global infections an infective makes in a type $i$ infectious unit before transitioning to a new type of infectious unit. In this way $p_{ii} =0$ $(i=1,2,\ldots,K)$. An alternative approach is to define an event to be any global infection, local infection or recovery within their infectious unit. In this case nothing occurs between events and $\mathbf{Y}_{il}$ is the number of infectious units generated by the type $l$ event. In Section \ref{S:example:sex}, we present both constructions for an $SIS$ sexually transmitted disease model.
\subsection{Definition of $R_0$}
We define
$R_0$ as the maximal eigenvalue of the mean reproduction matrix $\mathbf{M}$, where $\mathbf{M}$ is a $K \times K$ matrix with $m_{ij}$ denoting the mean number of state $j$ infectious units generated by an infective who enters the infectious state as a member of a state $i$ infectious unit. This definition of $R_0$ is consistent with earlier work on computing the basic reproduction number in heterogeneous populations with multiple types of infectives, see for example, \cite{Diekmann_etal}. A key point to note is that $\mathbf{M}$ will capture only the infections made by a specific infective rather than all the infections made by the infectious unit to which they belong.
We now link the mean reproduction matrix $\mathbf{M}$ back to the $SIR$ household example. An individual will be classed as a state $(a,b)$ individual if the event which leads to the individual becoming infected results in the infectious unit (household) entering state $(a,b)$. An infective will generate a state $(c,d)$ individual if they are infectious in an infectious unit in state $(c+1,d-1)$ and the infective is responsible for infecting one of the susceptibles in the household. Note that if $d \geq 3$, the infectious unit can move from state $(c+1,d-1)$ to state $(c,d)$ without the infective in question having made the infection.
\section{Examples} \label{S:Example}
In this Section we show how $\mathbf{M}$ is constructed for three different models: the household $SIR$ epidemic model (Section \ref{S:example:house}), an $SIS$ sexually transmitted disease model (Section \ref{S:example:sex}) and the great circle $SIR$ epidemic model (Section \ref{S:example:gcm}).
\input{house_v2}
\input{sex_v3}
\input{gcm}
\subsection{Great Circle Epidemic model} \label{S:example:gcm}
The final example we consider in this paper is the great circle $SIR$ epidemic model, see \cite{Ball_Neal03} and references therein. The model assumes that the population consists of $N$ individuals who are equally spaced on the circumference of a circle, with the individuals labelled sequentially from 1 to $N$ and individuals 1 and $N$ being neighbours. Thus individuals $i \pm 1 \pmod{N}$ are the neighbours of individual $i$ $(i=1,2,\ldots,N)$. Individuals, whilst infectious, make both local and global infectious contacts as in the household model. An individual makes global infectious contacts at the points of a homogeneous Poisson point process with rate $\lambda_G$ with the individual contacted chosen uniformly at random from the entire population. An individual makes local infectious contacts with a given neighbour at the points of a homogeneous Poisson point process with rate $ \lambda_L$. Finally, the infectious periods are independently and exponentially distributed with mean $1/\gamma$.
An infectious individual in the great circle model can be characterised by the total number of susceptible neighbours that it has, which can be $2$, $1$ or $0$. In the initial stages of the epidemic with $N$ large, with high probability, an individual infected globally will initially have 2 susceptible neighbours, whereas an individual infected locally will, with high probability, have 1 susceptible neighbour when they are infected. An infective with $k$ $(k=0,1,2)$ susceptible neighbours makes a mean number $\lambda_G/(k \lambda_L + \gamma)$ of global infectious contacts before a local infection or recovery event, with $k \lambda_L /(k \lambda_L + \gamma)$ the probability that the event is the infection of a neighbour. Therefore if we construct $\Phi$ and $\mathbf{P}$ in terms of descending number of susceptible neighbours we have that
\begin{eqnarray} \label{eq:gcm:1}
\Phi = \begin{pmatrix} \frac{\lambda_G}{2 \lambda_L + \gamma} & \frac{2 \lambda_L}{2 \lambda_L + \gamma} & 0 \\
\frac{\lambda_G}{ \lambda_L + \gamma} & 0 & \frac{ \lambda_L}{ \lambda_L + \gamma} \\ \frac{\lambda_G}{\gamma} & 0& 0 \end{pmatrix}, \end{eqnarray}
and
\begin{eqnarray} \label{eq:gcm:2}
\mathbf{P} = \begin{pmatrix} 0 & \frac{2 \lambda_L}{2 \lambda_L + \gamma} & 0 \\
0 & 0 & \frac{\lambda_L}{\lambda_L + \gamma} \\
0 & 0 & 0 \end{pmatrix}. \end{eqnarray}
It is then straightforward to show that
\begin{eqnarray} \label{eq:gcm:3}
\mathbf{M} = \begin{pmatrix} \frac{\lambda_G}{\gamma} & \frac{2 \lambda_L}{\lambda_L +\gamma} & 0 \\
\frac{\lambda_G}{\gamma} & \frac{\lambda_L}{\lambda_L +\gamma} & 0 \\
\frac{\lambda_G}{\gamma} & 0 & 0 \end{pmatrix}. \end{eqnarray}
We observe that no individuals are created with 0 susceptible neighbours and we only need to consider the mean offspring distributions for type 1 and type 2 infectives. This gives $R_0$ as the solution of the quadratic equation,
\begin{eqnarray} \label{eq:gcm:4} \left( \frac{\lambda_G}{\gamma} - s \right) \left( \frac{\lambda_L}{\lambda_L +\gamma} - s \right) - \frac{\lambda_G}{\gamma} \times \frac{2 \lambda_L}{\lambda_L +\gamma} &=&0 \nonumber \\
s^2 - (\mu_G + p_L) s - \mu_G p_L &=& 0,
\end{eqnarray} where $\mu_G = \lambda_G/\gamma$ denotes the mean number of global infectious contacts made by an infective and $p_L = \lambda_L /(\lambda_L + \gamma)$ denotes the probability that an infective infects a given susceptible neighbour. This yields
\begin{eqnarray} \label{eq:gcm:5} R_0 = \frac{p_L + \mu_G + \sqrt{(p_L + \mu_G)^2 + 4 p_L \mu_G}}{2}. \end{eqnarray}
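As a quick numerical check of \eqref{eq:gcm:5} (a sketch only; the rates are arbitrary):
\begin{verbatim}
import numpy as np

lam_G, lam_L, gamma = 1.2, 0.8, 1.0
mu_G = lam_G / gamma
p_L = lam_L / (lam_L + gamma)
# reduced 2x2 block of (eq:gcm:3): types with 2 and 1
# susceptible neighbours
M = np.array([[mu_G, 2 * p_L],
              [mu_G, p_L]])
rho = max(np.linalg.eigvals(M), key=abs).real
R0 = (p_L + mu_G + np.sqrt((p_L + mu_G)**2 + 4 * p_L * mu_G)) / 2
assert np.isclose(rho, R0)
\end{verbatim}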
In \cite{BMST}, \cite{Ball_Neal02} and \cite{Ball_Neal03}, the threshold parameter $R_\ast$ is defined for the great circle model as the mean number of global infectious contacts emanating from a local infectious clump, where a local infectious clump is defined to be the epidemic generated by a single infective by only considering local (neighbour) infectious contacts. From \cite{Ball_Neal02}, (3.12),
\begin{eqnarray} \label{eq:gcm:6} R_\ast = \mu_G \frac{1 + p_L}{1- p_L}. \end{eqnarray} It is trivial to show that $R_0 =1$ $(R_0 <1; R_0 >1)$ if and only if $R_\ast =1$ $(R_\ast <1; R_\ast >1)$; for example, substituting $s=1$ into \eqref{eq:gcm:4} shows that $R_0=1$ precisely when $\mu_G (1+p_L) = 1-p_L$, that is, when $R_\ast =1$. This confirms $R_0$ and $R_\ast$ as equivalent threshold parameters for the epidemic model.
In contrast to the household $SIR$ epidemic model (Section \ref{S:example:house}) and the $SIS$ sexually transmitted disease (Section \ref{S:example:sex}), for the great circle model it is trivial to extend the above definition of $R_0$ to a general infectious period distribution $T$. Let $\mu_T = \ez [T]$, the mean of the infectious period, and $\phi_T (\theta) = \ez [ \exp(- \theta T)]$ $(\theta \in \mathbb{R}^+)$, the Laplace transform of the infectious period. Thus $\mu_G$ and $p_L$ become $\lambda_G \mu_T$ and $1 - \phi_T (\lambda_L)$, respectively. Then the probability that a globally infected individual infects 0, 1 or 2 of its initially susceptible neighbours is $\phi_T (2 \lambda_L)$, $2 \{ \phi_T ( \lambda_L) - \phi_T (2 \lambda_L) \}$ and $1 - 2 \phi_T ( \lambda_L) + \phi_T (2 \lambda_L)$, respectively. Similarly, the probability that a locally infected individual infects its initially susceptible neighbour is $p_L =1- \phi_T (\lambda_L)$. Since the mean number of global infectious contacts made by an infective is $\mu_G (= \lambda_G \mu_T)$ regardless of whether the individual is infected globally or locally, we can derive directly the mean offspring matrix $\mathbf{M}$ in terms of those infected globally (initially 2 susceptible neighbours) and those infected locally (initially 1 susceptible neighbour) with
\begin{eqnarray} \label{eq:gcm:7}
\mathbf{M} = \begin{pmatrix} \mu_G & 2 p_L \\
\mu_G & p_L \end{pmatrix}. \end{eqnarray} Therefore, after omitting the final column (and row) of $\mathbf{M}$ in \eqref{eq:gcm:3}, the matrices in \eqref{eq:gcm:3} and \eqref{eq:gcm:7} are identical, and hence \eqref{eq:gcm:5} holds for $R_0$ for a general infectious period distribution $T$.
\subsection{$SIR$ Household example}
An example of an epidemic model which satisfies the above setup is the $SIR$ household epidemic model with exponential infectious periods. We illustrate assuming that all households are of size $h >1$, with the extension to allow for different size households being trivial. An individual, whilst infectious, makes global contacts at the points of a homogeneous Poisson point process with rate $\lambda_G$ with the individual contacted chosen uniformly at random from the entire population and local contacts at the points of a homogeneous Poisson point process with rate $(h-1) \lambda_L$ with the individual contacted chosen uniformly at random from the remaining $h-1$ individuals in the infective's household. It is assumed that the local and global contacts are independent. Note that an infective makes contact with a given individual in their household at rate $\lambda_L$. Infectives have independent and identically distributed exponential infectious periods with mean $1/\gamma$ corresponding to infectives recovering at rate $\gamma$. In this case $M=1$ although we could extend to a multitype household model, see \cite{Ball_Lyne}. Infectious units correspond to households containing at least one infective and we classify households by the number of susceptibles and infectives they contain. Therefore the possible infectious states of a household are $\{(a,b); b=1,2,\ldots,h; a=0,1,\ldots, h-b \}$, where $a$ and $b$ denote the number of susceptibles and the number of infectives in the household, respectively. Thus there are $K = h (h+1)/2$ states. A global infection with a previously uninfected household results in the creation of a new infectious unit in state $(h-1,1)$. A local infection in a household in state $(a,b)$ results in the household moving to state $(a-1,b+1)$, whilst a recovery in a household in state $(a,b)$ results in the household moving to state $(a,b-1)$, and no longer being an infectious unit if $b=1$.
\subsection{$SIR$ Household epidemic model} \label{S:example:house}
We illustrate the computation of $R_0$ in a population with households of size $h$. As noted in Section \ref{S:setup}, we can summarise the epidemic process using $K = h (h+1)/2 - (h-1)$ states by amalgamating states $(0,1), (0,2), \ldots, (0,h)$ into the state $(0,1)^\ast$. We use the labellings $\{(0,1)^\ast, (a,b); a,b=1,2,\ldots,h, (a+b) \leq h\}$ rather than $1,2, \ldots, K$ to define the mean reproduction matrix.
We construct $\mathbf{M}$ by first considering the local transitions (infections and recoveries) which occur within a household. Therefore for an individual in state $(a,b)$, the non-zero transitions are
\begin{eqnarray}
p_{(a,b),(a-1,b+1)} &=& \frac{a b \lambda_L }{b (a \lambda_L+ \gamma)} \hspace{0.5cm} \mbox{if $a>1$} \nonumber \\
p_{(a,b),(0,1)^\ast} &=& \frac{a b\lambda_L}{b (a \lambda_L + \gamma)} \hspace{0.5cm} \mbox{if $a=1$} \label{eq:house:E:0} \\
p_{(a,b),(a,b-1)}&=& \frac{(b-1) \gamma}{b (a \lambda_L + \gamma)}. \nonumber
\end{eqnarray}
Note that the probability that the next event is the recovery of the individual of interest is $\gamma/\{ b (a \lambda_L + \gamma)\}$, and an individual only leaves state $(0,1)^\ast$ through recovery. Therefore the transition probabilities in \eqref{eq:house:E:0} define the substochastic matrix $\mathbf{P}$. The time that a household spends in state $(a,b)$ is exponentially distributed with rate $b (a \lambda_L + \gamma)$. Therefore, since infectives are making infectious contacts at the points of a homogeneous Poisson point process with rate $\lambda_G$, the mean number of global contacts made by an infective, whilst the household is in state $(a,b)$, is $\lambda_G/\{ b (a \lambda_L + \gamma)\}$ with all global contacts resulting in an $(h-1,1)$ infectious unit. This gives the non-zero entries of $\Phi$ to be
\begin{eqnarray*}
\phi_{(a,b),(a-1,b+1)} &=& \frac{a \lambda_L }{b (a \lambda_L + \gamma)} = \frac{p_{(a,b),(a-1,b+1)}}{b} \hspace{0.5cm} \mbox{if $a>1$} \\
\phi_{(a,b),(0,1)^\ast} &=& \frac{\lambda_L}{b (a \lambda_L + \gamma)} = \frac{p_{(a,b),(0,1)^\ast}}{b} \hspace{1.05cm} \mbox{if $a=1$} \\
\phi_{(a,b),(h-1,1)} &=& \frac{\lambda_G}{b (a \lambda_L + \gamma)}. \end{eqnarray*}
Note that the probability that the infective of interest is responsible for a given local infection in a household in state $(a,b)$ is simply $1/b$.
In a population of households of size $3$ with the states ordered $(2,1)$, $(1,2)$, $(1,1)$ and $(0,1)^\ast$, we have that
\begin{eqnarray} \label{eq:house:E:1}
\mathbf{P} &=& \begin{pmatrix} 0 & \frac{2 \lambda_L}{2 \lambda_L + \gamma} & 0 & 0 \\
0& 0 & \frac{\gamma}{2 (\lambda_L + \gamma)} & \frac{2\lambda_L}{2 (\lambda_L + \gamma)} \\
0 & 0 &0& \frac{\lambda_L}{\lambda_L + \gamma} \\
0 & 0 & 0 & 0 \end{pmatrix} \end{eqnarray}
and
\begin{eqnarray} \label{eq:house:E:2}
\Phi &=& \begin{pmatrix} \frac{\lambda_G}{2 \lambda_L + \gamma} & \frac{2 \lambda_L}{2 \lambda_L + \gamma} & 0 & 0 \\
\frac{\lambda_G}{2 (\lambda_L + \gamma)} & 0 & 0 & \frac{\lambda_L}{2 (\lambda_L + \gamma)} \\
\frac{\lambda_G}{\lambda_L + \gamma} & 0 & 0 & \frac{\lambda_L}{\lambda_L + \gamma} \\
\frac{\lambda_G}{\gamma} & 0 & 0 & 0 \end{pmatrix}. \end{eqnarray}
It is then straightforward to show that
\begin{eqnarray} \label{eq:house:E:3}
\mathbf{M} = (\mathbf{I} - \mathbf{P})^{-1} \Phi = \begin{pmatrix} \frac{\lambda_G}{\gamma} & \frac{2 \lambda_L}{2 \lambda_L +\gamma} & 0 & \frac{\lambda_L^2 (\lambda_L + 2 \gamma)}{(2 \lambda_L + \gamma)(\lambda_L + \gamma)^2} \\
\frac{\lambda_G}{ \gamma} & 0 & 0 & \frac{\lambda_L (\lambda_L + 2 \gamma)}{2 (\lambda_L + \gamma)^2} \\
\frac{\lambda_G}{\gamma} & 0 & 0 & \frac{\lambda_L}{\lambda_L + \gamma} \\
\frac{\lambda_G}{\gamma} & 0 & 0 & 0 \end{pmatrix}. \end{eqnarray}
There are a couple of observations to make concerning $\mathbf{M}$. Firstly, regardless of at what stage of the household epidemic an individual is infected, the mean number of global contacts, and hence, the number of infectious units of type $(h-1,1)$ that are created by the individual is $\lambda_G/\gamma$. Secondly, no individuals of type $(1,1)$ are created in the epidemic since a household can only reach this state from $(1,2)$ and through the recovery of the other infective. More generally, an individual does not start as an infectious unit of type $(a,1)$, where $1 \leq a \leq h-2$, although it is helpful to define such infectious units for the progression of the epidemic.
It follows from \eqref{eq:house:E:3} by removing the redundant row and column for state $(1,1)$ individuals, that the basic reproduction number, $R_0$, solves the cubic equation
\begin{eqnarray} \label{eq:house:E:4}
s^3 - \frac{\lambda_G}{\gamma} s^2 - \frac{\lambda_G}{\gamma} \left\{\frac{2 \lambda_L}{2 \lambda_L +\gamma} +\frac{\lambda_L^2 (\lambda_L + 2 \gamma)}{(2 \lambda_L + \gamma)(\lambda_L + \gamma)^2} \right\} s -
\frac{\lambda_G}{\gamma} \left\{ \frac{2 \lambda_L}{2 \lambda_L +\gamma} \frac{\lambda_L (\lambda_L + 2 \gamma)}{2 (\lambda_L + \gamma)^2} \right\} =0. \end{eqnarray}
We note that in the notation of \cite{Pellis_etal}, $\mu_G = \lambda_G/\gamma$, $\mu_0 =1$,
\[ \mu_1 = \frac{2 \lambda_L}{2 \lambda_L +\gamma} +\frac{\lambda_L^2 (\lambda_L + 2 \gamma)}{(2 \lambda_L + \gamma)(\lambda_L + \gamma)^2} \]
and
\[ \mu_2 = \frac{2 \lambda_L}{2 \lambda_L +\gamma} \frac{\lambda_L (\lambda_L + 2 \gamma)}{2 (\lambda_L + \gamma)^2}, \] where $\mu_i$ $(i=0,1,\ldots)$ denotes the mean number of infectives in generation $i$ of the household epidemic, see also \cite{Ball_etal}, Section 3.1.3.
Therefore we can rewrite \eqref{eq:house:E:4} as
\begin{eqnarray} \label{eq:house:E:5} s^3 - \sum_{i=0}^2 \mu_G \mu_i s^{2-i} = 0, \end{eqnarray}
which is equivalent to \cite{Pellis_etal}, (3.3), and hence obtain an identical $R_0$ to $R_0^g$ defined in \cite{Ball_etal}.
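The agreement between the maximal eigenvalue of $\mathbf{M}$ in \eqref{eq:house:E:3} and the cubic \eqref{eq:house:E:5} is easily checked numerically; the sketch below uses arbitrary rates:
\begin{verbatim}
import numpy as np

lG, lL, g = 1.5, 2.0, 1.0  # lambda_G, lambda_L, gamma
P = np.array([[0, 2*lL/(2*lL+g), 0,            0],
              [0, 0,             g/(2*(lL+g)), 2*lL/(2*(lL+g))],
              [0, 0,             0,            lL/(lL+g)],
              [0, 0,             0,            0]])
Phi = np.array([[lG/(2*lL+g),   2*lL/(2*lL+g), 0, 0],
                [lG/(2*(lL+g)), 0, 0, lL/(2*(lL+g))],
                [lG/(lL+g),     0, 0, lL/(lL+g)],
                [lG/g,          0, 0, 0]])
M = np.linalg.solve(np.eye(4) - P, Phi)
R0 = max(np.linalg.eigvals(M), key=abs).real
muG = lG/g
mu1 = 2*lL/(2*lL+g) + lL**2*(lL+2*g)/((2*lL+g)*(lL+g)**2)
mu2 = (2*lL/(2*lL+g))*(lL*(lL+2*g)/(2*(lL+g)**2))
assert np.isclose(R0**3, muG*(R0**2 + mu1*R0 + mu2))
\end{verbatim}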
We proceed by showing that for the Markov household epidemic model, $R_0$ obtained as the maximal eigenvalue of $\mathbf{M}$ corresponds to $R_0^g$ defined in \cite{Ball_etal} for any $h \geq 1$. In order to do this it is helpful to write
\begin{eqnarray} \label{eq:house:E:6} \mathbf{M} = \mathbf{G} + \mathbf{U}, \end{eqnarray}
where $\mathbf{G}$ is the $K \times K$ matrix with $G_{k1} = \mu_G$ $(1 \leq k \leq K)$ and $G_{kj} =0$ otherwise, and $\mathbf{U} = \mathbf{M} - \mathbf{G}$. Then $\mathbf{G}$ and $\mathbf{U}$ denote the matrices of global and local infections, respectively.
For $i=0,1,2,\ldots,h-1$, let $\nu_i = \sum_{j=1}^K u_{1j}^i$, the sum of the first row of $\mathbf{U}^i$. The key observation is that $\nu_i$ denotes the mean number of individuals in generation $i$ of the household epidemic with $\mathbf{U}^0 = \mathbf{I}$, the identity matrix (the initial infective in the household is classed as generation 0) and $\mathbf{U}^i = \mathbf{0}$ for $i \geq h$.
For $0 \leq a,b \leq h-1$, let $y_{(a,b)}^{(n)}$ denote the mean number of type $(a,b)$ individuals in the $n^{th}$ generation of the epidemic process. Then $y_{(h-1,1)}^{(0)}=1$ (the initial infective) and for all $(a,b) \neq (h-1,1)$, $y_{(a,b)}^{(0)}=0$. Let $\mathbf{y}^{(n)} = (y_{(a,b)}^{(n)})$ denote the mean number of individuals of each type in the $n^{th}$ generation of the epidemic process with the convention that $y_{(h-1,1)}^{(n)}$ is the first entry of $\mathbf{y}^{(n)}$. Then for $n \geq 1$, $\mathbf{y}^{(n)}$ solves
\begin{eqnarray} \label{eq:house:E:7} \mathbf{y}^{(n)} = \mathbf{y}^{(n-1)} \mathbf{M}. \end{eqnarray}
The proof of \eqref{eq:house:E:7} mimics the proof of \cite{Pellis_etal}, Lemma 2, and it follows by induction that
\begin{eqnarray} \label{eq:house:E:8} \mathbf{y}^{(n)} = \mathbf{y}^{(0)} \mathbf{M}^n. \end{eqnarray}
Let $x_{n,i}$ $(n=0,1,\ldots;i=0,1,\ldots,h-1)$ be defined as in \cite{Pellis_etal}, Lemma 1, with $x_{n,i}$ denoting the mean number of individuals in the $n^{th}$ generation of the epidemic who belong to the $i^{th}$ generation of the household epidemic. We again employ the convention that the initial infective individual in the household represents generation 0. It is shown in \cite{Pellis_etal}, Lemma 1, (3.5) and (3.6) that
\begin{eqnarray} \label{eq:Pellis_etal:1} x_{n,0} = \mu_G \sum_{i=0}^{h-1} x_{n-1,i}, \end{eqnarray}
and
\begin{eqnarray} \label{eq:Pellis_etal:2} x_{n,i} = \mu_i x_{n-i,0}, \end{eqnarray}
where $\mu_i$ is the mean number of infectives in generation $i$ of a household epidemic, $x_{0,0}=1$ and $x_{0,i} =0$ $(i=1,2,\ldots,h-1)$. Let $\mathbf{x}^{(n)} = (x_{n,0}, x_{n,1}, \ldots, x_{n,h-1})$.
\newtheorem{lemma}{Lemma}[section]
\begin{lemma} \label{lem1} For $n=0,1,\ldots$,
\begin{eqnarray} \label{eq:house:E:9} y^{(n)}_{(h-1,1)} = x_{n,0}. \end{eqnarray}
Let $x_n = \mathbf{x}^{(n)} \mathbf{1} = \sum_{j=0}^{h-1} x_{n,j}$ and $y_n = \mathbf{y}^{(n)} \mathbf{1} = \sum_{(a,b)} y^{(n)}_{(a,b)}$, then for $n=0,1,\ldots$,
\begin{eqnarray} \label{eq:house:E:9a} y_n = x_n. \end{eqnarray}
\end{lemma}
Before proving Lemma \ref{lem1}, we prove Lemma \ref{lem2} which gives $\mu_i$ in terms of the local reproduction matrix $\mathbf{U}$.
\newtheorem{lemma2}[lemma]{Lemma}
\begin{lemma2} \label{lem2} For $i=0,1,\ldots,h-1$,
\begin{eqnarray} \label{eq:house:E:10} \mu_i = \sum_{(c,d)} u_{(h-1,1),(c,d)}^i = \nu_i. \end{eqnarray}
\end{lemma2}
{\bf Proof.} Let $Z_{(a,b)}^{(i)}$ denote the total number of individuals of type $(a,b)$ in generation $i$ of a household epidemic. Note that $Z_{(a,b)}^{(i)}$ will be either 0 or 1 and $Z_{(a,b)}^{(i)} =1$ if an infection takes place in a household in state $(a+1,b-1)$ with the infector belonging to generation $i-1$. Then by definition
\begin{eqnarray} \label{eq:house:E:11} \mu_i = \sum_{(a,b)} \ez [ Z_{(a,b)}^{(i)}]. \end{eqnarray}
We note that $Z_{(h-1,1)}^{(0)} =1$ and for $(a,b) \neq (h-1,1)$, $Z_{(a,b)}^{(0)} =0$, giving $\mu_0 =1$. Since $\mathbf{U}^0$ is the identity matrix, we have that
$\sum_{(c,d)} u_{(h-1,1),(c,d)}^0 = 1$ also.
For $i=1,2,\ldots,h-1$, we have that
\begin{eqnarray} \label{eq:house:E:12} \ez [ Z_{(a,b)}^{(i)}] = \ez[\ez [Z_{(a,b)}^{(i)} | \mathbf{Z}^{(i-1)}] ], \end{eqnarray}
where $\mathbf{Z}^{(i-1)} = (Z_{(a,b)}^{(i-1)})$. Now
\begin{eqnarray} \label{eq:house:E:13} \ez [Z_{(a,b)}^{(i)} | \mathbf{Z}^{(i-1)}] = \sum_{(c,d)} u_{(c,d),(a,b)} Z_{(c,d)}^{(i-1)}, \end{eqnarray}
since $u_{(c,d),(a,b)}$ is the probability that a type $(c,d)$ individual will infect an individual to create a type $(a,b)$ infective. Therefore
taking expectations of both sides of \eqref{eq:house:E:13} yields
\begin{eqnarray} \label{eq:house:E:14} \ez [Z_{(a,b)}^{(i)}] = \sum_{(c,d)} u_{(c,d),(a,b)} \ez[Z_{(c,d)}^{(i-1)}]. \end{eqnarray}
Therefore letting $z_{(a,b)}^{(i)} = \ez [Z_{(a,b)}^{(i)}]$ and $\mathbf{z}^{(i)} = (z_{(a,b)}^{(i)})$ it follows from \eqref{eq:house:E:14} that
\begin{eqnarray} \label{eq:house:E:15} \mathbf{z}^{(i)} = \mathbf{z}^{(i-1)} \mathbf{U} = \mathbf{z}^{(0)} \mathbf{U}^i. \end{eqnarray}
Hence,
\begin{eqnarray} \label{eq:house:E:16} \mu_i &=& \sum_{(a,b)} z_{(a,b)}^{(i)} \nonumber \\
&=& \sum_{(a,b)} z_{(a,b)}^{(0)} \sum_{(c,d)} u_{(a,b),(c,d)}^i \nonumber \\
&=& \sum_{(c,d)} u_{(h-1,1),(c,d)}^i = \nu_i, \end{eqnarray}
as required.
\hfill $\square$
{\bf Proof of Lemma \ref{lem1}.}
We prove the lemma by induction, noting that for $n=0$, $y^{(0)}_{(h-1,1)} = x_{0,0}=1$.
Before proving the inductive step, we note that it follows from \eqref{eq:house:E:7} that
\begin{eqnarray} \label{eq:house:E:17} y_{(h-1,1)}^{(n)} = \frac{\lambda_G}{\gamma} \sum_{(c,d)} y_{(c,d)}^{(n-1)} = \mu_G \sum_{(c,d)} y_{(c,d)}^{(n-1)}\end{eqnarray}
and for $(a,b) \neq (h-1,1)$,
\begin{eqnarray} \label{eq:house:E:18} y_{(a,b)}^{(n)} &=& \sum_{(c,d)} y_{(c,d)}^{(n-1)} u_{(c,d),(a,b)} \nonumber \\
&=& y_{(h-1,1)}^{(n-1)} u_{(h-1,1),(a,b)} + \sum_{(c,d) \neq (h-1,1)} y_{(c,d)}^{(n-1)} u_{(c,d),(a,b)} \nonumber \\
&=& y_{(h-1,1)}^{(n-1)} u_{(h-1,1),(a,b)} + \sum_{(c,d) \neq (h-1,1)} \left\{ \sum_{(e,f)} y_{(e,f)}^{(n-2)} u_{(e,f),(c,d)} \right\} u_{(c,d),(a,b)} \nonumber \\
&=& y_{(h-1,1)}^{(n-1)} u_{(h-1,1),(a,b)} + \sum_{(e,f)} y_{(e,f)}^{(n-2)} \sum_{(c,d) \neq (h-1,1)} u_{(e,f),(c,d)} u_{(c,d),(a,b)} \nonumber \\
&=& y_{(h-1,1)}^{(n-1)} u_{(h-1,1),(a,b)} + \sum_{(e,f)} y_{(e,f)}^{(n-2)} u_{(e,f),(a,b)}^2. \end{eqnarray}
The final line of \eqref{eq:house:E:18} follows from $u_{(e,f),(h-1,1)} =0$ for all $(e,f)$.
Then by a simple recursion it follows from \eqref{eq:house:E:18}, after at most $h-1$ steps, that, for $(a,b) \neq (h-1,1)$,
\begin{eqnarray} \label{eq:house:E:19} y_{(a,b)}^{(n)} &=& \sum_{j=1}^{h-1} y_{(h-1,1)}^{(n-j)} u_{(h-1,1),(a,b)}^j. \end{eqnarray}
Note that \eqref{eq:house:E:19} can easily be extended to include $(a,b) = (h-1,1)$ giving
\begin{eqnarray} \label{eq:house:E:20} y_{(a,b)}^{(n)} &=& \sum_{j=0}^{h-1} y_{(h-1,1)}^{(n-j)} u_{(h-1,1),(a,b)}^j. \end{eqnarray}
For $n \geq 1$, we assume the inductive hypothesis that for $0 \leq k \leq n-1$, $y^{(k)}_{(h-1,1)} = x_{k,0}$. Then from \eqref{eq:house:E:20}, we have that
\begin{eqnarray} \label{eq:house:E:21}
y^{(n)}_{(h-1,1)} &=& \sum_{(a,b)} m_{(a,b),(h-1,1)} y^{(n-1)}_{(a,b)} \nonumber \\
&=& \mu_G \sum_{(a,b)} y^{(n-1)}_{(a,b)} \nonumber \\
&=& \mu_G \sum_{(a,b)} \left\{ \sum_{j=0}^{h-1} y_{(h-1,1)}^{(n-1-j)} u_{(h-1,1),(a,b)}^j \right\} \nonumber \\
&=& \mu_G \sum_{j=0}^{h-1} y_{(h-1,1)}^{(n-1-j)} \left( \sum_{(a,b)}u_{(h-1,1),(a,b)}^j \right). \end{eqnarray}
Using the inductive hypothesis and Lemma \ref{lem2}, we have from \eqref{eq:house:E:21} that
\begin{eqnarray} \label{eq:house:E:22}
y^{(n)}_{(h-1,1)}&=& \mu_G \sum_{j=0}^{h-1} x_{(n-1-j),0} \mu_j =x_{n,0}, \end{eqnarray}
as required for \eqref{eq:house:E:9}.
Using a similar line of argument,
\begin{eqnarray} \label{eq:house:E:22a}
y_n = \mathbf{y}^{(n)} \mathbf{1} &=& \sum_{(a,b)} y^{(n)}_{(a,b)} \nonumber \\
&=& \sum_{(a,b)} \left\{ \sum_{j=0}^{h-1} y^{(n-j)}_{(h-1,1)} u_{(h-1,1),(a,b)}^j \right\} \nonumber \\
&=& \sum_{j=0}^{h-1} y^{(n-j)}_{(h-1,1)} \sum_{(a,b)} \left\{ u_{(h-1,1),(a,b)}^j \right\} \nonumber \\
&=& \sum_{j=0}^{h-1} x_{n-j,0} \mu_j = x_n, \end{eqnarray}
as required for \eqref{eq:house:E:9a}. \hfill $\square$
Therefore we have shown that the two representations of the household epidemic given in \cite{Pellis_etal} and in this paper give the same mean number of infectives and the same mean number of new household epidemics in generation $n$ $(n=0,1,\ldots)$. This is a key component in showing that $\mathbf{M}$ and $\mathbf{A}$, the mean reproduction matrix given in \cite{Pellis_etal} by
\begin{eqnarray} \label{eq:house:E:23}
\mathbf{A} = \begin{pmatrix} \mu_G \mu_0 & 1 & 0 & \cdots & 0 \\
\mu_G \mu_1 & 0 & 1 & & 0 \\
\vdots & && \ddots & 0 \\
\mu_G \mu_{h-2} & 0 & 0 & & 1 \\
\mu_G \mu_{h-1} & 0 & 0& \cdots & 0 \\ \end{pmatrix} \end{eqnarray} have the same largest eigenvalue.
Let $\rho_A$ and $\rho_M$ denote the largest eigenvalues of $\mathbf{A}$ and $\mathbf{M}$, respectively.
Let $\mathbf{z}_L$ and $\mathbf{z}_R$ denote the normalised left and right eigenvectors corresponding to $\rho_A$ with $\mathbf{z}_L \mathbf{z}_R = 1$.
In \cite{Pellis_etal}, Lemma 3, it is noted that
\begin{eqnarray} \label{eq:house:E:24} \mathbf{A} = \rho_A \mathbf{C}_A + B_A, \end{eqnarray}
where $\mathbf{C}_A = \mathbf{z}_R \mathbf{z}_L$ and $\rho_A^{-n} B_A^n \rightarrow \mathbf{0}$ as $n \rightarrow \infty$. This implies that if $x_n = \mathbf{x}^{(n)} \mathbf{1}$, the mean number of individuals infected in the $n^{th}$ generation of the epidemic, then
\begin{eqnarray} \label{eq:house:E:25} (y_n^{1/n} =) x_n^{1/n} \rightarrow \rho_A \hspace{0.5cm} \mbox{as } n \rightarrow \infty. \end{eqnarray}
As observed earlier, the construction of $\mathbf{M}$ results in $\mathbf{0}$ columns corresponding to infectious units which can only arise through the removal of an infective. Let $\tilde{\mathbf{M}}$ denote the matrix obtained by removing the $\mathbf{0}$ columns and corresponding rows from $\mathbf{M}$. The eigenvalues of $\mathbf{M}$ will consist of the eigenvalues of $\tilde{\mathbf{M}}$ plus repeated 0 eigenvalues, one for each $\mathbf{0}$ column.
Let $\mathbf{w}_L$ and $\mathbf{w}_R$ denote the normalised left and right eigenvectors corresponding to $\rho_{\tilde{M}}$ with $\mathbf{w}_L \mathbf{w}_R = 1$. Then since $\tilde{\mathbf{M}}$ is a positively regular matrix by the Perron-Frobenius theorem, $\tilde{\mathbf{M}}$ (and hence $\mathbf{M}$) has a unique real and positive largest eigenvalue, $\rho_M$. Moreover,
\begin{eqnarray} \label{eq:house:E:26} \tilde{\mathbf{M}} = \rho_M \mathbf{C}_M + B_M, \end{eqnarray}
where $\mathbf{C}_M = \mathbf{w}_R \mathbf{w}_L$ and $\rho_M^{-n} B_M^n \rightarrow \mathbf{0}$ as $n \rightarrow \infty$.
Then following the arguments in the proof of \cite{Pellis_etal}, Lemma 3,
\begin{eqnarray} \label{eq:house:E:27} y_n^{1/n} \rightarrow \rho_M \hspace{0.5cm} \mbox{as } n \rightarrow \infty. \end{eqnarray}
Since $x_n = y_n$ $(n=0,1,2,\ldots)$, it follows from \eqref{eq:house:E:25} and \eqref{eq:house:E:27} that $\rho_M = \rho_A$ and therefore that the two constructions of the epidemic process give the same basic reproduction number $R_0$.
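For households of size $h=3$ the equality $\rho_M = \rho_A$ is also easily checked numerically (a sketch, with arbitrary rates), using $\mathbf{M}$ from \eqref{eq:house:E:3} and $\mathbf{A}$ from \eqref{eq:house:E:23}:
\begin{verbatim}
import numpy as np

lG, lL, g = 1.5, 2.0, 1.0  # lambda_G, lambda_L, gamma
muG = lG/g
mu = [1.0,
      2*lL/(2*lL+g) + lL**2*(lL+2*g)/((2*lL+g)*(lL+g)**2),
      (2*lL/(2*lL+g))*(lL*(lL+2*g)/(2*(lL+g)**2))]
A = np.array([[muG*mu[0], 1, 0],
              [muG*mu[1], 0, 1],
              [muG*mu[2], 0, 0]])
M = np.array([[lG/g, 2*lL/(2*lL+g), 0,
               lL**2*(lL+2*g)/((2*lL+g)*(lL+g)**2)],
              [lG/g, 0, 0, lL*(lL+2*g)/(2*(lL+g)**2)],
              [lG/g, 0, 0, lL/(lL+g)],
              [lG/g, 0, 0, 0]])
rhoA = max(np.linalg.eigvals(A), key=abs).real
rhoM = max(np.linalg.eigvals(M), key=abs).real
assert np.isclose(rhoA, rhoM)
\end{verbatim}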
\section{Introduction} \label{S:intro}
The basic reproduction number, $R_0$, is a key summary in infectious disease modelling being defined as the expected number of individuals infected by a typical individual in a completely susceptible population. This definition of $R_0$ is straightforward to define and compute in a homogeneous population consisting of a single type of infective (homogeneous behaviour) and with uniform random mixing of infectives (homogeneous mixing). This yields the celebrated threshold theorem, see for example, \cite{Whittle55}, for the epidemic with a non-zero probability of a major epidemic outbreak if and only if $R_0 >1$.
The extension of the definition $R_0$ to non-homogeneous populations is non-trivial. Important work in this direction includes \cite{Diekmann_etal} which considers heterogeneous populations consisting of multiple types of infectives and \cite{Pellis_etal}, \cite{Ball_etal} which consider heterogeneity in mixing through population structure. Specifically \cite{Diekmann_etal} defines for a population consisting of $K$ types of infectives, the $K \times K$ mean reproduction matrix $\mathbf{M}$ (also known as the next-generation-matrix), where $M_{ij}$ denotes the mean number of infectives of type $j$ generated by a typical type $i$ infective during its infectious period. Then $R_0$ is defined as the Perron-Frobenius (dominant) eigenvalue of $\mathbf{M}$. By contrast \cite{Pellis_etal} and \cite{Ball_etal} focus on a household epidemic model with a single type of infective and consider a branching process approximation for the initial stages of the epidemic process, see, for example, \cite{Whittle55}, \cite{Ball_Donnelly} and \cite{BMST}. \cite{Pellis_etal} consider the asymptotic growth rate of the epidemic on a generational basis using an embedded Galton-Watson branching process and define $R_0$ to be
\begin{eqnarray} \label{eq:intro:1} R_0 = \lim_{n \rightarrow \infty} \ez [X_n]^{1/n}, \end{eqnarray}
where $X_n$ is the number of infectives in the $n^{th}$ generation of the epidemic. Given that the mean reproduction matrix $\mathbf{M}$ represents the mean number of infectives generated by an infective in the next generation, we observe that the computation of $R_0$ in both cases is defined in terms of the generational growth rate of the epidemic.
The current work applies the approach of \cite{Diekmann_etal} to Markovian epidemics in structured populations and thus assumes individuals have exponentially distributed infectious periods. The key idea is that in structured populations we can define infectives by the type of infectious unit to which they belong, where for many models the number of type of infectious units is relatively small and easy to classify. By characterising an infective by the type of infectious unit they originate in (belong to at the point of infection) and considering the possible events involving the infectious unit, we can write down a simple recursive equation for the mean reproduction matrix $\mathbf{M}$. Then as in \cite{Diekmann_etal} we can simply define $R_0$ to be the Perron-Frobenius eigenvalue of $\mathbf{M}$.
Our approach is similar to \cite{LKD15}, who also consider classifying infectives by the type of infectious unit to which they belong in an dynamic $SI$ sexually transmitted disease model which is similar to the $SIS$ model studied in Section \ref{S:example:sex}. The modelling in \cite{LKD15} is presented in a deterministic framework with \cite{Lashari_Trapman} considering the model from a stochastic perspective. The key difference to \cite{LKD15} is that we work with the embedded discrete Markov process of the transition events rather than the continuous time Markov rate matrices. The advantages of studying the discretised process is that it is easier to incorporate both local (within-infectious unit) infections and global (creation of new infectious units) infections and in generalisations of the construction of $\mathbf{M}$ beyond exponential infectious periods, see Section \ref{S:example:gcm} and Appendix \ref{App:rank}. Moreover, we present the approach in a general framework which easily incorporates both $SIR$ and $SIS$ epidemic models and allows for population as well as epidemic dynamics.
The remainder of the paper is structured as follows. In Section \ref{S:setup} we define the generic epidemic model which we consider along with the derivation of $\mathbf{M}$ and $R_0$. To assist with the understanding we illustrate with an $SIR$ household epidemic model (\cite{BMST}, \cite{Pellis_etal} and \cite{Ball_etal}). In Section \ref{S:Example}, we detail the computing of $\mathbf{M}$ and $R_0$, for the $SIR$ household epidemic model (Section \ref{S:example:house}), an $SIS$ sexually transmitted disease model (Section \ref{S:example:sex}), see, for example, \cite{Kret}, \cite{LKD15} and \cite{Lashari_Trapman} and the great circle $SIR$ epidemic model (Section \ref{S:example:gcm}), see \cite{BMST}, \cite{Ball_Neal02} and \cite{Ball_Neal03}. In Section \ref{S:example:house} we show that the computed $R_0$ agrees with $R_0^g$ obtained in \cite{Ball_etal}.
\section{Introduction}
\input{abstract}
\input{intro_v2}
\input{setup}
\input{example}
\input{conc_v2}
\section*{Acknowledgements}
TT was supported by a PhD scholarship, grant number ST\_2965 Lancaster U, from the Royal Thai Office
of Education Affairs.
\input{bib}
\input{appendix_v2}
\end{document}
\section{Model setup} \label{S:setup}
In this Section we characterise the key elements of the modelling. In order to keep the results as general as possible, we present a generic description of the model before illustrating with examples to make the more abstract concepts more concrete.
We assume that the population consists of $M$ types of individuals and for
illustrative purposes we will assume that $M=1$. We allow individuals to be grouped together in local units and a unit is said to be an infectious unit if it contains at least one infectious individual. The local units might be static (remain fixed through time) or dynamic (varying over time). We assume that there are $K$ states that local infectious units can be in. Note that different local units may permit different local infectious unit states. Finally, we assume that all dynamics within the population and epidemic are Markovian. That is, for any infectious unit there is an exponential waiting time until the next event involving the infectious unit and no changes occur in the infectious unit between events. We assume that there are three types of events which take place within the population with regard to the epidemic process. These are:
\begin{enumerate}
\item {\bf Global infections}. These are infections where the individual contacted is chosen uniformly at random from a specified type of individual in the population. If the population consists of one single type of individual then the individual is chosen uniformly at random from the whole population. It is assumed that the number of individuals of each type are large, so that in the early stages of the epidemic with probability tending to 1, a global infectious contact is with a susceptible individual, and thus, results in an infection.
\item {\bf Local transitions}. These are transitions which affect an infectious unit. These transitions can include infection within an infectious unit leading to a change of state of an infectious unit or an infectious individual moving to a different type.
\item {\bf Recovery}. An infectious individual recovers from the disease and is no longer able to infect individuals within their current infectious episode. Given that we allow for $SIR$ and $SIS$ epidemic dynamics, a given individual may have at most one, or possibly many, infectious episodes depending upon the disease dynamics.
\end{enumerate}
\input{house_example}
\input{defn_R0}
\input{construct_M_v2}
\input{comment_M_v2}
\subsection{$SIS$ sexually transmitted disease model} \label{S:example:sex}
We begin by describing the $SIS$ sexually transmitted disease model which provided the motivation for this work and then study the construction of $\mathbf{M}$.
\subsubsection{Model}
We consider a model for a population of sexually active individuals who alternate between being in a relationship and being single. For simplicity of presentation, we assume a homosexual model where each relationship comprises of two individuals. The extension to a heterosexual population with equal numbers of males and females is straightforward.
We assume $SIS$ disease dynamics with infectious individuals returning to the susceptible state on recovery from the disease. There are two key dynamics underlying the spread of the disease. The formation and dissolution of relationships and the transmission of the disease.
Individuals are termed as either single (not currently in a relationship) or coupled (currently in a relationship).
We assume that each single individual seeks to instigate the formation of relationship at the points of a homogeneous Poisson point process with rate $\alpha/2$ with the individual with whom they seek to form a relationship
chosen uniformly at random from the population. (The rate $\alpha/2$ allows for individuals to be both instigators and contacted individuals.) If a contacted individual is single, they agree to form a relationship with the instigator; otherwise the individual is already in a relationship and remains with their current partner. The lifetimes of relationships are independent and identically distributed according to a non-negative random variable $T$ with mean $1/\delta$. For a Markovian model we take $T \sim {\rm Exp} (\delta)$ corresponding to relationships dissolving at rate $\delta$. When a relationship dissolves the individuals involved return to the single state. Therefore there is a constant flux of individuals going from single to coupled and back again. We assume that the disease is introduced into the population at time $t=0$ with the population in stationarity with regard to relationship status. The proportion, $\sigma$, of the population who are single in stationarity is given in \cite{Lashari_Trapman} with
\begin{eqnarray} \label{eq:model:2}
\sigma^2 \alpha &=& \delta (1 -\sigma) \nonumber \\
\sigma &=& \frac{- \delta + \sqrt{\delta^2 + 4 \delta \alpha}}{2 \alpha}.
\end{eqnarray} Thus $\tilde{\alpha} = \alpha \sigma$ is the rate at which a single individual enters a relationship.
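As a minimal numerical check of \eqref{eq:model:2} (with hypothetical rates):
\begin{verbatim}
import numpy as np

alpha, delta = 2.0, 0.5
sigma = (-delta + np.sqrt(delta**2 + 4*delta*alpha)) / (2*alpha)
# stationarity balance: rate single -> coupled = rate coupled -> single
assert np.isclose(sigma**2 * alpha, delta * (1 - sigma))
\end{verbatim}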
We assume that the relationship dynamics are in a steady state when the disease is introduced and the introduction of the diseases does not affect relationship dynamics.
We now turn to the disease dynamics. We assume $SIS$ dynamics, in that individuals alternate between being susceptible and infectious and on recovery from being infectious an individual immediately reenters the susceptible state. We allow for two types of sexual contacts, those within relationships and {\it casual} contacts which occur outside relationships. The casual contacts which we term {\it one-night stands} represent single sexual encounters capturing short term liaisons. We assume that the infectious periods are independent and identically distributed according to a non-negative random variable $Q$, where $Q \sim {\rm Exp} (\gamma)$ for a Markovian model.
Whilst in a relationship, we assume that an infectious individual makes infectious contact with their partner at the points of a homogeneous Poisson point process with rate $\beta$. We assume that individuals can also partake in, and transmit the disease via, one-off sexual contacts (one-night stands). We assume that individuals in relationships are less likely to take part in a one-night stand, their propensity to do so being reduced by a factor $\rho$. Therefore
we assume that a single individual (individual in a relationship) seeks to make infectious contact via one-night stands at the points of a homogeneous Poisson point process with rate $\omega$ $(\rho \omega)$, where $\omega$ amalgamates the propensity for partaking in a one-night stand with the transmissibility of the disease during a one-night stand. If an individual attempts to have a one-night stand with somebody in a relationship, there is only probability $\rho$ of the one-night stand occurring. Thus $\rho=0$ denotes that individuals in relationships are faithful, whilst $\rho =1$ denotes that there is no difference between those in or out of a relationship with regards one-night stands.
In the early stages of the epidemic, with high probability, a single infective will form a relationship with a susceptible individual, and a one-night stand with an individual in a relationship will be with a member of a fully susceptible couple.
\subsubsection{Construction of $\mathbf{M}$}
For this model there are three types of infectious units: a single infective, a couple with one infective and a couple with two infectives, which we term types 1, 2 and 3, respectively. The possible events and their rates of occurring are presented in Table \ref{tab:sex:1}.
\begin{table}
\begin{tabular}{l|ccc}
Event Type & Single infective & \multicolumn{2}{c}{Infective in a relationship} \\
& & Susceptible Partner & Infectious partner \\
\hline
Relationship form & $\alpha \sigma$ & -- & -- \\
Relationship dissolve & -- & $\delta$ & $\delta$ \\
One-night stand single & $\omega \sigma$ & $\rho \omega \sigma$ & $\rho \omega \sigma$ \\
One-night stand relationship & $\rho \omega (1-\sigma)$ & $\rho^2 \omega (1-\sigma)$ & $\rho^2 \omega (1-\sigma)$ \\
Infect partner & -- & $\beta$ & -- \\
Partner recovers & -- & -- & $\gamma$ \\
Recovers & $\gamma$ & $\gamma$ & $\gamma$ \\
\end{tabular}
\caption{Events and their rates of occurring for an infectious individual in each type of infectious unit.} \label{tab:sex:1}
\end{table}
It is straightforward from Table \ref{tab:sex:1} to construct $\Phi^E$ and $\mathbf{P}^E$ in terms of the next event to occur. For $\Phi^E$, the next event will create at most one infective and we only need to compute the probability of each type of infection. Hence,
\begin{eqnarray} \label{eq:sex:1}
\Phi^E = \begin{pmatrix} \frac{\omega \sigma}{\alpha \sigma + \omega \{1- (1-\rho) (1-\sigma)\} + \gamma} & \frac{\rho \omega (1-\sigma)}{\alpha \sigma + \omega \{1- (1-\rho) (1-\sigma)\} + \gamma} & 0 \\
\frac{\rho \omega \sigma}{\delta + \rho\omega \{1- (1-\rho) (1-\sigma)\} + \beta + \gamma} & \frac{\rho^2 \omega (1-\sigma)}{\delta + \rho\omega \{1- (1-\rho) (1-\sigma)\} + \beta + \gamma} & \frac{\beta}{\delta + \rho\omega \{1- (1-\rho) (1-\sigma)\} + \beta + \gamma} \\
\frac{\rho\omega \sigma}{\delta + \rho \omega \{1- (1-\rho) (1-\sigma)\} + 2 \gamma} & \frac{\rho^2 \omega (1-\sigma)}{\delta + \rho \omega \{1- (1-\rho) (1-\sigma)\} + 2 \gamma} & 0 \end{pmatrix}. \end{eqnarray}
Similarly by considering the transition at each event, we have that
\begin{eqnarray} \label{eq:sex:2}
\mathbf{P}^E = \begin{pmatrix} \frac{\omega \{1- (1-\rho) (1-\sigma)\}}{\alpha \sigma + \omega \{1- (1-\rho) (1-\sigma)\} + \gamma} & \frac{\alpha \sigma}{\alpha \sigma + \omega \{1- (1-\rho) (1-\sigma)\} + \gamma} & 0 \\
\frac{\delta}{\delta + \rho\omega \{1- (1-\rho) (1-\sigma)\} + \beta + \gamma} & \frac{\rho\omega \{1- (1-\rho) (1-\sigma)\}}{\delta + \rho\omega \{1- (1-\rho) (1-\sigma)\} + \beta + \gamma} & \frac{\beta}{\delta + \rho\omega \{1- (1-\rho) (1-\sigma)\} + \beta + \gamma} \\
\frac{\delta}{\delta + \rho \omega \{1- (1-\rho) (1-\sigma)\} + 2 \gamma} & \frac{\gamma}{\delta + \rho \omega \{1- (1-\rho) (1-\sigma)\} + 2 \gamma} & \frac{\rho\omega \{1- (1-\rho) (1-\sigma)\}}{\delta + \rho \omega \{1- (1-\rho) (1-\sigma)\} + 2 \gamma} \end{pmatrix}. \end{eqnarray}
One-night stands do not alter the relationship status and hence do not constitute transition events. Given that the number of one-night stands made by an infectious individual in an interval of a given length follows a Poisson distribution with mean proportional to the length of the interval, it is straightforward to show that
\begin{eqnarray} \label{eq:sex:3}
\Phi = \begin{pmatrix} \frac{\omega \sigma}{\alpha \sigma + \gamma} & \frac{\rho \omega (1-\sigma)}{\alpha \sigma + \gamma} & 0 \\
\frac{\rho \omega \sigma}{\delta + \beta + \gamma} & \frac{\rho^2 \omega (1-\sigma)}{\delta + \beta + \gamma} & \frac{\beta}{\delta + \beta + \gamma} \\
\frac{\rho\omega \sigma}{\delta + 2 \gamma} & \frac{\rho^2 \omega (1-\sigma)}{\delta + 2 \gamma} & 0 \end{pmatrix}, \end{eqnarray}
and that the transition matrix is given by
\begin{eqnarray} \label{eq:sex:4}
\mathbf{P} = \begin{pmatrix} 0 & \frac{\alpha \sigma}{\alpha \sigma + \gamma} & 0 \\
\frac{\delta}{\delta + \beta + \gamma} & 0 & \frac{\beta}{\delta + \beta + \gamma} \\
\frac{\delta}{\delta + 2 \gamma} & \frac{\gamma}{\delta + 2 \gamma} & 0 \end{pmatrix}. \end{eqnarray}
Straightforward but tedious algebra gives that
\begin{eqnarray} \label{eq:sex:5}
\mathbf{M} &=& (\mathbf{I} - \mathbf{P})^{-1}\Phi =(\mathbf{I} - \mathbf{P}^E)^{-1}\Phi^E \nonumber \\ &=& \begin{pmatrix} \frac{\sigma \omega (\alpha \sigma \rho + \delta + \gamma)}{(\alpha \sigma + \delta + \gamma) \gamma} & \frac{(1-\sigma) \omega \rho (\alpha \sigma \rho + \delta + \gamma)}{(\alpha \sigma + \delta + \gamma) \gamma} & \frac{\alpha \sigma \beta (\delta + 2 \gamma)}{\gamma (\alpha \sigma + \delta + \gamma)(\beta+ \delta +2 \gamma)} \\
\frac{\sigma \omega (\alpha \sigma \rho + \delta + \gamma \rho)}{(\alpha \sigma + \delta + \gamma) \gamma} & \frac{(1-\sigma) \omega \rho (\alpha \sigma \rho + \delta + \gamma \rho)}{(\alpha \sigma + \delta + \gamma) \gamma} & \frac{ \beta (\delta + 2 \gamma) (\alpha \sigma + \gamma)}{\gamma (\alpha \sigma + \delta + \gamma)(\beta+ \delta +2 \gamma)} \\
\frac{\sigma \omega (\alpha \sigma \rho + \delta + \gamma \rho)}{(\alpha \sigma + \delta + \gamma) \gamma} & \frac{(1-\sigma) \omega \rho (\alpha \sigma \rho + \delta + \gamma \rho)}{(\alpha \sigma + \delta + \gamma) \gamma} & \frac{ \beta \{ \alpha \sigma (\delta + \gamma) +\gamma^2\}}{\gamma (\alpha \sigma + \delta + \gamma)(\beta+ \delta +2 \gamma)} \end{pmatrix}. \end{eqnarray}
Note that the mean number of one-night stands is the same for all individuals who start their infectious period in a relationship, regardless of the infectious status of their partner.
The eigenvalues of $\mathbf{M}$ can be obtained from solving the characteristic equation ${\rm det} (\mathbf{M} - s \mathbf{I}) = 0$, a cubic polynomial in $s$. The resulting algebraic expressions are not very illuminating about $R_0$ and its properties. However, this does allow for simple computation of $R_0$ for specified parameter values.
In the special case $\rho =0$ where only single individuals can have one-night stands, we note that individuals can only enter the infectious state as a member of an infectious unit of type 1 or 3. Furthermore, if $\omega =0$, there are no one-night stands and individuals only become infected via an infectious partner within a relationship. In this case the first two columns of $\mathbf{M}$ become $\mathbf{0}$ and
\begin{eqnarray} \label{eq:sex:6}
R_0 = M_{3,3} =\frac{ \beta \{ \alpha \sigma (\delta + \gamma) +\gamma^2\}}{\gamma (\alpha \sigma + \delta + \gamma)(\beta+ \delta +2 \gamma)}, \end{eqnarray}
the mean number of times an individual will successfully infect a partner whilst infectious.
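Equation \eqref{eq:sex:6} is easily checked numerically by setting $\omega=0$ in \eqref{eq:sex:3} and \eqref{eq:sex:4}; the sketch below (with hypothetical rates) recovers $R_0 = M_{3,3}$:
\begin{verbatim}
import numpy as np

alpha, delta, beta, gamma = 2.0, 0.5, 3.0, 1.0
sigma = (-delta + np.sqrt(delta**2 + 4*delta*alpha)) / (2*alpha)
a = alpha * sigma
P = np.array([[0, a/(a+gamma), 0],
              [delta/(delta+beta+gamma), 0, beta/(delta+beta+gamma)],
              [delta/(delta+2*gamma), gamma/(delta+2*gamma), 0]])
Phi = np.zeros((3, 3))
Phi[1, 2] = beta/(delta+beta+gamma)  # within-couple infection only
M = np.linalg.solve(np.eye(3) - P, Phi)
R0 = beta*(a*(delta+gamma) + gamma**2) / (
    gamma*(a+delta+gamma)*(beta+delta+2*gamma))
assert np.isclose(M[2, 2], R0)
\end{verbatim}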
The expression for $R_0$ given in \eqref{eq:sex:6} is very similar to that given in \cite{LKD15}, (30). The only difference for $\omega=0$ between our model and the $SI$ sexually transmitted disease model presented in \cite{LKD15} and \cite{Lashari_Trapman} for individuals with a maximum of one sexual partner is that the model of \cite{LKD15} replaces recovery ($\gamma$) by death ($\mu$) which results in the relationship ending as well as removal of the infective. The model of \cite{LKD15} and \cite{Lashari_Trapman} incorporates birth of susceptibles at rate $N \mu$ to maintain the population size of $O(N)$.
We describe in this section the computation of the characteristic
signal speeds used in the explicit step (see Section \ref{S:RSU}).
In the particular form of Equations
\eqref{Eq:RadRMHD}-\eqref{Eq:RadRMHD2}, the MHD fluxes are
independent of the radiation variables, and vice-versa.
Hence, the Jacobian matrices of the system are block-diagonal,
one block corresponding to MHD and the other to radiation
transport. Consequently, their corresponding
sets of eigenvalues can be obtained
by computing the eigenvalues of each of these blocks individually.
For the MHD block, maximum and minimum signal speeds are computed
as detailed in \cite{MignoneBodo2006} and \cite{MignoneMcKinney}.
The treatment needed for the radiation wave speeds is rather
simpler, as a short
calculation shows that, for every direction
$d$, the radiation block depends
only on the angle $\theta$ between $\mathbf{F}_r$ and
$\hvec{e}_d$, and on
$f=\vert\vert\mathbf{F}_r\vert\vert/E_r$.
This simplifies the calculation, which can be
performed analytically as shown in \cite{Audit2002} and
\cite{Skinner2013}. The full set of eigenvalues of the
radiation block, which we denote as
$\{\lambda_{r1},\lambda_{r2},\lambda_{r3}\}$, can be computed
as
\begin{equation}
\label{Eq:RadSigSpeed1}
\lambda_{r1} = \frac{f\cos\theta -\zeta(f,\theta)}{\sqrt{4-3f^2}},
\end{equation}
\begin{equation}
\label{Eq:RadSigSpeed2}
\lambda_{r2} = \frac{3\xi(f)-1}{2f}\cos\theta,
\end{equation}
\begin{equation}
\label{Eq:RadSigSpeed3}
\lambda_{r3} = \frac{f\cos\theta +\zeta(f,\theta)}{\sqrt{4-3f^2}},
\end{equation}
where $\xi(f)$ is defined in Eq. \eqref{Eq:M13}, while
\begin{equation}\label{Eq:RadSigSpeed4}
\zeta(f,\theta) =
\left[
\frac{2}{3} \left(
4-3f^2-\sqrt{4-3f^2}
\right)
+ 2\,\cos^2\theta \left(
2-f^2-\sqrt{4-3f^2}
\right)
\right]^{1/2} .
\end{equation}
When $f=0$, $\lambda_{r2}$ is replaced by 0, i.e.,
its limit when $f\rightarrow 0$.
It can be seen from Equations
\eqref{Eq:RadSigSpeed1}-\eqref{Eq:RadSigSpeed4} that the
following inequalities hold for every value of
$f$ and $\theta$:
\begin{equation}
\lambda_{r1}\leq\lambda_{r2}\leq\lambda_{r3}.
\end{equation}
In the free-streaming limit ($f=1$), all these eigenvalues coincide
and are equal to $\cos\theta$,
which gives $\lambda_{rj}= \pm 1$ in
the parallel direction to $\mathbf{F}_r$, and $\lambda_{rj}=0$
in the perpendicular ones for $j=1,2,3$.
On the other hand, in the diffusion limit ($f=0$),
we have \mbox{
$(\lambda_{r1},\lambda_{r2},\lambda_{r3})
=(-1/\sqrt{3},0,1/\sqrt{3})$}
in every direction.
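The eigenvalues in Equations \eqref{Eq:RadSigSpeed1}-\eqref{Eq:RadSigSpeed3} are inexpensive to evaluate. The sketch below assumes the standard M1 closure form $\xi(f)=(3+4f^2)/(5+2\sqrt{4-3f^2})$ for Eq. \eqref{Eq:M13} (not reproduced here) and checks the two limits just discussed:
\begin{verbatim}
import numpy as np

def xi(f):
    # assumed M1 closure form of Eq. (M13)
    return (3 + 4*f**2) / (5 + 2*np.sqrt(4 - 3*f**2))

def rad_eigenvalues(f, theta):
    c = np.cos(theta)
    D = np.sqrt(4 - 3*f**2)
    zeta = np.sqrt(2/3*(4 - 3*f**2 - D) + 2*c**2*(2 - f**2 - D))
    l2 = 0.0 if f == 0 else (3*xi(f) - 1)/(2*f)*c
    return (f*c - zeta)/D, l2, (f*c + zeta)/D

# diffusion limit (f = 0): (-1/sqrt(3), 0, 1/sqrt(3))
assert np.allclose(rad_eigenvalues(0.0, 0.4),
                   (-1/np.sqrt(3), 0.0, 1/np.sqrt(3)))
# free-streaming limit (f = 1): all eigenvalues equal cos(theta)
assert np.allclose(rad_eigenvalues(1.0, 0.4), np.cos(0.4)*np.ones(3))
\end{verbatim}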
The above analysis can be applied to homogeneous hyperbolic systems.
Although the equations of Rad-RMHD do not belong to this
category, this is not a problem when radiation transport
dominates over radiation-matter interaction. On the contrary,
in the diffusion limit, the moduli of the maximum and
minimum speeds, both equal to $1/\sqrt{3}$,
may be too large and lead to excessive numerical diffusion.
In those cases, the interaction
terms need to be taken into account to estimate the wave speeds.
For this purpose, following \cite{Sadowski2013}, we include in the
code the option of locally limiting the maximum and minimum speeds
by means of the following transformations:
\begin{equation}\label{Eq:RadSpeedLim}
\begin{split}
\lambda_{r,\,L} & \rightarrow \max
\left( \lambda_{r,\,L} , -\frac{4}{3\tau} \right) \\
\lambda_{r,\,R} & \rightarrow \min
\left( \lambda_{r,\,R} , \frac{4}{3\tau} \right)
\end{split},
\end{equation}
where $\tau=\rho\,\gamma\,\chi\,\Delta x$ is the optical
depth across one cell, $\Delta x$ being its width along the
current direction. Hence, this limiting is only applied whenever
cells are optically thick. The reduced speeds in Eq.
\eqref{Eq:RadSpeedLim} are based on a diffusion equation
like Eq. \eqref{Eq:DiffEq},
where the diffusion coefficient is $1/3\rho\chi$.
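As an illustration, the eigenvalues in Equations
\eqref{Eq:RadSigSpeed1}-\eqref{Eq:RadSigSpeed4} and the limiting of
Eq. \eqref{Eq:RadSpeedLim} can be coded along the following lines.
This is a minimal Python sketch with function names of our own
choosing, not the actual \sftw{PLUTO} routines:
\begin{verbatim}
import numpy as np

def radiation_speeds(E, Fx, Fy, Fz):
    # Eigenvalues of the radiation block along the x direction,
    # Eqs. (RadSigSpeed1)-(RadSigSpeed4); ||F|| <= E is assumed.
    F = np.sqrt(Fx**2 + Fy**2 + Fz**2)
    f = F/E                              # reduced flux, 0 <= f <= 1
    cth = Fx/F if F > 0.0 else 0.0       # cos(theta) w.r.t. e_x
    D = np.sqrt(4.0 - 3.0*f**2)
    zeta = np.sqrt(2.0/3.0*(4.0 - 3.0*f**2 - D)
                   + 2.0*cth**2*(2.0 - f**2 - D))
    xi = (3.0 + 4.0*f**2)/(5.0 + 2.0*D)
    lam1 = (f*cth - zeta)/D
    lam2 = (3.0*xi - 1.0)/(2.0*f)*cth if f > 0.0 else 0.0
    lam3 = (f*cth + zeta)/D
    return lam1, lam2, lam3

def limit_speeds(lamL, lamR, rho, gamma, chi, dx):
    # Optical-depth limiting of Eq. (RadSpeedLim), applied
    # whenever the cell is optically thick.
    tau = rho*gamma*chi*dx
    return max(lamL, -4.0/(3.0*tau)), min(lamR, 4.0/(3.0*tau))
\end{verbatim}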
\section{Semi-analytical proof of $\lambda_L\leq\lambda^*\leq\lambda_R$}
\label{S:AppLambdaS}
In order to check the validity of Equation \eqref{Eq:CondLambdaS},
we have verified the following relations:
\begin{align}\label{Eq:BA1}
\lambda_R&\geq\max\left(\frac{B_R}{A_R},\frac{B_L}{A_L}\right)\\
\label{Eq:BA2}
\lambda_L&\leq\min\left(\frac{B_R}{A_R},\frac{B_L}{A_L}\right).
\end{align}
As in Section \ref{S:HLLC}, we
omit the subindex $r$, as it is understood that only radiation
fields are here considered.
We begin by proving the positivity of $A_R$. From its definition
in Equation \eqref{Eq:Adef}, we have:
\begin{equation}\label{Eq:ARER}
\frac{A_R}{E_R}=\lambda_R-f_{x,R}
=\max(\lambda_{3,L},\lambda_{3,R})-f_{x,R}\,,
\end{equation}
where $\lambda_{3,L/R}=\lambda_3(f_{L/R},\theta_{L/R})$.
Since $E>0$, we can conclude from Eq. \eqref{Eq:ARER} that
$A_R\geq 0$ is automatically satisfied if
\begin{equation}\label{Eq:ARER2}
\lambda_3(f,\theta)\geq f\cos\theta\,\,\forall\, (f,\theta)\,.
\end{equation}
From Eq. \eqref{Eq:RadSigSpeed3}, this condition can be
rewritten as
\begin{equation}
\zeta(f,\theta)\geq f\cos\theta\,\left(\Delta-1\right)\,,
\end{equation}
where $\Delta = \sqrt{4-3f^2}$. Since $\zeta\geq 0$, this condition
holds trivially whenever the right-hand side is negative; otherwise,
squaring both sides and rearranging terms, it reads
\begin{equation} \label{Eq:ARER3}
X(f) + Y(f) \cos^2\theta \geq 0,
\end{equation}
where $X(f) = \frac{2}{3}(4-3f^2-\Delta)$ and
$Y(f) = (1-f^2)(4-3f^2-2\Delta)$. Since only the second of these
terms can be smaller than $0$, it is enough to prove that
\eqref{Eq:ARER3} holds for $\cos^2\theta=1$, since
that yields the
minimum value that the left-hand side can take when $Y<0$.
Hence, it is enough to prove
\begin{equation}
X(f) + Y(f) = \frac{1}{3}\Delta^2 (5-3f^2-2\Delta) \geq 0\,,
\end{equation}
which holds since the last term between parentheses is always
greater than or equal to $0$. This finishes the proof of Eq.
\eqref{Eq:ARER2}. Using the same equations, we can see that
$\lambda_3(f,\theta)-f_x = 0$ is only satisfied if $f=1$.
An analogous treatment can be used for $A_L$,
from which we arrive at the following inequalities:
\begin{align}
A_R\geq 0, \,\,&\mbox{and}\,\,A_R>0\,\,\forall\, f\in[0,1)\\
A_L\leq 0, \,\,&\mbox{and}\,\,A_L<0\,\,\forall\, f\in[0,1).
\end{align}
We now proceed to verify Equations \eqref{Eq:BA1} and
\eqref{Eq:BA2}, firstly considering the case $f_L,f_R<1$,
in which $A_{L/R}\neq 0$.
Under this condition, the ratio $B_S/A_S$ depends only on
$(f_L,f_R,\theta_L,\theta_R)$ as
\begin{equation}
\frac{B_S}{A_S}\equiv \alpha(\lambda_S,f_S,\theta_S) =
\frac{(\lambda_S-\lambda_{2,S})
f_S\cos\theta_S-(1-\xi(f_S))/2}{\lambda_S-f_S \cos\theta_S},
\end{equation}
with $S=L,R$.
In order to verify Eq. \eqref{Eq:BA1}, we must prove
$\lambda_R\geq B_R/A_R$ and $\lambda_R\geq B_L/A_L$.
Since $\lambda_R=\max(\lambda_{3,L},\lambda_{3,R})$, we can
write the first of these conditions considering the cases
$\lambda_R=\lambda_{3,R}$ and $\lambda_R=\lambda_{3,L}$,
as
\begin{align}\label{Eq:condlambda1}
\lambda_{3,R} &\geq \alpha (\lambda_{3,R},f_R,\theta_R)
\,\,\,\forall\, (f_R,\theta_R)\\\label{Eq:condlambda2}
\lambda_{3,L} &\geq \alpha (\lambda_{3,L},f_R,\theta_R)
\,\,\, \forall\, (f_R,\theta_R):\lambda_{3,R}<\lambda_{3,L}
\,\,\forall \,\lambda_{3,L} \in [-1,1] .
\end{align}
The first of these can be verified from the graph of
$\lambda_3(f,\theta)-\alpha (\lambda_3(f,\theta),f,\theta)$,
where it can be seen that this function is always positive
and tends to $0$ for $f\rightarrow 1$.
Similarly, we have checked the second one numerically by
plotting
\mbox{$\lambda_{3,L}-\alpha (\lambda_{3,L},f_R,\theta_R)$}
under the
condition $\lambda_{3,R}<\lambda_{3,L}$, taking multiple values
of $\lambda_{3,L}$ covering the range $[-1,1]$.
The condition $\lambda_R\geq B_L/A_L$ can be proven in a similar
fashion, by considering the cases $\lambda_L=\lambda_{1,L}$ and
$\lambda_L=\lambda_{1,R}$. Since $\lambda_R\geq\lambda_{3,L}$,
it is enough to prove the following conditions:
\begin{align}\label{Eq:condlambda3}
\lambda_{3,L} &\geq \alpha (\lambda_{1,L},f_L,\theta_L)
\,\,\,\forall\, (f_L,\theta_L)\\\label{Eq:condlambda4}
\lambda_{3,L} &\geq \alpha (\lambda_{1,R},f_L,\theta_L)
\,\,\, \forall\, (f_L,\theta_L):\lambda_{1,R}<\lambda_{1,L}
\,\,\forall \,\lambda_{1,R} \in [-1,1] ,
\end{align}
which can be verified in the same manner, finishing the proof
of Eq. \eqref{Eq:BA1} for the case $f_L,f_R<1$.
The same procedure
can be used to prove the validity of Eq. \eqref{Eq:BA2}.
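The numerical checks described above are simple to reproduce.
The following minimal Python sketch (variable names are ours) scans
a grid of $(f,\theta)$ values and verifies condition
\eqref{Eq:condlambda1}:
\begin{verbatim}
import numpy as np

# Verify lambda_3 >= alpha(lambda_3, f, theta) on a (f, theta) grid.
f   = np.linspace(1.0e-6, 0.999, 400)[:, None]
th  = np.linspace(0.0, np.pi, 400)[None, :]
cth = np.cos(th)

D    = np.sqrt(4.0 - 3.0*f**2)
zeta = np.sqrt(2.0/3.0*(4.0 - 3.0*f**2 - D)
               + 2.0*cth**2*(2.0 - f**2 - D))
lam3 = (f*cth + zeta)/D
xi   = (3.0 + 4.0*f**2)/(5.0 + 2.0*D)
lam2 = (3.0*xi - 1.0)/(2.0*f)*cth

alpha = ((lam3 - lam2)*f*cth - 0.5*(1.0 - xi))/(lam3 - f*cth)
print("min(lambda_3 - alpha) =", np.min(lam3 - alpha))  # expect >= 0
\end{verbatim}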
Unlike the RHD case, the maximum and minimum
eigenvalues do not satisfy $\lambda_L<0$ and $\lambda_R>0$.
However, studying the parabolae
defined at both sides of Eq. \eqref{Eq:PLPR}, it can be shown
that $\lambda^*$ is always contained between $B_R/A_R$ and
$B_L/A_L$, regardless of the order of these two values and of
the signs of $\lambda_L$ and $\lambda_R$. Hence,
\begin{equation}
\lambda^*\in \left[\min\left(\frac{B_R}{A_R},\frac{B_L}{A_L}\right),
\max\left(\frac{B_R}{A_R},\frac{B_L}{A_L}\right)\right].
\end{equation}
Together with relations \eqref{Eq:BA1} and \eqref{Eq:BA2},
this proves Eq. \eqref{Eq:CondLambdaS} for $f_L,f_R<1$.
These results are also valid
in the cases $f_L=1$ and $f_R=1$ whenever the $A$ functions
differ from 0. Let us now assume $f_a=1$ and $f_b\neq 1$,
where $(a,b)$ stands for either $(L,R)$ or $(R,L)$.
From Eqs. \eqref{Eq:Adef} and \eqref{Eq:Bdef},
we have $A_a\cos\theta_a=B_a$ and consequently
$A_a=0$ implies that $B_a=0$. If $A_a=0$ and $A_b\neq0$,
from \eqref{Eq:PLPR} we can extract that $\lambda^*=B_b/A_b$.
Finally, from the above analysis, we know that
$\lambda_L\leq B_b/A_b\leq\lambda_R$, from which we conclude that
\eqref{Eq:CondLambdaS} holds even in this case. The only
remaining case is that in which $f_L=f_R=1$ and $A_L=A_R=0$,
already considered in Section \ref{S:HLLC}, where the HLLC
solver is replaced by the usual HLL solver.
\section{Introduction}\label{S:Introduction}
Radiative transfer is of great relevance in
many different physical systems,
occurring over a broad range of size scales.
In the context of astrophysics, for instance, it is of fundamental
importance in the modeling of star atmospheres \citep{Kippenhahn2012},
pulsars \citep{NSandPulsars}, supernovae \citep{Fryer2007},
and black hole accretion disks \citep{Thorne1974}.
In some high-energy environments (e.g. gamma-ray bursts),
matter can emit radiation while being accelerated to relativistic speeds,
sometimes being simultaneously subject to strong electromagnetic (EM) fields
\citep[see, e.g., the review by][]{Meszaros2006}.
In situations where none of these effects can be disregarded,
it is convenient to rely on numerical schemes that are able
to deal with them simultaneously.
Several numerical methods for radiation transport found in the literature
are based on different solutions of the
radiative transfer equation \citep[see, e.g.,][]{Mihalas}, which provides
a simplified yet powerful formalism for the problem of radiation
transport, absorption, and emission in the presence of matter.
This approach neglects every wave-like behavior of photons,
and focuses, instead, on energy and momentum transport.
Regardless of its simplicity, solving the frequency-dependent radiative
transfer equation is not a trivial task, due to the
high degree of nonlinearity present in the underlying mathematical
description and the number of variables involved in it.
For this reason, several
simplified schemes can be found in the literature.
Typical examples are the post-processing of ideal
hydrodynamical calculations
\citep[sometimes used in cases where radiation back-reaction can be neglected,
see, e.g.,][]{Mimica2009}, and Monte Carlo methods \citep[see e.g.][]
{Mazzali1993, Kromer2009},
where radiation densities and fluxes are computed
by following the evolution of a large number
of effective `photon packets' along selected trajectories.
An alternative to these methods, followed throughout our work,
is given by the moment approach to the radiative transfer equation.
This consists in taking successive angular moments of this equation,
in the same fashion as the equations of hydrodynamics (HD)
and magnetohydrodynamics (MHD) can be obtained from the collisional
Boltzmann-Vlasov equation \citep[see e.g.][]{Goedbloed2004}.
The resulting scheme provides an extension to relativistic
and non-relativistic MHD, that can be used to compute the evolution
of the total radiation energy density and its flux, considering its
interaction with a material fluid.
In this work we focus on the relativistic case,
to which we refer as relativistic radiation MHD (Rad-RMHD henceforth).
The model involves a series of additional approximations
which we now describe.
First, as proposed by \cite{Levermore1984}, we close the system
of equations by assuming that
the radiation intensity is isotropic in a certain reference frame
(the M1 closure).
In addition, we consider the
fluid to be a perfect conductor, and assume the validity of
the equations of ideal MHD for the interaction between matter and
EM fields.
Lastly, we adopt an effective gray-body approximation,
replacing the opacity coefficients with a set of conveniently
chosen frequency-averaged values.
Our implementation has been built as a supplementary module in the
multiphysics, multialgorithm, high-resolution code \sftw{PLUTO},
designed for time-dependent explicit computations of either classical,
relativistic unmagnetized or magnetized flows \citep{PLUTO}.
The new module is fully parallel, has been adapted to all available
geometries (Cartesian, cylindrical and spherical) and supports
calculations on adaptively refined grids using the standard
\sftw{PLUTO-CHOMBO} framework \citep[see][]{AMRPLUTO, CHOMBO}.
In addition, we have introduced a new HLLC-type Riemann
solver, suitable for optically thin radiation transport.
In particular, our scheme is based on the HLLC solver for
relativistic HD by \cite{MignoneBodo} and it is designed to improve the
code's ability to resolve contact discontinuities
when compared to HLL (Harten-van Leer) formulations
\citep[see e.g.][]{Toro}.
To integrate the transport terms of the equations of Rad-RMHD, our
implementation employs the same sort of explicit methods used in
\sftw{PLUTO} for the non-radiative case.
However,
gas-radiation interaction is treated differently, since this
process may occur in times
that are much shorter than the dynamical times; for instance,
when matter is highly opaque.
Hence, direct explicit integration of the interaction terms
would lead to prohibitively small time steps and inefficient
calculations.
For this reason, our method of choice relies on Implicit-Explicit (IMEX)
Runge-Kutta methods \citep{Pareschi2005} whereby spatial gradients
are treated explicitly while point-local interaction terms are
integrated via an implicit scheme.
Similar approaches in the context of radiation HD and MHD
have been followed by \citet{Gonzalez2007}, \citet{Commercon2011},
\citet{Roedig2012}, \citet{Sadowski2013}, \citet{Skinner2013},
\citet{Takahashi2013}, \citet{McKinney2014}
and \citet{Rivera2016}.
In particular, it is our intention to include our
module in forthcoming freely distributed versions of \sftw{PLUTO}.
This paper is structured as follows: in Section \ref{S:RadHyd},
we provide a brief summary of radiative transfer and the relevant
equations used in this work, while in Section \ref{S:NumScheme}
we give a description of the implemented algorithms. Section
\ref{S:Tests} shows the code's performance on several selected
tests, and in Section \ref{S:Summary} we summarize the main results
of our work.
\section{Numerical scheme}\label{S:NumScheme}
For numerical purposes we write equations \eqref{Eq:RadRMHD}-\eqref{Eq:RadRMHD2}
in conservative form as
%
\begin{equation}\label{Eq:Hyp}
\frac{\partial{\cal U}}{\partial t} + \nabla \cdot \tens{F}({\cal U})
= {\cal S}({\cal U})\,,
\end{equation}
where ${\cal U} \equiv \left(\rho\gamma,\,\mathcal{E},\,
\mathbf{m},\,\mathbf{B},\,E_r,\,\mathbf{F}_r
\right)^\intercal$ is an array of \emph{conserved} quantities,
$\tens{F}({\cal U})$ is the flux tensor and
${\cal S} \equiv \left(0,G^0,\mathbf{G},
\mathbf{0},-G^0,-\mathbf{G}\right)^\intercal$
contains the radiation-matter interaction terms.
The explicit expressions of $\tens{F}$ can be extracted from Equations
\eqref{Eq:RadRMHD}-\eqref{Eq:RadRMHD2}.
As outlined in the introduction, gas-radiation interaction
may occur on timescales that are much smaller than any dynamical
characteristic time, and an explicit integration
of the interaction terms would lead either to instabilities
or to excessively large computing times.
For this reason, the time discretization of Equations \eqref{Eq:Hyp}
is achieved by means of IMEX (implicit-explicit) Runge-Kutta schemes
\citep[see e.g.][]{Pareschi2005}.
In the presented module, the user can
choose between two different IMEX schemes, as
described in Section \ref{S:IMEX}.
In our implementation of the IMEX formalism,
fluxes and geometrical source terms
are integrated explicitly
by means of standard shock-capturing Godunov-type methods,
following a finite volume approach.
Fluxes are thus evaluated at cell interfaces by means of a Riemann
solver between left and right states properly reconstructed
from the two adjacent zones.
Geometrical source terms can be obtained at the cell center or following
the approach outlined in \cite{Mig2014}.
This \emph{explicit step} is thoroughly
described in Section \ref{S:RSU}.
Within this stage, we have included
a new Riemann solver for radiation transport, which we
introduce in Section \ref{S:HLLC}.
On the other hand, the integration of $G^\mu$ is performed implicitly
through a separate step
(the \emph{implicit step}), as described in Section \ref{S:Impl}.
\subsection{Implemented IMEX schemes}\label{S:IMEX}
A commonly used second-order scheme is the IMEX-SSP2(2,2,2)
method by \cite{Pareschi2005} which, when applied to
(\ref{Eq:Hyp}), results in the following discrete scheme:
%
\begin{equation} \label{Eq:IMEX1}
\begin{array}{lcl}
{\cal U}^{(1)} &=& {\cal U}^n + a\Delta t^n{\cal S}^{(1)}
\\ \noalign{\medskip}
{\cal U}^{(2)} &=& {\cal U}^n+ \Delta t^n {\cal R}^{(1)} \\
& & + \Delta t^n\left[(1-2a){\cal S}^{(1)} + a{\cal S}^{(2)}\right]
\\ \noalign{\medskip}
{\cal U}^{n+1} &=& {\cal U}^n + \displaystyle \frac{\Delta t^n}{2}\left[
{\cal R}^{(1)} + {\cal R}^{(2)}\right] \\ \noalign{\medskip}
& & + \displaystyle \frac{\Delta t^n}{2}\left[ {\cal S}^{(1)} +
{\cal S}^{(2)}\right].
\end{array}
\end{equation}
%
Here ${\cal U}$ is an array of volume averages inside the zone $i,j,k$ (indices
have been omitted to avoid cluttered notation), $n$ denotes the current
step number, $\Delta t^n$ is the time step, $a=1-\sqrt{2}/2$, and the operator
${\cal R}$, which
approximates the contribution of $(-\nabla \cdot \tens{F} )$,
is computed in an explicit fashion in terms of the conserved fields
as detailed in Section \ref{S:RSU}.
Potentially stiff terms, i.e., those proportional to $\kappa$
and $\sigma$, are included in the operator ${\cal S}$, which is solved
implicitly during the first and second stages in Eq. (\ref{Eq:IMEX1}).
An alternative scheme, which we also consider
in the present context, is
the following (IMEX1 henceforth):
%
\begin{equation} \label{Eq:IMEX2}
\begin{array}{lcl}
{\cal U}^{(1)} &=& {\cal U}^n + \Delta t^n {\cal R}^{n} + \Delta t^n\,{\cal S}^{(1)} \\
{\cal U}^{(2)} &=& {\cal U}^{(1)} + \Delta t^n {\cal R}^{(1)} +
\Delta t^n\,{\cal S}^{(2)} \\
{\cal U}^{n+1} &=& \displaystyle \frac{1}{2}\left({\cal U}^n + {\cal U}^{(2)}\right),
\end{array}
\end{equation}
This method is an extension of the second-order
total variation diminishing Runge-Kutta scheme (RK2) by
\cite{GottliebShu1996}, where an implicit step has simply been
added after every flux integration.
In the same way, we have included in the code a
third-order version of this scheme that extends the
third-order Runge-Kutta scheme by the same authors.
Both the second- and third-order of this method
are similar to those described in
\cite{McKinney2014}.
Using general methods for IMEX-RK schemes
\citep[see e.g.][]{Pareschi2005}, it can be shown that
IMEX-SSP2(2,2,2) and IMEX1 are, respectively, of order 2 and 1
in time, and L- and A-stable, which makes
IMEX-SSP2(2,2,2) a seemingly better option when it comes to
the schemes' stability. However, as we have observed
when testing the module, the explicit addition
of previously-calculated source terms in the last
step of IMEX-SSP2(2,2,2) can cause inaccuracies
whenever interaction terms are stiff and there
are large differences in the orders of magnitude of
matter and radiation fields
(see Sections \ref{S:PulseThick} and \ref{S:Shadows}).
In contrast, IMEX1 seems
to have better positivity-preserving properties
and higher accuracy in those cases.
In general, as it is shown in Section \ref{S:Tests},
we have obtained equivalent results with both
methods in every other case.
Whenever source terms can be
neglected, both methods
reduce to the standard RK2, which makes
them second-order accurate
in time for optically thin transport.
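As a schematic illustration, a single IMEX1 cycle
(Eq. \ref{Eq:IMEX2}) may be sketched as follows. Here
\texttt{explicit\_op} stands for the operator ${\cal R}$ of Section
\ref{S:RSU} and \texttt{implicit\_solve} for the implicit inversion of
Section \ref{S:Impl}; both names are placeholders of our own choosing:
\begin{verbatim}
def imex1_step(U, dt, explicit_op, implicit_solve):
    # implicit_solve(W, dt) returns the solution X of X = W + dt*S(X).
    U1 = implicit_solve(U + dt*explicit_op(U), dt)    # first stage
    U2 = implicit_solve(U1 + dt*explicit_op(U1), dt)  # second stage
    return 0.5*(U + U2)                               # final average
\end{verbatim}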
\subsection{Explicit step}\label{S:RSU}
In order to compute the explicit operator ${\cal R}$,
we implement a standard
\emph{reconstruct-solve-update} strategy
\citep[see e.g.][]{RezzollaZanotti}.
First, the zone averages ${\cal U}$
are used to compute cell-centered values of a set of
\emph{primitive} fields, defined as
\begin{equation}
{\cal V} = \left(\rho,\,p_g,\,
\mathbf{v},\,\mathbf{B} ,\,E_r,\,\mathbf{F}_r
\right)^\intercal.
\end{equation}
Although this is a straightforward step for the radiation fields,
as in their case primitive and conserved quantities coincide,
this is not the case for the remaining variables.
Primitive fields are obtained from conservative ones by means of
a root-finding algorithm, paying
special attention to avoiding problems related to small number
handling that arise when large Lorentz factors are involved.
To perform this conversion, we follow the procedure detailed
in \cite{MignoneMcKinney}.
Next, primitive fields are reconstructed to zone interfaces
(\emph{reconstruction step}).
In more than one dimension, reconstruction is carried out direction-wise.
In order to avoid spurious oscillations next to
discontinuities and steep gradients, the reconstruction
employs slope limiters so as to satisfy monotonicity constraints.
During this step, some physical constraints are imposed,
such as gas pressure positivity, an upper boundary for the
velocity given by $\vert\vert \mathbf{v} \vert\vert < 1$,
and the upper limit to the radiation flux given by Equation
\eqref{Eq:FsmallerE}.
The reconstruction step produces left and right discontinuous states
adjacent to zone interfaces, which we denote with ${\cal V}_L$ and ${\cal V}_R$.
This poses a local initial-value problem that is solved
by means of an approximate Riemann solver, whose outcome
is an estimation of the fluxes on each interface.
In our implementation, the user can choose
among three of these methods.
The simplest one of these is
the Lax-Friedrichs-Rusanov solver \citep[see e.g.][]{Toro}, which
yields the following flux:
\begin{equation}\label{Eq:LFR}
{\cal F}_{LF} = \frac{1}{2}\left[
{\cal F}_{L} + {\cal F}_{R} -
\vert \lambda_{max} \vert
\left( {\cal U}_R - {\cal U}_L \right)
\right].
\end{equation}
In this expression, ${\cal U}_{L/R}$ and ${\cal F}_{L/R} = \hvec{e}_d\cdot\tens{F}({\cal U}_{L/R})$
are the conserved fields and flux components in the coordinate direction
$\hvec{e}_d$ (here
$d=x,y,z$ in Cartesian coordinates or $d=r,\theta,\phi$ in spherical coordinates)
evaluated at the left and right of the interface,
while $\lambda_{\max}$ is the fastest signal speed at both sides,
computed using both ${\cal V}_L$
and ${\cal V}_R$. A less diffusive option is given by an HLL
solver \citep[Harten-Lax-van Leer, see e.g.][]{Toro} introduced
by \citet{Gonzalez2007}. In this case, fluxes are computed as
\begin{equation}\label{Eq:Fhll}
{\cal F}_{hll} = \left\{
\begin{array}{ll}
{\cal F}_L & \mathrm{if\ } \lambda_L > 0 \\
\frac{\lambda_R {\cal F}_L - \lambda_L {\cal F}_R
+\lambda_R\lambda_L \left(
{\cal U}_R - {\cal U}_L
\right)
}{\lambda_R-\lambda_L} & \mathrm{if\ }
\lambda_L \leq 0 \leq \lambda_R \\
{\cal F}_R & \mathrm{if\ } \lambda_R < 0
\end{array}
\right.,
\end{equation}
where $\lambda_L$ and $\lambda_R$ are, respectively, the
minimum and maximum characteristic signal speeds, taking into
account both ${\cal V}_L$ and ${\cal V}_R$ states. Finally, a
third option is given by an HLLC solver that estimates the HD (MHD)
fluxes as described in \cite{MignoneBodo} \citep[see also][]{MignoneBodo2006},
and the radiation fluxes as described in Section \ref{S:HLLC}.
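For reference, the first two flux functions above translate almost
verbatim into code; the following Python sketch (names are ours)
implements Equations \eqref{Eq:LFR} and \eqref{Eq:Fhll}:
\begin{verbatim}
def lax_friedrichs_flux(UL, UR, FL, FR, lam_max):
    # Rusanov (Lax-Friedrichs) flux, Eq. (LFR).
    return 0.5*(FL + FR - abs(lam_max)*(UR - UL))

def hll_flux(UL, UR, FL, FR, lamL, lamR):
    # HLL flux, Eq. (Fhll); lamL and lamR are the minimum and
    # maximum signal speeds over the two states.
    if lamL > 0.0:
        return FL
    if lamR < 0.0:
        return FR
    return (lamR*FL - lamL*FR + lamR*lamL*(UR - UL))/(lamR - lamL)
\end{verbatim}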
From Eqs. \eqref{Eq:RadRMHD}-\eqref{Eq:RadRMHD2} we can
see that, if interaction terms are disregarded, the equations
of Rad-RMHD can be divided into two independent systems,
one corresponding to the equations of relativistic MHD and the
other to those of radiation transport.
Hence, we can expect the maximum and minimum signal speeds
of both systems to be, in the frozen limit\footnote{
In the theory of stiff relaxation systems, the frozen limit
refers to the small time step regime, when the effect of source
terms on the characteristic waves is still negligible.},
different.
In view of this, we compute the fluxes independently for each
subsystem of equations obtaining the speeds shown in
Appendix \ref{S:AppSpeeds}.
In this manner, as pointed out in \cite{Sadowski2013},
we avoid the excessive numerical diffusion that appears
when the same signal speeds
are used to update both radiation and MHD fields.
This has been verified in our tests.
Once the fluxes are obtained, we can compute the operator
${\cal R}$ which, in the direction $d$, reads
\begin{equation}\label{Eq:Ld}
{\cal R}_d({\cal V})= -\frac{1}{\Delta V^d}
\left(
A^d_+{\cal F}^{d}_+ - A^d_- {\cal F}^{d}_-
\right)
+{\cal S}_e^d,
\end{equation}
where $A^d_\pm$ are the cell's right ($+$) and left ($-$)
interface areas and $\Delta V^d$ is the cell volume in that
direction \citep[see][]{PLUTO}.
Here ${\cal S}^d_e({\cal U})$ accounts for geometrical terms that arise when the divergence is
written in different coordinate systems.
The full operator ${\cal R}$ is in the end computed as $\sum_d \mathcal{R}_d$.
Once the update of the conserved variables is completed, the
time step is changed using the maximum signal speed computed
in the previous step, according to the Courant-Friedrichs-Lewy
condition \citep[][]{Courant1928}:
\begin{equation}\label{Eq:Courant}
\Delta t^{n+1}= C_a \min_d \left(
\frac{\Delta l_{\min}^d}{\lambda_{\max}^d} \right)
\end{equation}
where $\Delta l_{\min}^d$ and $\lambda_{\max}^d$ are, respectively,
the minimum cell width and maximum signal speed along the direction
$d$, and $C_a$, the Courant factor, is a user-defined parameter.
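In code form, the time-step update of Eq. \eqref{Eq:Courant} amounts
to the following sketch, where \texttt{dl\_min} and \texttt{lam\_max}
collect, for each direction, the minimum cell width and the maximum
signal speed found during the last update (an illustration, not the
actual implementation):
\begin{verbatim}
def new_time_step(dl_min, lam_max, Ca):
    # CFL condition, Eq. (Courant).
    return Ca*min(dl/lam for dl, lam in zip(dl_min, lam_max))
\end{verbatim}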
Finally, when magnetic fields are included, the divergence-free condition
can be enforced using either the constrained transport method
\citep{Balsara1999,Londrillo2004} or hyperbolic divergence cleaning
\citep{Dedner2002,Mignone2010,Mignone2010b}.
Both methods are available in the code.
\subsection{HLLC solver for radiation transport}\label{S:HLLC}
We now present a novel Riemann solver for the solution
of the homogeneous radiative transfer equation.
To this purpose, we consider the subsystem formed by Eqs.
(\ref{Eq:RadRMHD1})-(\ref{Eq:RadRMHD2}) by neglecting interaction terms
and restrict our attention to a single direction,
chosen to be the $x$ axis, without loss of generality.
In Cartesian coordinates, the resulting equations take the form
\begin{equation}\label{Eq:RadHLLC}
\frac{\partial{\cal U}_r}{\partial t} + \frac{\partial\Phi}{\partial x}
= 0
\end{equation}
where ${\cal U}_r = (E,\, \mathbf{F})^\intercal$ while
$\Phi = (F_x,\, P_{xx},\, P_{yx},\, P_{zx})^\intercal$
and we have omitted the subscripts $r$
for clarity purposes (we shall maintain that convention
throughout this section).
From the
analysis carried out in Appendix \ref{S:AppSpeeds}, we know that the Jacobian
$\mathbf{J}^x$ of this system has three different eigenvalues
$\{\lambda_1,\lambda_2,\lambda_3\}$, satisfying
$\lambda_1\leq\lambda_2\leq\lambda_3$.
Since the system is hyperbolic \citep[see e.g.][]{Toro},
the breaking of an initial discontinuity will
involve the development of (at most) as many waves as the
number of different eigenvalues.
On this basis, we have implemented
a three-wave Riemann solver.
Following \cite{HLLCradiatif}, we define the following fields:
\begin{equation}
\begin{array}{lcl}
\beta_x &=&\displaystyle \frac{3\xi-1}{2}
\frac{F_x}{\vert\vert\mathbf{F}\vert\vert^2}E \\ \noalign{\medskip}
\Pi &=&\displaystyle \frac{1-\xi}{2}E \,,
\end{array}
\end{equation}
where $\xi$ is given by Eq. \eqref{Eq:M13}.
With these definitions, the fluxes in Eq. \eqref{Eq:RadHLLC}
can be written as
\begin{equation}
\Phi=\begin{pmatrix}
F_x \\
F_x \,\beta_x + \Pi \\
F_y \,\beta_x \\
F_z \,\beta_x
\end{pmatrix},
\end{equation}
and $F_x$ can be shown to satisfy $F_x=\left(E+\Pi\right)\beta_x$.
These expressions are similar to those of relativistic hydrodynamics
(RHD henceforth),
where $\beta_x$, $\Pi$ and $\mathbf{F}$ play, respectively, the role of $v_x$, $p_g$
and $\mathbf{m}$ while $E$ is tantamount to the total energy.
Except for the absence of a field corresponding to the density,
the equations are exactly the
same as those expressing energy-momentum conservation of a fluid,
albeit with a different closure relation.
With this in mind, we follow
analogous steps to those in \cite{MignoneBodo}
in order to construct a HLLC solver
for the system defined by Equations \eqref{Eq:RadHLLC}.
In this case,
instead of the intermediate constant state considered in the HLL solver,
we include an additional
middle wave (the analog of a \quotes{contact} mode)
of speed $\lambda^*$ that separates
two intermediate states $\mathcal{U}^*_L$ and $\mathcal{U}^*_R$, where
\begin{equation}\label{Eq:CondLambdaS}
\lambda_L\leq\lambda^*\leq\lambda_R\,.
\end{equation}
In this way, the full approximate
solution verifies
\begin{equation}
{\cal U}_r(0,t)=\begin{cases}
{\cal U}_{r,L} & \text{if } \lambda_L> 0 \\
{\cal U}^*_{r,L} & \text{if } \lambda_L\leq 0 \leq \lambda^* \\
{\cal U}^*_{r,R} & \text{if } \lambda^*\leq 0 \leq \lambda_R \\
{\cal U}_{r,R} & \text{if } \lambda_R< 0 \,.
\end{cases}
\end{equation}
The corresponding fluxes are
\begin{equation}
\Phi_{hllc}(0,t)=\begin{cases}
\Phi_L & \text{if } \lambda_L> 0 \\
\Phi^*_L & \text{if } \lambda_L\leq 0 \leq \lambda^* \\
\Phi^*_R & \text{if } \lambda^*\leq 0 \leq \lambda_R \\
\Phi_R & \text{if } \lambda_R< 0 \,.
\end{cases}
\end{equation}
States and fluxes are related by the Rankine-Hugoniot jump conditions across
the outermost waves $\lambda_S$ ($S=L,R$),
%
\begin{equation}\label{Eq:RH_Rad}
\lambda_S\,({\cal U}^*_{r,S} - {\cal U}_{r,S}) = \Phi^*_S - \Phi_S\,.
\end{equation}
A similar condition must also hold across the middle wave so that,
when Equation \eqref{Eq:RH_Rad} is applied to all three waves, one has at
disposal a system of 12 equations for the 17 unknowns
(${\cal U}^*_{r,L}$, ${\cal U}^*_{r,R}$, $\Phi^*_L$,
$\Phi^*_R$, and $\lambda^*$) and therefore further assumptions
must be made.
From the results of the tests performed with the HLL
solver, we have verified that $\beta_x$ and $\Pi$
are continuous across the intermediate contact
mode for all the obtained solutions.
Noting that $\lambda_2(E,\mathbf{F})=\beta_x(E,\mathbf{F})$,
it can be seen that, for a discontinuity of speed $\beta_x$
along which $\beta_x$ and $\Pi$ are continuous, the
jump conditions
\eqref{Eq:RH_Rad} are satisfied, as pointed out in
\cite{HLLCradiatif} and proven in \cite{Hanawa2014}.
Thus, we impose the constraints
$\lambda^*=\beta^*_{x,L}=\beta^*_{x,R}$ and
$\Pi^*_L=\Pi^*_R$.
These conditions are analogous to those satisfied by the
contact discontinuity in RHD, across which $p_g$ and $v_x$
are conserved, and where the latter
coincides with the propagation speed.
Following \cite{MignoneBodo}, we assume that
$\Phi^*$ can be written in terms of
the five variables $(E^*,\Pi^*,\beta_x^*,F^*_y,F^*_z)$
in the following way:
\begin{equation}
\Phi^*=\begin{pmatrix}
F^*_x \\
F^*_x \,\beta^*_x + \Pi^* \\
F^*_y \,\beta^*_x \\
F^*_z \,\beta^*_x
\end{pmatrix},
\end{equation}
where for consistency we have defined
$F^*_x\equiv(E^*+\Pi^*)\beta^*_x$.
Under these constraints, the jump conditions across the middle
wave are automatically satisfied, and Eq. \eqref{Eq:RH_Rad}
is reduced to the following system of 8 equations
in 8 unknowns:
\begin{equation}\label{Eq:RH_Rad1}
\begin{array}{lcl}
E^*(\lambda-\lambda^*)&=&
E(\lambda-\beta_x)+\Pi^*\lambda^*-\Pi\,\beta_x\\
F_x^*(\lambda-\lambda^*)&=&F_x(\lambda-\beta_x)+\Pi^*-\Pi \\
F_y^*(\lambda-\lambda^*)&=&F_y(\lambda-\beta_x) \\
F_z^*(\lambda-\lambda^*)&=&F_z(\lambda-\beta_x)\,,
\end{array}
\end{equation}
which holds for both subscripts L and R (we shall maintain
this convention in what follows). The first two
equations in Eq. \eqref{Eq:RH_Rad1}
can be turned into the following quadratic
expression, from which $\lambda^*$ can be obtained:
\begin{equation}\label{Eq:PLPR}
(A_L\lambda^*-B_L)(1-\lambda_R\lambda^*)=
(A_R\lambda^*-B_R)(1-\lambda_L\lambda^*),
\end{equation}
with
\begin{align}\label{Eq:Adef}
A &= \lambda E - F_x \\\label{Eq:Bdef}
B &= (\lambda - \beta_x) F_x - \Pi.
\end{align}
Once $\lambda^*$ is known, we can compute $\Pi^*$ as
\begin{equation}
\Pi^*=\frac{A\,\lambda^*-B}{1-\lambda\,\lambda^*},
\end{equation}
and the remaining fields from Eq. \eqref{Eq:RH_Rad1}.
As in the RHD counterpart, among the
two roots of Equation \eqref{Eq:PLPR} we must choose the
one that guarantees $\lambda^*\in[-1,1]$, which in our case
corresponds to that with the minus sign.
As shown in Appendix \ref{S:AppLambdaS}, this definition
of $\lambda^*$ satisfies Eq. \eqref{Eq:CondLambdaS}.
We have also checked by means of extensive numerical testing
that the intermediate states $\mathcal{U}^*_L$ and $\mathcal{U}^*_R$
constructed in this way satisfy Equation \eqref{Eq:FsmallerE},
which guarantees the positivity of our HLLC scheme.
However, unlike the RHD case, the coefficients
$\{A_L,B_L,A_R,B_R\}$
defined in Equations \eqref{Eq:Adef} and \eqref{Eq:Bdef}
can simultaneously be equal to zero, meaning that
$\lambda^*$ can no longer be determined from Equation \eqref{Eq:PLPR}.
This happens under
the conditions $\vert\vert\mathbf{F}\vert\vert = E$ for both L and R,
and $F_{xL}/\vert\vert\mathbf{F}_L\vert\vert\leq
F_{xR}/\vert\vert\mathbf{F}_R\vert\vert$, in which case the jump
conditions lead to the formation of vacuum-like intermediate
states.
We overcome this issue by switching the solver to the
standard HLL whenever these conditions are met.
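For concreteness, the computation of $\lambda^*$ and $\Pi^*$ from
Equations \eqref{Eq:PLPR}-\eqref{Eq:Bdef} can be sketched as follows.
This is an illustration only (variable names are ours), which selects
the admissible root using the property proven in Appendix
\ref{S:AppLambdaS}:
\begin{verbatim}
import numpy as np

def hllc_contact_speed(AL, BL, AR, BR, lamL, lamR):
    # Expand Eq. (PLPR) into a*x**2 + b*x + c = 0.
    a = AR*lamL - AL*lamR
    b = AL - AR + BL*lamR - BR*lamL
    c = BR - BL
    if abs(a) < 1.0e-12:       # quadratic degenerates
        return -c/b            # (if b = 0 too, the code reverts to HLL)
    disc = np.sqrt(b*b - 4.0*a*c)
    x1 = (-b - disc)/(2.0*a)
    x2 = (-b + disc)/(2.0*a)
    # the admissible root satisfies lamL <= x <= lamR (Appendix B)
    return x1 if lamL <= x1 <= lamR else x2

def hllc_contact_pressure(A, B, lam, lam_star):
    # Pi*, identical when evaluated from the L or R state.
    return (A*lam_star - B)/(1.0 - lam*lam_star)
\end{verbatim}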
As for the HLL solver, signal velocities must be limited when
describing radiation transfer in highly opaque materials in order
to reduce numerical diffusion (see Appendix \ref{S:AppSpeeds}).
Whenever this occurs, we also switch
to the standard HLL solver,
and limit $\lambda_L$ and $\lambda_R$ according to
Equation \eqref{Eq:RadSpeedLim}. Hence, we can only expect the
HLLC solver to improve the accuracy of the obtained solutions in
optically thin regions of space, whereas the results should be
the same for both HLL and HLLC everywhere else. Finally, although
the use of the HLLC solver can reduce the numerical diffusion
when compared to the HLL solver, this can cause
spurious oscillations around shocks
that would be damped with a more diffusive method. As for the HLLC
solver for relativistic HD and MHD included in \sftw{PLUTO}, this
problem can be reduced by implementing an additional flattening
in the vicinity of strong shocks \citep[see e.g.][]{MignoneBodo}.
\subsection{Implicit step}\label{S:Impl}
We now describe the algorithm employed for the
implicit integration of the radiation-matter interaction
terms.
A typical implicit step of an IMEX scheme
(see Eqs. \ref{Eq:IMEX1} and \ref{Eq:IMEX2})
takes the form
\begin{equation} \label{Eq:ImplEq0}
{\cal U} = {\cal U}' + s\, \Delta t^n\,{\cal S}\,,
\end{equation}
where $s$ is a constant and primed terms denote
some intermediate state value.
Equation \eqref{Eq:ImplEq0} shows that
the mass density, computed as $\rho\gamma$, as well as
the total energy and momentum densities, defined as
$ E_{tot} = \mathcal{E} + E_r$ and
$\mathbf{m}_{tot}=\mathbf{m} + \mathbf{F}_r$, must be
conserved during this partial update owing
to the particular form of the source terms.
This yields the following implicit relations between $\mathcal{V}$
and ${\cal U}_r$:
\begin{align}\label{Eq:PrimImpl1}
\begin{array}{ll}
\mathcal{E}(\mathcal{V}) &= E_{tot}-E_r \\
\mathbf{m}(\mathcal{V}) &= \mathbf{m}_{tot}-\mathbf{F}_r .
\end{array}
\end{align}
We can then solve Eq. \eqref{Eq:ImplEq0} in terms of the
following reduced system:
\begin{equation} \label{Eq:ImplEq}
{\cal U}_{r} = {\cal U}'_{r} - s\, \Delta t^n\,\mathcal{G}\,,
\end{equation}
with
$\mathcal{G} \equiv (G^0,\mathbf{G})^\intercal$,
where $G^\mu$ is given in Eq. \eqref{Eq:GmuExpl}.
In Eq. \eqref{Eq:ImplEq}, radiation fields can be regarded
as functions of the MHD fields and vice-versa by means of Eq.
\eqref{Eq:PrimImpl1}, and therefore the system can be solved
in terms of either one of these.
In order to solve Equation \eqref{Eq:ImplEq}, we have implemented
and compared three different multidimensional root finder
algorithms, which we now describe.
\begin{enumerate}
\item \emph{Fixed-point method}.
This method \citep[originally proposed by][]{Takahashi2013} is based
on iterations of ${\cal U}_r$ and follows essentially the same
approach outlined by \cite{Palenzuela2009}
in the context of resistive MHD.
In this scheme all of the MHD primitive
variables, as well as $D^{ij}$, are written at a previous
iteration with respect to ${\cal U}_r$.
In that manner,
$\mathcal{G}$ can be written at a given iteration $m$ as
\begin{equation}
\mathcal{G}^{(m)}=
\mathcal{M}^{(m)}{\cal U}^{(m+1)}_r+b^{(m)} ,
\end{equation}
where
$\mathcal{M}$ is a matrix and $b$ a column vector,
both depending on $\mathcal{V}$ and $D^{ij}$, and
the numbers between parentheses indicate the iteration in which
the fields are evaluated. Inserting this in Equation
\eqref{Eq:ImplEq}, the updated conserved fields can be computed
as
\begin{equation}
{\cal U}^{(m+1)}_r =
\left( \mathcal{I} + s\, \Delta t^n \mathcal{M}^{(m)} \, \right)^{-1}
\left( {\cal U}'_r - s\, \Delta t^n\, b^{(m)} \right),
\end{equation}
after which primitive fields can be updated using Eq.
\eqref{Eq:PrimImpl1} (a schematic sketch of this iteration
is given after this list).
\item \emph{Newton's method for radiation fields},
implemented in \cite{Sadowski2013} and \cite{McKinney2014}.
This scheme consists in finding the roots of the nonlinear
multidimensional function
\begin{equation}
\mathcal{Q}(E_r,\mathbf{F}_r)
={\cal U}_r-{\cal U}'_r+ s\,\Delta t^n\, \mathcal{G},
\end{equation}
updating the radiation variables on each iteration as
\begin{equation}
{\cal U}_r^{(m+1)}={\cal U}_r^{(m)}
-\left[\mathcal{J}^{(m)}\right]^{-1}
\mathcal{Q}^{(m)},
\end{equation}
where we have defined the Jacobian matrix $\mathcal{J}$ as
$\mathcal{J}_{ij}=\partial \mathcal{Q}_i / \partial {\cal U}_r^j$.
The elements of $\mathcal{J}$ are computed numerically, taking
small variations of the iterated fields.
As in the fixed-point method,
matter fields are computed from ${\cal U}_r$ for each step
by means of an inversion of
Eq. \eqref{Eq:PrimImpl1}.
\item \emph{Newton's method for matter fields},
implemented in \cite{McKinney2014}.
This procedure is identical to the previous one,
with the difference that in this case the iterated fields are
the fluid's
pressure and the spatial components of its four-velocity,
which we denote as $\mathcal{W}=(p_g,\mathbf{u})^\intercal$.
These are updated as
\begin{equation}
\mathcal{W}^{(m+1)}=\mathcal{W}^{(m)}
-\left[\mathcal{J}^{(m)}\right]^{-1}
\mathcal{Q}^{(m)},
\end{equation}
where now $\mathcal{J}_{ij}=\partial \mathcal{Q}_i /
\partial \mathcal{W}^j$
and $\mathcal{Q}$ is regarded as a function of $\mathcal{W}$.
This scheme is much faster than the previous one, since
the computation of ${\cal U}_r$ from $\mathcal{W}$
by means of Eq. \eqref{Eq:PrimImpl1}
is now straightforward, and no longer requires
a cumbersome inversion of conserved to primitive fields.
\end{enumerate}
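A schematic sketch of the fixed-point iteration (method 1) is given
below. The routines \texttt{to\_primitives} and \texttt{build\_M\_b},
which recover $\mathcal{V}$ through Eq. \eqref{Eq:PrimImpl1} and
assemble $\mathcal{M}$ and $b$, are placeholders of our own choosing;
for brevity, convergence is monitored here on ${\cal U}_r$ rather than
on $\mathcal{V}$:
\begin{verbatim}
import numpy as np

def fixed_point_step(Ur, Ur0, s_dt, to_primitives, build_M_b,
                     tol=1.0e-11, max_iter=100):
    # Solve U_r = U'_r - s*dt*G(U_r) by the iteration
    # (I + s*dt*M) U_r^(m+1) = U'_r - s*dt*b, with Ur0 = U'_r.
    for m in range(max_iter):
        V = to_primitives(Ur)        # matter fields, Eq. (PrimImpl1)
        M, b = build_M_b(V)
        Ur_new = np.linalg.solve(np.eye(4) + s_dt*M, Ur0 - s_dt*b)
        err = np.max(np.abs(Ur_new - Ur)/(np.abs(Ur_new) + 1.0e-30))
        Ur = Ur_new
        if err < tol:
            break
    return Ur
\end{verbatim}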
For each of these methods,
iterations are carried out until convergence is reached,
as measured by some error function.
For the first method, this function is chosen as the norm of the relative
differences between successive values of $\mathcal{V}$, whereas
for the last two it is defined as the norm of
$\mathcal{Q}^{(m+1)}$.
If $\mathcal{E}\ll E_r$,
the errors of the matter fields can be large even when
radiation fields converge, since Eq.
\eqref{Eq:PrimImpl1}
implies that $\mathcal{E}$ and $E_r$ have the same absolute
error, as well as $\mathbf{m}$ and $\mathbf{F}_r$.
Therefore,
having small relative differences of $E_r$ does not guarantee
the same for $\mathcal{E}$, which can lead to non-negligible
inaccuracies if the second method is used.
Equivalently, the same problem can occur whenever
$\mathcal{E}\gg E_r$
if method 3 is chosen \citep[see also][]{McKinney2014}.
To overcome this issue, we have included in the code the
option of adding to the convergence function the norm of the
relative differences of $\mathcal{E}$ when using the
second method, and of $E_r$ when using the third one.
In the tests we have performed, the fixed-point
method converges rather quickly: the number of iterations
it requires frequently coincides with that required
by the other two methods. This scheme has sufficed to perform
all the tests carried out in this work, and is often the fastest
of the three, being outperformed by method 3 only in a few cases.
\section{Radiation hydrodynamics}\label{S:RadHyd}
\subsection{The equation of radiative transfer}\label{S:RadTransf}
In this section we outline the basic derivation that leads to
the equations of Rad-RMHD, which are described in Section \ref{S:RadRMHD}.
We follow the formalism shown in \citet{Mihalas},
taking as a starting point the radiative transfer equation,
\begin{equation}\label{Eq:TransfEq}
\begin{split}
\frac{\partial I_\nu (t,\mathbf{x},\mathbf{n})}{\partial t} &+
\mathbf{n} \cdot \nabla I_\nu (t,\mathbf{x},\mathbf{n})\\ &=
\eta_\nu(t,\mathbf{x},\mathbf{n}) -
\chi_\nu(t,\mathbf{x},\mathbf{n})\,I_\nu (t,\mathbf{x},\mathbf{n}).
\end{split}
\end{equation}
In this framework, photons are treated as point-like wave packets
that can be instantaneously emitted or absorbed by matter particles.
As outlined in the introduction, this approach
rules out effects due to the wave-like nature of light such
as diffraction, refraction, dispersion, and polarization,
and takes care only of energy and momentum transport
\citep[see e.g.][]{Pomraning1973}.
Macroscopic EM fields, however, are not given such a treatment
in this work; they are instead described separately as
classical fields.
Equation \eqref{Eq:TransfEq} describes the evolution of the
radiation specific intensity $I_\nu$, defined as the amount of
energy per unit area transported in a time interval $dt$ through
an infinitesimal solid angle around the direction given by
$\mathbf{n}$, in a range of frequencies between $\nu$ and
$\nu+d\nu$.
The quantities on the right hand side of this equation
describe the interaction of the gas with the radiation field.
The function $\eta_\nu$, known as emissivity, accounts for
the energy released by the material per unit length,
while the last term, proportional to $I_\nu$,
measures the energy removed
from the radiation field, also per unit length.
The total opacity $\chi_\nu$ comprises absorption and scattering
in the medium:
\begin{equation}
\chi_\nu(t,\mathbf{x},\mathbf{n}) =
\kappa_\nu(t,\mathbf{x},\mathbf{n}) +
\sigma_\nu(t,\mathbf{x},\mathbf{n}),
\end{equation}
where $\kappa_\nu$ and $\sigma_\nu$ are, respectively, the absorption
and scattering frequency-dependent opacities.
Solving Equation \eqref{Eq:TransfEq} in the presented form is
not a trivial task, since the integration must in general be
carried out considering the dependence of $I_\nu$ on the
multiple variables $(t,\mathbf{x},\nu,\mathbf{n})$, while
concurrently taking into account changes in the moving material.
It also requires a precise
knowledge of the functions $\eta_\nu$ and
$\chi_\nu$, including effects such as the anisotropy
caused by the Doppler shift.
Instead of attempting a full solution, we adopt a
frequency-integrated, moment-based approach:
we integrate Equation \eqref{Eq:TransfEq} over the frequency domain
and take convenient angular averages (the moments) that can be
naturally introduced in the equations of hydrodynamics.
This procedure is described in the next section.
\subsection{Energy-momentum conservation and interaction terms}
\label{S:EMCons}
We now explicitly derive the set of conservation laws describing the
coupled evolution of fluid, EM, and radiation fields.
While MHD quantities and radiation fields are calculated in an
Eulerian frame of reference, absorption and scattering coefficients are
best obtained in the fluid's comoving frame (comoving frame henceforth),
following the formalism described in \cite{Mihalas}.
The convenience of this choice lies in the fact that the opacity
coefficients can be easily averaged without taking into account
anisotropies due to a nonzero fluid
velocity, while the hyperbolic form of the
conservation equations is retained.
In this formalism, we split the total
energy-momentum-stress tensor
$T^{\mu\nu}$ into a gas, EM, and a radiative contribution:
%
\begin{equation}\label{Eq:Tmunu}
T^{\mu\nu} = T_g^{\mu\nu} + T_{em}^{\mu\nu} + T_r^{\mu\nu}\,.
\end{equation}
%
The first of these can be written as
\begin{equation}
T_g^{\mu\nu} = \rho h\, u^\mu u^\nu
+ p_g\,\eta^{\mu\nu},
\end{equation}
where $u^\mu$ is the fluid's four-velocity and $\eta^{\mu\nu}$ is
the Minkowski tensor, while $\rho$, $h$, and $p_g$ are,
respectively, the fluid's matter density, specific enthalpy,
and pressure, measured in the comoving frame
(our units are chosen so that $c=1$).
This expression is valid as long as material
particles are in \emph{local
thermal equilibrium} (LTE henceforth), which is one of the assumptions
of the hydrodynamical treatment.
The electromagnetic contribution is given by
the EM stress-energy tensor:
\begin{equation}
T_{em}^{\mu\nu} = F^{\mu\alpha} F^\nu_\alpha
- \frac{1}{4} \eta^{\mu\nu}F_{\alpha\beta}
F^{\alpha\beta},
\end{equation}
where the components of the field tensor $F^{\mu\nu}$
are given by
\begin{equation}
F^{\mu\nu}=\begin{pmatrix}
0 & -E_1 & -E_2 & - E_3 \\
E_1 & 0 & -B_3 & B_2 \\
E_2 & B_3 & 0 & -B_1 \\
E_3 & -B_2 & B_1 & 0
\end{pmatrix}\,,
\end{equation}
where $E_i$ and $B_i$ are, respectively, the components of the
electric and magnetic fields.
Lastly, $T_r^{\mu\nu}$ can be written in terms of the specific
intensity $I_\nu$, as
%
\begin{equation}\label{Eq:Tr}
T_r^{\alpha\beta} = \int_0^\infty\mathrm{d}\nu
\oint \mathrm{d}\Omega\,\,
I_\nu(t,\mathbf{x},\mathbf{n})\, n^\alpha n^\beta ,
\end{equation}
%
where $n^\mu \equiv (1,\mathbf{n})$ denotes
the direction of propagation, $\mathrm{d}\nu$ the differential
frequency, and $\mathrm{d}\Omega$ the differential solid angle
around $\mathbf{n}$. This expression, by definition
covariant \citep[see e.g.][]{Mihalas}, can be shortened as
\begin{equation}
T_r= \left( \begin{array}{cc}
E_r & F_r^i\\
F_r^j & P_r^{ij}\\
\end{array} \right),
\end{equation}
%
where
%
\begin{eqnarray}
\label{Eq:RadMoments}
E_r &\displaystyle = \int_0^\infty\mathrm{d}\nu
\oint \mathrm{d}\Omega\,\,
I_\nu(t,\mathbf{x},\mathbf{n}) \\
\label{Eq:RadMomentsF}
F_r^i&\displaystyle = \int_0^\infty\mathrm{d}\nu
\oint \mathrm{d}\Omega\,\,
I_\nu(t,\mathbf{x},\mathbf{n})\, n^i \\
P_r^{ij} &\displaystyle = \int_0^\infty\mathrm{d}\nu
\oint \mathrm{d}\Omega\,\,
I_\nu(t,\mathbf{x},\mathbf{n})\, n^i\,n^j
\end{eqnarray}
are the first three moments of the radiation field, namely, the
radiation energy density, the flux, and the pressure tensor. In
our scheme, we follow separately the evolution of $E_r$ and
$F_r^i$,
and define the pressure tensor in terms of these fields by means
of a closure relation, as it is described in Section \ref{S:M1}.
Following these definitions, and imposing conservation of
mass, total energy, and momentum, we have
\begin{equation}
\nabla_\mu(\rho u^\mu)=0
\end{equation}
and
\begin{equation}\label{Eq:Tmunumu}
\nabla_\mu T^{\mu\nu}=0.
\end{equation}
From equations \eqref{Eq:Tmunu} and \eqref{Eq:Tmunumu},
we immediately obtain
%
\begin{equation}\label{Eq:TrConsCov}
\nabla_\mu \left( T^{\mu\nu}_{g} + T^{\mu\nu}_{em} \right)
= -\nabla_\mu T^{\mu\nu}_{r} \equiv G^\nu
\end{equation}
where $G^\mu$, the radiation four-force density,
is computed by integrating Eq. \eqref{Eq:TransfEq}
over the frequency and the solid angle, as
\begin{equation}\label{Eq:Gmu}
G^\mu = \int_0^\infty\mathrm{d}\nu
\oint \mathrm{d}\Omega\,\,
\left(
\chi_\nu\, I_\nu
-\eta_\nu
\right)
\, n^\mu .
\end{equation}
The equations of Rad-RMHD can then be derived from Eq.
\eqref{Eq:TrConsCov}, where the term $G^\mu$ accounts for
the interaction between radiation and matter.
The previous expression can be simplified in the comoving frame
provided some conditions are met.
Firstly, we assume coherent and isotropic scattering and
calculate the total comoving emissivity as
%
\begin{equation}
\eta_\nu(t,\mathbf{x},\mathbf{n}) =
\kappa_\nu B_\nu(T) + \sigma_\nu J_\nu,
\end{equation}
%
where $B_\nu(T)$ is the Planck's spectral radiance
at a temperature $T$, while $J_\nu$ is the angle-averaged
value of $I_\nu$.
The temperature can be determined from the ideal gas law
\begin{equation}
T = \frac{\mu\, m_p\, p_g}{k_B\,\rho},
\end{equation}
where $\mu$ is the mean molecular weight,
$m_p$ is the proton mass, and $k_B$ the Boltzmann constant.
We can then insert these expressions in Eq. \eqref{Eq:Gmu} and
replace the opacities by their corresponding frequency-averaged values,
such as the Planck and Rosseland means
\citep[see e.g.][]{Mihalas,Skinner2013}.
In this way, we obtain the following comoving-frame source terms
%
\begin{equation}\label{Eq:Gc}
\tilde{G}^\mu = \rho \left[
\kappa \tilde{E}_{r} - 4\pi\kappa B(T)
,\, \chi \tilde{\mathbf{F}}_{r} \right]
\end{equation}
where $B(T)=\sigma_{\textsc{SB}}T^4/\pi c$,
$\sigma_{\textsc{SB}}$
is the Stefan-Boltzmann constant, and $\chi$, $\kappa$,
and $\sigma$ are the mentioned frequency-averaged
opacities, per unit density.
In the code, these can either be set as constants,
or defined by the user as functions of any set of fields
(for instance, $\rho$ and $T$).
From now on and except for the opacity coefficients, we label
with a tilde sign quantities in the comoving frame.
Finally, $G^\mu$ can be obtained in the Eulerian frame
by means of a Lorentz boost applied to Equation
\eqref{Eq:Gc} \citep[see e.g.][]{McKinney2014}:
\begin{equation}\label{Eq:GmuExpl}
\begin{split}
G^\mu= &-\kappa\rho \left(T_r^{\mu\alpha}\,u_\alpha
+ 4\pi B(T)\, u^\mu\right) \\
& -\sigma\rho\left( T_r^{\mu\alpha}\,u_\alpha + T_r^{\alpha\beta}\,
u_\alpha u_\beta u^\mu \right)\,.
\end{split}
\end{equation}
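As an illustration, Eq. \eqref{Eq:GmuExpl} can be evaluated with a
few contractions; the following Python sketch assumes a flat metric
with signature $(-,+,+,+)$ and lab-frame inputs (names are ours):
\begin{verbatim}
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])     # Minkowski metric

def rad_four_force(Tr, u, rho, kappa, sigma, B):
    # G^mu of Eq. (GmuExpl); Tr is the 4x4 radiation tensor,
    # u the fluid four-velocity, B = B(T) the Planck function.
    u_lo = eta @ u                        # u_alpha
    Tu   = Tr @ u_lo                      # T_r^{mu alpha} u_alpha
    uTu  = u_lo @ Tu                      # T_r^{alpha beta} u_alpha u_beta
    return (-kappa*rho*(Tu + 4.0*np.pi*B*u)
            - sigma*rho*(Tu + uTu*u))
\end{verbatim}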
\subsection{The equations of Rad-RMHD}
\label{S:RadRMHD}
Assuming ideal MHD for the interaction between matter and EM fields,
we obtain the equations of Rad-RMHD in quasi-conservative form:
%
\begin{eqnarray}
\label{Eq:RadRMHD}
\frac{\partial \left(\rho\gamma\right)}{\partial t} +
\nabla \cdot \left(\rho\gamma \mathbf{v}\right) &= 0 \\
\frac{\partial \mathcal{E}}{\partial t} +
\nabla \cdot \left(
\mathbf{m} - \rho\gamma\mathbf{v} \right) &= G^0 \\
\frac{\partial \mathbf{m}}{\partial t} +
\nabla \cdot \left(
\rho h \gamma^2 \mathbf{v}\mathbf{v}-\mathbf{B}\mathbf{B}
-\mathbf{E}\mathbf{E} \right) + \nabla p &= \mathbf{G} \\
\label{Eq:RadRMHD1a}
\frac{\partial \mathbf{B}}{\partial t} +
\nabla \times \mathbf{E} &= 0 \\ \label{Eq:RadRMHD1}
\frac{\partial E_r}{\partial t} +
\nabla \cdot \mathbf{F}_r &= - G^0 \\
\frac{\partial \mathbf{F}_r}{\partial t} +
\nabla \cdot P_r &= - \mathbf{G} ,
\label{Eq:RadRMHD2}
\end{eqnarray}
%
where $\mathbf{v}$ is the fluid's velocity,
$\gamma$ is the Lorentz factor,
$\mathbf{B}$ the mean magnetic field, and
$\mathbf{E}=-\mathbf{v}\times\mathbf{B}$ the electric field.
In addition, we have introduced the quantities
%
\begin{equation}\label{Eq:prstot}
p = p_g + \frac{\mathbf{E}^2+\mathbf{B}^2}{2},
\end{equation}
\begin{equation}
\mathbf{m} = \rho h \gamma^2 \mathbf{v} + \mathbf{E}\times\mathbf{B} ,
\end{equation}
\begin{equation}
\mathcal{E} = \rho h \gamma^2 - p_g - \rho\gamma +
\frac{\mathbf{E}^2+\mathbf{B}^2}{2} ,
\end{equation}
which account, respectively, for the total pressure,
momentum density, and
energy density of matter and EM fields.
The system \eqref{Eq:RadRMHD}-\eqref{Eq:RadRMHD2}
is subject to the constraint
$\nabla\cdot\mathbf{B}=0$, and
the non-magnetic case (Rad-RHD) is recovered by taking the
limit $\mathbf{B}\rightarrow \mathbf{0}$ in the previous expressions.
In our current scheme, Equations \eqref{Eq:RadRMHD} to
\eqref{Eq:RadRMHD2} can be solved in Cartesian, cylindrical or
spherical coordinates.
\subsection{Closure relations}\label{S:M1}
An additional set of relations is required in order to close the
system of Equations \eqref{Eq:RadRMHD}--\eqref{Eq:RadRMHD2}.
An equation of state (EoS) provides closure between thermodynamical
quantities and it can be specified as the constant-$\Gamma$ law
%
\begin{equation}\label{Eq:IdealEoS}
h = 1 + \frac{\Gamma}{\Gamma-1}\,\Theta,
\end{equation}
%
or the Taub-Mathews equation, introduced by \cite{Mathews1971},
%
\begin{equation}\label{Eq:TMEoS}
h = \frac{5}{2}\Theta + \sqrt{1+\frac{9}{4}\,\Theta^2},
\end{equation}
%
where $\Theta=p_g/\rho$.
The properties of these equations are known and can be found, e.g., in
\cite{MignoneMcKinney}.
A further closure relation is needed for the radiation
fields, i.e., an equation relating $P_r^{ij}$ to $E_r$
and $\mathbf{F}_r$.
We have chosen to implement the M1 closure, proposed by \cite{Levermore1984},
which makes it possible to handle both the optically thick and optically thin regimes.
In this scheme, it is assumed that
$I_\nu$ is isotropic in some inertial frame, where the radiation
stress-energy tensor takes the form
$T'^{\mu\nu}_r=\diag(E_r',E_r'/3,E_r'/3,E_r'/3)$.
This leads to the following relations, which hold in any frame:
%
\begin{equation}\label{Eq:M11}
P_r^{ij}=D^{ij}E_r,
\end{equation}
%
\begin{equation}
D^{ij}=\frac{1-\xi}{2}\,\delta^{ij}+
\frac{3\xi-1}{2}n^in^j,
\end{equation}
%
\begin{equation}\label{Eq:M13}
\xi=\frac{3+4f^2}{5+2\sqrt{4-3f^2}},
\end{equation}
%
where now $\bm{n}=\mathbf{F}_r/\vert\vert\mathbf{F}_r\vert\vert$
and $f=\vert\vert\mathbf{F}_r\vert\vert/E_r$, while
$\delta^{ij}$ is the Kronecker delta.
These relations are well behaved, as
Equations \eqref{Eq:RadMoments} and \eqref{Eq:RadMomentsF} provide
an upper limit to the flux, namely
%
\begin{equation}\label{Eq:FsmallerE}
\vert\vert\mathbf{F}_r\vert\vert \leq E_r,
\end{equation}
and therefore $0\leq f \leq 1$.
In our scheme, we apply Equations \eqref{Eq:M11}-\eqref{Eq:M13}
in the laboratory frame.
In the diffusion limit, namely,
if $\vert\vert\mathbf{F}_r\vert\vert \ll E_r$,
this closure leads to $P_r^{ij}=\left(\delta^{ij}/3\right)E_r$,
which reproduces an isotropic specific intensity known as
Eddington limit.
Likewise, in the free-streaming limit given by
$\vert\vert\mathbf{F}_r\vert\vert \rightarrow E_r$,
the pressure tensor tends to
$P_r^{ij}=E_r\,n^in^j$, which corresponds
to a delta-like $I_\nu$ pointing in the same direction and
orientation as $\mathbf{F}_r$.
We point out that, even though
both the free-streaming and the diffusion limits
are reproduced correctly, the M1 closure may fail in some cases,
since it implicitly assumes
that the intensity $I_\nu$ is axially
symmetric in every reference frame with respect to the direction
of $\mathbf{F}_r$.
This is not the case, for example, when two or more
radiation sources are involved,
in which case direct employment of the
M1 closure may become inaccurate, leading to instabilities
\citep[see e.g.][]{Sadowski2013, Skinner2013}.
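For reference, the closure of Equations \eqref{Eq:M11}-\eqref{Eq:M13}
reduces to a few lines of code; the following is a minimal Python
sketch (names are ours):
\begin{verbatim}
import numpy as np

def m1_pressure_tensor(Er, Fr):
    # P_r^{ij} from the M1 closure, Eqs. (M11)-(M13).
    Fmod = np.linalg.norm(Fr)
    f = Fmod/Er                           # reduced flux, 0 <= f <= 1
    xi = (3.0 + 4.0*f**2)/(5.0 + 2.0*np.sqrt(4.0 - 3.0*f**2))
    n = Fr/Fmod if Fmod > 0.0 else np.zeros(3)
    D = 0.5*(1.0 - xi)*np.eye(3) + 0.5*(3.0*xi - 1.0)*np.outer(n, n)
    return D*Er   # f=0 gives (1/3) Er delta_ij; f=1 gives Er n_i n_j
\end{verbatim}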
\section{Summary}\label{S:Summary}
We have presented a relativistic
radiation transfer code, designed to function
within the \sftw{PLUTO} code.
Our implementation can be used together with the relativistic HD
and MHD modules of \sftw{PLUTO} to solve the equations of radiation
transfer under the gray approximation.
Integration is achieved through one of two possible IMEX schemes,
in which source terms due to radiation-matter interaction
are integrated implicitly and flux divergences, as well as every
other source term, are integrated explicitly.
The transition between optically thick
and thin regimes is controlled by imposing the M1 closure on the
radiation fields, which makes it possible to handle both the diffusion and
free-streaming limits.
Opacity coefficients can be arbitrarily defined, depending on the
problem at hand, as functions of the primitive variables.
In our implementation, a novel HLLC Riemann solver for radiation
transport has been introduced.
The new solver is designed to improve the accuracy of the solutions
with respect to its predecessors (such as HLL)
in optically thin regions of space.
The module has been designed to function with either Cartesian,
cylindrical or spherical coordinates in multiple spatial dimensions
and it is suitable for either serial or parallel computations.
Extension to adaptive grids, based on the standard implementation of
the \sftw{CHOMBO} library within the code, has also been presented.
We have performed a series of numerical benchmarks
to assess the module performance under different configurations,
including handling of radiation transport,
absorption, and emission in systems with different characteristics.
Our results demonstrate excellent stability
properties under the chosen parameters, in both the free-streaming
and diffusion limits.
In the latter case, numerical diffusion is
successfully controlled by limiting the signal speeds of the
radiation transport equations whenever the material is opaque
across single grid cells.
Overall, the transition between both regimes has been properly
captured by the code in all the considered cases.
For optically thin transport, our HLLC solver
produces more accurate solutions when compared to HLL.
Regarding the implemented IMEX schemes, we have seen a similar
performance of both IMEX-SSP2(2,2,2) and IMEX1 except in tests where
the order of magnitude of the radiation flux is much smaller
than both its source terms and the divergence of its own flux,
in which IMEX1 seems to have better stability and
positivity-preserving properties.
When AMR is used, the obtained solutions
exhibit a similar overall behavior to those computed using a fixed
grid. Good agreement is also shown with standard tests whenever the
comparison is possible. Furthermore, parallel performance tests
show favorable scaling properties which are comparable to those of
the RHD module of \sftw{PLUTO}.
The code presented in this work will be made publicly available
as part of future versions of \sftw{PLUTO}, which can currently
be downloaded from \url{http://plutocode.ph.unito.it/}.
\section{Numerical Benchmarks}\label{S:Tests}
We show in this section a series of numerical benchmarks to verify
the code's performance, as well as the correctness of the implementation
under different physical regimes and choices of coordinates.
Unless otherwise stated, we employ the HLLC
solver introduced in Section \ref{S:HLLC}, the
domain is discretized using a fixed uniform grid and
outflow boundary conditions are imposed for all the fields.
Magnetic fields are neglected in all the considered problems,
except in Section \ref{S:TestRMHD}.
Furthermore, all the
tests have been run with both the IMEX-SSP2(2,2,2) and IMEX1 methods,
obtaining equivalent results unless indicated otherwise.
\subsection{Riemann Problem for optically-thin radiation transport}
\label{S:ShockThin}
We first validate the implemented radiation transport schemes when
any interaction with matter is neglected.
To this end, we have run several
one-dimensional Riemann problems setting all the interaction terms to
zero and focusing only on the evolution of the radiation
fields. The initial setup of these consists of
two regions of uniform $E_r$ and $\mathbf{F}_r$, separated by a
discontinuity at $x=0$.
The full domain is defined as the interval $[-20,20]$.
We show here two such tests, exploring the case
$\vert\vert \mathbf{F}_r \vert\vert < E_r$ (test 1) and the free-streaming
limit, $\vert\vert \mathbf{F}_r \vert\vert \simeq E_r$ (test 2).
\begin{figure}[t!]
\centering
\includegraphics[width=0.47\textwidth]{f1}
\caption{ Radiation fields in the optically thin Riemann
test 1 at $t=20$. Two solutions obtained with
the HLL solver (solid blue line) and the HLLC solver
(solid orange line), computed
using $2^8$ zones in both cases,
are compared to a reference solution
obtained with $2^{14}$ zones.
These show a left shock at $x\approx-11$, a right
expansion wave at $x\approx 11$, and a central contact
discontinuity at $x\approx -1$, along which the fields $\Pi$ and
$\beta_x$ are continuous.}
\label{fig:Paper_hll_hllc_test1}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=0.47\textwidth]{f2}
\caption{ Same as Fig. \ref{fig:Paper_hll_hllc_test1},
for the optically thin Riemann test 2.
The solutions exhibit a leftward-moving shock,
a contact discontinuity and a rightward-moving shock,
at $x\approx -2.2$, $4.5$ and $7$ respectively. }
\label{fig:Paper_hll_hllc_test2}
\end{figure}
In the first test, initial states are assigned at $t=0$ as
\begin{equation}
(E_r,\, F_r^x,\, F_r^y)_{L,R} = \left\{\begin{array}{ll}
\left(1,\, 0,\, \frac{1}{2}\right) & \;{\rm for}\quad x < 0 \\ \noalign{\medskip}
\left(1,\, 0,\, 0\right) & \;{\rm for}\quad x > 0
\end{array}\right.
\end{equation}
The solution, plotted in Fig. \ref{fig:Paper_hll_hllc_test1} at $t=20$
with a resolution of $2^{14}$ zones (solid black line),
shows a three-wave pattern, as expected from the eigenstructure
of the radiation transport
equations (see Section \ref{S:HLLC} and Appendix \ref{S:AppLambdaS}).
The left and right outermost waves are, respectively, a left-facing shock
and a right-going expansion wave, while the middle wave
is the analog of a contact wave.
The fields $\Pi$ and $\beta_x$, defined in Section \ref{S:HLLC},
are constant across the contact mode.
In the same figure, we show the solutions obtained with the HLL and HLLC solvers
at a resolution of 256 zones using a $1^{\rm st}$-order
reconstruction scheme \citep[see][]{MignoneBodo}.
As expected, the HLLC solver yields
a sharper resolution of the middle wave.
For the second test, the initial condition is defined as
\begin{equation}
(E_r,\, F_r^x,\, F_r^y)_{L,R} = \left\{\begin{array}{ll}
\left(\frac{1}{10},\, \frac{1}{10},\, 0\right) & \;{\rm for}\quad x < 0 \\ \noalign{\medskip}
\left( 1 ,\, 0,\, 1 \right) & \;{\rm for}\quad x > 0
\end{array}\right.
\end{equation}
Results obtained with the $1^{\rm st}$-order scheme and the HLL and HLLC solvers
are plotted in Fig. \ref{fig:Paper_hll_hllc_test2} together with the reference
solution (solid black line) at $t=20$.
As in the previous case, a three-wave pattern emerges,
formed by a left-going shock, a right-going shock,
and a middle contact wave.
It can also be seen that $\Pi$ and $\beta_x$
are again continuous across the contact wave.
Differences between HLLC and HLL are less pronounced than in the previous case,
with HLL (HLLC) overestimating the left-going shock position
by 50\% (30\%).
For both tests, we have conducted a resolution study covering the
range $[2^{6},2^{10}]$ using $1^{\rm st}$- as well as $2^{\rm nd}$-order
reconstructions making use of the second-order harmonic mean limiter by
\cite{vanLeer1974}.
In Figure \ref{fig:Paper_err_hll_hllc}, we plot the error in
L$_1$-norm of $E_r$ (computed with respect to the reference solution)
as functions of the resolution.
The Courant number is $C_a = 0.4$ for both cases.
Overall, HLLC yields smaller errors than HLL,
as expected.
This discrepancy is more evident in the $1^{\rm st}$-order case and is
mitigated when a $2^{\rm nd}$-order interpolant is used
\citep[a similar behavior is also found in][]{MignoneBodo}.
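As a side note, the L$_1$ error against the $2^{14}$-zone reference can be computed, for instance, by block-averaging the reference solution onto each coarse grid. The following minimal Python sketch illustrates this; the averaging convention is an assumption of the sketch, not necessarily the one adopted by the code:
\begin{verbatim}
import numpy as np

def l1_error(E_coarse, E_ref, domain_length=40.0):
    # L1 error of a coarse solution against a high-resolution
    # reference, block-averaged onto the coarse grid.
    f = E_ref.size // E_coarse.size     # refinement factor (assumed integer)
    E_ref_avg = E_ref.reshape(-1, f).mean(axis=1)
    dx = domain_length / E_coarse.size  # the domain is [-20, 20]
    return np.sum(np.abs(E_coarse - E_ref_avg)) * dx
\end{verbatim}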
\begin{figure}[t!]
\centering
\includegraphics[width=0.47\textwidth]{f3}
\caption{ L$_1$ error of $E_r$ in
the optically thin Riemann tests
1 and 2, computed in each case
with respect to a reference solution
obtained using $2^{14}$ zones. The errors are plotted
for several resolutions as a function of $1/dx$, where
$dx$ is the cell's width in each case. Different results
are shown using first-order (upper panels) and
second-order (lower panels) reconstruction schemes.
}
\label{fig:Paper_err_hll_hllc}
\end{figure}
\subsection{Free-streaming beam}\label{S:Beams}
A useful test to investigate the code's accuracy for multidimensional
transport is the propagation of a radiation beam oblique
to the grid \citep[see e.g.][]{Richling2001,Gonzalez2007}.
This problem is also useful to quantify the numerical diffusion
that may appear when fluxes are not aligned with the axes.
We again neglect the gas-radiation interaction terms,
and follow solely the evolution of the radiation fields.
The initial setup consists of a square Cartesian
grid of side $L=5$ cm, where the radiation energy density is set to
$E_{r,0}=10^{4}$ \mbox{erg cm$^{-3}$}.
At the $x=0$ boundary, a radiation beam is injected by fixing
$E_r=10^8\,E_{r,0}$ and $\mathbf{F}_r=(1/\sqrt{2},1/\sqrt{2})\, E_r$
for $y\in [0.30,0.44]$ cm.
Thus, the injected beam satisfies the equality
$\vert\vert\mathbf{F}_r\vert\vert=E_r$, which
corresponds to the free-streaming limit.
Outflow conditions are imposed on the remaining boundaries.
Again we compare the performance of the HLL and HLLC
solvers, using the fourth-order linear slopes
by \cite{Miller2001} and resolutions of
$150\times150$ and
$300\times300$ zones.
The Courant number is $C_a=0.4$.
The energy density distribution obtained with the HLLC solver
at the largest resolution
is shown in Fig. \ref{fig:BeamTest} at $t=5\times10^{-10}$ s.
In every case, a beam forms and reaches
the upper boundary between $x=4$ cm and $x=5$ cm,
after crossing a distance equivalent to $\sim 64$
times its initial width.
Since no interaction with matter is considered,
photons should be transported in straight lines.
As already mentioned, the free-streaming limit corresponds
to a delta-like specific intensity parallel to $\mathbf{F}_r$.
Hence, photons are injected in only one direction, and
the beam's structure should be maintained as it crosses the
computational domain.
However, in the simulations, the beam broadens due to
numerical diffusion before reaching the upper boundary.
For this particular test, due to its strong discontinuities,
we have seen that this effect is enhanced by the flattening
applied during the reconstruction step
in order to satisfy Equation \eqref{Eq:FsmallerE},
which is necessary for stability reasons.
In order to quantify this effect and its dependence
on the numerical resolution, we have computed several
time-averaged $E_r(y)$ profiles along vertical
cuts at different $x$ values.
As an indicator of the beam's width, we have computed
for each $x$
the standard deviation of these profiles as
\begin{equation}
\sigma_y= \sqrt{\int_0^L \left[
y - \overline{y}
\right]^2 \varphi(y)\, \mathrm{d}y}\,,
\end{equation}
with
\begin{equation}
\overline{y} =\int_0^L \varphi(y)\, y \,\mathrm{d}y \,,
\end{equation}
where the weighting function $\varphi(y)$ is defined as
\begin{equation}
\varphi(y) = \overline{E}_r(y)\bigg/
\int_0^L \overline{E}_r(y) \, \mathrm{d}y\,,
\end{equation}
where $\overline{E}_r$ is the time-averaged value of $E_r$.
We have then divided the resulting values of $\sigma_y$ by
$\sigma_{y0}\equiv\sigma_y(x=0)$, in order to show the relative
growth of the dispersion.
The resulting values of $\sigma_y/\sigma_{y0}$ are shown in Fig.
\ref{fig:BeamTest}, where it can be seen that the beam's dispersion
grows with $x$.
The difference between $\sigma_y/\sigma_{y0}$ and its ideal value
($\sigma_y/\sigma_{y0}\equiv 1$) is reduced
by a factor between 2 and 2.5 when the highest resolution is used.
In the same figure, it can be seen that the dispersion is only
slightly reduced when the HLLC solver is used instead of HLL.
A similar plot of $\sigma_y/\sigma_{y0}$ is obtained with the
second-order limiter by \cite{vanLeer1974}, with values
of the relative dispersion that are roughly $30\%$ to $40\%$ larger,
showing, as in Section \ref{S:ShockThin}, that the accuracy of these
methods depends not only on the chosen Riemann solver but is also
extremely sensitive to the chosen reconstruction scheme.
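As an illustration, the width diagnostic defined above can be evaluated in a few lines of Python. The profile used below is a hypothetical Gaussian cross-section, for which $\sigma_y$ recovers the Gaussian width:
\begin{verbatim}
import numpy as np

def beam_dispersion(y, Er_avg):
    # sigma_y of a time-averaged E_r(y) profile (see the equations above)
    phi  = Er_avg / np.trapz(Er_avg, y)   # normalized weighting function
    ybar = np.trapz(phi * y, y)           # weighted mean position
    return np.sqrt(np.trapz((y - ybar)**2 * phi, y))

y  = np.linspace(0.0, 5.0, 2000)
Er = np.exp(-0.5 * ((y - 0.37) / 0.05)**2)  # hypothetical beam profile
print(beam_dispersion(y, Er))               # ~0.05, the Gaussian width
\end{verbatim}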
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{f4}
\caption{Free-streaming beam test.
A radiation beam is introduced in a 2D grid
from its lower-left boundary, at $45\degree$
with respect to the
coordinate axes. The values of $\log_{10} E_r$ obtained with
the HLLC solver using a
resolution of $300\times300$ zones are plotted
as a function of $(x,y)$ at $t=5\times10^{-10}$ s (color scale).
The relative dispersion $\sigma_y/\sigma_{y0}$ along
the $y$ direction is shown in the
lower-right corner as a function of $x$ (cm), for the
selected resolutions of $150\times150$ (black lines)
and $300\times300$ (blue lines). In both cases, solid
and dashed lines correspond respectively to the results
obtained with the HLL and the HLLC solver.
}
\label{fig:BeamTest}
\end{figure}
\subsection{Radiation-matter coupling}\label{S:RadMatCoup}
In order to verify the correct integration
of the interaction terms, we have
run a test proposed by \citet{TurnerStone2001},
in which matter and radiation approach thermal equilibrium
in a homogeneous system. This is achieved by solving the
Rad-RHD equations in a single-cell grid, thus removing any
spatial dependence.
In this configuration, due to the form of Equations
\eqref{Eq:RadRMHD}-\eqref{Eq:RadRMHD2}, all the fields but
the energy densities of both radiation and matter remain constant
for $t>0$.
Using conservation of total energy, the resulting equation for
the evolution of the gas energy density (in cgs units)
is
\begin{equation}\label{Eq:RadMatCoup}
\frac{1}{c}\frac{\partial \mathcal{E}}{\partial t} = \rho \kappa \left(
E_r - 4\pi B\left( T \right)
\right).
\end{equation}
This can be simplified if the chosen initial conditions are
such that $E_r$ is constant throughout the system's evolution. In
that case, Equation \eqref{Eq:RadMatCoup} can be solved
analytically, leading to an implicit relation between $\mathcal{E}$
and $t$ that can be inverted using standard methods.
We have run this test for two different initial conditions,
using in both cases $\rho=10^{-7}$ \mbox{g cm$^{-3}$},
$E_r= 10^{12}$ \mbox{erg cm$^{-3}$}, opacities $\kappa=0.4$
\mbox{cm$^{2}$ g$^{-1}$} and $\sigma = 0$,
and a mean molecular weight $\mu=0.6$.
A constant-gamma EoS has been assumed, with $\Gamma=5/3$.
We have chosen the initial gas energy density
to be either $\mathcal{E}=10^{10}$
\mbox{erg cm$^{-3}$} or $\mathcal{E}=10^2$ \mbox{erg cm$^{-3}$},
which are, respectively, above and below the final equilibrium
value of around $7\times 10^7$ \mbox{erg cm$^{-3}$}.
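For reference, this test can be reproduced in a few lines of Python by integrating the stiff rate equation with an off-the-shelf solver; the sketch below is our own minimal implementation (not the module's), identifying $4\pi B(T)$ with $a_R T^4$ and using the ideal-gas relation for $T(\mathcal{E})$. Both initial states relax to the quoted equilibrium value:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

c, kB, mH = 2.998e10, 1.381e-16, 1.673e-24   # cgs constants
aR = 4.0 * 5.670e-5 / c                      # radiation constant a_R
rho, kappa, mu, Gamma, Er = 1e-7, 0.4, 0.6, 5.0/3.0, 1e12

def T_gas(E):   # ideal-gas temperature from the gas energy density
    return (Gamma - 1.0) * E * mu * mH / (rho * kB)

def rhs(t, E):  # dE/dt = c rho kappa (E_r - a_R T^4)
    return [c * rho * kappa * (Er - aR * T_gas(E[0])**4)]

for E0 in (1e10, 1e2):   # initial states above / below equilibrium
    sol = solve_ivp(rhs, (1e-16, 1e-6), [E0], method="LSODA",
                    rtol=1e-8, atol=1.0)
    print(E0, sol.y[0, -1])   # both settle near ~7e7 erg/cm^3
\end{verbatim}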
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{f5}
\caption{Radiation-matter coupling test. The gas energy density
$\mathcal{E}$ is plotted as a function of time
for the two chosen initial conditions,
until thermal equilibrium is reached.
The numerical values (empty squares)
match the analytical solutions (solid lines)
for both initial conditions.}
\label{fig:CoupRadMatter}
\end{figure}
The gas energy density is plotted as a function
of time for both conditions in Fig. \ref{fig:CoupRadMatter}.
Simulations are started from $t=10^{-10}$ s, with an initial
time step $\Delta t = 10^{-10}$ s. An additional run between
$t=10^{-16}$ s and $10^{-10}$ s is done for each initial
condition with an initial $\Delta t = 10^{-16}$ s,
in order to show the evolution in the initial stage.
In every case, the gas energy density goes through an initial
constant phase that lasts until $t\sim 10^{-14}$ s, after which
it varies towards the equilibrium value. Equilibrium
occurs when the condition $E_r=4\pi B(T)$ is reached
(see Eq. \eqref{Eq:RadMatCoup}), i.e.,
when the power emitted by the gas equals its energy absorption
rate.
This happens around $t\approx 10^{-7}$ s
for both initial conditions.
As shown in Fig. \ref{fig:CoupRadMatter}, the numerical solutions
match the analytical ones in the considered time range.
\subsection{Shock waves}\label{S:Shocks}
\begin{deluxetable*}{c|cccccccccc}
\tablecaption{Parameters used in the shock tests, in code units.
The subscripts $R$ and $L$ correspond, respectively, to the
initial conditions for $x>0$ and $x<0$. \label{Table:ShockParams} }
\tablewidth{0pt}
\tablehead{
\colhead{Test} & \colhead{$\rho_L$} & \colhead{$p_{g,L}$} &
\colhead{$u^x_L$} &\colhead{$\tilde{E}_{r,L}$}
& \colhead{$\rho_R$} & \colhead{$p_{g,R}$} &
\colhead{$u^x_R$} &\colhead{$\tilde{E}_{r,R}$}
&\colhead{$\Gamma$} &\colhead{$\kappa$}
}
\startdata
1 & $1.0$ & $3.0\times 10^{-5}$ & $0.015$ & $1.0\times 10^{-8}$ &
$2.4$ & $1.61\times 10^{-4}$ & $6.25\times 10^{-3}$
& $2.51\times 10^{-7}$ & $5/3$ & $0.4$ \\
2 & $1.0$ & $4.0\times 10^{-3}$ & $0.25$ & $2.0\times 10^{-5}$ &
$3.11$ & $0.04512$ & $0.0804$
& $3.46\times 10^{-3}$ & $5/3$ & $0.2$ \\
3 &$1.0$ & $60.0$ & $10.0$ & $2.0$ &
$8.0$ & $2.34\times 10^{3}$ & $1.25$
& $1.14\times 10^{3}$ & $2$ & $0.3$ \\
4 & $1.0$ & $6.0\times 10^{-3}$ & $0.69$ & $0.18$ &
$3.65$ & $3.59\times 10^{-2}$ & $0.189$
& $1.3$ & $5/3$ & $0.08$ \\
\enddata
\end{deluxetable*}
We now study the code's ability to reproduce general shock-like
solutions without neglecting the interaction terms.
To this purpose, we have reproduced a series of tests
proposed by \cite{Farris2008}.
As in Section \ref{S:ShockThin}, we place a single initial
discontinuity at the center of the one-dimensional domain
defined by the interval \mbox{$[-20,20]$}.
At $t=0$, both matter and radiation fields are constant on
each side of the domain, and satisfy the condition
for LTE between matter and radiation, that is,
\mbox{$\tilde{E}_r=4\pi B(T)$}.
Additionally, the fluxes on each side obey
$\tilde{F}_r^x = 0.01 \times \tilde{E}_r$.
A constant-gamma EoS is assumed, scattering
opacity is neglected, and a Courant factor \mbox{$C_a=0.25$}
is used.
Initial conditions are chosen in such a way that the system evolves
until it reaches a final stationary state.
Neglecting time derivatives,
Equations \eqref{Eq:RadRMHD}-\eqref{Eq:RadRMHD2} lead to
%
\begin{align} \label{Eq:Shock1}
\partial_x \left(\rho u^x\right) &= 0 \\ \label{Eq:Shock2}
\partial_x \left(m_{tot}^x\right) &= 0 \\ \label{Eq:Shock3}
\partial_x \left( m^x v^x + p_g + P_r^{xx} \right) &= 0 \\
\label{Eq:Shock4}
\partial_x \left(F_r^x \right)&= -G^0 \\ \label{Eq:Shock5}
\partial_x \left(P_r^{xx}\right) &= -G^x.
\end{align}
A time-independent solution demands that the quantities under the derivatives
in Equations \eqref{Eq:Shock1}--\eqref{Eq:Shock3} remain constant,
a condition that must also be satisfied by the initial states.
In addition, Equations \eqref{Eq:Shock4} and \eqref{Eq:Shock5} show
that the final $F_r^x$ and $P_r^{xx}$ must be continuous,
although their derivatives can be discontinuous.
This does not necessarily imply that the final $E_r$
profile must also be continuous, since any value
of $P_r^{xx}(E_r,F^x_r)$ can correspond to up to two
different $E_r$ values for fixed $F^x_r$.
However, in the particular case where \mbox{$F^x_r<P_r^{xx}$},
it can be shown using Eqs. \eqref{Eq:M11}-\eqref{Eq:M13}
that the inversion of $P_r^{xx}(E_r,F^x_r)$
in terms of $E_r$ leads to unique solutions,
and thus $E_r$ must be continuous.
In the same way, we have verified that this condition
is equivalent to \mbox{$F^x_r/E_r<3/7$}.
We have performed four tests for different physical regimes.
All the initial values are chosen to coincide with those in
\citet{Farris2008}. In that work, as in several others where the
same tests are performed \citep[see e.g.][]{Zanotti2011,
Fragile2012,Sadowski2013}, the Eddington approximation, given by
\mbox{$\tilde{P}^{xx}_r=\tilde{E}_r/3$}, is used instead of the M1
closure. Therefore, our results are not comparable with these unless
the final state satisfies \mbox{$\tilde{P}^{xx}_r\simeq\tilde{E}_r/3$}
in the whole domain. We now outline the main features of each test,
whose parameters are summarized in Table \ref{Table:ShockParams}:
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{f6}
\caption{Final profiles of the nonrelativistic
strong shock test, obtained using 3200 zones (solid black line)
and 800 zones (empty blue circles, plotted every 10 values). }
\label{fig:ShockTest1}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{f7}
\caption{Same as Fig. \ref{fig:ShockTest1},
for the mildly relativistic shock test.}
\label{fig:ShockTest2}
\end{figure}
\begin{enumerate}
\item \emph{Nonrelativistic strong shock}.
A gas-pressure-dominated shock moves at a nonrelativistic speed
in a cold gas (\mbox{$p_g\ll \rho$}),
with a maximum $u^x$ of 0.015.
The final profiles of $\rho$, $p_g$, $u^x$, $\tilde{E}_r$,
and $\tilde{F}^x_r$ are shown in Fig. \ref{fig:ShockTest1}. As in the
non-radiative case, the first three show an abrupt change at $x=0$,
while radiation fields seem continuous.
\item \emph{Mildly relativistic strong shock}.
The conditions are similar to the previous test, with the
difference that a mildly relativistic velocity ($u^x\le 0.25$)
is chosen. The final profiles (see Fig. \ref{fig:ShockTest2})
look similar to those in Fig. \ref{fig:ShockTest1}, with the
difference that $\tilde{E}_r$ exhibits a small discontinuity close to
$x=0$.
\item \emph{Highly relativistic wave}.
Initial conditions are those of a highly relativistic
gas-pressure-dominated wave (\mbox{$u^x\le 10$},
\mbox{$\rho\ll\tilde{P}^{xx}_r < p_g$}).
In this case, as can be seen in Fig. \ref{fig:ShockTest3},
all the profiles are continuous.
\item \emph{Radiation-pressure-dominated wave}.
In this case we study a situation where the radiation pressure
is much higher than the gas pressure, in a shock that propagates
at a mildly relativistic velocity ($u^x\le 0.69$). As in the
previous case, there are no discontinuities in the final profiles
(see Fig. \ref{fig:ShockTest4}).
\end{enumerate}
In order to test the convergence of the numerical solutions, we
have performed each simulation twice, dividing the domain in $800$
and in $3200$ zones. In every case, as shown in Figs.
\ref{fig:ShockTest1}-\ref{fig:ShockTest4}, both solutions
coincide. However, our results do not coincide with those
obtained in the references mentioned above.
The most noticeable case is the test shown in Fig. \ref{fig:ShockTest2},
where the ratio $\tilde{P}^{xx}_r/\tilde{E}_r$
reaches a maximum value of $0.74$ close to
the shock, instead of the value of $1/3$ that would be obtained
within the Eddington approximation.
The result is a much smoother $\tilde{E}_r$ profile than
the one shown in, for instance, \citet{Farris2008}. Yet,
our results show a good agreement with those in
\citet{Takahashi2013}, where the tests are also performed
assuming the M1 closure.
We point out that, in the nonrelativistic strong shock case,
characteristic fluid speeds are $\sim 35$
times smaller than those corresponding to radiation transport.
Still, the computations
do not show a significant increase of numerical diffusion owing
to such scale disparity. The
same conclusion holds if computations are done in the
downstream reference frame (not shown here).
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{f8}
\caption{Same as Fig. \ref{fig:ShockTest1},
for the highly relativistic wave test.}
\label{fig:ShockTest3}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{f9}
\caption{Same as Fig. \ref{fig:ShockTest1},
for the radiation-pressure-dominated wave test.}
\label{fig:ShockTest4}
\end{figure}
\subsection{Radiation pulse}\label{S:RadPulse}
Following \citet{Sadowski2013}, we have tested the evolution of
a radiation pulse in the optically thin and optically thick limits.
These two regimes allowed us to assess, respectively, the code performance when
choosing different coordinate systems and its accuracy in
the diffusion limit, as summarized below.
\subsubsection{Optically thin case}\label{S:PulseThin}
We considered an initial spherically symmetric radiation energy
distribution, contained around the center of a 3D box of side
\mbox{$L=100$}.
Radiation energy is initially set as $E_r=4\pi B(T_r)$, with
\begin{equation}
T_{r}= T_0\left(1+100\, e^{-r^2/w^2}\right),
\end{equation}
where $r$ is the spherical radius, while
$T_0=10^6$ and $w=5$. Similarly, gas pressure is
set in such a way that $T(\rho,p_g)=T_0$, which means that
the system is initially in thermal equilibrium far from the
pulse.
We also set $\rho=1$, $v^x=0$ and $F_r^x=0$ in the whole domain,
$\Gamma=5/3$, $C_a=0.4$,
$\kappa=0$, and a small scattering
opacity $\sigma=10^{-6}$. In this way, the total optical depth
from side to side of the box is
\mbox{$\tau=\rho\,\sigma L=10^{-4}\ll 1$}, i.e., the box is
transparent to radiation.
We have computed the departure from these conditions using
1D spherical and 3D Cartesian coordinates.
In the Cartesian case, we have employed a uniform grid resolution
of \mbox{$200\times200\times200$} zones.
In spherical geometry, on the other hand,
the domain is the region $r\in[0,L/2]$,
covered by a uniformly spaced grid of $100$ zones in order to have a
resolution comparable to that of the 3D simulations.
In this last case, reflective boundary conditions have
been set at $r=0$.
As shown in Fig. \ref{fig:PulseThin1}, the pulse expands and forms
a nearly isotropic blast wave, which slightly deviates from the
spherical shape in the Cartesian case due to grid noise.
The evolution of the radiation energy profiles in both simulations
is shown in the two upper panels of Figure \ref{fig:PulseProfiles}.
Since no absorption in the material is considered, the total
radiation energy is conserved, and thus the maximum energy density
of the formed expanding wave decreases as \mbox{$1/r^2$}.
As can be seen in Fig. \ref{fig:PulseProfiles}, this
dependence is effectively verified once the
blast wave is formed. The same kind of analysis is possible if
radiation is contained entirely in the plane \mbox{$z=0$}.
In this case, the maximum energy density decreases
as \mbox{$1/R$}, with \mbox{$R=\sqrt{x^2+y^2}$}.
We have verified this behavior in 1D
cylindrical and 2D Cartesian coordinates,
employing uniform grids of $100$ zones
in the first case and \mbox{$200\times200$} in the second
(see the two lower panels in Fig. \ref{fig:PulseProfiles}).
In every case, the same simulations performed with different
coordinate systems show good agreement.
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{f10}
\caption{ Radiation energy density map of the optically thin
radiation pulse computed using a \mbox{$200\times200\times200$}
uniform Cartesian grid. Values of $\log_{10}E_r$ on the plane
$z=0$ are shown at $t=35$, when the blast wave has already
been formed.}
\label{fig:PulseThin1}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{f11}
\caption{Radiation energy density profiles in the optically
thin pulse test (solid colored lines), computed at $y=z=0$.
From top to bottom: profiles obtained
using 3D Cartesian,
1D spherical, 2D Cartesian, and 1D cylindrical coordinates,
shown every $\Delta t = 5.0$. The dependence of the
maximum energy on $1/r^2$ ($1/R$) is shown with dashed black
lines in the first (last) two cases.}
\label{fig:PulseProfiles}
\end{figure}
\subsubsection{Optically thick case}\label{S:PulseThick}
We now consider the case where the scattering opacity is nine
orders of magnitude larger than in the previous simulations, i.e.,
\mbox{$\sigma=10^3$}, and all the other parameters remain unchanged.
In that situation, the optical thickness from side to side of the
box is \mbox{$\tau = 10^5\gg 1$}, which means that the box is
largely opaque to radiation.
Here we solve the evolution equations on a
Cartesian one-dimensional grid
with uniform spacing.
Using a resolution of 101 zones, the
optical thickness of a single cell is $\tau\sim10^3$. For this
reason, signal speeds are always limited according to Eq.
\eqref{Eq:RadSpeedLim}.
Under these conditions, the system evolves in such a way that
\mbox{$\vert\partial_t F_r^x\vert\ll \vert \partial_x P_r^{xx}
\vert$} and \mbox{$\vert F_r^x\vert\ll E_r$}, and therefore
\mbox{$P_r^{xx}\simeq E_r/3$}, as pointed out in Section
\ref{S:M1}. Neglecting the term \mbox{$\partial_t F_r^x$} in Eq.
(\ref{Eq:RadRMHD2}) and assuming $P_r^{xx}=E_r/3$, the radiation
flux can be written as \mbox{$F_r^x=-\partial_x E_r /3\rho \chi $}.
Hence, assuming the density remains constant,
the radiation energy density
should evolve according to the following diffusion equation:
%
\begin{equation}\label{Eq:DiffEq}
\frac{\partial E_r}{\partial t} = \frac{1}{3\rho \chi}
\frac{\partial^2 E_r}{\partial x^2} .
\end{equation}
%
With the chosen initial conditions, this equation can be solved
analytically, e.g., by means of a Fourier transform
in the spatial domain.
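For reference, the analytical solution can be sketched in a few lines of Python by evolving each Fourier mode of the initial profile as $e^{-Dk^2t}$, with $D=1/(3\rho\chi)$. Periodic boundaries and code-unit stand-ins for the pulse parameters are assumed here:
\begin{verbatim}
import numpy as np

L, N = 100.0, 1024
x = np.linspace(-L/2, L/2, N, endpoint=False)
rho, chi = 1.0, 1e3                  # pure scattering: chi = sigma
D = 1.0 / (3.0 * rho * chi)          # diffusion coefficient

T0, w = 1.0, 5.0                     # code-unit stand-ins
E0 = (T0 * (1.0 + 100.0 * np.exp(-x**2 / w**2)))**4  # E_r ~ T_r^4

k = 2.0 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])
def E_exact(t):  # each Fourier mode decays as exp(-D k^2 t)
    return np.fft.ifft(np.fft.fft(E0) * np.exp(-D * k**2 * t)).real

for t in (1e3, 1e4, 4e4):
    print(t, E_exact(t).max())
\end{verbatim}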
The exact and numerical solutions are shown in Fig.
\ref{fig:DiffEq}.
Our results show a good agreement between the analytical
and numerical solutions. Furthermore, we have verified that, if
radiation-matter interaction is not taken into account for the
signal speed calculation, i.e., if the limiting given by Eq.
\eqref{Eq:RadSpeedLim} is not applied, the pulse gets damped much
faster than expected from Eq. \eqref{Eq:DiffEq},
due to the numerical diffusion that occurs when signal speeds
are overestimated.
We have observed that this test leads
to inaccurate values of $F_r^x$ if IMEX-SSP2(2,2,2) is used, although
the values of $E_r$ remain close to the analytical ones.
This problem lies in the fact that both the gradient of the flux of
$F^x_r$ and its source term largely exceed
$F^x_r$ and are not compensated in the last
explicit step of the method (see Eq. \eqref{Eq:IMEX1}).
When these conditions are met, we have observed that
IMEX-SSP2(2,2,2) can lead to inaccuracies and instabilities due to
failure in preserving energy positivity
(see Section \ref{S:Shadows}).
In contrast, IMEX1 shows better performance in those cases,
as flux and source terms are more accurately balanced during
the implicit steps (see Eq. \eqref{Eq:IMEX2}).
The limiting scheme in Eq. \eqref{Eq:RadSpeedLim} depends on the
optical depth of individual cells, which is inversely proportional
to the resolution. Therefore, when AMR is used, there can be situations
where this limiting is applied in the coarser levels, but not in the
finer ones. Furthermore, when using HLLC, the solver
is replaced by HLL for every zone where Eq. \eqref{Eq:RadSpeedLim}
is enforced. To study the code's performance under these conditions,
we have run this test on a static AMR grid using $128$ zones at the coarsest
level with 6 levels of refinement with a jump ratio
of 2, yielding an equivalent resolution of $8192$ zones.
We choose $\sigma=50$ so that levels 0 to 4 are solved with the
HLL solver limiting the maximum signal speeds according to Eq.
\eqref{Eq:RadSpeedLim}, while levels 5 and 6 are solved using the
HLLC solver.
The solution thus obtained converges to the analytic
solution of Eq. \eqref{Eq:DiffEq} in all the refinement levels
(see Fig. \ref{fig:DiffEqAMR}).
However, we have observed the formation of spurious overshoots at the
boundaries between refinement levels.
These artifacts are drastically reduced if the order of the reconstruction
scheme is increased; for instance, if the weighted essentially
non-oscillatory (WENO) method by \cite{JiangShu} or the
piecewise parabolic method (PPM)
by \cite{PPM} are used, as shown in Fig. \ref{fig:DiffEqAMR}.
We argue that such features, which are not uncommon in AMR
codes \citep{Choi_etal2004, Chilton_Colella2010}, can be attributed to the
refluxing process needed to ensure correct conservation of momentum and
total energy \citep[see][]{AMRPLUTO}.
In addition, the presence of source terms requires additional care
when solving the Riemann problem between fine and coarse grids due to temporal
interpolation \citep{Berger_LeVeque1998}.
We do not account here for such modifications and defer these potential
issues to future investigations.
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{f12}
\caption{Radiation energy density and flux profiles
in the optically thick pulse test, shown at $t=10^3, 10^4$
and $4\times10^4$ (solid black lines). The analytical solution of
the diffusion equation (Eq. \eqref{Eq:DiffEq}) is superimposed
(dashed colored lines).
}
\label{fig:DiffEq}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{f13}
\caption{ Radiation energy density (top) and flux (bottom) profiles
in the optically thick pulse test with $\sigma=50$ using a static
AMR grid with six refinement levels.
Solutions are shown at $t=1500$ using linear reconstruction
(red triangles), WENO (green circles) and PPM (blue squares).
Refinement levels are marked by colored boxes (top panel),
with red corresponding to the base level grid.
The analytical solution of Eq. \eqref{Eq:DiffEq} is
plotted for comparison while a close-up view of the interface
between the first two grid levels is shown in the bottom panel.
}
\label{fig:DiffEqAMR}
\end{figure}
\subsection{Shadows}\label{S:Shadows}
One of the main features of the M1 closure is its ability to
reproduce the behavior of physical systems in which the angular
distribution of the radiation specific intensity has strong
spatial variations. One such example is
a system where a free-streaming radiation field encounters a
highly opaque region of space, casting a shadow behind it.
To test the code performance when solving such
problems, we have performed a test in
which a shadow is formed behind a high-density elliptic cylinder,
following \citet{HayesNorman2003} and using the same parameters
as in \citet{Gonzalez2007}.
Computations are carried out in the two-dimensional domain given by
\mbox{$\left\lbrace(x,y)\in [-0.5,0.5]
\,\mbox{cm}\times[0,0.6]\, \mbox{cm}\right\rbrace$}.
Reflective boundary conditions are imposed at $y=0$.
A constant density \mbox{$\rho_0=1$ g cm$^{-3}$}
is fixed in the whole
space, except in the elliptic region,
where $\rho=$ \mbox{$\rho_1=10^3$ g cm$^{-3}$}. In order to have a
smooth transition between $\rho_0$ and $\rho_1$, the initial
density field is defined as
\begin{equation}
\rho\,(x,y)=\rho_0 + \frac{\rho_1-\rho_0}{1+e^\Delta},
\end{equation}
where
\begin{equation}
\Delta = 10 \left[
\left(\frac{x}{x_0}\right)^2 +\left(\frac{y}{y_0}\right)^2 -1
\right],
\end{equation}
with \mbox{$(x_0,y_0)=(0.10,0.06)$ cm}. In this way, the region with
$\rho=\rho_1$ is approximately contained in an ellipse of
semiaxes $(x_0,y_0)$. Initially, matter is set in thermal
equilibrium with radiation at a temperature $T_0=290$ K,
and fluxes and velocities are initialized to zero.
The absorption opacity
in the material is computed according to Kramers' law, i.e.,
$\kappa=\kappa_0\left(\frac{\rho}{\rho_0}\right)
\left(\frac{T}{T_0}\right)^{-3.5}$, with
\mbox{$\kappa_0 = 0.1$ g$^{-1}$cm$^2$}, while scattering
is neglected. Therefore, the cylinder's optical thickness
along its largest diameter is approximately
$\tau \approx 2\times 10^4$, i.e., its
width exceeds the photons' mean free path in
that region by a factor $\tau\gg 1$.
In contrast, for $y>y_0$,
the optical thickness is $\tau=0.1$, so that
the exterior of the cylinder is transparent to
radiation while its interior is opaque.
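The quoted optical depths follow directly from these parameters, as the short Python check below illustrates (cgs units):
\begin{verbatim}
def kappa_kramers(rho, T, rho0=1.0, T0=290.0, kappa0=0.1):
    # Kramers-law absorption opacity [cm^2 g^-1]
    return kappa0 * (rho / rho0) * (T / T0)**(-3.5)

rho1, x0 = 1.0e3, 0.10
print(rho1 * kappa_kramers(rho1, 290.0) * 2.0 * x0)  # ~2e4, cylinder
print(1.0 * kappa_kramers(1.0, 290.0) * 1.0)         # 0.1, 1 cm outside
\end{verbatim}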
Radiation is injected from the left boundary at a temperature
\mbox{$\left(c\,E_r/4\,\sigma_{SB}\right)^{1/4}=1740$ K}
$>T_0$, with a flux
$\mathbf{F}_r=c\, E_r\,\hvec{e}_x$. Hence, the radiation
field is initially in the free-streaming limit, and should
be transported at the speed of light in the transparent regions.
\begin{figure*}[t]
\centering
\includegraphics[width=0.65\textwidth]{f14}
\caption{Radiation energy density maps obtained in the
shadow test.
The radiation front crosses the domain from left to right,
casting a shadow behind an elliptic cylinder centered
at $(0,0)$.
From top to bottom, we show the numerical solutions obtained
on a static uniform grid with resolution $280\times80$ at $t=10\,t_c$,
and on the AMR grid ($80\times16$ zones on the base level) at
$t=0.2\,t_c$, $0.6\,t_c$, and $10\,t_c$.
The radiation front crosses the domain at the speed of light in the
transparent regions.
Refinement levels are superimposed with colored lines in the
lower halves of these figures, corresponding to $l=0$ (blue), $1$ (red),
$2$ (green), $3$ (purple), $4$ (golden), and $5$ (black),
where $l$ is the refinement level.
}
\label{fig:ShadowTest}
\end{figure*}
We have initially computed the system's evolution on a fixed
uniform grid of resolution $280 \times 80$, using a fourth-order
reconstruction scheme with a Courant factor $C_a=0.4$, and with
$\Gamma=5/3$.
Simulations show a radiation front that crosses the domain at light
speed from left to right, producing a shadow behind the
cylinder. After the front interacts with the cylinder, the shadow settles
into a final stable state that is
ideally maintained until the matter
distribution is modified by its interaction with radiation.
The radiation energy density distribution is shown in
the upper panel of Fig. \ref{fig:ShadowTest} at
$t=10\,t_c$, where
\mbox{$t_c=1\,\mbox{cm}/c=3.336\times 10^{-11}$ s}
is the light-crossing time, namely, the time it takes light
to cross the domain horizontally in the transparent region.
Behind the cylinder, radiation energy is roughly equal to its
initial equilibrium value of
$(4\,\sigma_{SB}/c)\,T_0^4$. This value
is slightly affected by small waves that are produced in the
upper regions of the cylinder, where the matter distribution stops
being opaque to radiation along horizontal lines.
Above the cylinder, the radiation field remains
equal to the injected one.
The transition between the shadowed and transparent regions
is abrupt, as can be seen in Fig. \ref{fig:ShadowTest}.
The shape of the $E_r$ profile along vertical cuts
is roughly maintained as radiation is transported
away from the central object.
When IMEX-SSP2(2,2,2) is used, we have noticed that $E_r$ frequently
drops below $0$ on the left edge of the cylinder, where the radiation
field impacts it.
Still, the obtained solutions are stable
and convergent as long as $E_r$ is floored to a small value
whenever this occurs.
As in Section \ref{S:PulseThick},
in those zones the radiation flux is much smaller than both the
divergence of its own flux and the source terms, and the problem
does not occur if IMEX1 is used.
We have used this same problem to test the code's
performance when AMR is used in a multidimensional setup.
In this case, we have run the same simulation, using an initially
coarse grid of resolution $80 \times 16$ set to adapt to changes
in $E_r$ and $\rho$ \citep[see][]{AMRPLUTO}.
We have used $5$ refinement levels, in every case with a
grid jump ratio of $2$, which gives an equivalent
resolution of $2560 \times 512$. The resulting energy profiles
are plotted in the lower panels of Figure
\ref{fig:ShadowTest}, for $t=0.2\,t_c$, $0.6\,t_c$, and $10\,t_c$,
and agree with those computed using a fixed grid.
In each panel we have superimposed the refinement levels.
\subsection{Magnetized cylindrical blast wave}\label{S:TestRMHD}
\begin{figure*}[t]
\centering
\includegraphics[width=0.97\textwidth]{f15}
\caption{Density maps at $t=4$ in the magnetized cylindrical
blast wave test, corresponding to $\kappa=1$ (top row),
$\kappa=1000$ (middle row) and ideal relativistic MHD (bottom row).}
\label{fig:CylBlastWave}
\end{figure*}
We now examine a case in which matter is affected by both radiation
and large-scale EM fields. We consider the case of a cylindrical
blast wave, represented in a two-dimensional Cartesian grid as in
the planar configurations described in Section \ref{S:PulseThin}.
In the context of MHD,
this kind of problem has been used formerly to check the robustness
of the employed methods
when handling relativistic magnetized shocks, as well
as their ability to deal with different kinds of
degeneracies \citep[see e.g.][]{Komissarov,MignoneBodo2006}.
In our case, we draw on this configuration as
an example system that can switch from
radiation-dominated to magnetically dominated regimes,
depending on the material's opacity.
To this end, we set up a cylindrical explosion
in a region where the magnetic pressure is of the same order
as the gas pressure, and both are
smaller than the radiation pressure.
Under this condition,
matter dynamics is magnetically dominated
when the opacities are low, and radiation-dominated in the
opposite case. The latter case also serves to investigate the
high-absorption regime in which both
the diffusion approximation and LTE are valid.
We consider a square domain defined as $(x,y)
\in [-6,6]\times[-6,6]$,
initially threaded by a uniform magnetic field,
$\mathbf{B}=B_0\,\hvec{e}_x$ with $B_0=0.1$.
Gas pressure and density are initially set as follows:
\begin{equation}
\left(\begin{array}{c}
p \\
\rho \end{array}\right) =
\left(\begin{array}{c}
p_1 \\
\rho_1 \end{array}\right) \delta
+
\left(\begin{array}{c}
p_0 \\
\rho_0 \end{array}\right) (1-\delta)
\end{equation}
%
where $p_0 = 3.49\times 10^{-5}$, $\rho_0 = 10^{-4}$ are the ambient values
while $p_1 = 1.31\times 10^{-2}$, $\rho_1 = 10^{-2}$ identify the
over-pressurized region.
Here $R=\sqrt{x^2 + y^2}$ is the cylindrical radius while $\delta\equiv\delta(R/R_0)$ is a
taper function that decreases linearly for $R_0<R\le1$ (we use $R_0=0.8$).
The ideal equation of state with $\Gamma = 4/3$ is used throughout the
computations.
A radiation field is introduced initially in equilibrium with the gas.
Since $\mathbf{v}=\mathbf{0}$ in the whole domain, the condition of
LTE is initially satisfied if $E_r=4\pi B(T)$ and
$\mathbf{F}_r=\mathbf{0}$.
These conditions are chosen in such a way
that, close to the center of the domain,
$p_g\sim\mathbf{B}^2/2<E_r/3$, where $\mathbf{B}^2/2$ gives the
initial magnetic contribution to the total pressure
(see Eq. \eqref{Eq:prstot}). To guarantee the condition
$\nabla \cdot \mathbf{B}=0$, necessary for the solutions'
stability, we have implemented in every case the constrained
transport method.
Figure \ref{fig:CylBlastWave} shows a set of 2D color maps
representing the fields' evolution at $t=4$,
using a resolution of $360\times 360$ zones.
The two upper rows correspond to computations
using $\sigma=0$ and $\kappa=1$ (top) or $1000$ (middle).
For $\kappa=1$, the initial optical depth across the central region
is $\tau\approx\rho_1 \kappa \Delta x= 0.02 \ll 1$,
and therefore the material's expansion
should not be noticeably affected by the radiation field.
Indeed, in this case, the radiation energy
profile expands spherically as in Section \ref{S:PulseThin}.
The dynamics is magnetically dominated, and matter is accelerated up
to $\gamma\sim 1.7$ along the magnetic field lines, i.e., along the $x$ axis,
which is why the hydrodynamic variables are characterized by
an elongated horizontal shape.
The second row of Fig. \ref{fig:CylBlastWave} shows analogous
results obtained with
$\kappa=1000$, where $\tau \approx 20 \gg 1$. In this case,
the interaction of the radiation field with the gas during its
expansion produces a much more isotropic acceleration. This
acceleration is still maximal along the $x$ direction, due to
the joint contributions of the magnetic field and the radiation
pressure. This is why the Lorentz factor is larger in this
case, reaching $\gamma\sim 2.7$.
Gas density and pressure reach their maxima along
an oblate ring, instead of the elongated distributions previously
obtained with $\kappa=1$.
As shown in the same figures, the
magnetic field lines are pushed away from the center as matter
is radially accelerated, producing a region of high magnetic
energy density around the area where $\gamma$ is the highest,
and a void of lower magnetic energy inside. Also differently from
the previous case, the radiation energy distribution is no longer
spherically symmetric due to its interaction with the matter
distribution.
For high values of $\rho\kappa$, it is expected that the radiation
reaches LTE with matter, as Eqs. \eqref{Eq:Gc},
\eqref{Eq:RadRMHD1} and \eqref{Eq:RadRMHD2} lead to
$\tilde{E}_r\rightarrow 4\pi B(T)$ and $\tilde{F}^i_r\rightarrow 0$
for smooth field distributions that do not vary abruptly in time.
In this limit, Eqs. \eqref{Eq:RadRMHD}-\eqref{Eq:RadRMHD2}
can be reduced to those of relativistic MHD, redefining the total gas
pressure as
\begin{equation}
p_{\rm tot}=p_g'+\frac{\mathbf{E}^2+\mathbf{B}^2}{2},
\end{equation}
with $p_g'=p_g+\tilde{p}_r$, and the enthalpy density as
\begin{equation}
\rho h_{\rm tot}=\rho h_g + \tilde{E}_r + \tilde{p}_r,
\end{equation}
where $\tilde{P}_r^{ij}=\tilde{p}_r\, \delta^{ij}$, which follows from
the M1 closure in this limit. Taking a constant-$\Gamma$ EoS with
$\Gamma=4/3$ in every case, the equations of state of both
systems of equations coincide in the large-opacity limit, and
therefore the results obtained with both of them are comparable.
The third row of Fig. \ref{fig:CylBlastWave} shows the results
of an ideal relativistic MHD simulation performed in such a way,
using the same initial conditions as before.
To compute the gas pressure represented herein,
it was assumed that $p_g'\simeq \tilde{p}_r=4\pi B(T)/3$, from which
it is possible to extract $T$ and then $p_g$.
Following the same idea, an effective
$E_{r}$ was computed by boosting its comoving value, assumed to
be equal to $4\pi B(T)$, and taking
$\tilde{F}^i_r=0$. The resulting plots are
in fact similar to those computed with $\kappa=1000$, with
slight differences that can be explained by taking into account that
$\kappa$ has a finite value, and that close to the shocks
the fields do not satisfy
the conditions of being smooth and varying slowly with time.
Consequently, the value of $\tilde{E}_r$ can be different than
$4\pi B(T)$ in the regions of space that are close to
discontinuities, which means that the hypothesis of LTE,
assumed by ideal MHD, is not satisfied in the whole domain.
This is verified in Figure \ref{fig:CylBlastWaveErBT}, where
it is shown that, for $\kappa=1000$,
the ratio $\tilde{E}_r/4\pi B(T)$ differs from $1$ only in the
regions that are close to shocks, shown
in Fig. \ref{fig:CylBlastWave}.
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{f16}
\caption{ Values of $\log_{10}\left(\tilde{E}_r/4\pi B(T)\right)$
in the cylindrical blast wave test for $\kappa=1000$, computed
at $t=4$. The LTE condition is satisfied except in
the regions closest to the shocks
(see Fig. \ref{fig:CylBlastWave}).}
\label{fig:CylBlastWaveErBT}
\end{figure}
\subsection{Parallel Performance}\label{S:Scaling}
Parallel scalability of our algorithm has been investigated in the
strong-scaling regime through two- and three-dimensional computations.
For simplicity, we have chosen the (unmagnetized) blast wave problem
of Section \ref{S:TestRMHD} with $\kappa=10$, leaving the remaining
parameters unchanged.
For reference, computations have been carried out with and
without the radiation field on a fixed grid of $2304^2$ (in 2D)
and $288^3$ (in 3D) zones, a constant time step and the
solver given by Eq. \eqref{Eq:LFR}.
The number of processors (Intel Xeon Phi 7250, Knights Landing, at 1.4 GHz)
has been varied from $N_{\rm CPU}=8$ to $N_{\rm CPU} = 1024$.
The corresponding speed-up factors are plotted in Fig. \ref{fig:Scaling2D3D}
as a function of N$_\mathrm{CPU}$ (solid lines with symbols) together with
the ideal scaling-law curve $\propto N_{\rm CPU}$.
We compute the speedup factors as $S = T_{\rm ref}/T_{\mathrm{N}_\mathrm{CPU}}$
where $T_{\rm ref}$ is a normalization constant while $T_{\mathrm{N}_\mathrm{CPU}}$
is the total running time for a simulation using $N_\mathrm{CPU}$
processors.
Overall, favorable scaling properties are observed in two and three
dimensions, with efficiencies that remain above $90\%$ up to 256 cores and
drop to $\sim 70\%$ when $N_{\rm CPU} = 1024$.
Slightly better results are achieved when radiation is included, owing to
the additional computational overhead introduced by the implicit part of the
algorithm, which uses exclusively local data and requires no additional
communication between processes.
Note that, for convenience, we have normalized the curves to the
corresponding running time without the radiation field.
This demonstrates that, by including radiation, the code is (approximately)
four times more expensive than its purely hydro counterpart, regardless of the
dimensionality.
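As an illustration, the speed-up and efficiency follow from the measured timings as in the short sketch below; the wall-clock times are hypothetical values, chosen only to reproduce the quoted efficiency trend:
\begin{verbatim}
import numpy as np

ncpu = np.array([8, 16, 32, 64, 128, 256, 512, 1024])
wall = np.array([1000., 505., 258., 132., 68., 34.5, 19.5, 11.2])

S = ncpu[0] * wall[0] / wall   # speed-up, normalized so S(8) = 8
eff = S / ncpu                 # parallel efficiency
print(eff)                     # ~0.9 at 256 cores, ~0.7 at 1024
\end{verbatim}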
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{f17}
\caption{ Speed-up for the 2D (blue) and 3D (red) blast wave tests,
with and without radiation fields (triangles and squares)
as a function of the number of processors.
The ideal scaling law (dashed, black line) is shown
for comparison.}
\label{fig:Scaling2D3D}
\end{figure}
|
\section{Introduction}
\label{intro}
Open clusters (OCs) are ideal tracers for studying stellar populations, the Galactic environment, and the formation and evolution of the Galactic disk. OCs span wide ranges of age and distance and can be dated relatively accurately; their spatial distribution and kinematic properties provide critical constraints on the overall structure and dynamical evolution of the Galactic disk. Meanwhile, their [M/H] values serve as excellent tracers of the abundance gradient along the Galactic disk, as well as of many other important disk properties, such as the age-metallicity relation (AMR) and the evolution of the abundance gradient \citep{1979ApJS...39..135J,1995ARA&A..33..381F,1993A&A...267...75F,1998MNRAS.296.1045C,2002AJ....124.2693F,2008A&A...480...79B,2008A&A...488..943S,2009A&A...494...95M,2010AJ....139.1942F,2011A&A...535A..30C,2016MNRAS.463.4366R}.
Most open clusters are located in the Galactic disk. Up to now, about 3000 star clusters have been cataloged \citep{2002A&A...389..871D, 2013A&A...558A..53K}, including about 2700 open clusters, most of which are located within 2-3~kpc of the Sun. However, limited by the precision of earlier astrometric data, the reliability of the member selection, and thereby of the derived fundamental parameters, had remained uncertain for many of these cataloged clusters. The European Space Agency (ESA) mission {\it Gaia} ({\it https://www.cosmos.esa.int/gaia}) implemented an all-sky survey and has released its Data Release 2 \citep[Gaia-DR2;][]{2018A&A...616A...1G}, providing five precise astrometric parameters (positions, parallaxes, and proper motions) and three-band photometry ($G$, $G_{BP}$ and $G_{RP}$ magnitudes) for more than one billion stars \citep{2018A&A...616A...2L}. Using the astrometry and photometry of Gaia DR2, cluster members and fundamental parameters of open clusters have been determined with a high level of reliability \citep{2018A&A...618A..93C, 2018A&A...619A.155S, 2019A&A...623A.108B, 2019AstL...45..208B}. Furthermore, the unprecedentedly precise astrometry in Gaia DR2 can also be used to discover new open clusters in the solar neighborhood \citep{2018A&A...618A..59C,2019A&A...624A.126C, 2019MNRAS.483.5508F}, as well as extended substructures in the outskirts of open clusters \citep{2019A&A...624A..34Z,2019A&A...621L...2R,2019A&A...621L...3M}.
Although Gaia DR2 provides accurate radial velocities for about 7.2 million FGK stars, it is incomplete in terms of radial velocities, providing them only for the brightest stars. The slitless spectroscopy mode of Gaia makes it hard to observe densely crowded regions, since multiple overlapping spectra would be noisy and make the deblending process very difficult \citep{2018A&A...616A...5C}. Using weighted mean radial velocities based on Gaia DR2, \citet[hereafter SC18]{2018A&A...619A.155S} reported the 6D phase-space information of 861 star clusters. However, about 50\% of those clusters have fewer than 3 member stars with available radial velocities.
As an ambitious spectroscopic survey project, the Large Sky Area Multi-Object Fiber Spectroscopic Telescope \citep[LAMOST,][]{Cui2012,Zhao2012,Luo2012} provided about 9 million spectra with radial velocities in its fifth data release (DR5), including 5.3 million spectra with stellar atmospheric parameters (effective temperature, surface gravity, and metallicity) derived by the LAMOST Stellar Parameter Pipeline (LASP). In order to study the precision and uncertainties of the atmospheric parameters in LAMOST, \citet{2015RAA....15.1095L} compared 1812 targets in common between LAMOST and SDSS DR9 and reported measurement offsets and errors of -91$\pm$111 K in effective temperature (T$_{\rm eff}$), 0.16 $\pm$ 0.22 dex in surface gravity (Log$g$), 0.04 $\pm$ 0.15 dex in metallicity ([Fe/H]), and -7.2 $\pm$ 6.6 km s$^{-1}$ in radial velocity (RV). Since most LAMOST observations focus on the Galactic plane, we expect to obtain full 3D velocity information for members of hundreds of open clusters toward the Galactic anticenter.
In this paper, our main goals are to derive the properties of open clusters based on Gaia DR2 and LAMOST data, and to provide a catalog of spectroscopic parameters of cluster members. In section~\ref{s}, we describe how we derived the cluster properties, including radial velocities, metallicities, ages, and 6D kinematic and orbital parameters. Using the sample of 295 open clusters, we investigate their statistical properties, and study the radial metallicity gradient and the age-metallicity relation in section~\ref{p}. A brief description of the catalogs of the clusters and their member stars is presented in section~\ref{cat}.
\begin{figure}
\centering
\includegraphics[angle=0,scale=1.1]{his_mem.eps}
\caption{Left panel: cumulative number of RV members in 295 open clusters. About 59\% of the clusters have more than 5 RV members. Right panel: cumulative number of [Fe/H] members in 220 open clusters. About 38\% of the clusters have more than 5 [Fe/H] members.}
\label{his_mem}
\end{figure}
\section{The sample}
\label{s}
\subsection{Members and cluster parameters}
We choose the open cluster catalog and member stars of \citet[hereafter CG18]{2018A&A...618A..93C} as our starting sample. This catalog provides a list of members and astrometric parameters for 1229 clusters, including 60 newly discovered ones.
In order to identify cluster members, CG18 applied a code called UPMASK \citep{2014A&A...561A..57K} to determine the membership probability of stars located in each cluster field. Based on the unprecedentedly precise Gaia astrometric solution ($\mu_{\alpha},~ \mu_{\delta}, ~\varpi$), those cluster members are believed to be identified with high reliability. A total of 401,448 stars were provided by CG18, with membership probabilities ranging from 0.1 to 1.
Once cluster members were obtained, the mean astrometric parameters of the clusters, such as proper motions and distances, were derived. In CG18, the cluster distances were estimated from the Gaia DR2 parallaxes, and the fractional uncertainties $\sigma_{\langle \varpi \rangle}$ / $\langle \varpi \rangle $ are below 5\% for 84\% of the clusters.
\subsection{Radial velocities}
\label{radial velocity}
\begin{figure*}
\centering
\includegraphics[angle=0,scale=1.1]{ocs_rv.eps}
\caption{Radial velocity distribution and fitting profile for each open cluster. The complete set of fitting results is available alongside the article.}
\label{rv}
\end{figure*}
Using the member stars provided by CG18, we perform a cross-match with LAMOST DR5 using a radius of 3". A total of 8811 stars were identified as having LAMOST spectra, 3935 of which have atmospheric parameters with high signal-to-noise ratios (SNR in the $g$ band $\geq$ 15 for A, F, G type stars and $\geq$ 6 for K-type stars). The uncertainty of the RVs provided by LAMOST is about 5 km s$^{-1}$ \citep{2015MNRAS.448..822X,2015RAA....15.1095L}.
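A minimal sketch of this cross-match with astropy is shown below; the input arrays stand for catalog columns and are hypothetical:
\begin{verbatim}
import astropy.units as u
from astropy.coordinates import SkyCoord

def crossmatch(ra_cg18, dec_cg18, ra_lamost, dec_lamost,
               radius=3.0 * u.arcsec):
    # nearest-neighbour sky match of CG18 members to LAMOST sources
    c1 = SkyCoord(ra=ra_cg18 * u.deg, dec=dec_cg18 * u.deg)
    c2 = SkyCoord(ra=ra_lamost * u.deg, dec=dec_lamost * u.deg)
    idx, sep, _ = c1.match_to_catalog_sky(c2)
    matched = sep < radius
    return idx[matched], matched  # LAMOST rows, matched-member mask
\end{verbatim}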
In order to derive the average radial velocity for each open cluster, we only select stars whose membership probabilities are greater than 0.5 and which have RVs available in LAMOST DR5. A total of 6017 stars in 295 clusters were left for the average RV calculation. The left panel of Figure~\ref{his_mem} shows the cumulative number distribution of RV members in the 295 open clusters. In our cluster sample, 174 clusters (59\%) have more than 5 RV members, which indicates a higher reliability of the derived RV parameters for these clusters.
It is not suitable to simply use the mean RV of the members as the overall RV of an open cluster. This is because the mean RV is easily contaminated by misidentified member stars (in fact field stars with different RVs) or by member stars with large RV measurement uncertainties (e.g., stars of early or late type, or stars with low SNR). The mean RV of the members would then have large uncertainties and unpredictable offsets, especially for clusters with only a few RV members.
To solve this problem and derive a reliable average RV for each open cluster, we carefully check the RV distribution histogram of each open cluster and, for those with sufficient RV data, fit the RV distribution of the member stars with a Gaussian profile. Outliers are excluded in the Gaussian fitting process. For each cluster, the $\mu$ and $\sigma$ of the Gaussian function are used as the average RV and the corresponding uncertainty. Figure~\ref{rv} shows a few examples of the RV fitting results. In our sample, clusters whose average RVs were derived by the Gaussian fitting process are marked as the high-quality sample, with the RV\_flag labeled 'Y' in the catalog (see Table~\ref{cat_ocs}). On the other hand, for clusters with few RV members or a large dispersion in the RV distribution, we simply provide the mean RVs and standard deviations as their overall RVs and uncertainties, respectively.
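The sketch below illustrates the procedure in Python; it replaces the visual outlier rejection described above with an automated sigma clipping, which is an assumption of the sketch rather than our exact procedure. The same routine applies unchanged to the [Fe/H] distributions of the next subsection:
\begin{verbatim}
import numpy as np
from astropy.stats import sigma_clip
from scipy.stats import norm

def cluster_average(values, sigma=3.0):
    # Gaussian (mu, sigma) fit to a sigma-clipped set of member values
    clipped = sigma_clip(np.asarray(values, dtype=float),
                         sigma=sigma, maxiters=5)
    return norm.fit(clipped.compressed())

rv = [24.8, 25.3, 26.1, 25.0, 24.5, 25.7, 24.2,
      25.9, 25.4, 24.9, 60.2]        # hypothetical member RVs [km/s]
print(cluster_average(rv))           # the outlier at 60.2 is clipped
\end{verbatim}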
\subsection{Metallicities}
\begin{figure*}
\centering
\includegraphics[angle=0,scale=1.1]{ocs_feh.eps}
\caption{[Fe/H] metallicity distribution and fitting profile for each open cluster. The complete set of fitting results is available alongside the article.}
\label{feh}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[angle=0,scale=1.1]{feh_teff.eps}
\caption{[Fe/H] metallicity distribution as a function of effective temperature (T$_{\rm eff}$) for each open cluster. The dashed line represents the overall [Fe/H] metallicity derived by the Gaussian fitting in Figure~\ref{feh}.}
\label{feh_teff}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[angle=0,scale=1.1]{feh_logg.eps}
\caption{[Fe/H] metallicity distribution as a function of surface gravity (Log$g$) for each open cluster. The dashed line represents the overall [Fe/H] metallicity derived by the Gaussian fitting in Figure~\ref{feh}.}
\label{feh_logg}
\end{figure*}
The fifth data release of LAMOST (DR5) provides a stellar parameter catalog comprising 5.3 million spectra \citep{2015RAA....15.1095L}. Following the determination of the overall RVs of the open clusters, we first cross-match the cluster members of CG18 with the LAMOST stellar parameter catalog. We then select stars with membership probabilities greater than 0.5 and available [Fe/H] measurements; 3024 stars in 220 clusters were selected for the metallicity estimation.
Using the members with [Fe/H] measurements, we plot the metallicity distribution histogram and perform a Gaussian fit for each open cluster. As in the RV estimation, outliers with very different metallicity values were excluded by visual inspection. A few examples of the fitting results are presented in Figure~\ref{feh}; the $\mu$ and $\sigma$ of the Gaussian function are used as the average metallicity and the corresponding uncertainty, respectively. For the remaining open clusters, whose metallicity distributions cannot be fitted by a Gaussian function, the overall metallicities and uncertainties are set to the mean [Fe/H] values and standard deviations, respectively.
In order to further assess the internal consistency and parameter independence of the [Fe/H] metallicities of LAMOST DR5, we study the [Fe/H] distribution as a function of T$_{\rm eff}$ and Log$g$. Using the same clusters as in Figure~\ref{feh} as examples, Figure~\ref{feh_teff} and Figure~\ref{feh_logg} show the [Fe/H] vs. T$_{\rm eff}$ and [Fe/H] vs. Log$g$ results, respectively. Although there are a few outliers and stars with large [Fe/H] measurement errors, there is no apparent degeneracy between [Fe/H] and the other parameters, and the fitting results (dashed lines) properly represent the overall metallicities of these clusters.
\begin{figure*}
\centering
\includegraphics[angle=0,scale=1.1]{isofit.eps}
\caption{Examples of member-star distributions in the color-magnitude diagram. Colors represent isochrone parameters provided by different literature sources: \citet{2002A&A...389..871D} in green, \citet{2012A&A...543A.156K} in blue, and \citet{2019A&A...623A.108B} in red. The complete set of fitting results is available alongside the article.}
\label{iso}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[angle=0,scale=1.1]{param.eps}
\caption{Distribution of the derived spatial and kinematic parameters. Blue dots are the 295 open clusters with radial velocity estimates. Red dots are the 109 open clusters with high-quality radial velocity measurements. The distribution of the clusters illustrates that they are located on the Galactic plane and have kinematics typical of the thin disk.}
\label{parm}
\end{figure*}
\subsection{Ages}
\label{age}
In order to provide the age parameter of our sample clusters, we have utilized the literature results from \citet{2002A&A...389..871D,2012A&A...543A.156K,2019A&A...623A.108B} to perform isochrone fitting and visually determine the best-fitting age, distance and reddening parameters. Since the membership probabilities provided by CG18 are more reliable than those of previous works, the member stars used for the isochrone fitting come from CG18 with probability greater than 0.5. We only provide literature parameters whose isochrone is consistent with the distribution of the cluster members in the color-magnitude diagram. In other words, if the age parameter of a cluster in our catalog is zero, none of the literature isochrones matches the distribution of the cluster members properly.
Figure~\ref{iso} presents a few examples of the isochrone fitting results. Colors are used to represent the three different literature sources: \citet{2002A&A...389..871D} in green, \citet{2012A&A...543A.156K} in blue and \citet{2019A&A...623A.108B} in red.
\begin{figure}
\centering
\includegraphics[angle=0,scale=1.5]{compare_rv.eps}
\caption{ Upper panel: RV difference for the 71 common clusters between SC18 and our catalog. Bottom panel: RV difference for the 36 common clusters between DJ20 and our catalog. The solid circles and their corresponding error bars represent the mean RV and dispersion of each cluster in our catalog, respectively. The color of the data points represents the number of stars used to estimate the average in our catalog. For the overall RVs of the open clusters, the average differences are -5.1$\pm$6.4 km s$^{-1}$ for LAMOST-Gaia and -5.5$\pm$5.4 km s$^{-1}$ for LAMOST-APOGEE. }
\label{cmp_rv}
\end{figure}
\begin{figure}
\centering
\includegraphics[angle=0,scale=0.9]{fsr0904.eps}
\caption{Spatial distribution (left panel) and color-magnitude distribution (right panel) of member stars of FSR\_0904. Black dots are cluster members in CG18. Green and red dots are member stars used for RV estimation in SC18 and our catalog, respectively. It is clear that our RV value of this cluster is more reliable since most of our stars are more likely to be cluster members.}
\label{fsr0904}
\end{figure}
\begin{figure}
\centering
\includegraphics[angle=0,scale=1.1]{compare_feh.eps}
\caption{ [Fe/H] difference for the 38 common clusters between DJ20 and our catalog. The solid circles and their corresponding error bars represent the mean [Fe/H] and dispersion of each cluster in our catalog, respectively. For the overall [Fe/H] of the open clusters, the average difference is -0.02$\pm$0.10 dex.}
\label{cmp_feh}
\end{figure}
\subsection{Kinematic parameters}
We calculated the Galactocentric Cartesian coordinates (X, Y, Z) and velocities (U, V, W) of the 295 open clusters using the formulas of \citet{1987AJ.....93..864J}. The celestial coordinates, distances and proper motions of each cluster are from CG18, while the radial velocities are determined from LAMOST DR5 (see Section~\ref{radial velocity}). We adopt the solar position and circular rotation velocity as R$_0$=-8.34 kpc and $\Theta_0$=240 km s$^{-1}$, respectively \citep{2014ApJ...783..130R}. In order to correct for the solar motion with respect to the local standard of rest, we adopt the solar peculiar velocity (U$_\odot$, V$_\odot$, W$_\odot$)= (11.1, 12.4, 7.25) km s$^{-1}$ \citep{2010MNRAS.403.1829S}.
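For reference, the following Python sketch computes the same quantities with astropy's Galactocentric frame, using the solar parameters quoted above. The cluster values are placeholders, and astropy's sign conventions need not coincide with those of \citet{1987AJ.....93..864J}, which we follow.
\begin{verbatim}
import astropy.units as u
import astropy.coordinates as coord

# Solar parameters adopted in the text: R0 = 8.34 kpc,
# Theta0 = 240 km/s and (U, V, W)_sun = (11.1, 12.4, 7.25) km/s.
frame = coord.Galactocentric(
    galcen_distance=8.34 * u.kpc,
    galcen_v_sun=coord.CartesianDifferential(
        [11.1, 240.0 + 12.4, 7.25] * u.km / u.s),
    z_sun=0.0 * u.pc)

# Placeholder observables standing in for one CG18 + LAMOST entry.
c = coord.SkyCoord(ra=132.85 * u.deg, dec=11.81 * u.deg,
                   distance=887.0 * u.pc,
                   pm_ra_cosdec=-11.0 * u.mas / u.yr,
                   pm_dec=-3.0 * u.mas / u.yr,
                   radial_velocity=34.0 * u.km / u.s)

gc = c.transform_to(frame)
print(gc.x, gc.y, gc.z)        # Galactocentric X, Y, Z
print(gc.v_x, gc.v_y, gc.v_z)  # Galactocentric U, V, W
\end{verbatim}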
Based on the astrometric parameters from Gaia DR2 and the radial velocities from LAMOST DR5, we further calculated the orbital parameters of the 295 open clusters making use of galpy\footnote{http://github.com/jobovy/galpy} \citep{2015ApJS..216...29B}. The orbital parameters are listed in Table~\ref{cat_ocs}, including the apogalactic (R$_{\rm ap}$) and perigalactic (R$_{\rm peri}$) distances from the Galactic centre, the orbital eccentricity ($e$), and the maximum vertical distance above the Galactic plane (Z$_{\rm max}$).
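A minimal galpy sketch of this orbit-integration step is shown below. It is an illustration only: the input values are placeholders, and the choice of the MWPotential2014 potential is our assumption, since the adopted potential is not specified here.
\begin{verbatim}
import numpy as np
import astropy.units as u
from galpy.orbit import Orbit
from galpy.potential import MWPotential2014

# [RA (deg), Dec (deg), distance (kpc), pmRA (mas/yr),
#  pmDec (mas/yr), radial velocity (km/s)] -- placeholder values.
o = Orbit([132.85, 11.81, 0.887, -11.0, -3.0, 34.0],
          radec=True, ro=8.34, vo=240.0)

ts = np.linspace(0.0, 5.0, 2001) * u.Gyr
o.integrate(ts, MWPotential2014)

print(o.rap(), o.rperi())  # apo- and peri-galactic distances [kpc]
print(o.e(), o.zmax())     # eccentricity and maximum |Z| [kpc]
\end{verbatim}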
Figure~\ref{parm} shows the distribution of the derived spatial and kinematic parameters (blue dots). In particular, we use red to mark the 109 clusters with high-quality radial velocity estimates ('RV\_flag' marked 'Y', see Section~\ref{radial velocity}). The kinematic parameters, specifically the orbital parameters, of these clusters (red dots) are more reliable than the others. The Galactocentric spatial distribution of the 295 open clusters in our catalog is shown in the top panels. We find that most of the clusters are located toward the Galactic anti-center, because a large number of LAMOST observational fields are focused on this region. The Galactocentric velocities of the open clusters are shown in the middle panels. In particular, we exclude 6 open clusters from the velocity and orbital parameter distributions (bottom panels), since their unreliable radial velocities lead to outlying kinematic parameters. In the bottom panels, the distribution of orbital parameters shows that most of the open clusters have approximately circular motions and small distances to the Galactic plane. Specifically, the kinematic distribution diagrams clearly illustrate that most of the open clusters in our catalog are kinematically typical of the thin disk.
\subsection{Comparison to other works}
To verify the reliability and accuracy of the cluster properties derived from LAMOST DR5, we use the clusters in common between our catalog and other literature catalogs based on higher-resolution observations.
\subsubsection{ Verifying radial velocities }
As described in Section~\ref{intro}, Gaia DR2 also includes accurate radial velocities for 7.2 million stars, provided by the high-resolution slitless spectrograph (R=11500). SC18 published mean RVs for 861 star clusters using the spectral results from Gaia DR2. We cross-matched our catalog with SC18 and obtained 218 common clusters. In order to use reliable clusters in SC18 as the reference, our comparison only includes the 83 common clusters defined as high-quality clusters (see SC18 for more details). In addition, we further exclude 12 common clusters whose mean RVs in our catalog are unreliable (uncertainty greater than 20 km s$^{-1}$). Finally, the number of common clusters used for the comparison is 71.
Figure~\ref{cmp_rv} (upper panel) shows the RV difference between SC18 and our catalog for the open clusters in common. The average offset of the RV is -5.1 km s$^{-1}$ with a scatter of 6.4 km s$^{-1}$. In general, this result shows good agreement with Gaia. The scatter is mainly caused by the RV uncertainties of the LAMOST spectra (R=1800, $\sigma \sim$ 5 km s$^{-1}$) and by the number of LAMOST stars in a cluster used for the mean RV estimation (red dots have less scatter than violet dots).
In particular, we note that there is an outlier (blue dot in the upper panel of Figure~\ref{cmp_rv}), named FSR\_0904, with an RV discrepancy greater than 20 km s$^{-1}$. After carefully checking the RV data of the two catalogs, we find that the number of stars used for the mean RV estimation is 3 for SC18 and 20 for our catalog. Figure~\ref{fsr0904} shows the spatial distribution and color-magnitude distribution of the member stars used by the two works. At least for this cluster, although the scatter of the mean RV in our catalog (7.2 km s$^{-1}$) is greater than in SC18 (2.66 km s$^{-1}$), the mean RV provided by our catalog is more reliable, since our stars are mainly distributed around the cluster center and follow the cluster main sequence.
In addition, we use our catalog and the APOGEE catalog \citep[hereafter DJ20]{2020AJ....159..199D} to compare the mean RVs and mean [Fe/H] abundances. There are 128 open clusters published by DJ20, including mean RVs and mean abundances from APOGEE DR16. After cross-matching the two catalogs, our sample includes 48 open clusters in common with DJ20. Six open clusters were further excluded since their 'qual' flag in DJ20 is '0', i.e. 'potentially unreliable'.
For the comparison of the mean RV differences with the APOGEE catalog, the 36 common clusters whose RV uncertainty in our catalog is less than 20 km s$^{-1}$ are plotted in the bottom panel of Figure~\ref{cmp_rv}. The average offset of the RV is -5.5 km s$^{-1}$ with a scatter of 5.4 km s$^{-1}$. As in the comparison with the Gaia results, our mean cluster RVs are also consistent with the APOGEE catalog, especially for clusters with more stars available to estimate the mean values.
We note that there are similar RV offsets, of around -5 km s$^{-1}$, between our catalog and the literature catalogs (SC18 and DJ20). In order to understand the origin and size of this offset in LAMOST, we perform a general cross-match of stars between LAMOST DR5 and other spectroscopic catalogs (GALAH DR2, APOGEE DR16 and Gaia DR2). Table~\ref{rv_offset} shows the RV differences for common stars whose SNR in LAMOST is greater than 10. We list the median RV offset, the mean RV offset, the standard deviation of the RV difference and the number of common stars used for the calculation. The similar comparison results for general stars and for open clusters show that the RV differences come mainly from the LAMOST spectral measurements. In addition, we study the RV offset as a function of the stellar atmospheric parameters and find that it is almost constant over the whole parameter space. This RV offset is also consistent with the conclusion of the LAMOST LSP3 parameter analysis \citep{2015MNRAS.448..822X}.
\begin{table}
\caption{Difference of RV for general common stars between LAMOST DR5 and other spectroscopic catalogs.}
\label{rv_offset}
\centering
\begin{threeparttable}
\begin{tabular}{ccccc}
\hline
Catalog & Median & Mean & $\sigma$ & Number\\
& km s$^{-1}$ & km s$^{-1}$ & km s$^{-1}$ & \\
\hline
GALAH~\tnote{1} & -4.9 & -4.8 & 10.6 & 12538 \\
APOGEE~\tnote{2} & -4.7 & -4.3 & 9.8 & 96459 \\
Gaia~\tnote{3} & -4.9 & -5.0 & 8.2 & 689838 \\
\hline
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[1] \citet{2018MNRAS.478.4513B}
\item[2] \citet{2019arXiv191202905A}
\item[3] \citet{2018A&A...616A...1G}
\end{tablenotes}
\end{threeparttable}
\end{table}
\begin{table}
\caption{Difference of [Fe/H] for general common stars between LAMOST DR5 and other spectroscopic catalogs.}
\label{feh_offset}
\centering
\begin{threeparttable}
\begin{tabular}{ccccc}
\hline
Catalog & Median & Mean & $\sigma$ & Number\\
& dex & dex & dex & \\
\hline
GALAH~\tnote{1} & 0.01 & 0.01 & 0.13 & 11968 \\
APOGEE~\tnote{2} & -0.001 & -0.002 & 0.11 & 84355 \\
\hline
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[1] \citet{2018MNRAS.478.4513B}
\item[2] \citet{2019arXiv191202905A}
\end{tablenotes}
\end{threeparttable}
\end{table}
\begin{figure*}
\centering
\includegraphics[angle=0,scale=1.]{allstar.eps}
\caption{ [Fe/H] metallicity difference of common stars as a function of LAMOST metallicity. Giants and dwarfs are separated by adopting the criteria of logg < 3 and logg > 3, respectively. }
\label{star_feh}
\end{figure*}
\subsubsection{ Verifying metallicities }
We compared the [Fe/H] metallicity between our catalog and DJ20. In Figure~\ref{cmp_feh}, there are 38 common clusters whose [Fe/H] uncertainty in our catalog is non-zero, and we find a mean offset in [Fe/H] of -0.02 dex with a scatter of 0.10 dex. We note that all discrepant values come from clusters with a low number of stars used for the estimation. Excluding clusters with fewer than 10 stars used for the estimation, our results show good agreement with the APOGEE results.
Furthermore, we note that the offset shows a small gradient with metallicity in Figure~\ref{cmp_feh}. In order to study the origin of this trend, we compare the metallicity differences of common stars between LAMOST DR5 and other spectroscopic catalogs (GALAH DR2 and APOGEE DR16). To reduce the effect of stars with low SNR, we only select common stars whose LAMOST SNR is greater than 10 for the comparison. Table~\ref{feh_offset} lists the resulting metallicity offsets and dispersions. The overall small offsets and dispersions indicate the reliability of the metallicity measurements in LAMOST DR5, since they are in good agreement with the high-resolution spectroscopic results.
In Figure~\ref{star_feh}, we plot the stellar [Fe/H] metallicity differences of LAMOST DR5 with respect to GALAH DR2 and APOGEE DR16. We note that the [Fe/H] difference of dwarfs between LAMOST and APOGEE shows a positive gradient with metallicity, which indicates that the trend in Figure~\ref{cmp_feh} may come from the measurement differences for dwarfs between the two catalogs.
\begin{figure}
\centering
\includegraphics[angle=0,scale=1.2]{gradient.eps}
\caption{ Radial (upper panel) and vertical (bottom panel) metallicity gradient of young open clusters. The slope of gradients are -0.053 $\pm$ 0.004 dex kpc$^{-1}$ and -0.252$\pm$ 0.039 dex kpc$^{-1}$, respectively.}
\label{gradient}
\end{figure}
\begin{figure}
\centering
\includegraphics[angle=0,scale=1.15]{radial_time.eps}
\caption{Radial metallicity gradients in different age bins. Dashed lines are one-dimensional linear least-squares fits weighted by the [Fe/H] errors. }
\label{radial_time}
\end{figure}
\begin{figure}
\centering
\includegraphics[angle=0,scale=1.15]{vertical_time.eps}
\caption{ Vertical metallicity gradients in different age bins. Symbols are the same as in Figure~\ref{radial_time}.}
\label{vertical_time}
\end{figure}
\begin{figure}
\centering
\includegraphics[angle=0,scale=1.1]{gradient_sum.eps}
\caption{Radial (left panel) and vertical (right panel) metallicity gradient trends along the median age of each age bin.}
\label{gradient_sum}
\end{figure}
\begin{figure}
\centering
\includegraphics[angle=0,scale=1.1]{amr.eps}
\caption{Age-metallicity relation of open clusters. In our sample, the slope for open clusters with age < 6 Gyr is -0.022 $\pm$ 0.008 dex Gyr$^{-1}$. Three outliers are marked as triangles and excluded from the linear fitting procedure.}
\label{amr}
\end{figure}
\section{Abundance analysis}
\label{p}
\begin{table}
\caption{Summary of reported radial metallicity gradients using open clusters as tracers}
\label{ref}
\centering
\begin{tabular}{cccc}
\hline
Slope & Range & Number & ref.\\
dex kpc$^{-1}$ & kpc & & \\
\hline
-0.053 $\pm$ 0.004 & 7-15 & 183 & this work \\
-0.061 $\pm$ 0.004 & 7-12 & 18 & \citet{2018AJ....156..142D} \\
-0.052 $\pm$ 0.011 & < 12 & 79 & \citet{2016MNRAS.463.4366R} \\
-0.056 $\pm$ 0.007 & < 17 & 488 & \citet{2009MNRAS.399.2146W} \\
-0.063 $\pm$ 0.008 & < 17 & 118 & \citet{2003AJ....125.1397C} \\
-0.059 $\pm$ 0.010 & 7-16 & 39 & \citet{2002AJ....124.2693F} \\
-0.085 $\pm$ 0.008 & 7-16 & 37 & \citet{1998MNRAS.296.1045C} \\
\hline
\end{tabular}
\end{table}
\begin{table}
\caption{Pearson correlation coefficients of radial and vertical metallicity gradients in different age bins. }
\label{pcc}
\centering
\begin{tabular}{ccc}
\hline
Age range & Radial & Vertical \\
Gyr & & \\
\hline
< 0.1 & -0.55 & 0.11 \\
0.1-0.5 & -0.47 & -0.12 \\
0.5-1.0 & -0.56 & -0.45 \\
1.0-2.0 & -0.61 & -0.16 \\
> 2.0 & -0.50 & 0.34 \\
\hline
\end{tabular}
\end{table}
\subsection{Radial metallicity gradient}
The radial metallicity gradient in the Galactic disk plays an important role in studying the chemical formation and evolution of the Galaxy. In addition to stars and planetary nebulae (PNe) \citep[e.g.,][]{2011AJ....142..136L,2014A&A...565A..89B}, open clusters are ideal tracers for radial metallicity gradient studies, since they span wide ranges of age and distance and their coeval member stars show a small metallicity dispersion. From the open cluster samples of previous works, the radial metallicity gradients range from -0.052 to -0.063 dex kpc$^{-1}$ within 12 kpc \citep{2003AJ....125.1397C,2009MNRAS.399.2146W,2010A&A...511A..56P,2016MNRAS.463.4366R,2016A&A...585A.150N}.
In our sample, most of the open clusters are younger than 3 Gyr. We use these clusters to fit the average radial metallicity gradient of the young component of the Galactic disk. The upper panel in Figure~\ref{gradient} shows the metallicity gradient in the Galactocentric distance range R$_{GC}$ = 7-15 kpc, with a linear fit to the whole range. Although the radial metallicity gradient of -0.053$\pm$0.004 dex kpc$^{-1}$ in the radial range 7-15 kpc is consistent with previous works (see Table~\ref{ref} for more details and a comparison), the Pearson correlation coefficient of -0.33 indicates only a weak correlation for the overall radial metallicity gradient of all clusters, which may be caused by the mixture of open clusters from different populations.
The study of the gradient evolution in the Galactic disk is important for constraining Galactic chemo-dynamical models \citep{1998MNRAS.296.1045C,2003AJ....125.1397C,2012AJ....144...95Y}. Figure~\ref{radial_time} shows the radial metallicity gradients in different age bins. Since we have a sufficient number of clusters in different age bins, we can perform an analysis of the gradient evolution. We separate our sample into five age bins: a very young bin (< 0.1 Gyr), young to intermediate bins (0.1-0.5 Gyr, 0.5-1.0 Gyr, 1.0-2.0 Gyr), and an old bin (> 2.0 Gyr). Table~\ref{pcc} shows the Pearson correlation coefficients of the radial metallicity gradients in the different age bins. After separating the clusters into age bins, the Pearson correlation coefficients show that the correlations of the metallicity gradients in the individual age bins are stronger than that of the overall metallicity gradient, which indicates the higher reliability of the radial metallicity gradients in the separate age bins. The gradient trend with the median age of each sub-sample is shown in the left panel of Figure~\ref{gradient_sum}. Ignoring the very young sample, the remaining four age samples display a mildly flattening trend of the radial metallicity gradient with time. For clusters with ages greater than 0.1 Gyr (most of them less than 4 Gyr), the steeper gradient of the older population is consistent with previous studies \citep[e.g.,][]{1998MNRAS.296.1045C,2002AJ....124.2693F,2020AJ....159..199D}. The time-flattening tendency may be explained by the combined influence of radial migration \citep{2016A&A...585A.150N,2017A&A...600A..70A} and chemical evolution in the Galactic disk \citep{2000ASSL..255..505T,2002ChJAA...2..226C, 2016A&A...591A..37J}.
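The fits shown in Figure~\ref{radial_time} are one-dimensional linear least-squares fits weighted by the [Fe/H] errors. A minimal sketch of such a weighted fit, together with the Pearson correlation coefficient quoted in Table~\ref{pcc}, is given below (with placeholder data rather than catalog values).
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def line(x, a, b):
    return a * x + b

# Placeholder arrays: Galactocentric radii [kpc], [Fe/H] [dex] and
# [Fe/H] errors, standing in for one age bin of the cluster sample.
rgc = np.array([7.8, 8.4, 9.1, 10.3, 11.7, 13.2])
feh = np.array([0.05, 0.02, -0.04, -0.12, -0.21, -0.30])
feh_err = np.array([0.05, 0.04, 0.06, 0.05, 0.08, 0.10])

popt, pcov = curve_fit(line, rgc, feh, sigma=feh_err,
                       absolute_sigma=True)
print("slope = %.3f +/- %.3f dex/kpc"
      % (popt[0], np.sqrt(pcov[0, 0])))

# Pearson correlation coefficient of the radial gradient
print("Pearson r = %.2f" % np.corrcoef(rgc, feh)[0, 1])
\end{verbatim}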
However, we notice that there is a steep gradient for the very young sample (< 0.1 Gyr), which is not consistent with previous results \citep{2011A&A...535A..30C,2017A&A...601A..70S} and the corresponding explanation \citep{2020A&A...634A..34B}. Although there is no convincing explanation for this reversed trend, the result does not contradict the chemo-dynamical simulation of \citet[MCM]{2013A&A...558A...9M,2014A&A...572A..92M}. In the MCM model, radial migration is expected to flatten the chemical gradients for ages > 1 Gyr, while it predicts an almost unchanged gradient for the very young population. Since there is no process with a significant impact on the gradient of the very young population, its steep gradient partly represents the current chemical gradient in the Galactic disk (R$_{GC}$ $\sim$ 8-12 kpc).
In particular, it is noteworthy that the cluster NGC6791 is included in our initial sample. As many previous works have noticed, this cluster is very metal-rich and fairly old \citep{1994A&A...287..761C,2014AJ....148...61T,2018AJ....156..142D}, and it is believed to have migrated to its current location \citep{2017ApJ...842...49L}. In order to reduce the influence of this outlier on the gradients, we excluded NGC6791 from our cluster sample before performing the radial and vertical gradient analyses in Figures~\ref{gradient}-\ref{gradient_sum}.
\begin{table*}
\caption{Description of the open cluster properties catalog. }
\label{cat_ocs}
\centering
\begin{threeparttable}
\begin{tabular}{llll}
\hline
Column & Format & Unit & Description \\
\hline
CLUSTER & string & - & Cluster name \\
RA & float & deg & Mean right ascension of members in CG18 (J2000)\\
DEC & float & deg & Mean declination of members in CG18 (J2000)\\
PMRA & float & mas yr$^{-1}$ & Mean proper motion along RA of members in CG18 \\
PMRA\_std & float & mas yr$^{-1}$ & Standard deviation of pmRA of members in CG18 \\
PMDE & float & mas yr$^{-1}$ & Mean proper motion along DE of members in CG18 \\
PMDE\_std & float & mas yr$^{-1}$ & Standard deviation of pmDE of members in CG18 \\
DMODE & float & pc & Most likely distance of clusters in CG18 \\
RV & float & km s$^{-1}$ & Mean radial velocity measured from member spectra in LAMOST \\
RV\_std & float & km s$^{-1}$ & Standard deviation of RV \\
RV\_num & integer & - & Number of stars used for RV estimation \\
RV\_flag & String & - & Flag of Gaussian fitting process for RV estimation\\
FEH & float & dex & Mean [Fe/H] measured from member spectra in LAMOST \\
FEH\_std & float & dex & Standard deviation of [Fe/H] \\
FEH\_num & integer & - & Number of stars used for [Fe/H] estimation \\
FEH\_flag & String & - & Flag of Gaussian fitting process for [Fe/H] estimation\\
GX & float & pc & Galactocentric coordinate points to the direction opposite to that of the Sun \\
GX\_err & float & pc & Mean errors of GX coordinate calculation\\
GY & float & pc & Galactocentric coordinate points to the direction of Galactic rotation \\
GY\_err & float & pc & Mean errors of GY coordinate calculation\\
GZ & float & pc & Galactocentric coordinate points toward the North Galactic Pole \\
GZ\_err & float & pc & Mean errors of GZ coordinate calculation \\
U & float & km s$^{-1}$ & Galactocentric space velocity in X axis \\
U\_err & float & km s$^{-1}$ & Mean errors of U velocity calculation\\
V & float & km s$^{-1}$ & Galactocentric space velocity in y axis \\
V\_err & float & km s$^{-1}$ & Mean errors of V velocity calculation\\
W & float & km s$^{-1}$ & Galactocentric space velocity in Z axis \\
W\_err & float & km s$^{-1}$ & Mean errors of W velocity calculation\\
R$_{\rm ap}$ & float & pc & Averaged apogalactic distances from the Galactic centre \\
R$_{\rm peri}$ & float & pc & Averaged perigalactic distances from the Galactic centre \\
EC & float & - & Eccentricity calculated as $e$=(R$_{\rm ap}$-R$_{\rm peri}$) / (R$_{\rm ap}$+R$_{\rm peri}$) \\
ZMAX & float & pc & Averaged maximum vertical distances above the Galactic plane \\
R$_{\rm gc}$ & float & pc & Galactocentric distance assuming the Sun is located at 8340 pc \\
R$_{\rm gc}$\_err & float & pc & Mean errors of Galactocentric distance calculation \\
AGE\_ref & float & Gyr & Age from literature results determined by the isochrone fit \\
DIST\_ref & float & pc & Distance from literature results determined by the isochrone fit \\
EBV\_ref & float & - & Reddening from literature results determined by the isochrone fit \\
REF~\tnote{1}& String & - & Label of referred literature for age, distance and EBV determination \\
\hline
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[1] Three labels are used to refer different literatures:
(1)= \citet{2019A&A...623A.108B};
(2)=\citet{2013A&A...558A..53K};
(3)=\citet{2002A&A...389..871D}
\end{tablenotes}
\end{threeparttable}
\end{table*}
\begin{table*}
\caption{Description of the spectroscopic catalog of cluster members.}
\label{cat_mem}
\centering
\begin{threeparttable}
\begin{tabular}{llll}
\hline
Column & Format & Unit & Description \\
\hline
OBSID & string & - & Object unique spectra ID in LAMOST DR5 \\
DESIGNATION & string & - & Object designation in LAMOST DR5 \\
RA\_obs & float & deg & Object right ascension in LAMOST DR5 (J2000) \\
DEC\_obs & float & deg & Object declination in LAMOST DR5 (J2000) \\
SNRG & float & - & Signal-to-noise ratio of the g filter in the LAMOST spectrum \\
SNRR & float & - & Signal-to-noise ratio of the r filter in the LAMOST spectrum \\
SNRI & float & - & Signal-to-noise ratio of the i filter in the LAMOST spectrum \\
RV\_2d & float & km s$^{-1}$ & Radial velocity derived by the LAMOST 2D pipeline \\
RV\_2d\_err & float & km s$^{-1}$ & Uncertainty of radial velocity derived by the LAMOST 2D pipeline \\
RV\_1d & float & km s$^{-1}$ & Radial velocity derived by the LAMOST 1D pipeline \\
RV\_1d\_err & float & km s$^{-1}$ & Uncertainty of radial velocity derived by the LAMOST 1D pipeline \\
TEFF & float & K & Effective temperature derived by the ULYSS software\\
TEFF\_err & float & K & Error of the effective temperature derived by the ULYSS software\\
LOGG & float & dex & Surface gravity derived by the ULYSS software \\
LOGG\_err & float & dex & Error of the surface gravity derived by the ULYSS software\\
FEH & float & dex & [Fe/H] derived by the ULYSS software\\
FEH\_err & float & dex & Error of [Fe/H] derived by the ULYSS software\\
SOURCE & string & - & Gaia DR2 source id \\
PARALLAX & float & mas & Parallax in Gaia DR2 \\
PARALLAX\_err & float & mas & Parallax error in Gaia DR2 \\
PMRA & float & mas yr$^{-1}$ & Proper motion along RA in Gaia DR2 \\
PMRA\_err & float & mas yr$^{-1}$ & Error of pmRA in Gaia DR2 \\
PMDE & float & mas yr$^{-1}$ & Proper motion along DE in Gaia DR2 \\
PMDE\_err & float & mas yr$^{-1}$ & Error of pmDE in Gaia DR2 \\
GMAG & float & mag & G-band magnitude in Gaia DR2 \\
BP\_RP & float & mag & BP minus RP color in Gaia DR2 \\
PROB & float & - & Membership probability provided by CG18 \\
CLUSTER & string & - & Corresponding cluster name \\
\hline
\end{tabular}
\end{threeparttable}
\end{table*}
\subsection{Vertical metallicity gradient}
The vertical metallicity gradient is another important clue for constraining the formation history of the Galactic disk, although its existence among old open clusters has been controversial \citep{1995ARA&A..33..381F,1995AJ....110.2813P}. The bottom panel in Figure~\ref{gradient} shows the vertical metallicity gradient of our clusters within 1 kpc of the Galactic mid-plane. The resulting slope is -0.252$\pm$0.039 dex kpc$^{-1}$, which is in good agreement with previous results \citep[e.g,][]{1998MNRAS.296.1045C,2003AJ....125.1397C}.
As \citet{1998MNRAS.296.1045C} pointed out, the cluster sample they used for deriving the vertical gradient is significantly biased because of tidal disruption, which is more effective closer to the Galactic mid-plane. In order to disentangle the effect of the age dependence, we plot the vertical gradients in different age bins in Figure~\ref{vertical_time}, and the gradient trend with the median age of each age sample in Figure~\ref{gradient_sum} (right panel), where the age bins are the same as in the radial gradient analysis. The Pearson correlation coefficients of the vertical metallicity distributions in the different age bins are presented in Table~\ref{pcc}; they show weak or even no correlation. It is worth noting that the vertical distribution of open clusters is affected by the different scale-heights of the different age populations \citep{1996A&A...310..771N}. For the very young sample (< 0.1 Gyr), the positive gradient may be caused by the small scale-height, which also leads to a large dispersion of the trend. For the old sample (> 2 Gyr), we suppose the positive gradient is the result of both migration and tidal disruption. This suggests that open clusters with intermediate ages provide a more reliable trend of the vertical metallicity gradient than the other age populations.
\subsection{Age metallicity relation}
The age-metallicity relation (AMR) is a useful clue for understanding the history of metal enrichment of the disk and provides an important constraint on chemical evolution models. During the past two decades, many works have focused on this topic, using either nearby stars \citep{2001A&A...377..911F,1998MNRAS.296.1045C,1993A&A...275..101E} or open clusters with multiple ages \citep{2016A&A...585A.150N,2003AJ....125.1397C,1998MNRAS.296.1045C}. In general, the observational data show evidence of decreasing metallicity with increasing age for both tracers, which in principle indicates the metal enrichment of the interstellar medium (ISM) during the chemical evolution of the Galaxy.
Compared with nearby stars, open clusters have a great advantage for identifying the AMR, since their metallicities and ages can be determined relatively reliably. However, even based on open clusters, the evidence for an AMR in the disk is not significant \citep{2009A&A...494...95M,1994A&A...287..761C, 1985A&A...147...47C}. In some studies, only a mild decrease of the metal content of clusters with age is found \citep{2016A&A...585A.150N,2010A&A...511A..56P,2003AJ....125.1397C}.
Figure~\ref{amr} shows the age-metallicity relation of the open clusters in our catalog. Ages were determined by visual inspection of the best-fitting isochrone in the color-magnitude diagram (see Section~\ref{age}). To remove the spatial variation of the metallicity due to the radial metallicity gradient, we build an AMR in which we correct our [Fe/H] with the relation [Fe/H]$_{\rm corr}$=[Fe/H]-0.053\,(R$_{\odot}$-R), with R in kpc. After excluding 3 old open clusters as outliers, we perform a linear fit to the open clusters in our sample. The metallicity decreases by 0.022 $\pm$ 0.008 dex Gyr$^{-1}$ for open clusters within 6 Gyr. The Pearson correlation coefficient of -0.28 also indicates a weak AMR correlation, which is consistent with the mild decrease found in previous works \citep[e.g.,][]{2016A&A...585A.150N,2010A&A...511A..56P,2003AJ....125.1397C}.
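A short sketch of this AMR fit, including the gradient correction described above, is given below (again with placeholder data; the outlier rejection is omitted).
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

R_SUN = 8.34  # kpc, solar Galactocentric distance adopted above

# Placeholder cluster values: age [Gyr], [Fe/H] [dex],
# [Fe/H] errors [dex] and Galactocentric radius [kpc].
age = np.array([0.2, 0.8, 1.5, 2.5, 4.0, 5.5])
feh = np.array([0.06, 0.02, -0.01, -0.05, -0.08, -0.12])
err = np.array([0.05, 0.04, 0.05, 0.06, 0.08, 0.10])
rgc = np.array([8.0, 8.6, 9.5, 10.2, 11.0, 12.0])

# Remove the spatial variation using the radial gradient above:
# [Fe/H]_corr = [Fe/H] - 0.053 (R_sun - R).
feh_corr = feh - 0.053 * (R_SUN - rgc)

def line(x, a, b):
    return a * x + b

popt, pcov = curve_fit(line, age, feh_corr, sigma=err,
                       absolute_sigma=True)
print("d[Fe/H]/dt = %.3f +/- %.3f dex/Gyr"
      % (popt[0], np.sqrt(pcov[0, 0])))
\end{verbatim}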
We note that there are three very old but metal-rich open clusters in our sample (triangles in Figure~\ref{amr}), with ages of 8 Gyr or older. One possible explanation for the origin of these open clusters is infall or merger events within a timescale of 3-5 Gyr \citep{1998MNRAS.296.1045C}. For open clusters with ages > 8 Gyr, it has been suggested that they might be related to the formation of the triaxial bar structure \citep{1996A&A...310..771N} and have subsequently migrated to their current positions.
\section{Description of the catalog}
\label{cat}
We provide two catalogs\footnote{The catalogs can be downloaded via http://dr5.lamost.org/doc/vac. Electronic versions are also available alongside the article.} in this paper: one for the properties of 295 open clusters and the other for the spectroscopic parameters of 8811 member stars.
Table~\ref{cat_ocs} describes the catalog of open cluster properties. Columns 2-8 list the astrometric parameters of the open clusters provided by CG18, including the coordinates, mean proper motions, and distances, which are mainly based on the Gaia solution. Columns 9-16 list the radial velocity and metallicity measurements from LAMOST DR5. Columns 17-34 list the derived kinematic and orbital parameters of the open clusters. Columns 35-38 list the parameters from the literature isochrone fitting results, including age, distance and reddening.
Table~\ref{cat_mem} describes the spectroscopic catalog of the cluster members, including the LAMOST spectral information (columns 1-7), the stellar fundamental parameters derived from the LAMOST spectra (columns 8-17), the astrometric and photometric parameters in Gaia DR2 (columns 18-26) and the membership probability in CG18 (column 27).
\section{Summary}
We have used the cluster members identified by CG18 to cross-match with the LAMOST spectroscopic catalog. A total of 8811 member stars with spectral data are provided. Using the spectral information of the cluster members, we also provide the average radial velocities of 295 open clusters and the metallicities of 220 open clusters. Combining the accurate tangential velocities provided by Gaia DR2 with the radial velocities provided by LAMOST DR5, we further derived the 6D phase-space positions and orbital parameters of the 295 open clusters. The kinematic results show that most of the open clusters in our catalog are located in the thin disk and have approximately circular motions. In addition, referring to literature isochrone fitting results, we estimated the age, distance and reddening of our sample of open clusters.
As a value-added catalog of LAMOST DR5, the provided list of cluster members establishes a correlation between the LAMOST spectra and the overall cluster properties, especially the stellar age, reddening and distance modulus. Compared with the spectra of field stars, the LAMOST spectra of member stars are a valuable resource for detailed studies of stellar physics or for calibrating stellar fundamental parameters, since the cluster provides statistical information for these members with higher precision.
Furthermore, using the open clusters as tracers, we make use of their metallicities to study the radial metallicity gradient and the age-metallicity relation. The derived radial metallicity gradient for young clusters is -0.053$\pm$0.004 dex kpc$^{-1}$ within the radial range 7-15 kpc, which is consistent with previous works. After excluding 3 old but metal-rich open clusters, we derived an AMR slope of -0.022$\pm$0.008 dex Gyr$^{-1}$ for young clusters, following the tendency that younger clusters have higher metallicities as a consequence of the more enriched ISM from which they formed \citep{2009A&A...494...95M}. On the other hand, considering that the metallicity increase of the disk has been mild during the past 5 Gyr \citep{2003AJ....125.1397C}, which is indeed in agreement with our finding of only a small increase for the youngest clusters, the nature of the AMR of open clusters needs further investigation.
{\bf Acknowledgments}
We are very grateful to the referee for helpful suggestions, as well as for the correction of some language issues, which have improved the paper significantly.
This work is supported by the National Key R\&D Program of China No. 2019YFA0405501. The authors acknowledge the National Natural Science Foundation of China (NSFC) under grants U1731129 (PI: Zhong), 11373054 and 11661161016 (PI: Chen).
Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences.
This work has made use of data from the European Space Agency (ESA) mission Gaia (\url{https://www.cosmos.esa.int/gaia}), processed by the Gaia Data Processing and Analysis Consortium (DPAC,\url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement.
Though we said that the ``central dogma'' is an unproven assumption, there is a great deal of very non-trivial evidence from string theory.
String theory is a modification of Einstein gravity that leads to a well defined perturbative expansion and also some non-perturbative results. For this reason it is believed to define a full theory of quantum gravity.
One big piece of evidence was the computation of black hole entropy for special extremal black holes in supersymmetric string theories \cite{Strominger:1996sh}. In these cases one can reproduce the
Bekenstein-Hawking formula from an explicit count of microstates. These computations match not only the area formula, but all its corrections, see e.g. \cite{Dabholkar:2014ema}. Another piece of evidence comes from the AdS/CFT correspondence \cite{Maldacena:1997re,Witten:1998qj,Gubser:1998bc}, which is a conjectured relation between the physics of AdS and a dual theory living at its boundary. In this case, the black hole and its whole exterior can be represented in terms of degrees of freedom living at the boundary. There is also evidence from matrix models that compute scattering amplitudes in special vacua \cite{Banks:1996vh}. We will not discuss this further in this review, since we are aiming to explain features which rely purely on gravity as an effective field theory.
\section{Fine-grained vs coarse-grained entropy} \label{finecoarse}
There are two notions of entropy that we ordinarily use in physics and it is useful to make sure that we do not confuse them in this discussion.
The simplest to define is the von Neuman entropy. Given the density matrix, $\rho$, describing the quantum state of the system, we have
\begin{equation} \label{vnfine}
S_{vN} = - Tr[ \rho \log \rho ]
\end{equation}
It quantifies our ignorance about the precise quantum state of the system. It vanishes for a pure state, indicating complete knowledge of the quantum state. An important property is that it is invariant under unitary time evolution $\rho \to U \rho U^{-1}$.
The second notion of entropy is the coarse-grained entropy. Here we have some density matrix $\rho$ describing the system, but we do not measure all observables, we only measure a subset of simple, or coarse-grained observables $A_i$. Then the coarse-grained entropy is given by the following procedure. We consider all possible density matrices $\tilde \rho$ which give the same result as our system for the observables that we are tracking,
$Tr[ \tilde \rho A_i] =Tr[\rho A_i]$. Then we compute the von Neumann entropy $S(\tilde \rho)$. Finally we maximize this over all possible choices of $\tilde \rho$.
Though this definition looks complicated, a simple example is the ordinary entropy used in thermodynamics. In that case the $A_i$ are often chosen to be a few observables, say the approximate energy and the volume. The thermodynamic entropy is obtained by maximizing the von Neumann entropy among all states with that approximate energy and volume.
Coarse-grained entropy obeys the second law of thermodynamics. Namely, it tends to increase under unitary time evolution.
Let us make some comments.
\begin{itemize}
\item The von Neumann entropy is sometimes called the ``fine-grained entropy'', contrasting it with the coarse-grained entropy defined above. Another common name is ``quantum entropy."
\item
Note that the generalized entropy defined in \nref{sgen} increases rapidly when the black hole first forms and the horizon grows from zero area to a larger area. Therefore if it has to be one of these two entropies, it can only be the thermodynamic entropy. In other words, the entropy \eqref{sgen} defined as the area of the horizon plus the entropy outside is the coarse-grained entropy of the black hole.
\item
Note that if we have a quantum system composed of two parts $A$ and $B$, the full Hilbert space is $H= H_A \times H_B$. Then we can define the von Neumann entropy for the subsystem $A$. This is computed by first forming a density matrix $\rho_A$ obtained by taking a partial trace over the system $B$. The entropy of $\rho_A$ can be non-zero, even if the full system is in a pure state. This arises when the original pure state contains some entanglement between the subsystems $A$ and $B$.
In this case $S(A)=S(B)$ and $S(A\cup B) =0$.
\item
The fine-grained entropy cannot be bigger than the coarse-grained entropy, $S_{vN} \leq S_{coarse}$.
This is a simple consequence of the definitions, since we can always consider $\rho$ as a candidate $\tilde \rho$. Another way to say this is that because $S_{coarse}$ provides a measure of the total number of degrees of freedom available to the system, it sets an upper bound on how much the system can be entangled with something else.
\end{itemize}
It is useful to define the fine-grained entropy of the quantum fields in a region of space. Let $\Sigma$ be a spatial region, defined on some fixed time slice. This region has an associated density matrix $\rho_{\Sigma}$, and the fine-grained entropy of the region is denoted
\begin{equation}
S_{vN}(\Sigma) \equiv S_{vN}(\rho_\Sigma) \ .
\end{equation}
If $\Sigma$ is not a full Cauchy slice, then we will have some divergences at its boundaries. These divergences are not important for our story, they have simple properties and we can deal with them appropriately. Also, when $\Sigma$ is a portion of the full slice, $S_{vN}(\Sigma)$ is generally time-dependent. It can increase or decrease with time as we move the slice forwards in time. The slice $\Sigma$ defines an associated causal diamond, which is the region that we can determine if we know initial data in $\Sigma$, but not outside $\Sigma$. The entropy is the same for any other slice $\tilde \Sigma$ which has the same causal diamond as $\Sigma$, see figure \ref{Diamond}.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=.3]{figures/Diamond.png}
\caption{ Given a region $\Sigma$ of a spatial slice, shown in red, we can define its causal diamond to be all points where the evolution is uniquely determined by initial conditions on $\Sigma$. The alternative slice $\tilde \Sigma$ defines the same causal diamond. The von Neumann entropies are also the same. }
\label{Diamond}
\end{center}
\end{figure}
\subsection{Semiclassical entropy}\label{semiclassical}
We now consider a gravity theory which we are treating in the semiclassical approximation. Namely, we have a classical geometry and quantum fields defined on that classical geometry.
Associated to a spatial subregion we can define its
``semiclassical entropy," denoted by
\begin{equation}
S_{\rm semi \text{-} cl}(\Sigma) \ .
\end{equation}
$S_{\rm semi \text{-} cl}$ is the von Neumann entropy of quantum fields (including gravitons) as they appear on the semiclassical geometry. In other words, this is the fine-grained entropy of the density matrix calculated by the standard methods of quantum field theory in curved spacetime. In the literature, this is often simply called the von Neumann entropy (it is also called $S_{\rm matter}$ or $S_{\rm outside}$ in the black hole context).
\section{The Hawking information paradox }
The Hawking information paradox is an argument against the ``central dogma'' enunciated above \cite{Hawking:1976ra}. It is only a paradox if we think that the central dogma is true.
Otherwise, perhaps it can be viewed as a feature of quantum gravity.
The basic point rests on an understanding of the origin of Hawking radiation. We can first start with the following question. Imagine that we make a black hole from the collapse of a pure state, such as a large amplitude gravity wave \cite{Christodoulou:2008nj}. This black hole emits thermal radiation. Why do we have these thermal aspects if we started with a pure state? The thermal aspects of Hawking radiation arise because we are essentially splitting the original vacuum state into two parts, the part that ends up in the black hole interior and the part that ends up in the exterior. The vacuum in quantum field theory is an entangled state. As a whole state it is pure, but the degrees of freedom are entangled at short distances. This implies that if we only consider half of the space, for example half of flat space, we will get a mixed state on that half. This is a very basic consequence of unitarity and relativistic invariance \cite{Bisognano:1976za}.
Often this is explained qualitatively as follows. The vacuum contains pairs of particles that are constantly being created and annihilated. In the presence of a horizon, one of the members of the pair can go to infinity and the other member is trapped in the black hole interior. We will call them the ``outgoing Hawking quantum" and the ``interior Hawking quantum." These two particles are entangled with each other, forming a pure state. However if we consider only one member, say the outgoing Hawking quantum, we fill find it in a mixed state, looking like a thermal state at the Hawking temperature \nref{thawking}. See figure \ref{fig:evap-stages}b and figure \ref{fig:evap-penrose}.
This process on its own does not obviously violate the central dogma. In fact, if we had a very complex quantum system which starts in a pure state, it will appear to thermalize and will emit radiation that is very close to thermal. In particular, in the early stages, if we computed the von Neumann entropy of the emitted radiation it would be almost exactly thermal because the radiation is entangled with the quantum system. So it is reasonable to expect that during the initial stages of the evaporation, the entropy of radiation rises. However, as the black hole evaporates more and more, its area will shrink and we run into trouble when the entropy of radiation is bigger than the thermodynamic entropy of the black hole. The reason is that now it is not possible for the entropy of radiation to be entangled with the quantum system describing the black hole because the number of degrees of freedom of the black hole is given by its thermodynamic entropy, the area of the horizon.
In other words, if the black hole degrees of freedom together with the radiation are producing a pure state, then the fine-grained entropy of the black hole should be equal to that of the radiation $S_{\rm black ~hole} = S_{\rm rad}$. But this fine-grained entropy of the black hole should be less than the Bekenstein-Hawking or thermodynamic entropy of the black hole,
$S_{\rm black~hole} \leq S_{\rm Bekenstein-Hawking}=S_{\rm coarse-grained}$.
If the central hypothesis were true, we would expect that the entropy of radiation would need to start decreasing at this point. In particular, it can never be bigger than the Bekenstein Hawking entropy of the old black hole. Notice that we are talking about the von Neumann or fine-grained entropy of the radiation.
Then,
as suggested by D. Page \cite{Page:1993wv,Page:2013dx}, the entropy of the radiation would need to follow the curve indicated in figure \ref{HawkingPageCurves}, as opposed to the Hawking curve.
The time at which $S_{\rm Bekestein-Hawking} = S_{\rm rad}$ is called the Page time.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=.5]{figures/BHPage.png}
\caption{ Schematic behavior of the entropy of the the outgoing radiation. The precise shape of the lines depends on the black hole and the details of the matter fields being radiated. In green we see Hawking's result, the entropy monotonically increases until $t_{\rm End}$, when the black hole completely evaporates. In orange we see the thermodynamic entropy of the black hole. If the process is unitary, we expect the entropy of radiation to be smaller than the thermodynamic entropy. If it saturates this maximum, then it should follow the so called ``Page'' curve, denoted in purple. This changes relative to the Hawking answer at the Page time, $t_{\rm Page}$, when the entropy of Hawking radiation is equal to the thermodynamic entropy of the black hole. }
\label{HawkingPageCurves}
\end{center}
\end{figure}
Now let us finish this discussion with a few comments.
\begin{itemize}
\item
Note that, as the black hole evaporates, its mass decreases. This is sometimes called the ``backreaction'' of Hawking radiation. This effect is included in the discussion. And it does not ``solve'' the problem.
\item
When the black hole reaches the final stages of evaporation, its size becomes comparable to the Planck length and we can no longer trust the semiclassical gravity description. This is not relevant since the conflict with the central dogma appeared at the Page time, when the black hole was still very big.
\item The argument is very robust since it relies only on basic properties of the fine-grained entropy. In particular, it is impossible to fix the problem by adding small corrections to the Hawking process by slightly modifying the Hamiltonian or state of the quantum fields near the horizon \cite{Mathur:2009hf,Almheiri:2012rt,Almheiri:2013hfa}. In other words, the paradox holds to all orders in perturbation theory, and so if there is a resolution it should be non-perturbative in the gravitational coupling $G_N$.
\item
We could formulate the paradox by constantly feeding the black hole with a pure quantum state so that we exactly compensate the energy lost by Hawking radiation. Then the mass of the black hole is constant. Then the paradox would arise when this process goes on for a sufficiently long time that the entropy of radiation becomes larger than the entropy of the black hole.
\item
One could say that the gravity computation only gives us an approximate description and we should not expect that a sensitive quantity like the von Neumann entropy should be exactly given by the semiclassical theory. In fact, this is what was said until recently. We will see however, that there \emph{is} a way to compute the von Neuman entropy using just this semiclassical description.
\end{itemize}
We have described here one aspect of the Hawking information paradox, which is the aspect that we will see how to resolve. We will comment about other aspects in the discussion.
\section{Entropy of an evaporating black hole}
In this section, we will see how to apply the fine-grained entropy formula \eqref{RT} to all stages of the evaporating black hole.
Let us first compute the entropy after the black hole forms but before any Hawking radiation has a chance to escape the black hole region.
In this case, there are no extremal surfaces encountered by deforming $X$ inwards, and we are forced to shrink it all the way down to zero size. See figure \ref{CollapseRT}. The area term vanishes, so the fine-grained entropy is just the entropy of the matter enclosed by the cutoff surface.
Note that this calculation is sensitive to the geometry in the interior of the black hole. This means that the entropy at the initial stage will vanish, assuming that the collapsing shell was in a pure state.\footnote{ We are neglecting the contribution from the entanglement of the fields near the cutoff surface. We are taking this contribution to be time independent, and we implicitly subtract it in our discussion.} If we ignore the effects of Hawking radiation, this fine-grained entropy is invariant under time evolution. This is in contrast with the area of the horizon, which starts out being zero at $r=0$ and then grows to $4\pi r_s^2$ after the black hole forms.
Once the black hole starts evaporating and the outgoing Hawking quanta escape the black hole region, the von Neumann entropy of this region will no longer be zero due to the entanglement between the interior Hawking quanta and those that escaped. As shown in figure \ref{nicesliceentropy}, this entropy continues to grow as the black hole evaporates due to the pile up of the mixed interior modes. This growth of entropy precisely parallels that of the outgoing Hawking radiation, and seems to support the idea that the black hole can have an entropy arbitrarily larger than its Area$/4G_N$, inconsistent with the central dogma.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.39]{figures/nicesliceentropy.pdf}
\ \ \ \ \ \ \includegraphics[scale=0.56]{figures/trivialcurve.pdf}
\caption{As more outgoing Hawking quanta escape the black hole region, its entropy grows due to the pile up of interior Hawking quanta. Modes of like colors are entangled with one another. On the right is a plot comparing this growing entropy to the decreasing thermodynamic entropy of the black hole. }
\label{nicesliceentropy}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.39]{figures/nontrivialsurface.pdf}
\ \ \ \ \ \ \includegraphics[scale=0.56]{figures/nontrivialcurve.pdf}
\caption{When the non-vanishing extremal surface first appears, it lies inside the black hole near the event horizon. For different times on the cutoff surface, it is a different surface which moves along a spacelike direction up the horizon. This gives a decreasing generalized entropy since the black hole area is shrinking.}
\label{nontrivialsurface}
\end{center}
\end{figure}
The story is not yet complete since there is also a non-vanishing extremal surface that appears shortly after the Hawking radiation starts escaping the black hole region. The precise location of this surface depends on how much radiation has escaped, and hence on the time $t$ along the cutoff surface when we decide to compute the entropy.
It turns out that the surface lies close to the event horizon.
Its location along the horizon is determined as follows. We go back along the cutoff surface by a time of order $r_s \log S_{BH}$ and we shoot an ingoing light ray. Then the surface is located close to the point where this light ray intersects the horizon. Note that $r_s$, and also $r_s \log S_{BH}$, are times which are short compared to the evaporation time, $r_s S_{BH}$. The time scale $r_s \log S_{BH}$ is called ``the scrambling time'' and it has an interesting significance that we will not discuss in this review, see e.g. \cite{Hayden:2007cs,Sekino:2008he}.
This is shown in figure \ref{nontrivialsurface}. The generalized entropy now has an area term as well as the von Neumann entropy of quantum fields, $S_{\rm semi \text{-} cl}$.
This quantum field theory entropy is relatively small because it does not capture many Hawking quanta and thus the entropy is dominated by the area term
\begin{align}
S_\mathrm{gen} \approx {\mathrm{Horizon \ Area}(t) \over 4 G_N} \,.
\end{align}
This generalized entropy follows closely the evolution of the thermodynamic entropy of the black hole. Since the area of the black hole decreases as it evaporates, this extremal surface gives a decreasing generalized entropy.
The complete proof for the existence of this surface would be to show that the change of area of $X$ under a small deformation in any direction perfectly balances the change in the von Neumann entropy $S_{\rm semi \text{-} cl}$. We give some intuition for this procedure by extremizing along the ingoing null direction. The key point is that, while the area of $X$ is monotonically decreasing along this direction, the entropy $S_{\rm semi \text{-} cl}$ is not. To see this, imagine starting with $X$ right on the horizon and analyze the entanglement pattern across the surface as it is moved inwards. As the surface is moved inwards, the entropy $S_{\rm semi \text{-} cl}$ decreases as the newly included interior Hawking modes purify the outgoing quanta already included in the region `outside.' Once all of those outgoing quanta have been purified, moving the surface inward further would start including modes entangled with outgoing quanta outside the black hole region thereby increasing $S_{\rm semi \text{-} cl}$. It is in this regime that changes in the area and entropy can exactly balance each other out. For the precise equations see \cite{Almheiri:2019psf,Penington:2019npb}.
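Schematically, the balance described in this paragraph is the statement that $X$ extremizes the generalized entropy; in our shorthand for the precise equations of the references just cited,
\begin{equation}
\partial_\lambda S_{\rm gen} = { \partial_\lambda \, {\rm Area}(X) \over 4 G_N} + \partial_\lambda S_{\rm semi \text{-} cl} = 0 \, ,
\end{equation}
for deformations of $X$ along any direction $\lambda$. Along the ingoing null direction the two terms have opposite signs and can cancel, which is the extremum described above.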
\begin{figure}[t]
\begin{center}
\includegraphics[scale=.5]{figures/simpleextremization}
\caption{$S_\mathrm{semi\text{-} cl}$ begins to increase when going inwards along the ingoing null coordinate once all the outgoing Hawking quanta in the black hole region are purified by the newly included interior modes. This allows for an extremum along this direction since the area of the surface shrinks. }
\label{simpleextremization}
\end{center}
\end{figure}
Full application of the entropy formula \eqref{RT} requires taking the minimum of the generalized entropy over all available extremal surfaces. We found two such surfaces: a growing contribution from the vanishing surface and a decreasing one from the non-vanishing surface just behind the event horizon. At very early times, only the vanishing surface exists, giving a contribution which starts at zero and grows monotonically until the black hole evaporates away. Some short time after the black hole forms, the non-vanishing surface is created and it starts with a large value given by the current area of the black hole, and decreases as the black hole shrinks. Therefore, the vanishing surface initially captures the true fine-grained entropy of the black hole, up until the non-vanishing surface contribution becomes smaller and starts to represent the true fine-grained entropy. In this way, by transitioning between these two contributions, the entropy of the black hole closely follows the Page curve indicative of unitary black hole evaporation, see figure \ref{both}.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.39]{figures/surfacetransition.pdf}
\ \ \ \ \ \ \includegraphics[scale=0.56]{figures/bothcurves.pdf}
\caption{The Page curve for the fine-grained entropy of the black hole (shown in black) is captured by the transition between a growing contribution from the trivial surface and a decreasing contribution from a non-trivial surface near the black hole horizon.}
\label{both}
\end{center}
\end{figure}
\section{Entropy of radiation}
We have seen how the black hole fine-grained entropy, as computed via the gravitational formula \eqref{RT}, accurately follows the Page curve. This does not directly address the information paradox since that concerns the entropy growth of the Hawking radiation.
In fact, the semi-classical black hole evolution leads to a growing value for the entropy outside the cutoff surface, the region containing the radiation \cite{Hawking:1976ra},
\begin{equation}
S_{\rm semi \text{-} cl}(\Sigma_{\rm Rad})\,. \label{SemiRad}
\end{equation}
This radiation lives in a spacetime region where the gravitational effects can be made very small. In other words, we can approximate this region as a rigid space. Alternatively, we can imagine that we have collected the radiation into a big quantum computer.
However, because gravity was used to produce this state, it turns out that we should apply the gravitational fine-grained entropy formula to compute its entropy.
In our first presentation of the formula \nref{RT}, we were imagining that we had a black hole inside the region. Now we are trying to apply this formula to the region outside the cutoff surface, which contains no black hole. Nevertheless, the formula \nref{RT} can also be applied when there is no black hole present. The spirit of the formula is that the region in which we compute the entropy can be changed in size, by moving the surface $X$, so as to minimize the entropy. So far we have considered cases where $\Sigma_X$ was connected. However, it also seems natural to consider the possibility that $\Sigma_X$ is disconnected. When would this be required? By making $\Sigma_X$ disconnected, we increase the area of the boundary. So this can only be required if we can decrease the semiclassical entropy contribution at the same time. This could happen if there are faraway regions containing entangled matter.
In fact, this is precisely what happens with Hawking radiation. The radiation is entangled with the fields living in the black hole interior. Therefore, we can decrease the semiclassical entropy contribution by also including the black hole interior. In doing so, we will have to add an area term. At late times, the net effect is to decrease the generalized entropy, so we are required to include this disconnected region inside the black hole, which is sometimes called an ``island." The final region we are considering looks as in figure \ref{islandprocedure}.
More precisely, the full fine-grained entropy of the radiation, computed using the fine-grained gravitational entropy formula, is given by
\begin{align}
S_\mathrm{Rad} = \mathrm{min}_X \Bigg\{ \mathrm{ext}_X\left[ {\mathrm{Area}(X) \over 4 G_N} + S_{\rm semi \text{-} cl} [\Sigma_{\rm Rad} \, \cup \, \Sigma_{\rm Island}] \right] \Bigg\}, \label{island}
\end{align}
where the area here refers to the area of the boundary of the island, and the min/ext is with respect to the location and shape of the island \cite{Almheiri:2019hni,Penington:2019kki,Almheiri:2019qdq}.
The left hand side is the full entropy of the radiation.
And $S_{\rm semi \text{-} cl} [\Sigma_{\rm Rad} \, \cup \, \Sigma_{\rm Island}] $ is the von Neumann entropy of the quantum state of the combined radiation and island systems {\it in the semiclassical description}.
Note that the subscript `Rad' appears on both sides of \nref{island}, a fact that has caused much confusion and heated complaints. The left hand side is computed using the gravitational fine-grained entropy formula and is supposed to be the entropy of the full exact quantum state of the radiation. On the right hand side we have the state of the radiation {\it in the semiclassical description}, which is a different state from the full exact one. Note that in order to apply the formula we do not need to know the exact state of the radiation. The formula does not claim to give us that state in explicit form; it only computes its entropy.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.4]{figures/islandprocedure.pdf}
\caption{The fine-grained entropy of the region Rad containing the Hawking radiation can get contributions from regions inside the black hole, called islands. The total entropy is the area of $X$ plus a contribution from the semiclassical entropy of the union of $\Sigma_{\rm Rad}$ and $\Sigma_{\rm Island}$.}
\label{islandprocedure}
\end{center}
\end{figure}
The ``island formula'' \nref{island} is a generalization of the black hole gravitational fine-grained entropy formula \nref{RT} and really follows from the same principles. Some authors consider it simply a part of \nref{RT}. We decided to give it a special name and to discuss it separately because we motivated \nref{RT} as a generalization of black hole entropy, whereas if we look just at the radiation, there does not seem to be any black hole. The point is that because we prepared the state using gravity, this is the correct formula to use. We will later give a sketch of the derivation of the formula. It is likely that in future treatments of the subject, both will be discussed together.
The procedure for applying this formula is as follows. We want to compute the entropy of all of the Hawking radiation that has escaped the black hole region. This is captured by computing the entropy of the entire region from the cutoff all the way to infinity. This region is labeled by $\Sigma_{\rm Rad}$ in the formula, see figure \ref{islandprocedure}. The islands refer to any number of regions contained in the black hole side of the cutoff surface. The figure shows the case of a single island centered around the origin. In principle we can have any number of islands, including zero. We then extremize the right hand side of
\nref{island} with respect to the position of the surface $X$. Finally we minimize over all possible extremal positions and choices of islands.
The simplest possibility is to have no island.
This vanishing island contribution gives simply \nref{SemiRad}. As more and more outgoing Hawking quanta escape the black hole region, the entropy continues to grow. See figure \ref{radnoisland}. This contribution always extremizes the generalized entropy but it will not always be the global minimum of the entropy.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.4]{figures/radnoisland.pdf}
\ \ \ \ \ \ \includegraphics[scale=0.6]{figures/radnoislandcurve.pdf}
\caption{The no-island contribution to the island formula gives a growing entropy due to the escaping outgoing Hawking quanta. }
\label{radnoisland}
\end{center}
\end{figure}
A non-vanishing island that extremizes the generalized entropy appears some time after the black hole forms. A time of order $r_s \log S_{BH}$ is enough. This island is centered around the origin and its boundary is very near the black hole event horizon; the boundary moves up along the horizon as the time on the cutoff surface advances. This is shown in figure \ref{radwithisland}. The generalized entropy with this island includes an area term given by the area of the black hole horizon. The von Neumann entropy term involves the entropy of the union of the outgoing radiation and the island, and is therefore small for all times, since the island contains most or all of the interior Hawking modes that purify the outgoing radiation. This contribution to the island formula starts at a large value, given by the area of the horizon at early times, and decreases down to zero.
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.4]{figures/radwithisland.pdf}
\ \ \ \ \ \ \includegraphics[scale=0.6]{figures/radwithislandcurve.pdf}
\caption{The island contribution appears some time after the black hole forms. It gives a decreasing contribution that tracks the thermodynamic entropy of the black hole. }
\label{radwithisland}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.4]{figures/islandtransition.pdf}
\ \ \ \ \ \ \includegraphics[scale=0.6]{figures/islandtransitioncurve.pdf}
\caption{ We now consider both contributions and at each time we pick the minimal one, which gives the final answer for the full entropy of radiation. This gives the Page curve, shown in black on the right figure. }
\label{radboth}
\end{center}
\end{figure}
The fine-grained entropy of the Hawking radiation is then the minimum of these two contributions. This gives the Page curve; the rising piece comes from the no-island contribution and the novel decreasing piece from the island contribution.
If we formed the black hole from an initially pure state, then we expect that the entropy of the black hole and the entropy of the radiation region should be equal.
Indeed, the fine-grained entropy formula for the black hole and the one for the radiation give the same answer. The reason is the following. In both cases, the same surface $X$ is involved. In addition, when the matter state is pure on the whole Cauchy slice, we have that
$S_{\rm semi \text{-} cl}(\Sigma_X) = S_{\rm semi \text{-} cl}(\Sigma_\textup{Rad} \cup \Sigma_\textup{Island})$. Then we get the same answer because we are minimizing/extremizing the same function.
In conclusion, the black hole and the radiation entropy are described by the same curve, see figure \ref{HawkingPageCurves}.
Now a skeptic would say: ``Ah, all you have done is to include the interior region. As I have always been saying, if you include the interior region you get a pure state," or ``This is only an accounting trick." But we did not include the interior ``by hand." The fine-grained entropy formula is derived from the gravitational path integral through a method conceptually similar to the derivation of the black hole entropy by Gibbons and Hawking, discussed in section \ref{ss:gibbonshawking}. It is gravity itself, therefore, that instructs us to include the interior in this calculation.
It is gravity's way of telling us that black hole evaporation is unitary without giving us the details of the actual state of the outgoing radiation.
An analogy from the real world is the following. Imagine that there is a man who owns a house with many expensive paintings inside. Then he starts borrowing money from the townspeople giving the paintings as a guarantee. He spends his money throwing expensive parties and people who visit his house think he is very rich. However, he eventually borrows so much that most of the house and its contents belong to the townspeople. So, when the townspeople compute their wealth, they include the paintings in this man's house. But the man cannot include them in his computation of his wealth.
In this analogy, the house is the interior of the black hole and the wealth is the quantum information. The townspeople correspond to the region far from the black hole containing the Hawking radiation. The casual observer who thinks that the townspeople are poor because they don't have paintings in their homes would be wrong. In the same way, someone who looks at the Hawking radiation and says that it is mixed would be wrong, because the interior should also be included.
\section{Introduction}
\input{section_introduction.tex}
\input{section_bhreview.tex}
\section{The black hole as an ordinary quantum system} \label{central}
\input{CentralDogma}
\section{A formula for fine-grained entropy in gravitational systems}\label{finegrain}
\input{section_finegrained.tex}
\input{Entropyofevaporatingblackhole.tex}
\input{Entropyofradiation.tex}
\input{section_wedge.tex}
\input{section_wormholes.tex}
\input{section_discussion.tex}
\vspace{1cm}
\textbf{Acknowledgments} We are grateful to R. Bousso, A. Levine, A. Lewkowycz, R. Mahajan, S. Shenker, D. Stanford, A. Strominger, L. Susskind, A. Wall and
Z. Yang
for helpful discussions on these topics. We also thank G. Musser for comments on a draft of this review.
A.A. is supported by funds from the Ministry of Presidential Affairs, UAE. The work of ES is supported by the Simons Foundation as part of the Simons Collaboration on the Nonperturbative Bootstrap. The work of TH and AT is supported by DOE grant DE-SC0020397.
J.M. is supported in part by U.S. Department of Energy grant DE-SC0009988 and by the Simons Foundation grant 385600.
\section{Comments on the AMPS paradox }
In \cite{Almheiri:2012rt} a problem or paradox was found, and a proposal was made for its resolution. Briefly stated, the paradox was an apparently impossible quantum state arising after the Page time, in which a newly outgoing Hawking quantum would need to be maximally entangled with two seemingly separate systems at once: its interior partner and the early Hawking radiation. The proposed resolution was to declare the former entanglement broken, forming a ``firewall'' at the horizon.
A related problem was discussed in \cite{Marolf:2012xe}.
The paradox involved the central dogma plus one extra implicit assumption.
The extra assumption is that the black hole interior can also be described by the {\it same}
degrees of freedom that describe the black hole from the outside, the degrees of freedom that appear in the central dogma.
We have not made this assumption in this review.
According to this review, the paradox is resolved by dropping the assumption that the interior is also described by the same degrees of freedom that describe it as viewed from outside. Instead, we assume that only a portion of the interior is described by the black hole degrees of freedom appearing in the central dogma $-$ only the portion in the entanglement wedge, see figure \ref{EWfig}(b). This leaves the interior as part of the radiation, and the resolution of the apparently impossible quantum state is that the interior partner is identified with part of the early radiation that the new Hawking quantum is entangled with.
This is different from the resolution proposed in AMPS. With this resolution, the horizon is smooth.
\section{Glossary}
{\bf Causal diamond}: The spacetime region that can be determined by evolution (into the future or the past) of initial data on any spatial region. See figure \ref{Diamond}. \\
\\
{\bf Central dogma}: A black hole -- as viewed from the outside -- is simply a quantum system with a number of degrees of freedom equal to Area$/4G_N$. Being a quantum system, it evolves unitarily under time evolution. See section \ref{central}. \\
\\
{\bf Fine-grained entropy}: Also called the von Neumann entropy or quantum entropy. Given a density matrix $\rho$, the fine-grained entropy is given as $S = -{\rm Tr}[\rho \log \rho]$. See section \ref{finecoarse}.\\
\\
{\bf Coarse-grained entropy}: Given a density matrix $\rho$ for our system, we measure a subset of simple observables $A_i$ and consider all $\tilde{\rho}$ consistent with the outcome of our measurements, ${\rm Tr}[\tilde{\rho} A_i] = {\rm Tr}[\rho A_i]$. We then maximize the entropy $S(\tilde{\rho}) = -{\rm Tr}[\tilde{\rho} \log \tilde{\rho}]$ over all possible choices of $\tilde{\rho}$. See section \ref{finecoarse}. \\
\\
{\bf Semiclassical entropy}: The fine-grained entropy of matter and gravitons on a fixed background geometry. See section \ref{semiclassical}. \\
\\
{\bf Generalized entropy}: The sum of an area term and the semi-classical entropy. See \eqref{sgendef}. When evaluated at an event horizon soon after it forms, for example in \eqref{sgen}, the generalized entropy is coarse grained. When evaluated at the extremum, as in \eqref{RT} or \eqref{island}, the generalized entropy is fine grained. \\
\\
{\bf Gravitational fine-grained entropy}: Entropy given by the formulas \nref{RT} and \nref{island}. They give the fine-grained entropy through a formula that involves a geometric part, the area term, and the semiclassical entropy of the quantum fields. \\
\\
{\bf Page curve}: Consider a spacetime with a black hole formed by the collapse of a pure state. Surround the black hole by an imaginary sphere whose radius is a few Schwarzschild radii. The Page curve is a plot of the fine-grained entropy outside of this imaginary sphere, where we subtract the contribution of the vacuum. Since the black hole Hawking radiates and the Hawking quanta enter this faraway region, this computes the fine-grained entropy of Hawking radiation as a function of time. Notice that the regions inside and outside the imaginary sphere are open systems. The curve begins at zero when no Hawking quanta have entered the exterior region, and ends at zero when the black hole has completely evaporated and all of the Hawking quanta are in the exterior region. The ``Page time" corresponds to the turnover point of the curve. See figure \ref{HawkingPageCurves}.\\
\\
{\bf Quantum extremal surface}: The surface $X$ that results from extremizing (and if necessary minimizing) the generalized entropy as in \eqref{RT}. This same surface appears as a boundary of the island region in \eqref{island}. \\
\\
{\bf Island}: Any disconnected codimension-one region found by the extremization procedure \eqref{island}. Its boundary is the quantum extremal surface. The causal diamond of an island region is a part of the entanglement wedge of the radiation.\\
\\
{\bf Entanglement wedge}: For a given system (in our case either the radiation or the black hole), the entanglement wedge is a region of the semiclassical spacetime that is described by the system. It is defined at a moment in time and has nontrivial time dependence. Notice that language is not a good guide: the transition in the Page curve from increasing entropy to decreasing entropy corresponds to when most of the interior of the black hole becomes described by the radiation, i.e. the entanglement wedge of the black hole degrees of freedom does not include most of the black hole interior. See section \ref{wedge} and figure \ref{EWfig}. \\
\\
{\bf Replica trick}: A mathematical technique used to compute $-{\rm Tr}[\rho \log \rho]$ in a situation where we do not have direct access to the matrix $\rho_{ij}$. See section \ref{replicas}.\\
\\
\section{Preliminaries}
\subsection{Black hole thermodynamics}
When an object is dropped into a black hole, the black hole responds dynamically. The event horizon ripples briefly, and then quickly settles down to a new equilibrium at a larger radius. It was noticed in the 1970s that the resulting small changes in the black hole geometry are constrained by equations closely parallel to the laws of thermodynamics \cite{Christodoulou:1970wf,Christodoulou:1972kt,Hawking:1971tu,Bekenstein:1972tm,Bekenstein:1973ur,carter1972rigidity,Bardeen:1973gs,Hawking:1974rv,Hawking:1974sw}. The equation governing the response of a rotating black hole is \cite{Bardeen:1973gs}
\begin{equation}
\frac{\kappa}{8\pi G_N} d\left( \mbox{Area} \right) = dM - \Omega dJ \ ,
\end{equation}
where $\kappa$ is its surface gravity\footnote{Unfortunately, the name ``surface gravity'' is a bit misleading since the proper acceleration of an observer hovering at the horizon is infinite. $\kappa$ is related to the force on a massless (unphysical) string at infinity, see e.g. \cite{Wald:1984rg}.}, $M$ is its mass, $J$ is its angular momentum, and $\Omega$ is the rotational velocity of the horizon. The area refers to the area of the event horizon, and $G_N$ is Newton's constant.
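As a quick consistency check of this equation (our arithmetic, using the standard Schwarzschild surface gravity $\kappa = 1/(2 r_s) = 1/(4 G_N M)$ for the non-rotating case $J=0$): the horizon area is $16\pi G_N^2 M^2$, so
\begin{equation}
\frac{\kappa}{8\pi G_N} \, d\left( \mbox{Area} \right) = \frac{1}{32 \pi G_N^2 M} \left( 32 \pi G_N^2 M \, dM \right) = dM \,.
\end{equation}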
If we postulate that the black hole has temperature $T \propto \kappa$, and entropy $S_{\rm BH} \propto \mbox{Area}$, then this looks identical to the first law of thermodynamics in the form
\begin{equation}\label{firstlaw}
TdS_{\rm BH} = dM - \Omega dJ \ .
\end{equation}
In addition, the area of the horizon always increases in the classical theory \cite{Hawking:1971tu}, suggesting a connection to the second law of thermodynamics.
This is just a rewriting of the Einstein equations in suggestive notation, and initially, there was little reason to believe that it had anything to do with `real' thermodynamics. In classical general relativity, black holes have neither a temperature nor any significant entropy. This changed with Hawking's discovery that, when general relativity is coupled to quantum field theory, black holes have a temperature \cite{Hawking:1974sw}
\begin{equation}
T = \frac{\hbar \kappa}{2\pi } \ .
\end{equation}
(We set $c=k_B = 1$.) This formula for the temperature fixes the proportionality constant in $S_{\rm BH} \propto \mbox{Area}$. The total entropy of a black hole and its environment also has a contribution from the quantum fields outside the horizon. This suggests that the total or `generalized' entropy of a black hole is \cite{Bekenstein:1973ur}
\begin{equation}\label{sgen}
S_{\rm gen} = \frac{\mbox{Area of horizon}}{4 \hbar G_N} + S_{\rm outside} \ , \end{equation}
where $S_{\rm outside}$ denotes the entropy of matter as well as gravitons outside the black hole, as it appears in the semiclassical description. It also includes a vacuum contribution from the quantum fields \cite{Bombelli:1986rw}.\footnote{ The quantum contribution by itself has an ultraviolet divergence from the short distance entanglement of quantum fields across the horizon. This piece is proportional to the area, $A/\epsilon_{uv}^2$. However, matter loops also lead to an infinite renormalization of Newton's constant, $1/(4 G_N) \to { 1 \over 4 G_N} - { 1 \over \epsilon^2_{uv}}$. Then these two effects cancel each other so that $S_{\rm gen}$ is finite. As usual in effective theories, these formally ``infinite'' quantities are actually subleading when we remember that we should take a small cutoff but not too small, $l_p \ll \epsilon_{uv} \ll r_s$. }
The generalized entropy, including this quantum term, is also found to obey the second law of thermodynamics \cite{Wall:2011hj},
\begin{equation}
\Delta S_{\rm gen} \geq 0 \ ,
\end{equation}
giving further evidence that it is really an entropy. This result is stronger than the classical area theorem because it also covers phenomena like Hawking radiation, when the area decreases but the generalized entropy increases due to the entropy of Hawking radiation.
The area is measured in Planck units, $l_p^2 = \hbar G_N$, so if this entropy has an origin in statistical mechanics then a black hole must have an enormous number of degrees of freedom. For example, the black hole at the center of the Milky Way, Sagittarius A*, has
\begin{equation}
S \approx {10^{90}} \ .
\end{equation}
Even for a black hole the size of a proton, $S \approx 10^{40}$.
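As a rough check of the first of these numbers (our estimate, taking $M \approx 4\times 10^6 M_\odot$ for Sagittarius A* and restoring $c$):
\begin{equation}
r_s = \frac{2 G_N M}{c^2} \approx 1.2 \times 10^{10}\,{\rm m} \,, \qquad S \approx \frac{4\pi r_s^2}{4\, l_p^2} \approx \frac{1.8\times 10^{21}\,{\rm m}^2}{1.0 \times 10^{-69}\,{\rm m}^2} \sim 10^{90} \,.
\end{equation}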
In classical general relativity, according to the no-hair theorem, there is just one black hole with mass $M$ and angular momentum $J$, so the statistical entropy of a black hole is naively zero.
Including quantum fields helps, but has not led to a successful accounting of the entropy.
Finding explicitly the states giving rise to the entropy is an interesting problem, which we will not discuss in this review.
\subsection{Hawking radiation}
The metric of a Schwarzschild black hole is
\begin{equation}\label{schw}
ds^2 = -\left( 1 - \frac{r_s}{r} \right) dt^2 + \frac{dr^2}{1 - \frac{r_s}{r}} + r^2 d\Omega_2^2 \ .
\end{equation}
The Schwarzschild radius $r_s = 2G_N M$ sets the size of the black hole. We will ignore the angular directions $d\Omega_2^2$ which do not play much of a role. To zoom in on the event horizon, we change coordinates, $r \to r_s(1 + \frac{\rho^2}{4r_s^2})$, $t \to 2 r_s \tau$, and expand for $\rho \ll r_s$. This gives the near-horizon metric
\begin{equation} \label{RescMe}
ds^2 \approx -\rho^2 d\tau^2 + d\rho^2 \ .
\end{equation}
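As a quick check of this limit: with $r = r_s(1 + \frac{\rho^2}{4 r_s^2})$ one has, to leading order in $\rho/r_s$,
\begin{equation}
1 - \frac{r_s}{r} \approx \frac{\rho^2}{4 r_s^2} \,, \qquad dr = \frac{\rho}{2 r_s} \, d\rho \,,
\end{equation}
so that $-(1-\frac{r_s}{r})\, dt^2 \approx -\rho^2 d\tau^2$ and $dr^2/(1-\frac{r_s}{r}) \approx d\rho^2$.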
To this approximation, this is just flat Minkowski spacetime. To see this, define the new coordinates
\begin{equation}\label{rindlerc}
x^0 = \rho \sinh \tau , \qquad x^1 = \rho \cosh \tau
\end{equation}
in which
\begin{equation} \label{LocMin}
ds^2\approx -\rho^2 d\tau^2 + d\rho^2 = -(dx^0)^2 + (dx^1)^2 \ .
\end{equation}
Therefore according to a free-falling observer, the event horizon $r=r_s$ is not special. It is just like any other point in a smooth spacetime, and in particular, the geometry extends smoothly past the horizon into the black hole. This is a manifestation of the equivalence principle: free-falling observers do not feel the effect of gravity. Of course, an observer that crosses the horizon will not be able to send signals to the outside\footnote{We can say that the interior lies behind a Black Shield (or Schwarz Schild in German).}.
The spacetime geometry of a Schwarzschild black hole that forms by gravitational collapse is illustrated in fig.~\ref{fig:schwarzschild}. An observer hovering near the event horizon at fixed $r$ is accelerating --- a rocket is required to avoid falling in. In the near-horizon coordinates \nref{LocMin}, an observer at fixed $\rho$ is following the trajectory of a uniformly accelerated observer in Minkowski spacetime.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=1]{figures/collapse-penrose.pdf}
\end{center}
\caption{\small Left: Penrose diagram of a black hole formed by gravitational collapse. Right: Zoomed-in view of the flat near-horizon region, with the trajectory of a uniformly accelerated observer at $\rho = a^{-1}$.
\label{fig:schwarzschild}}
\end{figure}
A surprising fact is that a uniformly accelerating observer in flat space detects thermal radiation.
This is known as the Unruh effect \cite{Unruh:1976db}. There is a simple trick to obtain the temperature \cite{Bisognano:1976za}. The coordinate change \eqref{rindlerc} is very similar to the usual coordinate change from Cartesian coordinates to polar coordinates. It becomes identical if we perform the Wick rotation $\tau = i \theta$, $x^0 = i x^0_E$; then
\begin{equation}
x^0_E = \rho \sin \theta , \quad x^1 = \rho \cos\theta \ .
\end{equation}
The new coordinates $(x^0_E, x^1)$ or $(\rho, \theta)$ are simply Cartesian or polar coordinates on the Euclidean plane $\mathbb{R}^2$.
In Euclidean space, an observer at constant $\rho$ moves in a circle of length $ 2\pi \rho$.
Euclidean time evolution on a circle is related to the computation of thermodynamic quantities for the original physical system (we will return to this in section \ref{ss:gibbonshawking}).
Namely, $\textrm{Tr}\,[e^{ - \beta H}] $ is the partition function at temperature $T=1/\beta$. Here $\beta$ is the length of the Euclidean time evolution, and the trace reflects the fact that we are on a circle. This suggests that the temperature that an accelerated observer feels is
\begin{equation} \label{PropT}
T_{proper} = { 1 \over 2 \pi \rho } = { a \over 2 \pi } = { \hbar \over k_B c } { a \over 2 \pi }
\end{equation}
where $a$ is the proper acceleration and we also restored all the units in the last formula.
Though this argument seems a bit formal, one can check that a physical accelerating thermometer would actually record this temperature \cite{Unruh:1976db}.
Now, this is the proper temperature felt by an observer very close to the horizon. Notice that it diverges as $\rho \to 0$ and decreases as we move away. This decrease in temperature is consistent with thermal equilibrium in the presence of a gravitational potential. In other words, for a spherically symmetric configuration, in thermal equilibrium, the temperature obeys the Tolman relation \cite{Tolman:1930zza}
\begin{equation}
T_{proper}(r) \sqrt{-g_{\tau\tau} (r) } = {\rm constant} \,.
\end{equation}
This formula tracks the redshifting of photons as they climb a gravitational potential. It says that locations at a higher gravitational potential feel colder to a local observer.
Using the polar-like coordinates \nref{LocMin} and the proper temperature \nref{PropT}, we indeed get a constant equal to $1/(2\pi)$.
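Explicitly, in the coordinates \nref{LocMin} we have $\sqrt{-g_{\tau\tau}} = \rho$, so
\begin{equation}
T_{proper} \, \sqrt{-g_{\tau\tau}} = \frac{1}{2\pi \rho} \, \rho = \frac{1}{2\pi} \,.
\end{equation}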
Since this formula is valid also in the full geometry \nref{schw}, we can then use it to find the temperature that an observer far from the black hole would feel. We simply need to undo the rescaling of time we did just above \nref{RescMe} and go to large $r$ where
$g_{tt} = -1$ to find the temperature
\begin{equation}\label{thawking}
T
= T_{proper} (r\gg r_s) = \frac{1}{4\pi r_s} \ .
\end{equation}
This is the Hawking temperature. It is the temperature measured by an observer that is more than a few Schwarzschild radii away from the black hole.
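Spelled out: in Schwarzschild time $t = 2 r_s \tau$ the near-horizon metric component is $-g_{tt} = \rho^2/(4 r_s^2)$, so the Tolman constant in $t$ units is
\begin{equation}
T_{proper} \, \sqrt{-g_{tt}} = \frac{1}{2\pi \rho} \, \frac{\rho}{2 r_s} = \frac{1}{4\pi r_s} \,,
\end{equation}
which is read off directly as the temperature at $r \gg r_s$, where $g_{tt} \to -1$.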
\subsection{The Euclidean black hole}\label{ss:gibbonshawking}
We will now expand a bit more on the connection between Euclidean time and thermodynamics. We will then use it to get another perspective on thermal aspects of black holes. Sometimes Euclidean time $t_E$ is called imaginary time and Lorentzian time $t$ is called real time because of the Wick rotation $t = i t_E$ mentioned above.
There are different ways to see that imaginary-time periodicity is the same as a temperature. In a thermal state, the partition function is
\begin{equation}
Z = \textrm{Tr}\, [ e^{-\beta H} ]\ .
\end{equation}
Any observable such as $\textrm{Tr}\,[ {\cal O}(t) {\cal O}(0) e^{-\beta H}]$ is periodic under $t \to t + i \beta$, using ${\cal O}(t) = e^{i H t}{\cal O}e^{-i H t}$ and the cyclic property of the trace.
A more general argument in quantum field theory is to recast the trace as a path integral. Real-time evolution by $e^{-i H t}$ corresponds to a path integral on a Lorentzian spacetime, so imaginary-time evolution, $e^{-\beta H}$, is computed by a path integral on a Euclidean geometry. The geometry is evolved for imaginary time $\beta$, and the trace tells us to put the same boundary conditions at both ends and sum over them. A path integral on a strip of size $\beta$ with periodic boundary conditions at the ends is the same as a path integral on a cylinder. Therefore in quantum field theory $Z = \textrm{Tr}\, e^{-\beta H}$ is calculated by a path integral on a Euclidean cylinder with $\theta = \theta + \beta$. Any observables that we calculate from this path integral will automatically be periodic in imaginary time.
\begin{figure}
\begin{center}
\includegraphics[scale=1]{figures/cigar.pdf}
\end{center}
\caption{\small The Euclidean Schwarzschild black hole. The Euclidean time and radial directions have the geometry of a cigar, which is smooth at the tip, $r=r_s$. At each point we also have a sphere of radius $r$. \label{fig:cigar}}
\end{figure}
Similarly, in a black hole spacetime, the partition function at inverse temperature $\beta$ is calculated by a Euclidean path integral. The geometry is the Euclidean black hole, obtained from the Schwarzschild metric \eqref{schw} by setting $t = i t_E$,
\begin{equation} \label{EuclBH}
ds^2_E = \left( 1 - \frac{r_s}{r} \right) dt_E^2 + \frac{dr^2}{1 - \frac{r_s}{r}} + r^2 d\Omega_2^2 \ , ~~~~~~~~~~ t_E = t_E + \beta \ .
\end{equation}
In the Euclidean geometry, the radial coordinate is restricted to $r> r_s$, because we saw that the distance from $r_s$ behaves like the radial coordinate of polar coordinates, with $\rho \propto \sqrt{r-r_s}$, and $r=r_s$ is the origin $-$ Euclidean black holes do not have an interior. In order to avoid a conical singularity at $r=r_s$ we need to adjust $\beta$ to
\begin{equation}
\beta = 4 \pi r_s \ .
\end{equation}
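One way to see this (a quick check using the near-horizon coordinates of the previous subsection): near the tip the Euclidean metric is
\begin{equation}
ds_E^2 \approx \rho^2 \left( \frac{d t_E}{2 r_s} \right)^2 + d\rho^2 \,,
\end{equation}
which is free of a conical singularity at $\rho = 0$ only if $t_E/(2 r_s)$ has period $2\pi$, i.e. $\beta = 4\pi r_s$, consistent with the inverse of the Hawking temperature \nref{thawking}.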
This geometry, sometimes called the `cigar,' is pictured in fig.~\ref{fig:cigar}. The tip of the cigar is the horizon. Far away, for $r \gg r_s$, there is a Euclidean time circle of circumference $\beta$, which is the inverse temperature as seen by an observer far away. Notice that in the gravitational problem we fix the length of the circle far away, but we let the equations determine the right radius in the rest of the geometry.
The Euclidean path integral on this geometry is interpreted as the partition function,
\begin{equation}
Z(\beta) = \mbox{Path integral on the Euclidean black hole} \sim e^{ - I_{\rm classical}} Z_{\rm quantum} \ .
\end{equation}
It has contributions from both gravity and quantum fields. The gravitational part comes from the Einstein action, $I$, and is found by evaluating the action on the geometry \nref{EuclBH}. The quantum part is obtained by computing the partition function of the quantum fields on this geometry \nref{EuclBH}.
It is important that the geometry is completely smooth at $r=r_s$ and therefore the quantum contribution has no singularity there.
This is related to the fact that an observer falling into an evaporating black hole sees nothing special at the horizon, as in the classical theory.
Then applying the standard thermodynamic formula to the result,
\begin{equation}
S = (1 - \beta \partial_\beta) \log Z(\beta)
\end{equation}
gives the generalized entropy \eqref{sgen}. We will not give the derivation of this result but it uses that we are dealing with a solution of the equations of motion and that the non-trivial part of the variation can be concentrated near
$r=r_s$ \cite{Gibbons:1977mu}.
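For reference, the entropy formula used above is just the standard thermodynamic identity: with $E = -\partial_\beta \log Z$,
\begin{equation}
S = \log Z + \beta E = (1 - \beta \partial_\beta) \log Z \,.
\end{equation}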
\subsection{Evaporating black holes}
\begin{figure}[p!]
\begin{center}
\begin{overpic}[grid=false,scale=1]{figures/evap-stagesc.pdf}
\put(150,1050){
\parbox[t]{4in}{\centering \Large Stages of Black Hole Evaporation}
}
\put(50,950) {
\parbox[t]{2in}{$(a)$ After stellar collapse,
the outside of the black hole is nearly stationary, but on the inside, the geometry continues to elongate in one direction while pinching toward zero size in the angular directions.
}}
\put(50,720) {
\parbox[t]{4in}{
$(b)$
The Hawking process creates entangled pairs, one trapped behind the horizon
and the other escaping to infinity where it is observed as (approximate)
blackbody radiation.
}}
\put(50,535) {
\parbox[t]{2in}{
The black hole slowly shrinks as its
mass is carried away by the radiation.
}}
\put(50,340) {
\parbox[t]{3.5in}{
$(c)$ Eventually the angular directions shrink to zero size. This is the singularity. The event horizon also shrinks to zero.
}}
\put(50,150) {
\parbox[t]{3.5in}{
$(d)$ At the end there is a smooth spacetime containing thermal Hawking radiation but no black hole.
}}
\end{overpic}
\end{center}
\caption{ \label{fig:evap-stages}}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=1]{figures/evap-penrose.pdf}
\end{center}
\caption{Penrose diagram for the formation and evaporation of a black hole. Spatial slices $(a)$-$(d)$ correspond to the slices drawn in fig.~\ref{fig:evap-stages}. \label{fig:evap-penrose}}
\end{figure}
Hawking radiation carries energy away to infinity and therefore reduces the mass of the black hole. Eventually the black hole evaporates away completely --- a primordial black hole of mass $10^{12}\,$kg, produced in the early universe, would evaporate around now. The Hawking temperature of a solar mass black hole is $10^{-7}\,$K and its lifetime is $10^{64}$ years.
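Parametrically (a rough estimate, up to order-one and species-dependent factors): the black hole radiates like a blackbody of temperature $T \sim \hbar/(G_N M)$ from an area $\sim G_N^2 M^2$, so $dM/dt \sim -\hbar/(G_N^2 M^2)$ and
\begin{equation}
t_{\rm evap} \sim \frac{G_N^2 M^3}{\hbar} \,,
\end{equation}
whose strong $M^3$ dependence is behind the wide range of lifetimes quoted above.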
The spacetime for this process is described in figures \ref{fig:evap-stages} and \ref{fig:evap-penrose}.
The Hawking process can be roughly interpreted as pair creation of entangled particles near the horizon, with one particle escaping to infinity and the other falling toward the singularity. This creation of entanglement is crucial to the story and we will discuss it in detail after introducing a few more concepts.
\section{Discussion}
\label{Discussion}
\subsection{Short summary}
Let us summarize some of the points we made in the review.
First we discussed classic results in black hole thermodynamics, including Hawking radiation and black hole entropy. The entropy of the black hole is given by the area of the horizon plus the entropy of the quantum fields outside.
We discussed how these results inspired a central dogma which says that a black hole from the outside can be described in terms of a quantum system with a number of degrees of freedom set by the entropy.
Next we discussed a formula for the fine-grained entropy of the black hole which involves finding a surface that minimizes the area plus the entropy of quantum fields outside.
Using this formula, we computed the entropy for an evaporating black hole and found that it follows the Page curve. Then we discussed how to compute the entropy of radiation. The gravitational fine-grained entropy formula tells us that we should include the black hole interior and it gives a result that follows the Page curve, too.
These results suggest that the black hole degrees of freedom describe a portion of the interior, the region inside the entanglement wedge.
Finally we discussed how replica wormholes explain why the interior should be included in the computation of the entropy of radiation.
\subsection{Comments and problems for the future }
It is important to point out the following important feature of the gravitational entropy formulas, both the coarse-grained and the fine-grained one. Both formulas involve a geometric piece, the area term, which does not obviously come from taking a trace over some explicit microstates. The interpretation of these quantities as arising from sums over microstates is an assumption, a part of the ``central dogma," which is the simplest way to explain the emergence of black hole thermodynamics, and has strong evidence from string theory.
For this reason, the success in reproducing the Page curve does not translate into a formula for individual matrix elements of the density matrix. The geometry is giving us the correct entropy, which involves a trace of (a function of) the density matrix.
Similarly we do not presently know how to extract
individual matrix elements of the black hole S-matrix, which describes individual transition amplitudes for each microstate. Therefore the current discussion leaves an important problem unresolved. Namely, how do we compute individual matrix elements of the S-matrix, or $\rho$, directly from the gravity description (without using a holographic duality)? In other words, we have discussed how to compute the entropy of Hawking radiation, but not how to compute its precise quantum state. This is an important aspect of the black hole information problem, since one way of stating the problem is: Why do different initial states lead to the same final state? In this description the different initial states correspond to different interiors. In gravity, we find that the final state for radiation also includes the interior.
The idea is that very complex computations in the radiation can create wormholes that reach into that interior and pull out the information stored there \cite{Penington:2019kki},
see also \cite{Maldacena:2013xja,Gao:2016bin}.
The present derivations for the
gravitational fine-grained entropy formulas discussed in this paper rely on the Euclidean path integral. It is not clear how this is defined precisely in gravity. For example, which saddle points should we include? What is the precise integration contour? It is possible that some theories of gravity include replica wormhole saddles and black holes evaporate unitarily, while in other theories of gravity they do not contribute to the path integral, the central dogma fails, and Hawking's picture is accurate. (We suspect that the latter would not be fully consistent theories.)
Another aspect of the formulas which is not yet fully understood is the imaginary cutoff surface, beyond which we treated spacetime as fixed. This is an important element in the derivation of the formula \eqref{island} as discussed in section \ref{replicas}.
A more complete understanding will require allowing gravity to fluctuate everywhere throughout spacetime. For example, we do not know whether the central dogma applies when the cutoff is at a finite distance from the black hole, or precisely how far we should go in order to apply these formulas. The case that is best understood is when this cutoff is at the boundary of an AdS space. On the other hand, the imaginary cutoff surface is not as drastic as it sounds because the same procedure is required to make sense of the ordinary Gibbons-Hawking entropy in asymptotically flat spacetime.
Note that when we discussed the radiation, we had two quantum states in mind. First we had the semiclassical state, the state of radiation that appears when we use the semiclassical geometry of the evaporating black hole. Then we had the exact quantum state of radiation. This is the state that would be produced by the exact and complete theory of quantum gravity. Presumably, to obtain this state we will need to sum over all geometries, including non-perturbative corrections. This is something that we do not know how to do in any theory of gravity complicated enough to contain quantum fields describing Hawking radiation. (See however \cite{Saad:2019lba,Penington:2019kki} for some toy models.) The magic of the gravitational fine-grained entropy formula is that it gives us the entropy of the exact state in terms of quantities that can be computed using the semiclassical state. One could ask, if you are an observer in the radiation region, which of these two states should you use? If you do simple observations, the semiclassical state is good enough. But if you consider very complex observables, then you need to use the exact quantum state. One way to understand this is that very complex operations on the radiation weave their own spacetime, and this spacetime can develop a connection to the black hole interior. See \cite{Susskind:2018pmk} for more discussion.
This review has focused on novel physics in the presence of black hole event horizons. In our universe, we also have a cosmological event horizon due to accelerated expansion. This horizon is similar to a black hole horizon in that it has an associated Gibbons-Hawking entropy and it Hawking radiates at a characteristic temperature
\cite{Figari:1975km,Gibbons:1977mu}.
However, it is unclear whether we should think of the cosmological horizon as a quantum system in the sense of the central dogma for black holes. Applying the ideas developed in the previous sections to cosmology may shed light on the nature of these horizons and the quantum nature of cosmological spacetimes.
There is a variant of the black hole information problem where one perturbs the black hole and then looks at the response at a very late time in the future \cite{Maldacena:2001kr}. For recent progress in that front see \cite{Saad:2018bqo,Saad:2019lba,Saad:2019pqd}.
Wormholes similar to the ones discussed here were considered in the context of theories with random couplings \cite{Coleman:1988cy,Giddings:1988cx,Polchinski:1994zs}. Recently, random couplings played an important role in the solution of a simple two dimensional gravity theory \cite{Saad:2018bqo,Saad:2019lba}.
We do not know to what extent random couplings are important for the issues we discussed in this review. See also \cite{Marolf:2020xie}.
We should emphasize one point. In this review, we have presented results that can be understood purely in terms of gravity as an effective theory. However, string theory and holographic dualities played an instrumental role in inspiring and checking these results. They provided concrete examples where these ideas were tested and developed, before they were applied to the study of black holes in general.
Also, as we explained in the beginning, we have not followed a historical route and we have not reviewed ideas that have led to the present status of understanding.
Finally, we should finish with a bit of a cautionary tale. Black holes are very confusing and many researchers who have written papers on them have gotten some things right and some wrong. What we have discussed in this review is an {\it interpretation} of some geometric gravity computations. We interpreted them in terms of entropies of quantum systems. It could well be that our interpretation will have to be revised in the future, but we have strived to be conservative and to present something that is likely to stand the test of time.
A goal of quantum gravity is to understand what spacetime is made of. The fine-grained entropy formula is giving us very valuable information on how the fundamental quantum degrees of freedom are building the spacetime geometry. These studies have involved the merger and ringdown of several different fields of physics over the last few decades: high energy theory, gravitation, quantum information, condensed matter theory, etc., creating connections beyond their horizons. This has not only provided exciting insights into the quantum mechanics of black holes, but also turned black holes into a light that illuminates many questions of these other fields. Black holes have become a veritable source of information!
\section{The entanglement wedge and the black hole interior} \label{wedge}
The central dogma talks about some degrees of freedom which suffice to describe the black hole from the outside. A natural question to ask is whether these degrees of freedom also describe the interior. We have several possibilities:
a) They do not describe the interior.
b) They describe a portion of the interior.
c) They describe all of the interior.
A guiding principle has been the formula for the fine-grained entropy of the black hole. This formula is supposed to give us the entropy of the density matrix that describes the black hole from the outside, if we are allowed to make arbitrarily complicated measurements.
We have seen that the answer for the entropy depends on the geometry of the interior. However, it only depends on the geometry and the state of the quantum fields up to the extremal surface. Note that if we add an extra spin in an unknown state between the cutoff surface and the extremal surface, then it will modify the fine-grained entropy.
Therefore it is natural to imagine that the degrees of freedom in the central dogma describe the geometry up to the minimal surface. If we know the state on any spatial region, we also know it in the causal diamond associated to that region, recall figure \ref{Diamond}. This has historically been called the ``entanglement wedge" \cite{Czech:2012bh,Wall:2012uf,Headrick:2014cta}. Following our presentation, perhaps a better name would be ``the fine-grained entropy region," but we will not attempt to change the name.
As a first example, let us look again at a black hole formed from collapse but before the Page time. The minimal surface is now a vanishing surface at the origin and the entanglement wedge of the black hole is the region depicted in green in figure \ref{EWfig}a.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=.35]{figures/EWa.pdf} \ \ \ \ \ \ \ \
\includegraphics[scale=.35]{figures/EWb.pdf} \ \ \ \ \ \ \ \
\includegraphics[scale=.35]{figures/EWc.pdf}
\caption{ In green we show the entanglement wedges of the black hole and in blue the entanglement wedges of the radiation region. Different figures show the wedges at different times. They are different because there is transfer of quantum information through the cutoff surface. To describe the white regions we need information both from the black hole region and the radiation region. }
\label{EWfig}
\end{center}
\end{figure}
As a second example, we can look at the entanglement wedges of both the black hole and the radiation at late times, larger than the Page time. These are shown in figure \ref{EWfig}(b).
The idea is that the black hole degrees of freedom describe the region of the spacetime in the black hole entanglement wedge while the radiation describes the degrees of freedom in the radiation entanglement wedge. It is important that the degrees of freedom that describe the black hole only describe a portion of the interior, the green region in figure \ref{EWfig}(b). The rest of the interior is encoded in the radiation.
Note how this conclusion, namely that the interior belongs to the entanglement wedge of the radiation, follows from the same guiding principle of using the fine-grained entropy. Since the fine-grained entropy of the radiation after the Page time contains the interior as part of the island, its entropy is sensitive to the quantum state of that region; a spin in a mixed state in the island contributes to the fine-grained entropy of the radiation.
Finally, as a third example, we can consider a fully evaporated black hole, see figure \ref{EWfig}(c). In this case the region inside the cutoff surface is just flat space. The entanglement wedge of the radiation includes the whole black hole interior. This picture assumes that nothing too drastic happens at the endpoint of the evaporation.
So far we have been a bit vague by the statement that we can ``describe'' what is in the entanglement wedge. A more precise statement is the ``entanglement wedge reconstruction hypothesis," which says that if we have a relatively small number of qubits in an unknown state but located inside the entanglement wedge of the black hole, then by performing operations on the black hole degrees of freedom, we can read off the state of those qubits.
This hypothesis is supported by general principles of quantum information. Let us consider the case when the radiation entanglement wedge covers most of the interior, as in figure \ref{EWfig}(b). Then the action of the black hole interior operators of the semiclassical description affects the entropy of radiation, according to the gravitational entropy formula. Assuming that this formula captures the true entropy of the exact state of radiation, this means that these operators are changing this exact state \cite{Jafferis:2015del,Dong:2016eik}, see also \cite{Almheiri:2014lwa}. Then it follows from general quantum information ideas that there is a map, called the
Petz map \cite{PetzBook}, that allows us to recover the information \cite{Cotler:2017erl}. In the context of simple gravity theories, this map can be constructed using the gravitational path integral \cite{Penington:2019kki}, again via the replica method. This provides a formal argument, purely from the gravitational side, for the validity of the hypothesis.
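For reference (with the identification of the ingredients left implicit in our discussion), for a quantum channel ${\cal N}$ and a reference state $\sigma$, the Petz map acts as
\begin{equation}
{\cal P}_{\sigma, {\cal N}}(X) = \sigma^{1/2} \, {\cal N}^\dagger \!\left( {\cal N}(\sigma)^{-1/2} \, X \, {\cal N}(\sigma)^{-1/2} \right) \sigma^{1/2} \,,
\end{equation}
where ${\cal N}^\dagger$ is the adjoint channel.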
The actual quantum operations we would need to perform on the radiation are expected to be exceedingly complex, with a complexity that is (roughly) exponential in the black hole entropy
\cite{Brown:2019rox,Kim:2020cds}.
For black holes after the Page time, most of the interior is {\it not} described by the black hole degrees of freedom appearing in the central dogma. In fact, it is described by the radiation degrees of freedom. At late times, these are much more numerous than the area of the horizon.
Note that there is an unfortunate language problem which sometimes gets in the way of the concepts we are trying to convey. The reason is that there are two {\it different} things that people might want to call ``black hole degrees of freedom." We have been calling ``black hole degrees of freedom'' the ones that appear in the central dogma. They are the ones that are sufficient to describe the black hole from the outside. These are not very manifest in the gravity description. The other possible meaning would refer to the quantum fields living in the semiclassical description of the black hole interior. As we explained above, depending on which side of the quantum extremal surface they lie on, these degrees of freedom can be encoded either in the Hilbert space appearing in the central dogma or the Hilbert space living in the radiation region.
This observation also solves an old puzzle with the interpretation of the Bekenstein-Hawking area formula that was raised by Wheeler \cite{WheelerBag}.
He pointed out that there exist classical geometries which look like a black hole from the outside but that inside can have arbitrarily large entropy, larger than the area of the horizon. He named them ``bags of gold," see figure \ref{BagGold}. The solution to this puzzle is the same. When the entropy in the interior is larger than the area of the neck the entanglement wedge of the black hole degrees of freedom will only cover a portion of the interior, which does not include that large amount of entropy \cite{WallGold}. In fact, the geometry of an evaporating black hole after the Page time is a bit like that of the ``bag of gold'' examples.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=.35]{figures/BagGold}
\caption{ Wheeler's ``bag of gold'' geometry. It is a spatial geometry that is asymptotically flat and has a narrow neck with a big ``bag'' containing some matter. It obeys the initial value constraints of general relativity. From the point of view of the outside the geometry evolves into a black hole whose area is given by the area of the neck. The entropy inside the bag can be much larger than the area of the neck. Under these circumstances the fine-grained entropy of the exterior is just given by the area of the neck and the entanglement wedge does not include the interior. }
\label{BagGold}
\end{center}
\end{figure}
\section{Replica wormholes}
\label{replicas}
In this section we would like to give a flavor for the derivation of the
formula for fine-grained entropy \cite{Lewkowycz:2013nqa,Faulkner:2013ana,Dong:2016hjy,Dong:2017xht,Penington:2019kki,Almheiri:2019qdq}. We focus on the island formula \eqref{island} for the entropy of the Hawking radiation \cite{Penington:2019kki,Almheiri:2019qdq}.
To illustrate the principle, we will consider the case when the black hole has evaporated completely and we will ignore details about the last moments of the black hole evaporation, when the interior disconnects from the exterior. For the purposes of computing the entropy, the geometry is topologically as shown in figure \ref{BabyUniverse}(b). We want to compute the entropy of the final radiation, assuming that the black hole formed from a pure state.
We start from the (unnormalized) initial state $|\Psi\rangle$ $-$ for example a collapsing star $-$ and evolve to the final state using the gravitational path integral which involves the semiclassical geometry of the evaporating black hole. This gives an amplitude $\langle j | \Psi\rangle$ for going from the initial state to a particular final state of radiation, $|j\rangle $ . We can now form a density matrix $\rho = |\Psi\rangle\langle \Psi|$ by computing the bra of the same state via a path integral. Its matrix elements $\rho_{ij} = \langle i |\Psi\rangle\langle \Psi|j\rangle$ are, in principle, computed by the full gravitational path integral in figure \ref{fig:rhoLorentzianA}a. We have specified the initial and final state on the outside, but we have not been very explicit about what we do in the interior yet, and indeed, this will depend on a choice of $|i\rangle $ and $|j\rangle$.
\begin{figure}
\begin{center}
\includegraphics[scale=1]{figures/rhoLorentzianA.pdf}
~~~~~~~~~~~~(a) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(b)
\end{center}
\caption{\small (a) Path integral representation of the matrix elements $\rho_{ij}$. (b) Path integral representation of $\textrm{Tr}\, \rho$. Regions with repeated indices are identified in this figure and the figures that follow. The purple line represents entanglement. \label{fig:rhoLorentzianA}}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=1]{figures/rhoLorentzianHawking.pdf}
\end{center}
\caption{\small Hawking saddle in the calculation of $\Tr[\rho^2]$. Note that following the pink line through the identifications $i \leftrightarrow i$ and $j \leftrightarrow j$ produces just one closed loop. Therefore this does not factorize into two copies of $\textrm{Tr}\, \rho$.\label{fig:rhoLorentzianHawking}}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.95]{figures/rhoLorentzianWormhole.pdf}
\end{center}
\caption{\small Replica wormhole saddle in the calculation of $\Tr[\rho^2]$. The black holes are joined in the interior. The second figure is just a rearrangement of the first, showing that $\Tr[\rho^2] = (\Tr[\rho])^2$. \label{fig:rhoLorentzianWormhole}}
\end{figure}
The trace of the density matrix,
\begin{equation}
\textrm{Tr}\, \rho = \sum_{i} \langle i|\Psi\rangle\langle \Psi|i\rangle \ ,
\end{equation}
is computed by identifying the final states of the bra and the ket and summing over them. This gives
the geometry in figure \ref{fig:rhoLorentzianA}b. (For those who know, this is really an in-in Schwinger-Keldysh diagram.)
We want to diagnose whether the final state has zero entropy or not.
For that purpose, we compute the so-called ``purity'' of the state, defined as $\textrm{Tr}\,[\rho^2]$. If $\rho$ is an unnormalized {\it pure state} density matrix then $\textrm{Tr}\,[\rho^2] = ( \textrm{Tr}\,[\rho])^2$, while if it has a large entropy we expect $ \textrm{Tr}\,[\rho^2 ] \ll ( \textrm{Tr}\,[\rho])^2 $.
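In terms of the normalized density matrix $\hat\rho = \rho/\textrm{Tr}\,[\rho]$, the purity is
\begin{equation}
\textrm{Tr}\,[\hat\rho^2] = \frac{\textrm{Tr}\,[\rho^2]}{(\textrm{Tr}\,[\rho])^2} \,,
\end{equation}
which equals one for a pure state, while for example for a maximally mixed state of dimension $d$ it equals $1/d \ll 1$.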
We can compute $\textrm{Tr}\,[\rho^2]$ via a path integral argument by connecting the exterior regions as shown in figures \ref{fig:rhoLorentzianHawking} and \ref{fig:rhoLorentzianWormhole}. A key point is that, in gravity, we should typically consider a sum over all possible topologies.\footnote{ This sum is very clearly required in some examples of AdS/CFT to match CFT properties
\cite{Witten:1998zw}.} This implies that
we should sum over different ways of connecting the interiors. Figures \ref{fig:rhoLorentzianHawking} and \ref{fig:rhoLorentzianWormhole} show two different ways of connecting the interior. The first diagram,
figure \ref{fig:rhoLorentzianHawking}, gives the Hawking answer with its large entropy, so that
\begin{equation} \label{HSad} \textrm{Tr}\,[\rho^2 ]\big|_{\rm Hawking~saddle} \ll ( \textrm{Tr}\,[\rho])^2 \ .
\end{equation}
The second diagram, figure \ref{fig:rhoLorentzianWormhole}, which is called a replica wormhole, gives
\begin{equation} \label{WSad}
\textrm{Tr}\,[\rho^2]|_{\rm Wormhole~saddle} = (\textrm{Tr}\,[\rho])^2
\end{equation}
and therefore has zero entropy. The contribution of the replica wormhole is larger and it therefore dominates over the Hawking saddle \nref{HSad}. We conclude that the leading order contribution gives the expected answer from unitarity, \nref{WSad}.
The contribution in figure \ref{fig:rhoLorentzianHawking} is still present and one could worry that it would spoil the agreement. We will not worry about exponentially small contributions, hoping that this (small) problem will be fixed in the future.
This calculation is very similar to the Gibbons-Hawking calculation of the black hole entropy reviewed in section \ref{ss:gibbonshawking}. The Hawking saddle and the replica wormhole saddle in Euclidean signature are drawn in fig. \ref{fig:euclidean-wormholes}. In the Hawking calculation we have two copies of the cigar geometry, while in the replica wormhole the black holes are joined through the interior. These pictures are schematic because the actual replica wormhole for an evaporating black hole is a complex saddlepoint geometry.
\begin{figure}
\begin{center}
\includegraphics[scale=1]{figures/euclidean-wormholes.pdf}
\end{center}
\caption{\small Euclidean replica wormholes for computing the purity of the radiation outside the cutoff surface. The dots denote other possible topologies, which are generally subdominant. \label{fig:euclidean-wormholes}}
\end{figure}
The calculation of the von Neumann entropy is a bit more complicated, but the spirit is the same. We use the replica method. That is, to compute the entropy, we consider $n$ copies of the system and compute $\textrm{Tr}\,[\rho^n]$, where $\rho$ is the density matrix of either the black hole or the radiation. We then analytically continue in $n$ and compute the entropy,
\begin{equation}
S = (1-n \partial_n) \log\textrm{Tr}\, [\rho^n]\Big|_{n=1} \ .
\end{equation}
For $n \neq 1$, the black hole interior can be connected in various ways among the $n$ copies. If they are disconnected we get the usual Hawking answer for the entropy, and if they are completely connected we get the answer from the new quantum extremal surface, after continuing to $n \to 1$. The minimum of the two dominates the path integral and gives the unitary Page curve, see \cite{Penington:2019kki,Almheiri:2019qdq} (see also \cite{Marolf:2020xie,Hartman:2020swn}).
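As a quick consistency check, this formula indeed reproduces the von Neumann entropy of the normalized density matrix $\hat\rho = \rho/\textrm{Tr}\,\rho$: using $\partial_n \textrm{Tr}\,[\rho^n] = \textrm{Tr}\,[\rho^n \log\rho]$, one finds
\[
(1-n \partial_n) \log\textrm{Tr}\, [\rho^n]\Big|_{n=1} = \log\textrm{Tr}\,\rho - \frac{\textrm{Tr}\,[\rho\log\rho]}{\textrm{Tr}\,\rho} = -\textrm{Tr}\,[\hat\rho\log\hat\rho] \ .
\]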
\section{Introduction}
In high-resolution γ-ray spectroscopy, efficiency is a significant attribute.
While a large detection efficiency is beneficial for data collection, it is the precise value (and its energy dependency) which is crucial for data analysis.
The γ-ray energy range of interest heavily depends on the experiment.
Most γ-rays observed in nuclear physics stem from transitions between excited states of nuclei.
These commonly have energies between \SI{100}{\keV} and \SI{3}{\MeV}, and thus γ-ray spectroscopy is often performed in this energy region.
Several research areas have come into focus which require γ-ray detection at energies around \SI{10}{\MeV} and higher, for example studies of the Pygmy Dipole Resonance (PDR) and radiative capture reactions for nuclear astrophysics.
For the PDR, the decay behavior of $J^\pi=1^-$ states at energies below the neutron separation energy is studied \cite{Savran2013}.
This includes direct decays to the ground state, i.e., γ-ray transitions around \SIrange{5}{10}{\MeV}.
For radiative capture reactions, direct transitions from the entry state at the sum of center-of-mass energy and Q-value to the ground state must be investigated \cite{Netterdon2015}.
This translates to γ-ray energies up to \SI{15}{MeV}.
The higher the γ-ray energy, the more difficult a reliable experimental determination of the efficiency becomes.
Standard sources provide calibration up to \SI{3.6}{\MeV} only.
Above this energy, only a few, more involved methods are available, see \cref{c:excal}.
Our areas of research require precise efficiency calibration at energies hardly accessible experimentally.
Simulations can address this need for fast, easy, and reliable calibration at any γ-ray energy.
Interactions of γ-rays with matter are known well enough: given geometries and materials, Monte-Carlo simulations with particle transport codes like \textsc{Geant4} \cite{Agostinelli2003} can provide full-energy-peak (FEP), single-escape-peak (SEP), double-escape-peak (DEP), and coincidence efficiencies.
\textsc{Geant4} provides a simulation framework, but no ready-to-use executable -- one must implement each specific setup.
G4Horus provides a ready-to-use \textsc{Geant4} based application for simulating the efficiency of γ-ray detectors.
It is used at the Institute for Nuclear Physics, University of Cologne, to simulate the efficiency of the HPGe-detector array HORUS, see \cref{c:horus}.
It provides everything required to simulate the efficiency, in particular detector and target chamber geometries, and a predefined workflow that requires minimal knowledge and effort from the user.
\subsection{\texorpdfstring{γ}{Gamma}-ray spectroscopy with HORUS}\label{c:horus}
Located at the \SI{10}{MV} FN-Tandem accelerator at the Institute for Nuclear Physics, University of Cologne, the γ-ray spectrometer HORUS (High-efficiency Observatory foR Unique Spectroscopy) is used to investigate the structure of nuclei and measure cross sections to answer questions in nuclear astrophysics.
It consists of up to 14 HPGe detectors, six of which are equipped with active anti-Comp\-ton BGO shields \cite{Netterdon2014a}.
Signals from the detectors are processed by XIA's Digital Gamma Finder 4C Rev.\,F, which allows for acquisition of so-called \emph{listmode} data, where coincident hits in different detectors can be correlated \cite{Pickstone2012}.
For example, γγ coincidences can be used to investigate quadrupole and octupole states \cite{Pascu2015} or low-spin structures \cite{Fransen2004}.
Passivated Implanted Planar Silicon (PIPS) particle detectors can be added with the SONIC detector chamber \cite{Pickstone2017}.
They are used in coincidence with the HPGe detectors to select events with a specific excitation energy, which eliminates other unwanted feeding transitions.
The resulting spectra are used for lifetime measurements with the DSAM technique \cite{Hennig2015} or to investigate the Pygmy Dipole Resonance \cite{Pickstone2015}.
In addition, high energetic γ-rays, which are emitted after capture of protons or α-particles, can be used to determine total and partial cross sections for nuclear astrophysics \cite{Netterdon2014a, Mayer2016}.
HORUS has no default, fixed configuration. For every experiment, the detectors and target chambers are optimized to match the experimental requirements.
\subsection{Experimental efficiency calibration}\label{c:excal}
The full-energy-peak efficiency can be determined experimentally using standardized calibration sources and known reactions.
Standard sources of not-too-short lived radioactive isotopes provide easily accessible calibration points up to \SI{3.6}{\MeV} and thus are commonly used for both energy and efficiency calibration.
Sources with known activity made from, e.g., \isotope[152]{Eu} and \isotope[226]{Ra}, are excellent for the γ-ray-energy range up to \SI{3}{\MeV}.
As their half-lifes span decades, they only need to be procured once.
\isotope[56]{Co} emits usable γ-rays up to \SI{3.6}{\MeV}.
Due to its half-life of \SI{77}{\day}, sources need to be re-activated about every year via the (p,n) reaction on an enriched \isotope[56]{Fe} target.
More exotic isotopes can extend the coverage up to \SI{5}{\MeV}.
The energy range covered by the 69 nuclides included in the IAEA xgamma standard \cite{iaea-xgamma} ends at \SI{4.8}{\MeV} with the isotope \isotope[66]{Ga}.
The Decay Data Evaluation Project (DDEP) \cite{DDEP} lists several more exotic nuclei.
Here, the highest transition at \SI{5}{\MeV} also stems from \isotope[66]{Ga}.
With an almost negligible intensity of \SI{0.00124\pm0.00018}{\percent}, it is, however, not well suited for calibration purposes.
While the energy range covered by \isotope[66]{Ga} is expedient, the short half-life of \SI{9.5}{\hour} is not and requires the source to be produced anew for each project -- increasing the already high workload of the main experiment.
Decay measurements of short-lived isotopes in target position can extend the energy range up to \SI{11}{\MeV}.
The decay of \isotope[24]{Al} with a half-life of \SI{2}{\s}, created by pulsed activation of \isotope[24]{Mg}, is a feasible way to obtain calibration lines up to \SI{10}{\MeV} \cite{Wilhelm1996, Pickstone2017}.
Neither the IAEA nor the DDEP currently include \isotope[24]{Al} in their list of recommended nuclides, thus there can be doubts on the accuracy of the existing decay intensity data.
This method is even more involved than the methods mentioned before, as a pulsing device must be set up at the accelerator injection and linked to the data acquisition.
In addition, this method releases neutrons close to the HPGe detectors, which might be damaged.
Direct γ-ray emissions from capture reactions can also be used for efficiency calibration.
Emissions from neutron capture reactions, mostly \isotope[14]{N}(n,γ)\isotope[15]{N}, have been used successfully \cite{Molnar2002, Belgya2008, MIYAZAKI2008}.
As this method requires neutrons, which are neither trivial to procure nor healthy for HPGe detectors, we have made no efforts to introduce this method at HORUS.
We have previously used direct γ-ray emissions from the proton capture resonance of \isotope[27]{Al}(p,γ)\isotope[28]{Si} at $E_p = \SI{3674.4}{\keV}$ \cite{Netterdon2014a}.
As the measurements take about a day, the intensity uncertainties are high, and angular distributions must be corrected for, we no longer perform these regularly.
The \isotope[27]{Al}(p,γ)\isotope[28]{Si} reaction has many resonances, however only few have been measured extensively, e.g., at $E_p = \SI{992}{\keV}$ \cite{Scott1975}.
There are also several resonant proton capture reactions on other light isotopes, e.g., on \isotope[23]{Na}, \isotope[39]{K}, \isotope[11]{B}, \isotope[7]{Li} \cite{Elekes2003,Zijderhand1990,Ciemaa2009}, and \isotope[13]{C} \cite{Kiener2004}.
Unfortunately, these comparatively low-lying resonances are hard to reach with the high-energy FN-Tandem accelerator -- they might be perfectly accessible for other groups.
Alternatively, given enough calibration points, extrapolation with fitted functions can be used. This process can produce diverging results, depending on the distance from the highest calibration point and the choice of fit function \cite{Molnar2002}, but is otherwise reasonably accurate and low-effort.
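A typical parametrization for such fits, quoted here only as an illustration, is a low-order polynomial in log-log space,
\[
\ln \epsilon(E_\gamma) = \sum_{i=0}^{N} c_i \left(\ln E_\gamma\right)^i ,
\]
whose extrapolation beyond the last calibration point is subject to the caveats above.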
To summarize: A thorough γ-ray efficiency calibration uses up more time and effort the higher the γ-ray energy of interest.
\section{Purpose}\label{c:purpose}
We developed G4Horus to provide several services to support experiments at HORUS.
The goals in order of importance are:
1) Provide accurate full-energy-peak efficiency.
The difficult access to calibration points at high energies as described in \cref{c:excal} leaves a gap which Monte-Carlo simulations can fill.
Simultaneously, they can provide the single- (SEP) and double-escape-peak (DEP) efficiency with and without active veto signal from the BGO anti-Compton shields.
2) Require minimum effort and domain-specific knowledge from the user.
\textsc{Geant4} does not offer a ready-to-use application and even to get \emph{just} the efficiency, a full implementation of all components is required.
All users should be able to use the software without having to worry about knowing \textsc{Geant4} and without spending more time than necessary.
3) Adapt to all experimental configurations.
The HORUS setup is highly configurable with many different detectors, target chambers, and other equipment.
Users should be able to reproduce their individual configuration from predefined modular parts.
4) Guide new developments.
Experimental requirements continuously change.
Simulations can help to make informed decisions for adaptations to the setup.
5) Provide coincidence and other high-level data.
With simulations, coincidence efficiencies can be checked, and the correctness of the analysis-software procedure confirmed. They can also be used to develop and test new experimental setups and analysis methods.
\section{Implementation}
Monte Carlo simulations of γ-ray detectors are well established \cite{Hardy2002, Soderstrom2011, Baccouche2012}.
For \textsc{Geant4}, the three main components geometry, physics, and actions must be implemented.
The main difficulty is summarized well in \cite{Giubrone2016}:
\enquote{The accuracy of \textsc{Geant4} simulations is heavily dependent on the modeled detector geometry. Characterizing a detector is difficult, especially if its technical characteristics are not well known.}
This especially applies to HPGe detectors, where the manufacturer often only provides the most basic information, e.g., crystal size and weight.
X-ray imaging is a non-destructive method to obtain excellent geometry data for the crystal \cite{Chuong2016}; however, not all of the crystal volume might be \emph{active} volume, see \cref{c:geocoax}.
Passive materials between the source and the detector must be implemented accurately as well.
Users of Monte-Carlo simulation software commonly manufacture the desired shapes by writing code to create, intersect, and position basic shapes.
This seems excessively complicated compared to industry standard engineering tools.
In our case, the complex shapes of the CNC-milled target chambers are difficult or even impossible to implement with standard \textsc{Geant4} tools.
Instead, we use CAD files directly, see \cref{c:chambergeo}.
\subsection{Geometry}
\subsubsection{Target chambers and CAD-based geometry}\label{c:chambergeo}
In general, geometry in \textsc{Geant4} is implemented by writing \texttt{C++} code.
Basic shapes like boxes and spheres are created, rotated, intersected, and placed manually without visual interfaces.
While this is feasible for simple volumes, more complicated structures might be drastically reduced in detail or simply skipped and not implemented at all.
Such a simplified geometry might be acceptable or even desired for faster execution in some cases.
However, investigations of, e.g., background caused by passive components, are meaningless without all physical structures placed completely and accurately.
The target chambers used at HORUS are, like most modern mechanical structures, created using Computer Aided Design (CAD) software, and then built with Computer Numerical Control (CNC) milling machines or even 3D printers.
We think that not using these CAD-files, which already exist \emph{anyway}, is a massive waste of time and effort, independent of the complexity of the models.
Even if these do not exist yet, it should be significantly faster and less error prone to re-create them with a CAD program instead of writing \texttt{C++}-\textsc{Geant4} code.
There are several concepts for creating \textsc{Geant4} compatible volumes from CAD models.
If the shape has been constructed with Constructive Solid Geometry (CSG), the underlying configuration of basic components can be translated to basic \textsc{Geant4} shapes and Boolean operations.
In principle, this is the favorable solution, as it is simple yet elegant and might offer the best performance during simulation.
If the CSG configuration is not known, it is sometimes possible to recreate it with CSG decomposition \cite{Lu2017a}.
Complex volumes can also be converted to a tessellated shape, where the surface is represented by a triangle mesh, called \texttt{G4TessellatedSolid} in \textsc{Geant4} \cite{Poole2012}.
Alternatively, the whole volume can be split into many tiny tetrahedrons (\texttt{G4Tet}) using a Delaunay-based algorithm \cite{Si2015}.
A hybrid approach, that is, building a simple structure with CSG and adding complex details with tessellated meshes, is also conceivable.
Converted shapes can be stored in the \texttt{GDML} (Geometry Description Markup Language) format.
The idea of using these CAD files in \textsc{Geant4} is not new, but there is no widely adopted solution.
A conversion can either be performed with plugins in the CAD program, a standalone software, or as a dependency in the \textsc{Geant4} application itself.
For example, plugins have once been developed for \emph{FreeCAD} \cite{FreeCADGDML, Pinto2019} and \emph{CATIA} \cite{Belogurov2011}.
Notable standalone projects are \emph{cad-to-geant4-converter} \cite{Tykhonov2015}, \emph{STEP-to-ROOT} \cite{Stockmanns2012a}, \emph{SW2GDMLconverter} \cite{Vuosalo2016}, and \emph{McCad-Salome} \cite{Lu2017a}.
Some projects seem to be abandoned, having received their last update several years ago.
We had success with \emph{CADMesh} \cite{Poole2012a} to integrate our geometry.
The CADMesh software package supports creating tessellated and tetrahedral meshes in \textsc{Geant4} at runtime, which enables fast iteration and a flexible geometry selection.
The sequence of operations is as follows:
We receive the original geometry as STEP file from the mechanical workshop, which includes every detail as its own object.
First, we use FreeCAD to reduce the complexity by deleting minor components that have little to no impact on the efficiency.
This should provide both a smoother mesh conversion as well as a faster simulation.
To assess which components can be deleted, we reasoned that objects that are not in the direct path between source and detector are less critical, for example the connectors in the footer of SONIC, see \cref{f:targetchamber}.
In addition, objects that are either tiny (screws) or made from low-Z material (gaskets, isolation) are also expendable in our case.
This might not hold when investigating the efficiency at very low γ-ray energies or in the X-ray regime, or scenarios where charged particles pass through.
Ideally, one could even remove the screw holes entirely, which would both be closer to reality in terms of material budget and reduce the complexity of the model.
Second, we group objects made from the same material, e.g., aluminum, together and save them in a single STEP file.
Third, the STEP geometry is converted to an STL mesh.
While FreeCAD can perform this conversion, we experienced several problems with this approach, mostly stuck tracks during simulation.
Instead, we used the online STEP-to-STL converter of a 3D-print-on-demand service without issues.
An honorable mention at this point is the \emph{MeshLab} software for mesh processing and editing.
Once CADMesh loads the STL shape as tessellated volume, it can be assigned its material and placed like any other shape.
An example of this process is shown in \cref{f:targetchamber}.
\begin{figure}
\centering
\includegraphics[width=0.49\columnwidth, height=0.495\columnwidth]{figures/cad-full.jpg}
\includegraphics[width=0.49\columnwidth, height=0.495\columnwidth]{figures/cad-red.jpg}
\includegraphics[width=0.49\columnwidth, height=0.495\columnwidth]{figures/cad-geant4.jpg}
\includegraphics[width=0.49\columnwidth, height=0.495\columnwidth]{figures/sonic2.jpg}
\caption{\label{f:targetchamber} Example for using CAD geometry in \textsc{Geant4}. The original highly-detailed CAD file (t.l.) is reduced to its main components (t.r.) and converted to an STL Mesh. CADMesh then loads this mesh, which can then be assigned a material and placed like a regular solid in \textsc{Geant4} (b.l.). This process can recreate the real-life geometry (b.r.) quickly and accurately.}
\end{figure}
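For reference, the final loading step can be sketched in a few lines. The sketch below uses the CADMesh~2-style interface together with standard \textsc{Geant4} calls; the file name and placement details are placeholders for the reduced aluminum mesh described above:
\begin{verbatim}
// Load the reduced STL mesh (placeholder
// file name) as a tessellated solid.
auto mesh =
    CADMesh::TessellatedMesh::FromSTL(
        "sonic_aluminum.stl");
G4VSolid* solid = mesh->GetSolid();

// Assign a material and place the shape
// in the world volume like any other solid.
auto al = G4NistManager::Instance()
    ->FindOrBuildMaterial("G4_Al");
auto lv = new G4LogicalVolume(
    solid, al, "Chamber");
new G4PVPlacement(nullptr, G4ThreeVector(),
                  lv, "Chamber", worldLV,
                  false, 0);
\end{verbatim}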
\subsubsection{Detector geometry}\label{c:hpgegeo}
Several types of detectors are implemented in G4Horus, which are derived from a common \texttt{Detector} class.
This base class provides basic operations to be placeable by the \texttt{Setup} class, such that they can be mounted appropriately, see \cref{c:setup}.
\texttt{PIPS} particle detectors directly derive from this base class.
For HPGe detectors, several different crystal types exist.
A common \texttt{HPGe} base class provides implementation of the cylindrical aluminum hull, while the derived \texttt{HPGe\-Coaxial}, \texttt{HPGeClover}, and \texttt{HPGeHexagonal} classes implement the respective inner structures.
Initial parameters for most HPGe detectors were taken from the manufacturer data sheets and gathered in \texttt{DetectorLibrary}, a factory class that instantiates the correct detector from its identifier.
While all our HPGe detectors used here are technically coaxial detectors, the \texttt{HPGeCoaxial} implements the unaltered detector shape, a cylinder with a drilled hole from the back.
Data sheets provided by the manufacturer are reasonably detailed and include diameter, length, volume and distance to the end cap.
Educated guesses had to be made sometimes for the dimensions of the hole drilled for the cooling finger.
The crystals implemented by \texttt{HPGeHexagonal} are cut to semi-hexagonal conical shapes and encapsulated in hermetically closed aluminum cans of the same form \cite{Thomas1995}.
This type is used also in EUROBALL \cite{Simpson1997} and it is the predecessor to the six-fold segmented encapsulated MINIBALL \cite{Warr2013} and 36-fold segmented AGATA \cite{Akkoyun2012} detectors.
The dimensions of each crystal are identical apart from the length, which can vary slightly and is noted in the data sheets.
The implementation was tested with \isotope[226]{Ra}, \isotope[56]{Co}, and \isotope[27]{Al}(p,γ)\isotope[28]{Si} calibration data \cite{Mayer2016}.
In addition, a calibration data set with \isotope[226]{Ra}, \isotope[56]{Co}, \isotope[66]{Ga}, and \isotope[24]{Al} was used from an experiment with the SONIC-V3-ΔEE target chamber.
For most classic coaxial detectors, only minor changes, e.g., to the dead layer thickness, were necessary to reproduce the absolute FEP efficiency.
While we tried to bring the efficiency shape in line over the whole energy range, we focused less on the low energy part than described in, e.g., \cite{Chuong2016}.
Some of the encapsulated, hexagonal detectors show an experimental efficiency which is up to \SI{30}{\percent} lower than expected from simulations.
We have investigated this issue in more detail and studied the impact on the simulation accuracy at high energies, see \cref{c:geocoax}.
BGO shields for active Compton suppression were implemented with two different types of cone-shaped, lead front pieces (\emph{noses}).
Energy deposited in these detectors is converted to a veto signal afterwards.
For determining the HPGe FEP efficiency, it is not required to record veto detector data, and they can be used passively.
The two HPGe Clover detectors of the Cologne Clover Counting Setup \cite{Scholz2014a} with four crystals each were implemented with dimensions from prior work.
\subsubsection{Setup geometry}\label{c:setup}
For our experiments, detectors are placed around the target in the center.
The base class \texttt{Setup} is the abstract concept of an experimental setup which provides the common detector placement logic.
The individual setups derive from this base class and provide the Θ and φ coordinates of the mounting points as well as physical structures, if needed.
The main experimental setup covered in this project is the high-efficiency γ-ray spectrometer HORUS \cite{Netterdon2014a}.
It provides 14 mounting points, labeled \texttt{Ge00} to \texttt{Ge13}, for HPGe detectors and BGO anti-Compton shields, see \cref{f:horus}.
In the center of HORUS, different target chambers can be installed.
Two different target chambers for nuclear astrophysics were implemented, one with conventional and one with CAD geometry.
Different versions of the SONIC target chamber are available via CAD geometry.
The SONIC-V3 target chamber has 12 mounting points for PIPS detectors, and its ΔE-E variant has 12 additional positions to accommodate thinner PIPS detectors to form ΔE-E telescopes \cite{Pickstone2017}.
For each experiment, the user builds the geometry in \texttt{DetectorConstruction} using \texttt{PlaceDetector(id, po\-si\-tion, distance, filters)}.
Within a single line, a detector is identified by its id, mounted to a named position, and equipped with passive filter materials.
See \cref{f:section} for a schematic view and distance definition.
The whole process of creating all required geometry information is thus reduced to a handful of clearly arranged lines of code, and can be done within minutes:
\begin{verbatim}
auto horus = new Horus(worldLV);
horus->PlaceDetector(
"609502", "Ge00", 7. * cm,
{{"G4_Cu", 2. * mm}, {"G4_Pb", 1. * mm}}
);
horus->PlaceDetector("73954", "Ge01", 7. * cm);
// ...
auto sonic = new SonicV3(worldLV);
sonic->PlaceDetector("PIPS", "Si00", 45.25 * mm);
sonic->PlaceDetector("PIPS", "Si01", 45.25 * mm);
// ...
\end{verbatim}
This method requires recompilation on any geometry change.
While it is possible to build a messenger system to set up the geometry at runtime with \textsc{Geant4} macros, the resulting improvement in usability is currently not deemed worth the loss of direct control and flexibility.
This is a subjective matter and we might revisit this decision in the future.
\begin{figure}
\includegraphics[width=\columnwidth]{figures/horus.png}
\caption{\label{f:horus} Full virtual assembly of SONIC@HORUS. 14 HPGe detectors (blue germanium crystals with transparent black aluminum enclosures) and 6 BGO anti-Compton shields (red, with black lead noses) pointed at the target chamber (grey). Note that the z-axis points in beam direction, and the y-axis points down. Copper filters (orange) are installed in front of the detectors to reduce the number of low-energy γ-rays hitting the detectors.}
\end{figure}
\begin{figure}
\begin{tikzpicture}[>=Latex, font=\sffamily, scale=\columnwidth/252.0pt]
\node [anchor=north west,inner sep=0] (img) at (0,-0.5) {\includegraphics[width=\columnwidth]{figures/section.png}};
\node at (2,-0.2) {anti-Compton Shield};
\draw [-] (2,-0.4) -- (2,-0.7);
\node at (5,-0.35) {Lead Nose};
\draw [-] (5,-0.55) -- (4.7,-1.2);
\node at (7.5,-0.2) {Target Chamber};
\draw [-] (7.5,-0.4) -- (7.5,-0.7);
\node at (1,-4) {Detector Hull};
\draw [-] (1,-3.8) -- (1,-2.5);
\node at (4.5,-4) {Germanium Crystal};
\draw [-] (4,-3.8) -- (3,-2.5);
\node at (2.5,-4.5) {Cooling Finger};
\draw [-] (2.5,-4.3) -- (2,-2);
\node at (5.9,-3.4) {Energy Filter};
\draw [-] (5.9,-3.2) -- (5.3,-2.6);
\node at (8.1,-2) {Target};
\node at (5.8,-1.6) {d\textsubscript{HPGe}};
\draw [thick, |<->|] (7.55,-1.85) -- (3.55,-1.85);
\draw [thick, |<->|] (7.55,-2.15) -- (5.335,-2.15);
\node at (5.8,-2.45) {d\textsubscript{BGO}};
\end{tikzpicture}
\caption{\label{f:section} Schematic view of a HPGe detector and its anti-Compton shield. The distances $d_\text{HPGe}$ and $d_\text{BGO}$ are measured from the target position to the front of the detector or shield with filters equipped. For the anti-Compton shields, different nose sizes are available to match the opening angle at different distances.}
\end{figure}
\subsection{Physics}
Interactions of γ-rays are known well enough for most simulation purposes between \SI{20}{keV} and \SI{20}{\MeV}.
A predefined physics list can supply all these interactions without hassle.
It is not necessary to assemble the physics from its smallest components.
Most physics lists use the same standard electromagnetic physics, which, given the geometrical uncertainties, should be sufficient for this use case --- there should be no advantage in using the specialized high precision models for X-rays and low energy γ-rays.
G4Horus uses the \texttt{Shielding} physics list by default, because it includes the radioactive decay database.
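The corresponding initialization reduces to a single call; a minimal sketch:
\begin{verbatim}
#include "Shielding.hh"
// During initialization, with a
// G4RunManager* runManager at hand:
// reference physics list with standard EM
// physics and radioactive decay database.
runManager->SetUserInitialization(
    new Shielding());
\end{verbatim}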
\subsection{Actions}
All actions are initially dispatched by the \texttt{Action\-Ini\-tial\-iza\-tion} management class.
It parses the parameters passed to the executable and selects the appropriate primary generator, run action, and event action class.
Primary particles can either be generated by the basic \textsc{Geant4} \texttt{ParticleGun} to generate single, mono-energetic γ-rays for efficiency simulation or by specialized generators for, e.g., pγ-reactions.
One out of three output formats can be selected:
The simplest output format consists of histograms, which are created with the ROOT-compatible classes from \textsc{Geant4} and filled with the deposited energy for each detector.
If coincidence data is required, \texttt{ntuples} can be used.
Here, a table-like structure with a column for each detector is filled with a row for each event, also implemented with the ROOT-compatible classes from \textsc{Geant4}.
For simple efficiency simulations, this is extraordinarily inefficient as almost all entries will be zero.
Even with compression and zero-suppression, several gigabytes of data are accumulated quickly.
Instead, \emph{binary event files} can be used to store events.
They are normally produced by the sorting code \emph{SOCOv2} \cite{SOCOv2} as an intermediate event storage from raw experimental data.
Its data types, an output management class, and the respective actions have been implemented in G4Horus.
The format is well suited for the sparse data produced here, and a full simulation will produce only a few hundred megabytes of data.
The simulated data can be analyzed with the same procedure as real experimental data with the same or similar workflows.
All components are built with multi-threading in mind.
The main servers at the Institute for Nuclear Physics in Cologne provide 32 or 56 logical cores each, which can be used to capacity with the simulations.
The executable can either run in visual mode, where the geometry can be examined in 3D, or batch mode for the actual simulation.
\subsection{Automated data evaluation}
The main mission is the reliable and robust efficiency determination, which extends to simulation evaluation.
For this, a ROOT-script is included to automatically extract full-energy, single-escape, and double-escape-peak efficiencies for all simulated energies.
As energy resolution is neither included in the Monte-Carlo simulation itself nor currently added in the application, the full energy peak is a single isolated bin in the spectrum.
For the single- and double-escape peaks, the Compton background is subtracted.
In case the single- and double-escape peak efficiencies must be determined with active Compton suppression, the vetoed spectra are created from \texttt{ntuple} data first.
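As an illustration of the simplest case, the full-energy-peak efficiency can be read off a ROOT histogram as follows (a sketch; the function name is chosen for illustration):
\begin{verbatim}
// Without simulated energy resolution, the
// FEP occupies a single bin: no fit needed.
double fepEfficiency(TH1D* h, double eGamma,
                     long nPrimaries)
{
    const int bin = h->FindBin(eGamma);
    return h->GetBinContent(bin)
        / static_cast<double>(nPrimaries);
}
\end{verbatim}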
\section{Dead regions and possible aging effects}\label{c:geocoax}
During extensive simulations of several experiments, it was found that for several hexagonally cut N-type HPGe crystals, the simulated efficiency is higher than the actual measured efficiency, up to \SI{30}{\percent} in some cases.
This issue was investigated further.
The shape of the crystal cannot be the issue, as its dimensions and especially its weight are well documented.
The dead-layer at the front of the detector was also excluded, as matching the required efficiency reduction leads to unrealistic thicknesses of \SI{10}{\mm} (instead of \SI{0.3}{\micro\m}) as well as strong deviations in the shape of the efficiency curve.
As the detectors in question were built over 20 years ago, aging effects might play a role.
The detector was used and stored cooled for most of the time but heated many times to anneal neutron induced damage.
While the dead layer at the front is created due to boron doping and should be immobile, the lithium doping of the core may have diffused further into the detector over time, creating a dead layer around the cooling finger.
Other groups have reported deviations from the manufacturer's crystal dimension specifications and aging effects.
For example, Berndt and Mortreau discovered that their cooling finger diameter is \SI{14}{\mm} instead of the declared \SI{10}{mm} by scanning the detector with highly collimated sources \cite{Berndt2012}.
Huy \emph{et al.} could trace an efficiency reduction back to an increase in the lithium dead layer of their p-type coaxial detector \cite{Huy2007}.
See also \cite{Sarangapani2017, Boson2008} and references therein.
We simulate a possible dead layer increase by splitting the geometry of the hexagonal cut HPGe crystal (radius $r_{C}$ and height $h_{C}$) in an active and inactive part.
Here, we made the simplest possible assumption: A cylinder with radius $r_{I}$ and height $h_{I}$ around the cylindrical borehole with
radius $r_{B}$ and height $h_{B}$, see \cref{f:deadhex-sketch}.
\begin{figure}
\begin{tikzpicture}[font=\small, scale=\columnwidth/6.2cm]
\tikzset{>=latex}
\draw[semithick] (0,-1.5) arc (-90:90:1.5/2.7 and 1.5);
\draw[semithick] (0,-1.5) arc (270:90:1.5/2.7 and 1.5);
\draw[semithick] (5,-1.5) arc (-90:90:1.5/2.7 and 1.5);
\draw[semithick] (5,-1.5) arc (270:90:1.5/2.7 and 1.5);
\draw[semithick] (0,-1.5) -- (5,-1.5);
\draw[semithick] (0,+1.5) -- (5,+1.5);
\draw[|<->|,thin] (0,-1.6) -- (5,-1.6) node [midway, below, yshift=0.9mm] {$h_C$};
\draw[dashed,color=gray] (0,-0.7) arc (-90:90:0.7/2.7 and 0.7);
\draw[dashed,color=gray] (0,-0.7) arc (270:90:0.7/2.7 and 0.7);
\draw[dashed,color=gray] (4.3,-0.7) arc (-90:90:0.7/2.7 and 0.7);
\draw[dashed,color=gray] (4.3,-0.7) arc (270:90:0.7/2.7 and 0.7);
\draw[dashed,color=gray] (0,-0.7) -- (4.3,-0.7);
\draw[dashed,color=gray] (0,+0.7) -- (4.3,+0.7);
\draw[|<->|,thin] (0,-0.8) -- (4.3,-0.8) node [midway, below, yshift=0.9mm] {$h_I$};
\draw[semithick] (0,-0.4) arc (-90:90:0.4/2.7 and 0.4);
\draw[semithick] (0,-0.4) arc (270:90:0.4/2.7 and 0.4);
\draw[semithick] (4,-0.4) arc (-90:90:0.4/2.7 and 0.4);
\draw[semithick] (4,-0.4) arc (270:90:0.4/2.7 and 0.4);
\draw[semithick] (0,-0.4) -- (4,-0.4);
\draw[semithick] (0,+0.4) -- (4,+0.4);
\draw[|<->|,thin] (0,-0.5) -- (4,-0.5) node [midway, below, yshift=0.9mm] {$h_B$};
\draw[dotted] (0,0) -- (5,0);
\draw[|<->|,thin] (5,0) -- (5,1.5) node [midway, right] {$r_C$};
\draw[|<->|,thin] (4.3,0) -- (4.3,0.7) node [at end, above] {$r_I$};
\draw[|<->|,thin] (3.7,0) -- (3.7,0.4) node [midway, left] {$r_B$};
\end{tikzpicture}
\caption{\label{f:deadhex-sketch}
Sketch of a HPGe crystal with radius $r_C$ and height $h_{C}$ with its borehole with radius $r_{B}$ and height $h_{B}$.
Around this hole, we assume an inactive zone with radius $r_{I}$ and height $h_{I}$.
}
\end{figure}
A quick approximation for $r_{I}$ and $h_{I}$ as a function of the relative active volume $A=\frac{\text{Active Volume}}{\text{Total Volume}}$ can be made in two steps:
First, the back part with the bore hole, i.e., three cylinders with the same height
\begin{equation}
A = \frac{r_C^2 - r_{I}^2}{r_C^2 - r_B^2} \Rightarrow r_{I} = \sqrt{r_C^2-A(r_C^2-r_B^2)},
\end{equation}
where a normal cylindrical shape $C$ for the whole crystal is assumed.
Second, the front part:
\begin{equation}
A = 1 - \frac{(h_{I}-h_B)r_{I}^2}{(h_C-h_B)r_C^2} \Rightarrow h_{I} = h_B + (1-A)(h_C-h_B)\frac{r_C^2}{r_{I}^2}
\end{equation}
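Both expressions translate directly into code; a minimal sketch (any consistent length unit may be used):
\begin{verbatim}
#include <cmath>
// Inactive-zone dimensions for a given
// relative active volume A.
double rInactive(double A, double rC,
                 double rB)
{
    return std::sqrt(
        rC*rC - A*(rC*rC - rB*rB));
}
double hInactive(double A, double rC,
                 double rB, double hC,
                 double hB)
{
    const double rI = rInactive(A, rC, rB);
    return hB + (1.0 - A)*(hC - hB)
        *(rC*rC)/(rI*rI);
}
\end{verbatim}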
Simulations exploring a large range of $A$ are compared to experimental values for one detector in \cref{f:deadhex-efficiency}.
\begin{figure}
\includegraphics[width=\columnwidth]{figures/deadhex/efficiency.pdf}
\includegraphics[width=\columnwidth]{figures/deadhex/effdiv.pdf}
\includegraphics[width=\columnwidth]{figures/deadhex/effscaling.pdf}
\caption{\label{f:deadhex-efficiency}
a) Experimental and simulated full-energy-peak efficiency for a hexagonally cut encapsulated HPGe detector.
b) Experimental and simulated full-energy-peak efficiency divided by a reference simulation ($A=\SI{85}{\percent}$).
c) Scale and d) shape quality indicators for different values of active volume $A$.
Notice how the simulation for $A=\SI{100}{\percent}$ overestimates the real performance by a significant amount; simply scaling an efficiency to the experimental values will not yield accurate results.
The relative differences between the simulations also increase drastically with γ-ray energy.
Once all geometry parameters are optimized, the minima for SCAL and EWSD should be at the same position.
}
\end{figure}
The simulation should reproduce the scale and shape of the efficiency curve.
\texttt{curve\_fit} from the \texttt{scipy.optimize} library was used to determine the scaling factor $p$ of each simulation with respect to the measured data points.
Values between the \SI{100}{\keV}-spaced simulation points were interpolated linearly.
An ideal value would be $p=1$, i.e., no scaling. To derive the best value for $A$, this can be reformulated as a smooth minimizable function
\begin{equation}
\text{SCAL}(A) = (1-p)^2.
\end{equation}
In addition, the shape of the curve is extraordinarily important, especially with respect to deriving efficiencies at \SI{10}{\MeV}.
To give more weight to the fewer calibration points at high energies, we define the energy weighted squared deviation of the scaled curve
\begin{equation}
\text{EWSD}(A) = \frac{\sum_i{E_{\gamma_i} (\epsilon_{exp}(E_{\gamma_i})-p\epsilon_{sim}(E_{\gamma_i}))^2}}{\sum_i E_{\gamma_i}},\label{eq:ewsd}
\end{equation}
which is another minimizable function of $A$ and related to the covariance / uncertainty of the scaling factor.
Note that other scaling factors for the energy could also be used, e.g., $E_{\gamma_i}^3$.
With this approach the single free variable $A$ can be determined by minimizing both SCAL and EWSD, see \cref{f:deadhex-efficiency}.
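For reference, \cref{eq:ewsd} can be evaluated directly; a minimal sketch, with the simulated efficiencies already interpolated to the experimental energies:
\begin{verbatim}
#include <cstddef>
#include <vector>
double ewsd(const std::vector<double>& E,
            const std::vector<double>& expEff,
            const std::vector<double>& simEff,
            double p)
{
    double num = 0., den = 0.;
    for (std::size_t i = 0;
         i < E.size(); ++i) {
        const double d =
            expEff[i] - p*simEff[i];
        num += E[i]*d*d;
        den += E[i];
    }
    return num/den;
}
\end{verbatim}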
\section{Results}
The goals described in \cref{c:purpose} could be achieved.
Efficiencies can be simulated with satisfactory accuracy, including SEP and DEP efficiencies with and without veto; an example is shown in \cref{f:efficiency}.
In version 1.0 \cite{jan_mayer_2020_3692475}, 22 HPGe detectors and 5 target chambers are implemented and can be easily combined into the individual setup with minimal knowledge of \textsc{Geant4} or HPGe detector geometries.
Adding new or tweaking existing detectors is possible with a central data file.
There is a procedure in place to add new experimental setups and target chambers as well as detector types.
We have used this simulation environment to make informed decisions about extensions to the existing setup, e.g., adding passive shielding to reduce the number of low energetic γ-rays.
\begin{figure}
\includegraphics[width=\columnwidth]{figures/efficiency/efficiency.pdf}
\caption{\label{f:efficiency}
Example for simulated single escape efficiencies with and without active Compton suppression \cite{Mayer2016}. The escape-peak efficiency can also be tested in-beam with transitions from common contaminants like oxygen and carbon by scaling their intensity to the full-energy-peak efficiency.
}
\end{figure}
The software has been used for several experiments with good results, even though some detectors still require manual parameter tweaking to reproduce the experimental values accurately.
This project was released as Open-Source and is available from \url{https://github.com/janmayer/G4Horus} \cite{jan_mayer_2020_3692475}.
We invite everyone to adapt the project or scrounge parts of the code for other projects.
While our developments are focused on the HORUS setup, the code can be used for other, unrelated experiments employing γ-ray detectors surrounding a target.
Experimental setups can be added by deriving them from the \texttt{Setup} class and specifying the detector Θ and φ angles in the constructor.
Typical HPGe detectors can be added by appending their individual parameter sets to the Detector Library.
If the existing detector templates are insufficient, more can be added by deriving them from the \texttt{Detector} class and overriding
the provided virtual methods.
Target chambers can be implemented with the usual \textsc{Geant4} methods or with CAD-based models as described in \cref{c:chambergeo}.
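As an illustration of the first extension point, a minimal sketch of a new setup class might read as follows; note that the registration call \texttt{AddMountingPoint} is a placeholder name, not the actual interface of the \texttt{Setup} base class:
\begin{verbatim}
#include <string>
#include "G4SystemOfUnits.hh"

class RingSetup : public Setup {
public:
    explicit RingSetup(
        G4LogicalVolume* worldLV)
        : Setup(worldLV)
    {
        // 8 mounting points in a ring at
        // theta = 90 deg; the registration
        // call is a placeholder name.
        for (int i = 0; i < 8; ++i) {
            AddMountingPoint(
                "Ge0" + std::to_string(i),
                90. * deg, i * 45. * deg);
        }
    }
};
\end{verbatim}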
\section{Outlook}
A large problem with \textsc{Geant4} is the geometry implementation.
While using code is a step up over digital punch cards used in MCNP, it is decades behind other simulation integrations as seen in, e.g., finite element analysis.
In the future, it would be advisable to find a modern solution that is ready for everyday production usage.
Due to its massive advantages in development speed, ease of use, and flexibility, CAD based simulation geometry could be officially supported by the \textsc{Geant4} collaboration.
To reduce the slowdown of simulations, a hybrid approach might be feasible: Convert structures to simple shapes where possible and use tessellated shapes for the remnants.
In a new Monte Carlo code, only tessellated shapes could be supported and used exclusively with GPUs.
For G4Horus, we continue to make improvements to the description of our detectors as well as add new functionality like better support for pγ- and γγ-coincidence measurements.
\section{Acknowledgments}
We would like to thank D. Diefenbach and S. Thiel from our development workshop for accelerators and accelerator experiments for designing the target chambers and their help with the CAD models, Dr. J. Eberth for the fruitful discussions about HPGe detectors, and C. Müller-Gatermann and the accelerator crew for their help with the experiments and source production.
Supported by the DFG (ZI 510/8-1, ZI-510/9-1).
\section{Introduction}\label{s:intro}
The identification of characteristic parameters for bifurcations is one of the key goals of bifurcation theory. For the Andronov-Hopf bifurcation from equilibria to periodic orbits, the most relevant characteristic parameter is the first Lyapunov coefficient, $\sigma_s\in\mathbb{R}$. If it is non-zero, it determines the scaling and the direction of bifurcation relative to the real part of the critical eigenvalues of the linearization at the equilibrium. It is well known that the truncated normal form of the radial component on the center manifold reads
\begin{equation}
\dot{u} = \mu u +\sigma_s u^3,
\label{Npitchfork}
\end{equation}
with parameter $\mu\in\mathbb{R}$. An excellent exposition of Andronov-Hopf bifurcation theory and applications can be found in \cite{Marsden76}, see also \cite{Guckenheimer, Kuznetsov}.
In Figure~\ref{f:Hopf}(a,b) we plot the associated pitchfork bifurcation for different signs of $\sigma_s$.
\begin{figure}
[bt]
\begin{tabular}{cccc}
\includegraphics[width= 0.15\textwidth]{supercritical_smooth3.pdf}\hspace*{1cm}
&\includegraphics[width= 0.15\textwidth]{subcritical_smooth3n.pdf}\hspace*{1.5cm}
&\includegraphics[width= 0.15\textwidth]{supercritical_nonsmooth3.pdf}\hspace*{1cm}
& \includegraphics[width= 0.15\textwidth]{subcritical_nonsmooth3n.pdf}\\
(a) & (b) &(c) &(d)
\end{tabular}
\caption{(a) Supercritical, $\sigma_s=-1$, and (b) subcritical, $\sigma_s=1$, pitchfork bifurcation of \eqref{Npitchfork} with stable (green) and unstable (red dashed) equilibria. In (c,d) we plot the analogous `degenerate' pitchforks for the non-smooth case \eqref{Dpitchfork}, with $\sigma_{_\#}=-1,1$, respectively.}
\label{f:Hopf}
\end{figure}
Generically, $\sigma_s\neq0$ and the bifurcating periodic orbits either coexist with the more unstable equilibrium -- the supercritical case, $\sigma_s<0$ -- or with the more stable equilibrium -- the subcritical case, $\sigma_s>0$. This distinction is relevant for applications since the transition induced by the bifurcation is a `soft' first order phase transition in the supercritical case, while it is `hard' in the subcritical one. Indeed, the transition is `safe' in a control sense in the supercritical case and `unsafe' in the subcritical one, where the local information near the equilibrium is insufficient to determine the dynamics near the unstable equilibrium.
Therefore, a formula for the first Lyapunov coefficient is important from a theoretical as well as applied viewpoint. In generic smooth bifurcation theory, such a formula is well known in terms of quantities derived from the Taylor expansion to order three in the equilibrium at bifurcation, {e.g.,} \cite{Kuznetsov}. However, this cannot be applied in non-smooth systems. Non-smooth terms appear in models for numerous phenomena and their study has gained momentum in the past decades,
as illustrated by the enormous amount of literature, see \cite{BookAlan,KuepperHoshamWeiss2013,NonsmoothSurvey2012,TianThesis} and the references therein to hint at some; below we discuss literature that relates to our situation.
In this paper, we provide explicit formulas for the analogues of the first Lyapunov coefficient in systems with regular linear
term and Lipschitz continuous, but only piecewise smooth nonlinear terms, with jumps in derivatives across switching surfaces. We also discuss codimension-one degeneracies and the second Lyapunov coefficient. To the best of our knowledge, this analysis is new.
Such systems can be viewed as mildly non-smooth, but occur in various models, e.g., for ship maneuvering
\cite{InitialPaper,FossenHandbook,ToxopeusPaper}, which motivated the present study. Here the hydrodynamic drag force
at high-enough Reynolds number is a non-smooth function of the velocity $u$. More specifically, a dimensional and
symmetry analysis with $\rho$ being the density of water, $C_D$ the drag coefficient and $A$ the effective drag area, yields
$$F_D = -\frac{1}{2}\rho C_D A u\abs{u}.$$
Effective hydrodynamic forces among velocity components $u_i, u_j$, $1\leq i,j\leq n$, with $n$ depending on the model type, are often likewise modeled by second order modulus terms: $u_i\abs{u_j}$, cf.\ \cite{FossenHandbook}.
For illustration, let us consider the corresponding non-smooth version of \eqref{Npitchfork},
\begin{equation}
\dot{u} = \mu u +\sigma_{_\#} u\abs{u},
\label{Dpitchfork}
\end{equation}
where the nonlinear term has the odd symmetry of the cubic term in \eqref{Npitchfork} and is once continuously differentiable, but not twice. We note that in higher dimensions, the mixed nonlinear terms $u_i\abs{u_j}$ for $i\neq j$, are differentiable at the origin only.
In Figure \ref{f:Hopf}(c,d) we plot the resulting bifurcation diagrams. Compared with \eqref{Npitchfork},
the amplitude scaling changes from $\sqrt{\mu}$ to $\mu$ and the rate of convergence to the bifurcating state changes from $-2\mu$ to $-\mu$.
Indeed, in this scalar equation case, the singular coordinate change $u=\tilde u^2$ transforms \eqref{Dpitchfork} into \eqref{Npitchfork} up to time rescaling by $2$. However, there is no such coordinate change for general systems of equations with non-smooth terms of this kind.
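For the scalar case, both observations can be checked in one line: on $u\geq 0$, equation \eqref{Dpitchfork} reads $\dot u=\mu u+\sigma_{_\#} u^2$, with nontrivial equilibrium $u_*=-\mu/\sigma_{_\#}$ and linearization $\mu+2\sigma_{_\#} u_*=-\mu$ there, and substituting $u=\tilde u^2$ gives
\[
2\tilde u\,\dot{\tilde u}=\mu \tilde u^2+\sigma_{_\#}\tilde u^4, \qquad\text{i.e.,}\qquad \dot{\tilde u}=\tfrac{1}{2}\big(\mu\tilde u+\sigma_{_\#}\tilde u^3\big),
\]
which is \eqref{Npitchfork} up to the rescaling of time by $2$.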
More generally, we consider $n$-dimensional systems of ordinary differential equations (ODE) of the form
\begin{equation}\label{e:abstract0}
\dot \textbf{u}= A(\mu)\textbf{u}+G(\textbf{u}),
\end{equation}
with matrix $A(\mu)$ depending on a parameter $\mu\in\mathbb{R}$, and Lipschitz continuous nonlinear $G(\textbf{u})=\mathcal{O}(|\textbf{u}|^2)$. We shall assume the nonlinearity is smooth away from the smooth hypersurfaces $H_j$, $j=1,\ldots,n_H$, the \emph{switching surfaces}, which intersect pairwise transversally at the equilibrium point $\textbf{u}_*=0$. We assume further that the smoothness of $G$ extends to the boundary within each component of the complement of $\cup_{j=1}^{n_H} H_j\subset \mathbb{R}^n$.
The bifurcation of periodic orbits is -- as in the smooth case -- induced by the spectral structure of $A(\mu)$, which is (unless stated otherwise) hyperbolic except for a simple complex conjugate pair that crosses the imaginary axis away from the origin as $\mu$ crosses zero.
\medskip
Our main results may be summarized informally as follows.
We infer from the result in \cite{IntegralManifold} that the center manifold of the smooth case is replaced by a Lipschitz invariant manifold (Proposition \ref{prop:inv_man}), and directly prove that a unique branch of periodic orbits emerges at the bifurcation (Theorem \ref{t_per_orb}). Moreover, we prove that the quadratic terms of $G$ are of generalized second order modulus type if $G$ is piecewise $C^2$ smooth (Theorem \ref{t:abstractnormal}). Here the absolute value in the above terms is replaced by
\begin{align}
[u]_{\pn}^{\pp} =
\begin{cases}
p_{_+} u, & u\geq 0, \\
p_{_-} u, & u<0,
\end{cases}
\label{gen_abs_val}
\end{align}
where $p_{_-},p_{_+}\in\mathbb{R}$ are general different slopes left and right of the origin, respectively.
This already allows to express the first Lyapunov coefficient in an integral form, but its explicit evaluation is somewhat involved, so that we defer it to \S\ref{Gen_linear_part}. Instead, we start with the simpler case when $A$ is in block-diagonal form, in normal form on the center eigenspace, and of pure second order modulus form ($p_{_+}=-p_{_-}=1$). For the planar situation, we derive a normal form of the bifurcation equation with rather compact explicit coefficients using averaging theory (Theorem \ref{t_averaging}). Beyond the first Lyapunov coefficient $\sigma_{_\#}$, this includes the second Lyapunov coefficient $\sigma_2$, which becomes relevant when $\sigma_{_\#}=0$, and which explains how the smooth quadratic and cubic terms interact with the non-smooth ones in determining the bifurcation's criticality.
For refinement and generalization, and to provide direct self-contained proofs, we proceed using the Lyapunov-Schmidt reduction for the boundary value problem of periodic solutions, and refer to this as the `direct method' (\S\ref{s:direct}). We also include a discussion of the Bautin-type bifurcation in this setting, when $\sigma_{_\#}=0$. Concluding the planar case, we generalize the results to arbitrary $p_{_+}, p_{_-}$ (\S\ref{s:appplanar}).
These results of the planar case readily generalize to higher dimensions, $n>2$, with additional hyperbolic directions (\S\ref{s:3D}, \ref{s:nD}). In addition, we apply the direct method to the situation with an additional non-hyperbolic direction in the sense that the linearization at bifurcation has three eigenvalues on the imaginary axis: one zero eigenvalue and a complex conjugate pair.
In this case we show that either no periodic solutions bifurcate or two curves bifurcate (Corollaries \ref{c:3D}, \ref{c:nD}), depending on the sign of a particular combination of coefficients of the system that is assumed to be non-zero.
Concluding the summary of main results, in \S\ref{Gen_linear_part} we discuss the modifications in case the linear part is not in normal form.
\medskip
For illustration, we consider the planar case with linear part in normal form, so \eqref{e:abstract0} with $\textbf{u}=(v,w)$ reads
\begin{equation}\label{e:planar0}
\begin{aligned}
\dot{v} &= \mu v - \omega w + f\left( v, w \right), \\
\dot{w} &= \omega v + \mu w + g\left( v, w \right),
\end{aligned}
\end{equation}
and the case of purely quadratic nonlinearity with second order modulus terms gives
\begin{equation}\label{e:quadnonlin}
\begin{aligned}
f\left( v, w \right) &= a_{11}v\abs{v} + a_{12}v\abs{w} + a_{21}w\abs{v} + a_{22}w\abs{w},\\
g\left( v, w \right) &= b_{11}v\abs{v} + b_{12}v\abs{w} + b_{21}w\abs{v} + b_{22}w\abs{w},
\end{aligned}
\end{equation}
where $a_{ij}$, $b_{ij}$, $1\leq i,j\leq 2$, are real parameters. In this simplest situation, our new first Lyapunov coefficient reads
\begin{equation}\label{sigma1}
\sigma_{_\#} = 2a_{11}+a_{12}+b_{21}+2b_{22}
\end{equation}
and we plot samples of bifurcation diagrams in Figure~\ref{f:auto} computed by numerical continuation with the software \texttt{Auto} \cite{auto}. For these we used numerically computed Jacobians and avoided evaluation too close to the non-smooth point by choosing suitable step-sizes and accuracy.
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[width= 0.4\linewidth]{subcrit3D.pdf} &
\includegraphics[width= 0.4\linewidth]{supcrit3D.pdf}\\
(a) & (b)
\end{tabular}
\caption{Plotted are bifurcation diagrams of \eqref{e:planar0} with \eqref{e:quadnonlin} computed by numerical continuation. Blue curves are periodic orbits, black lines the equilibrium at the origin, and orange the extrema in $w$, showing the non-smooth bifurcation. In (a) we use $a_{ij}=b_{ij}=1$ except
$b_{21}=-1$, so that $\sigma_{_\#}=4>0$ (subcritical). In (b) $b_{22}=-3$, so that $\sigma_{_\#}=-4$ (supercritical).}
\label{f:auto}
\end{figure}
In comparison, the classical first Lyapunov coefficient for purely cubic nonlinearity, i.e., $|\cdot |$ replaced by $(\cdot)^2$, reads
\[
\sigma_s = 3a_{11}+a_{12}+b_{21}+3b_{22}.
\]
The leading order expansions of the radii $r_{_\#}$ and $r_s$ of bifurcating periodic solutions in these cases read, respectively,
\[
r_{_\#}(\mu) = -\frac{3\pi}{2\sigma_{_\#}}\mu + \mathcal{O}\left(\mu^2\right), \qquad
r_s(\mu) = 2\sqrt{-\frac{2}{\sigma_s}\mu} + \mathcal{O}\left(\mu\right).
\]
We show that for $\sigma_{_\#}=0$ the bifurcation takes the form
\[
r_0=\sqrt{\frac{2\pi\omega}{\sigma_2}\mu}+\mathcal{O}\left(\mu\right),
\]
analogous to the smooth case, but with second Lyapunov coefficient in this setting given by
\begin{equation}\label{sigma2}
\begin{aligned}
\sigma_2 =& \frac{1}{3}( b_{12}a_{11}-b_{21}a_{22}-a_{21}b_{22}+a_{12}b_{11}-2a_{11}a_{22}-2a_{12}a_{21}+2b_{12}b_{21}+2b_{11}b_{22} ) \\
+\, & \frac{\pi}{4}( b_{12}b_{22}-a_{12}a_{22}-a_{11}a_{21}+b_{21}b_{11}+2a_{11}b_{11}-2b_{22}a_{22} ).
\end{aligned}
\end{equation}
In the presence of smooth quadratic and cubic terms, the latter is modified by the classical terms, as we present in \S\ref{AV_S}.
Despite the similarity of $\sigma_{_\#}$ and $\sigma_s$, it turns out that there is no fixed smoothening of the absolute value function that universally predicts the correct criticality of Hopf bifurcations in these systems (\S\ref{s:smooth}).
For exposition of this issue, consider the $L^\infty$-approximations, with regularization parameter $\varepsilon> 0$, of the absolute value function $f_1(x) = \abs{x}$, given by $f_2(x)=\frac{2}{\pi}\arctan\left(\frac{x}{\varepsilon}\right)x$ (cf.\ \cite{Leine}), and $f_3(x)=\frac{2}{\pi}\arctan\left(\frac{x}{\varepsilon}(x-1)(x+1)\right)x$, a convex and a non-convex approximation, respectively.
This last function approximates the absolute value for large (absolute) values of $x$. We plot the graphs in Figure \ref{3_Cases}(a) and the bifurcation diagrams for $\dot{x}=\mu x-f_i(x)x$, $i\in\{1,2,3\}$, in Figure \ref{3_Cases}(b). In particular, $f_3$ gives a `microscopically' wrong result, which is nevertheless correct `macroscopically'.
\begin{figure}
\begin{subfigure}{.47\textwidth}
\centering
\includegraphics[width= 0.6\linewidth]{3_Cases_abs_fun.pdf}
\subcaption{In blue $f_1(x)$, in green $f_2(x)$ and in red $f_3(x)$.}
\end{subfigure}
\begin{subfigure}{.47\textwidth}
\centering
\includegraphics[width= 0.6\linewidth]{multiplots.pdf}
\subcaption{Bifurcation diagrams respect to $\mu$.}
\end{subfigure}
\caption{Comparison of bifurcation diagrams for $f_1, f_2$ and $f_3$ as in the text.}
\label{3_Cases}
\end{figure}
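A short Taylor expansion makes the distinction quantitative: for $|x|$ small compared to $\varepsilon$ (and to $1$), one finds $f_2(x)=\frac{2}{\pi\varepsilon}x^2+\mathcal{O}(x^4)$, whereas $f_3(x)=-\frac{2}{\pi\varepsilon}x^2+\mathcal{O}(x^4)$, so that
\[
\dot{x}=\mu x-f_2(x)x\approx \mu x-\frac{2}{\pi\varepsilon}x^3, \qquad \dot{x}=\mu x-f_3(x)x\approx \mu x+\frac{2}{\pi\varepsilon}x^3 .
\]
Hence $f_2$ bifurcates supercritically like $f_1$, while $f_3$ is microscopically subcritical; only at amplitudes beyond the regularization scale does its branch fold back onto the macroscopic supercritical picture of Figure~\ref{3_Cases}.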
Indeed, non-smooth terms in models typically stem from passing to a macro- or mesoscopic scale such that microscopic and smooth information is lost. Hence, the bifurcations in such models carry a macroscopic character and it is not surprising that an arbitrary smoothening changes this nature microscopically: a macroscopically supercritical bifurcation might show a subcritical behaviour on the microscopic level. However, the relevant information for the model is the macroscopic character, and -- for the class of models considered -- this is given by our newly derived Lyapunov coefficients.
The basic idea of proof is to change coordinates to a non-autonomous system for which the lack of smoothness is in the time variable only, so that averaging and the `direct method' can be applied.
We remark that in standard ODE literature on existence and bifurcations, smoothness of the time variable is often assumed, for example \cite{ChowHale}, but it is not needed in parts relevant for us. Indeed, merely continuity in time is for instance considered in \cite{CoddLev,Hartman,BookRasmussen}.
\medskip
In order to demonstrate how to apply our method in a concrete case, we discuss in \S\ref{s:shim} the 3D model of a shimmying wheel from \cite{SBeregi}. This system is of the form \eqref{e:abstract0} with pure second order modulus nonlinearity,
but linear part not in normal form, though it has a non-zero real eigenvalue as well as a pair of complex conjugate eigenvalues that crosses the imaginary axis upon parameter change.
We fully characterize the resulting bifurcations in Theorem \ref{t:shym}.
\medskip
We briefly discuss related literature. As mentioned, piecewise smooth vector fields have been widely investigated in many different applications as well as from a theoretical point of view, leading to a broad analysis in terms of bifurcation theory, cf.\ \cite{ReviewAlan,KuepperHoshamWeiss2013,NonsmoothSurvey2012}.
Herein, the theory of continuous as well as discontinuous vector fields, e.g., \cite{Filippov1988, Kunze2000}, is used and further developed. A major distinction between our case and the systems studied in the literature is that we assume an a priori separation into a linear part and a non-smooth nonlinear part. Broadly studied are the more general switching differential systems that are discontinuous across a switching surface or piecewise linear. These have been analyzed in various different forms, and we refer to \cite{TianThesis} for an exhaustive list of references; a typical case of discontinuity across the switching manifolds arises from Heaviside step functions in neural models in biology, e.g., \cite{Amari1977,Coombes2005,Harris2015}.
In analogy to center manifolds, the existence of invariant manifolds and sets
has been investigated in \cite{IntegralManifold} for Carath\'eodory vector fields, and in \cite{KuepperHosham2010,KuepperHoshamWeiss2012} for vector fields with one switching surface.
The bifurcation of periodic orbits in planar vector fields with one axis as the switching line has been studied in \cite{CollGasullProhens,GasTorr2003} via one-forms, and characteristic quantities have been determined, though the aforementioned Lyapunov coefficients are not included. Planar Hopf bifurcations for piecewise linear systems have been studied via return maps for one switching line in \cite{KuepperMoritz}, for several switching lines meeting at a point in \cite{BuzMedTor2018,Simpson2019,ZouKuepper2005},
and for a non-intersecting switching manifold using Li\'enard forms in \cite{LlibrePonce}. Higher dimensional Filippov-type systems with a single switching manifold are considered in \cite{ZouKuepperBeyn}, which allows one to study abstractly the occurrence of a Hopf bifurcation also in our setting; see also \cite{KuepperHoshamWeiss2013}.
An approach via averaging with focus on the number of bifurcating periodic orbits for discontinuous systems is discussed in \cite{LlibreEtAl2017}.
Nevertheless, we are not aware of results in the literature that cover our setting and our results on the explicit derivation of Lyapunov coefficients and the leading order analysis of bifurcating periodic solutions.
\medskip
This paper is organized as follows. In \S\ref{s:abstract} we discuss the abstract setting and provide basic results for the subsequent more explicit analysis. This is conducted in \S\ref{Planar_Section} for the planar case with linear part in normal form and nonlinear part with pure second order modulus terms for the non-smooth functions, together with quadratic and cubic smooth functions. In \S\ref{s:general} we generalize the absolute value to arbitrary slopes, the system to higher space dimensions, and consider the linear part not being in normal form. Finally, in \S\ref{s:shim} we illustrate the application of our method and results to a concrete model.
\section{Abstract viewpoint}\label{s:abstract}
In this section we discuss the abstract starting point for our setting and motivate the specific assumptions used in the following sections. We consider an $n$-dimensional system of autonomous ODEs in an open set $U\subset\mathbb{R}^n$, with $0\in U$, of the form
\begin{equation}\label{e:abstract}
\dot \textbf{u}= A(\mu)\textbf{u}+G(\textbf{u}),
\end{equation}
with matrix $A(\mu)$ depending on a parameter $\mu\in\mathbb{R}$, and Lipschitz continuous nonlinear $G(\textbf{u})$.
We are interested in a detailed analysis of Hopf-type bifurcations at the equilibrium point $\textbf{u}_*=0$. This requires control over the linear part, which is separated a priori in \eqref{e:abstract} from the potentially non-differentiable nonlinear part -- note that $G$ is differentiable at $\textbf{u}_*$ but not necessarily elsewhere. As usual for Hopf bifurcations, we assume that a pair of simple complex conjugate eigenvalues of $A(\mu)$ crosses the imaginary axis upon moving $\mu\in\mathbb{R}$ through zero. We collect the structural hypotheses on $A$ and $G$, which entail no further loss of generality for our leading order analysis.
\begin{hypothesis}\label{h:AG}
The eigenvalues of $A(\mu)$ are given by $\mu\pm \mathrm{i}\omega(\mu)$ with smooth non-zero $\omega(\mu)\in\mathbb{R}$ and all other eigenvalues have non-zero real part at $\mu=0$. The nonlinearity $G$ is Lipschitz continuous and satisfies $G(\textbf{u})=\mathcal{O}(|\textbf{u}|^2)$.
\end{hypothesis}
We denote by $E^\mathrm{c}$ the center eigenspace of $A(0)$ of the eigenvalues $\pm\mathrm{i}\omega(0)$, and first note the following result on invariant manifolds due to \cite{IntegralManifold}, which corresponds to center manifolds in the smooth case.
\begin{proposition}\label{prop:inv_man}
Under Hypothesis~\ref{h:AG}, for $0\leq |\mu|\ll1$ there exist $2$-dimensional Lipschitz continuous invariant manifolds $\mathcal{M}_\mu$ in an open neighborhood $U_*\subset U$ of $\textbf{u}_*$, which contain $\textbf{u}_*$ and all solutions that stay in $U_*$ for all time. Furthermore, if at $\mu=0$ all eigenvalues other than $\pm i\omega(0)$ have strictly negative real part, then each $\mathcal{M}_\mu$ is (transversally) exponentially attractive. In addition, each $\mathcal{M}_\mu$ is a Lipschitz continuous graph over $E^\mathrm{c}$ that depends Lipschitz continuously on $\mu$.
\end{proposition}
\begin{proof}
The statements follow directly from \cite{IntegralManifold} upon adding a trivial equation for the parameter, as usual in center manifolds. As for center manifolds, the proof relies on cutting off the vector field near $\textbf{u}_*$, cf.\ \cite[Remark 6.2]{IntegralManifold}, and we infer the existence of $\mathcal{M}_\mu$ from \cite[Corollary 6.4]{IntegralManifold}. The assumptions are satisfied since $G$ is of quadratic order, which means the Lipschitz constant of $G$ becomes arbitrarily small on small balls centered at $\textbf{u}_*$. The stability statement follows from \cite[Corollary 6.5]{IntegralManifold}.
\end{proof}
More refined stability information and estimates can be found in \cite{IntegralManifold}.
Next, we present a variant of the standard Andronov-Hopf bifurcation theorem, cf.\ \cite{ChowHale}, which does not use any additional smoothness assumption. Here the uniqueness part relies on Proposition~\ref{prop:inv_man}, but the existence is independent of it. As mentioned, in the case of a single switching surface, the abstract bifurcation of periodic solutions, without smoothness statement concerning the branch, follows from the results in \cite{ZouKuepperBeyn}, see also \cite{KuepperHoshamWeiss2013}.
\begin{theorem}
\label{t_per_orb}
Assume Hypothesis~\ref{h:AG}.
A locally unique branch of periodic solutions to \eqref{e:abstract} bifurcates from $\textbf{u}_*=0$ at $\mu=0$. Specifically, there is a neighborhood $V\subset U$ of $\textbf{u}_*$, such that for $0<|\mu|\ll1$ periodic solutions to \eqref{e:abstract} in $V$ are given (up to phase shift) by a Lipschitz continuous one-parameter family of $\tilde\omega(a)$-periodic solutions $\textbf{u}_{\rm per}(t;a)$, $\mu=\mu(a)$ for $0\leq a\ll1$,
$\tilde\omega(0)=\omega(0)$, $\mu(0)=0$, whose projections into $E^\mathrm{c}$ have the complexified form
$a \mathrm{e}^{\mathrm{i} \tilde\omega(a) t} + o(|a|)$. Moreover, we have the estimate $\mathrm{dist}(\textbf{u}_{\rm per}(\cdot;a),E^{\mathrm{c}}) =\mathcal{O}(a^{2})$.
\end{theorem}
This bifurcation is typically `degenerate' compared to the generic smooth Hopf bifurcation as in the example \eqref{Dpitchfork}, where {the bifurcating branch} is not $C^1$ through $u=0$.
\begin{proof}
We change coordinates such that $A(\mu)$ is in block-diagonal form with upper left 2-by-2 block for the eigenspace $E^\mathrm{c}$ having diagonal entries $\mu$ and anti-diagonal $\pm\omega(\mu)$, and remaining lower right $(n-2)$-dimensional block invertible at $\mu=0$; the modified $G$ remains of quadratic order and is Lipschitz continuous. Upon changing to cylindrical coordinates with vertical component $u=(u_3,\ldots,u_n)$, where $u_j$ are the scalar components of $\textbf{u}$, we obtain
\begin{equation}\label{e:cylindrical0}
\begin{aligned}
\dot{r} &= \mu r + \mathcal{R}_1(r,u;\mu),\\
r\dot{\varphi} &= \omega(0)r + \mathcal{R}_2(r,u;\mu),\\
\dot u &= \tilde A u + \mathcal{R}_3(r,u;\mu).
\end{aligned}
\end{equation}
Here $\tilde A$ is the invertible lower right block at $\mu=0$ and we suppress the dependence on $\varphi$ of $\mathcal{R}_j$, $j=1,2,3$. Due to Hypothesis~\ref{h:AG}, in these coordinates we have the estimates $\mathcal{R}_1(r,u;\mu) = \mathcal{O}(r^2 +|\mu|(r^2 + |u|) + |u|^2)$, $\mathcal{R}_{j}(r,u;\mu) = \mathcal{O}(r^2 +|\mu|(r + |u|) + |u|^2)$, $j=2,3$. We seek initial conditions $r_0,u_0,\varphi_0$ and a parameter $\mu$ that permit a periodic solution near the trivial solution $r=u=0$.
By Proposition~\ref{prop:inv_man} any such periodic orbit is a Lipschitz graph over $E^\mathrm{c}$ so that there is a periodic function ${\tilde u}$ with $u=r{\tilde u}$.
Let $T>0$ denote the period and suppose $r(t)=0$ for some $t\in[0,T]$. Then $u(t)=0$ and therefore $\textbf{u}(t)=\textbf{u}_*$, so that $\textbf{u}=\textbf{u}_*$ is the trivial solution. Hence, we may assume that $r$ is nowhere zero and thus ${\tilde u}$ solves
\[
\dot {\tilde u} = \tilde A{\tilde u} + \widetilde{\mathcal{R}}_3(r,{\tilde u};\mu),
\]
where $\widetilde{\mathcal{R}}_3(r,{\tilde u};\mu) = \mathcal{O}\big(r+|\mu|+|{\tilde u}|(|\mu|+r|{\tilde u}|)\big)$. By variation of constants we solve this for given $r, \varphi$ as
\begin{equation}\label{e:tv}
{\tilde u}(t) = e^{\tilde A t}{\tilde u}_0 + \int_0^t e^{\tilde A(t-s)}\widetilde{\mathcal{R}}_3(r(s),{\tilde u}(s);\mu) ds,
\end{equation}
with initial condition ${\tilde u}(0)={\tilde u}_0$. $T$-periodic solutions solve in particular the boundary value problem
\begin{align}
0&= {\tilde u}(T)-{\tilde u}(0) = \int_0^{T} \dot {\tilde u}(s)ds
\nonumber\\
&= \int_0^{T}\tilde A e^{\tilde A s}{\tilde u}_0 ds + \int_0^{T}\left(\tilde A \int_0^s e^{\tilde A(s-\tau)}\widetilde{\mathcal{R}}_3(r(\tau),{\tilde u}(\tau);\mu) d\tau + \widetilde{\mathcal{R}}_3(r(s),{\tilde u}(s);\mu) \right)ds \nonumber\\
&= \left(e^{\tilde A T} - \mathrm{Id}\right){\tilde u}_0 + \widetilde{\mathcal{R}}_4(r,{\tilde u};\mu), \label{e:bvp0}
\end{align}
where $e^{\tilde A T} - \mathrm{Id}$ is invertible since $\tilde A$ is invertible.
We have $\widetilde{\mathcal{R}}_4(r,{\tilde u};\mu)= \mathcal{O}\big(r_\infty +|\mu| + {\tilde u}_\infty(|\mu| + r_\infty {\tilde u}_\infty)\big)$ with $r_\infty = \sup\{r(t)\;|\; t\in[0,T]\}$, ${\tilde u}_\infty = \sup\{|{\tilde u}(t)|\;|\; t\in[0,T]\}$ and by \eqref{e:tv} there is a $C>0$ depending on $T$ with
\[
{\tilde u}_\infty \leq C \big(|{\tilde u}_0| + r_\infty + |\mu| + {\tilde u}_\infty(|\mu|+r_\infty {\tilde u}_\infty)\big)
\;\Leftrightarrow\;\big(1-C(|\mu|+r_\infty {\tilde u}_\infty)\big){\tilde u}_\infty \leq C(|{\tilde u}_0| + r_\infty + |\mu|),
\]
so that for $0\leq |{\tilde u}_0|, r_\infty, |\mu|\ll1$ it follows $\frac{1}{2}\leq \big(1-C(|\mu|+r_\infty {\tilde u}_\infty)\big)$ and therefore ${\tilde u}_\infty \leq 2C(|{\tilde u}_0| + r_\infty + |\mu|)$. Thus,
\[
\widetilde{\mathcal{R}}_4(r,{\tilde u};\mu)= \mathcal{O}(r_\infty +|\mu| + |{\tilde u}_0|(|\mu| + r_\infty|{\tilde u}_0|)).
\]
Based on this, the uniform Banach contraction principle applies upon rewriting \eqref{e:bvp0} as
\[
{\tilde u}_0 = \left(e^{\tilde A T} - \mathrm{Id}\right)^{-1}\widetilde{\mathcal{R}}_4(r,{\tilde u};\mu),
\]
which yields a locally unique Lipschitz continuous solution ${\tilde u}_0(r,\varphi;\mu) = \mathcal{O}(r_\infty+|\mu|)$.
Note that together with the aforementioned, this implies the estimate ${\tilde u}_{\infty}=\mathcal{O}(r_{\infty}+|\mu|)$.
Substituting ${u}(t) = r(t){\tilde u}(t)$ with initial condition ${\tilde u}_0(r,\varphi;\mu)$ for ${\tilde u}$
into the first two equations of \eqref{e:cylindrical0} gives
\begin{equation}\label{e:cylindrical01}
\begin{aligned}
\dot{r} &= \mu r + \mathcal{R}_5(r;\mu),\\
\dot{\varphi} &= \omega(0) + \mathcal{R}_6(r;\mu),
\end{aligned}
\end{equation}
where we have divided the equation for $\varphi$ by $r$, since we look for non-zero solutions, and
\[
\mathcal{R}_5(r;\mu)=r\mathcal{O}\big(r+|\mu|r+|{\tilde u}|(|\mu|+r|{\tilde u}|)\big)=r\mathcal{O}(r_\infty + |\mu|r_\infty)=r\mathcal{O}(r_\infty),
\quad \mathcal{R}_6(r;\mu) = \mathcal{O}(r_\infty+|\mu|).
\]
Since $\omega(0)\neq 0$, for $0\leq r,|\mu|\ll 1$ we may normalize the period to $T=2\pi$ and obtain
\begin{equation}\label{e:absper}
\frac{d r}{d\varphi} = \frac{\mu r + \mathcal{R}_5(r;\mu)}{\omega(0) + \mathcal{R}_6(r;\mu)}
= r\left(\frac{\mu}{\omega(0)} + \mathcal{R}_7(r;\mu)\right),
\end{equation}
where $\mathcal{R}_7(r;\mu) = \mathcal{O}(r_\infty + |\mu|r_\infty) = \mathcal{O}(r_\infty)$ follows from direct computation.
Analogous to ${\tilde u}$ above, the boundary value problem $r(2\pi)=r(0)$ can be solved by the uniform contraction principle, which yields a locally unique and Lipschitz continuous solution $\mu(r_0)= \mathcal{O}(r_0)$. Since $\varphi$ is $2\pi$-periodic, any periodic solution has a period $2\pi m$ for some $m\in \mathbb{N}$, and the previous computation gives a unique solution for any $m$, from which we took the one with minimal period, i.e., $m=1$.
Finally, the statement on the form of the periodic solutions follows directly, with $a=r_0$, upon changing back to the original time scale and coordinates. Notice that $r_{\infty}=\mathcal{O}(r_{0})$ holds since we integrate an ODE over a bounded interval, so that the ratio between $r_\infty$ and $r_0$ is bounded, uniformly in the initial condition because the vector field vanishes as $r\to 0$. Together with $\mu(r_0)= \mathcal{O}(r_0)$, the previous estimate ${\tilde u}_{\infty}=\mathcal{O}(r_{\infty}+|\mu|)$ becomes ${\tilde u}_{\infty}=\mathcal{O}(r_0)$. Moreover, taking the supremum norm on both sides of $u=r{\tilde u}$ gives $u_\infty \leq r_\infty{\tilde u}_\infty = \mathcal{O}(r_0^2)$, as we wanted to prove.
\end{proof}
While this theorem proves the existence of periodic orbits, it does not give information about their location in parameter space, scaling properties and stability; the problem is to control the leading order part of $\mathcal{R}_7$ in \eqref{e:absper}, which -- in contrast to the smooth case -- turns out to be tedious. Consequently, we next aim to identify a suitable setting analogous to the center manifold reduction, and normal form transformations for a smooth vector field. In particular, we seek formulas for the analogue of the first Lyapunov coefficient from the smooth framework, whose sign determines whether the bifurcation is sub- or supercritical.
\medskip
In order to specify a setting that allows for such an analysis, and is also relevant in applications, we will assume additional regularity away from sufficiently regular hypersurfaces $H_j$, $j=1,\ldots,n_H$, and denote $H:=\cup_{j=1}^{n_H} H_j$. We refer to these hypersurfaces as \emph{switching surfaces} and assume these intersect pairwise transversally at the equilibrium point $\textbf{u}_*=0$.
\begin{hypothesis}\label{h:Ck}
The switching surfaces $H_j$, $j=1,\ldots,n_H$, are $C^k$ smooth, $k\geq1$ and intersect transversally at $\textbf{u}_*=0$. In each connected component of $U\setminus H$ the function $G$ is $C^k$ smooth and has a $C^k$ extension to the component's boundary.
\end{hypothesis}
For simplicity, and with applications in mind, we consider only two switching surfaces, $n_H=2$. In order to facilitate the analysis, we first map $H_1, H_2$ locally onto the axes by changing coordinates.
\begin{lemma}\label{l:cyl}
Assume Hypotheses~\ref{h:AG} and \ref{h:Ck} and let $n_H=2$. There is a neighborhood $V\subset U$ of $\textbf{u}_*$ and a diffeomorphism $\Psi$ on $V$ such that $\Psi(H_j\cap V) = \{ u_j=0\}\cap \Psi(V)$, $j=1, 2$; in particular $\Psi(\textbf{u}_*)=0$.
In subsequent cylindrical coordinates $(r,\varphi,u)\in \mathbb{R}_+ \times[0,2\pi)\times\mathbb{R}^{n-2}$ with respect to
the $(u_1,u_2)$-coordinate plane,
the vector field is $C^k$ with respect to $(r,u)$.
\end{lemma}
\begin{proof}
The smoothness of $H_j$, $j=1, 2$, and their transverse intersection allow for a smooth change of coordinates that straightens $H_1$, $H_2$ locally near $\textbf{u}_*$ and maps these onto the coordinate hypersurfaces $\{u_1=0\}$, $\{u_2=0\}$, respectively. The assumed smoothness away from the switching surfaces implies the smoothness in the radial direction.
\end{proof}
A concrete analysis of the nature of a Hopf bifurcation requires additional information on the leading order terms in $G$. As shown subsequently, a sufficient condition to identify the structure of the quadratic terms is Hypothesis~\ref{h:Ck} with $k=2$.
\begin{theorem}
\label{t:abstractnormal}
Assume Hypotheses~\ref{h:AG} and \ref{h:Ck}
for $k\geq 2$ and let $n_H=2$. In the coordinates of Lemma~\ref{l:cyl}, the non-smooth quadratic order terms in a component $G_j$, $j=1,\ldots, n$, of $G$ are of the form $u_\ell[u_i]_{\pn}^{\pp}$, $1\leq \ell\leq n$, $i=1,2$, where $p_{_+},p_{_-}\in \mathbb{R}$ depend on $i,\ell,j$ and are the limits of second derivatives of $G$ on the different connected components of $\mathbb{R}^n\setminus H$.
\end{theorem}
\begin{proof}
Consider a coordinate quadrant and let $\widetilde{G}$ be the extension of $G$ to its closure. By assumption, we can Taylor expand $\widetilde{G}(\textbf{u})= \frac 1 2 D^2\widetilde{G}(0)[\textbf{u},\textbf{u}] + o(|\textbf{u}|^2)$ since $G(0)=0$ as well as $D\widetilde{G}(0)=0$. However, for different coordinate quadrants the second order partial derivatives may differ. By the form of $H$ in Lemma~\ref{l:cyl}, one-sided derivatives transverse to the coordinate axes might be distinct only for the $u_1, u_2$ axes. Hence, at $\textbf{u}=0$ second order derivatives involving $u_1, u_2$ may differ, and we denote by $p_{j\ell i_\pm}$ the partial derivatives $\frac{\partial^2}{\partial u_i \partial u_\ell}G_j(0)$, $1\leq \ell\leq n$, that are one-sided with respect to $i=1,2$ as indicated by the sign. The functions $\ABS{u_i}{p_{j\ell i}}$ thus provide a closed formula for the quadratic terms of $G_j$ as claimed.
\end{proof}
Even with explicit quadratic terms in these coordinates, an analysis based on the coordinates of Lemma~\ref{l:cyl} remains a challenge.
\begin{remark}\label{e:arrangement}
In cylindrical coordinates relative to $E^\mathrm{c}$, cf.\ \eqref{e:cylindrical0}, the vector field is generally not smooth in the radial direction. In general, smoothness cannot be achieved by changing coordinates as this typically modifies $H$ to be non-radial. In particular, we cannot assume, without loss of generality, that the linear part in the coordinates of Lemma~\ref{l:cyl} is in block-diagonal form or in Jordan normal form as in \eqref{e:cylindrical0}.
\end{remark}
For exposition, we consider the planar situation $n=2$,
where $H_1, H_2$ are the $u_1$- and $u_2$-axes, respectively. In contrast to \eqref{e:planar0} (and \eqref{e:cylindrical0}), the linear part is generally not in normal form, i.e., we have
\begin{equation}\label{e:abstractplanar}
\begin{pmatrix}
\dot u_1\\
\dot u_2
\end{pmatrix} =
\begin{pmatrix}
m_1 & m_2\\
m_3 & m_4
\end{pmatrix}\begin{pmatrix}
u_1\\
u_2
\end{pmatrix}+\begin{pmatrix}
f_1\left( u_1, u_2 \right)\\
f_2\left( u_1, u_2 \right)
\end{pmatrix},
\end{equation}
where $G=(f_1,f_2)$ is nonlinear.
Based on Hypothesis~\ref{h:AG}, the linear part satisfies $\mu=\frac 1 2 (m_1+m_4)$, with $\mu=0$ at the bifurcation point, and the determinant at $\mu=0$ equals $\omega(0)^2>0$; since $m_4=-m_1$ at $\mu=0$, this gives $m_1^2+m_2m_3<0$, and consequently $m_2 m_3<0$.
Upon changing to polar coordinates we obtain, generally different from \eqref{e:cylindrical01},
\begin{equation}
\begin{cases}
\dot{r} = M(\varphi)r+\chi_2(\varphi)r^2 + \mathcal{O}(r^3),\\
\dot{\varphi} = W(\varphi) + \Omega_1(\varphi)r + \mathcal{O}(r^2),
\end{cases}
\label{Sys_Polar_NoNF}
\end{equation}
where $M, \chi_2, W, \Omega_1$ are $2\pi$-periodic in $\varphi$. Abbreviating $c:=\cos{\varphi}$ and $s:=\sin{\varphi}$, we have explicitly
\begin{align*}
M(\varphi) &= m_1c^2 + (m_2+m_3)sc + m_4s^2, \\
W(\varphi)&= m_3c^2 + (m_4-m_1)sc - m_2s^2,
\end{align*}
where $\chi_2, \Omega_1$ are continuous but in general non-smooth in $\varphi$ as a combination of generalized absolute value terms \eqref{gen_abs_val}.
Due to the conditions at $\mu=0$, we have $W(\varphi)\neq 0$ for all $\varphi$, so that $\dot{\varphi}\neq 0$ for $0\leq r,|\mu|\ll1$.
This allows us to rescale time in \eqref{Sys_Polar_NoNF}, analogous to \eqref{e:absper}, and gives
\begin{equation}\label{new_time}
{r}' := \frac{dr}{d\varphi} = \frac{M(\varphi)r+\chi_2(\varphi)r^2}{W(\varphi) + \Omega_1(\varphi)r} + \mathcal{O}(r^3)
= \frac{M(\varphi)}{W(\varphi)}r+\left(\frac{\chi_2(\varphi)}{W(\varphi)} - \frac{M(\varphi)\Omega_1(\varphi)}{W(\varphi)^2}\right) r^2 +\mathcal{O}(r^3).
\end{equation}
Using averaging theory, as will be discussed in detail in \S\ref{AV_S}, periodic orbits of \eqref{new_time} are generically in 1-to-1 correspondence with equilibria of the averaged form of \eqref{new_time} given by
\begin{align}\label{r_bar}
\bar{r}' &= {\Lambda}\bar{r}+{\Sigma}\bar{r}^2+\mathcal{O}(\bar{r}^3),
\end{align}
where $\Lambda, \Sigma\in\mathbb{R}$ are the averages of the linear and quadratic coefficients, respectively:
\begin{align}
\Lambda &= \frac{1}{2\pi} \int_0^{2\pi}\frac{M(\varphi)}{W(\varphi)}\mathrm{d} \varphi=
\frac{m_1+m_4}{\sqrt{-4m_2m_3-(m_1-m_4)^2}}, \label{check_mu} \\
\Sigma &= \frac{1}{2\pi} \int_0^{2\pi}\frac{\chi_2(\varphi)}{W(\varphi)}
- \frac{M(\varphi)\Omega_1(\varphi)}{W(\varphi)^2}\mathrm{d} \varphi.\label{check_sigma}
\end{align}
The explicit expression in \eqref{check_mu} follows from a straightforward but tedious calculation;
note that $\Lambda\in \mathbb{R}$ for $0\leq |\mu|\ll1$ due to the above conditions at bifurcation.
For $\Sigma\neq 0$, equilibria of \eqref{r_bar} are $\bar{r}=0$ and $\bar{r}=-\Lambda/{\Sigma}$, which gives a branch of non-trivial periodic orbits parameterized by $\Lambda$. The direction of branching, and thus the super- and subcriticality, is determined by the sign of $\Sigma$, which therefore is a generalized first Lyapunov coefficient.
However, this is still unsatisfying as it does not readily provide an explicit algebraic formula for $\Sigma$ in terms of the coefficients of $A(\mu)$ and $G$.
In order to further illustrate this issue, let $f_1,f_2$ be purely quadratic and built from second order modulus terms as in \eqref{e:quadnonlin}. In this case we explicitly have
\begin{align}
\chi_2(\varphi) &= c\abs{c}(a_{11}c+b_{11}s) + c\abs{s}(a_{12}c+b_{12}s) + s\abs{c}(a_{21}c+b_{21}s) + s\abs{s}(a_{22}c+b_{22}s), \label{chi}\\
\Omega_1(\varphi) &= -\Big[c\abs{c}(a_{11}s-b_{11}c) + c\abs{s}(a_{12}s-b_{12}c) + s\abs{c}(a_{21}s-b_{21}c) + s\abs{s}(a_{22}s-b_{22}c) \Big],\label{Omega}
\end{align}
which are continuous but not differentiable due to the terms involving $|c|,|s|$.
Clearly, the building blocks of the integrals in \eqref{check_sigma} are rational trigonometric functions with denominator $W$ of degree $2$ and numerators of degree $3$ and $5$. However, explicit formulas based on this appear difficult to obtain, so that we instead change to linear normal form as discussed in \S\ref{Gen_linear_part}, with the caveat that the nonlinear terms are in general not smooth in the radius.
Indeed, in the normal form case $m_1=\mu,\, m_2=-\omega,\, m_3=\omega,\, m_4=\mu$ the situation becomes manageable: in \eqref{Sys_Polar_NoNF} we have constant $M(\varphi)= \mu$ and $W(\varphi)=\omega(\mu)$, and we will show below that then $\Sigma=\frac{2}{3\pi\omega}\sigma_{_\#}$, with $\sigma_{_\#}$ as defined in \S\ref{s:intro}.
Therefore, until \S\ref{Gen_linear_part} we will assume that the linear part is in normal form in the coordinates of Lemma~\ref{l:cyl}, which also occurs in applications as mentioned in \S\ref{s:intro}.
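Before turning to this case, the formulas above can be sanity-checked numerically. The following is a minimal Python sketch (not part of the analysis; it assumes \texttt{numpy} and \texttt{scipy}, and the coefficient values are arbitrary examples of ours), verifying \eqref{check_mu} by quadrature and, in the normal form case at $\mu=0$, the claim $\Sigma=\frac{2}{3\pi\omega}\sigma_{_\#}$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# check of (check_mu): linear part with complex eigenvalues,
# i.e. (m1 - m4)^2 + 4*m2*m3 < 0 (example values of ours)
m1, m2, m3, m4 = 0.1, 2.0, -1.0, 0.3
M = lambda p: m1*np.cos(p)**2 + (m2 + m3)*np.sin(p)*np.cos(p) + m4*np.sin(p)**2
W = lambda p: m3*np.cos(p)**2 + (m4 - m1)*np.sin(p)*np.cos(p) - m2*np.sin(p)**2
Lam = quad(lambda p: M(p)/W(p), 0, 2*np.pi)[0]/(2*np.pi)
print(Lam, (m1 + m4)/np.sqrt(-4*m2*m3 - (m1 - m4)**2))  # both ~ 0.1418

# normal form case at mu = 0: Sigma = (1/(2*pi*om)) * int chi2
om = 1.3
a11, a12, a21, a22, b11, b12, b21, b22 = np.random.default_rng(0).normal(size=8)
def chi2(p):
    c, s = np.cos(p), np.sin(p)
    return (c*abs(c)*(a11*c + b11*s) + c*abs(s)*(a12*c + b12*s)
            + s*abs(c)*(a21*c + b21*s) + s*abs(s)*(a22*c + b22*s))
Sigma = quad(chi2, 0, 2*np.pi, limit=200)[0]/(2*np.pi*om)
sigma = 2*a11 + a12 + b21 + 2*b22
print(Sigma, 2*sigma/(3*np.pi*om))  # agree: Sigma = 2*sigma_#/(3*pi*om)
\end{verbatim}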
\section{Planar normal form case with absolute values}
\label{Planar_Section}
In this section we discuss two approaches to {prove}
existence and bifurcation of periodic orbits in our mildly non-smooth setting. First, we provide details for the aforementioned approach by averaging, and second, we discuss a direct approach that provides a detailed unfolding by Lyapunov-Schmidt reduction and can also be used in some non-generic cases.
While we focus here on the planar case, both methods readily generalize to higher dimensional settings. For averaging one needs normal hyperbolicity in general, and for the direct approach we present higher dimensional cases in upcoming sections.
Without change in the leading order result, for simplicity we fix the imaginary part $\omega\neq 0$ independent of $\mu$.
To simplify the exposition in this section, we assume the linear part is in normal form and the non-smooth terms are of second order modulus type, i.e., with absolute value $|\cdot| = [\cdot]_{-1}^1$. The general case will be discussed in \S\ref{s:general}.
With the linear part in normal form and including smooth quadratic and cubic terms we thus consider the form of \eqref{e:abstract} given by, cf.\ {\eqref{e:planar0}},
\begin{align}
\begin{cases}
\dot{v} &= \mu v - \omega w + f\left( v, w \right) + f_q\left( v, w\right) + f_c\left( v, w \right), \\
\dot{w} &= \omega v + \mu w + g\left( v, w \right) + g_q\left( v, w \right) + g_c\left( v, w \right),
\end{cases}
\label{General2D_AV}
\end{align}
where $f,g$ are as in \eqref{e:quadnonlin}, and
\begin{align*}
f_q\left( v, w\right) &= a_1 v^2 + a_2 vw + a_3 w^2,
&f_c\left( v, w \right) &= c_{a1} v^3 + c_{a2} vw^2 + c_{a3} v^2w + c_{a4} w^3,\\
g_q\left( v, w\right) &= b_1 v^2 + b_2 vw + b_3 w^2,
&g_c\left( v, w\right) &= c_{b1} v^3 + c_{b2} vw^2 + c_{b3} v^2w + c_{b4} w^3,
\end{align*}
and $\mu, \omega \in \mathbb{R}$ with $\omega\neq 0$, and $a_{ij}, b_{ij}, a_k, b_k, c_{ah}, c_{bh},$ $\forall i,j\in \{1,2\}, \forall k\in\{1,2,3\}, \forall h\in\{1,2,3,4\}$ are real constants, all viewed as parameters.
\subsection{Averaging}\label{AV_S}
We next show how to apply averaging theory to \eqref{General2D_AV} in polar coordinates. In addition to $\sigma_{_\#}, \sigma_2$ from \eqref{sigma1}, \eqref{sigma2}, the following appear as normal form coefficients:
\begin{align*}
S_q :=\, & a_1a_2 +a_2a_3 - b_1b_2 - b_2b_3 -2a_1b_1 + 2a_3b_3,\nonumber\\
S_c :=\, & 3c_{a1} + c_{a2} + c_{b3} + 3c_{b4}.\nonumber
\end{align*}
Notice that $\sigma_{_\#}, \sigma_2$ depend only on $f, g$, i.e., the non-smooth terms, while $S_q$ depends on the smooth quadratic terms $f_q, g_q$; and $S_c$ on the cubic ones $f_c,g_c$.
\begin{theorem}
\label{t_averaging}
For $0<|\mu| \ll1$ periodic solutions to \eqref{General2D_AV} are locally in 1-to-1 correspondence with equilibria of the averaged normal form in polar coordinates $v=r\cos{\varphi},\, w=r\sin{\varphi}$ of \eqref{General2D_AV} given by
\begin{equation}
{\bar{r}}' = \frac{\mu}{\omega} \bar{r} + \frac{2}{3\pi\omega}\sigma_{_\#} \bar{r}^2 + \left( \frac{1}{8\omega^2}S_q + \frac{1}{8\omega}S_c - \frac{1}{2\pi\omega^2}\sigma_2 \right) \bar{r}^3 + \mathcal{O}\left(\bar{r}^4+\abs{\mu}\bar{r}^2\right).
\label{NormalFormAV}
\end{equation}
\end{theorem}
Note that, in accordance with the smooth Hopf bifurcation, the quadratic term in $\bar r'$ vanishes for vanishing non-smooth terms $f=g=0$, so that the leading order nonlinear term in the normal form is then cubic. Before giving the proof we note and discuss an important corollary. For this recall the pitchfork bifurcation of \eqref{Dpitchfork}, which is degenerate in that the bifurcating branch is non-smooth.
\begin{corollary}
\label{c_averaging}
If $\sigma_{_\#}\neq 0$, then at $\mu=0$ \eqref{NormalFormAV} undergoes a degenerate pitchfork bifurcation in $\mu$, where non-trivial equilibria are of the form
\begin{equation}\label{periodic_orbit}
r_0(\mu) = -\frac{3\pi}{2\sigma_{_\#}}\mu + \mathcal{O}\left(\mu^2\right).
\end{equation}
In this case, \eqref{General2D_AV} undergoes a degenerate Hopf bifurcation in the sense that for $0<|\mu| \ll1$ periodic solutions to \eqref{General2D_AV} are locally in 1-to-1 correspondence with $r_0(\mu)$, which is also the expansion of the radial component of the periodic solutions.
In particular, this Hopf bifurcation is subcritical if {$\sgn(\sigma_{_\#})>0$} and supercritical if {$\sgn(\sigma_{_\#})<0$}.
%
Moreover, the bifurcating periodic orbits of \eqref{General2D_AV} are of the same stability as the {corresponding} equilibria in \eqref{NormalFormAV}.
\end{corollary}
\begin{proof} (Corollary \ref{c_averaging})
The bifurcation statement follows directly from Theorem \ref{t_averaging} and the statement about stability follows from \cite[Thm.\ 6.3.3]{Averaging}. Since $r_0\geq 0$, by \eqref{periodic_orbit} we must have $\frac{\mu}{\sigma_{_\#}}\leq 0$. Hence, the sign of $\sigma_{_\#}$ determines the criticality of the bifurcation.
\end{proof}
The radial components $r(\varphi;\mu)$ of the periodic orbits are in general not constant in $\varphi$, but this dependence is of order $\mu^2$. We thus consider \eqref{periodic_orbit} as the leading order amplitude of the periodic solutions.
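As an illustration, take in \eqref{e:planar0} only $a_{11}=-1$ non-zero, so that $\sigma_{_\#}=-2$, the bifurcation is supercritical and the predicted amplitude is $r_0\approx \frac{3\pi}{4}\mu$. The following is a minimal Python sketch (not part of the analysis; it assumes \texttt{scipy}, and the coefficient choice is an example of ours) that integrates the planar system and compares the limit cycle radius with this prediction:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

mu, om, a11 = 0.05, 1.0, -1.0        # sigma_# = 2*a11 = -2 < 0: supercritical
def rhs(t, u):
    v, w = u
    return [mu*v - om*w + a11*v*abs(v), om*v + mu*w]

sol = solve_ivp(rhs, (0, 800), [0.1, 0.0], max_step=0.05, rtol=1e-9)
r = np.hypot(sol.y[0], sol.y[1])
print(r[-2000:].mean(), 3*np.pi*mu/4)  # both ~ 0.118, up to O(mu^2)
\end{verbatim}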
\begin{remark}\label{1st_Lyap}
Since the criticality of the Hopf bifurcation is given by the sign of $\sigma_{_\#}$, it is an analogue of the first Lyapunov coefficient in this non-smooth case. For the smooth case $f=g=0$, where $\sigma_{_\#}=\sigma_2=0$, the classical first Lyapunov coefficient is $\sigma_s:=\frac{1}{8\omega}S_q + \frac{1}{8}S_c$. In \S\ref{s:smooth} we show that there is no canonical way to infer the sign of $\sigma_{_\#}$ from smoothening a priori.
\end{remark}
\begin{remark}\label{2nd_Lyap}
In case $\sigma_{_\#}=0$ but non-zero cubic coefficient in \eqref{NormalFormAV}, the parameter $\mu$ is, to leading order, a quadratic function of the amplitude of the bifurcating branch. This readily gives an analogue of the second Lyapunov coefficient in this non-smooth case. An explicit statement in absence of smooth terms is given in Theorem~\ref{2ndPart} below. Notably, in the smooth case, a vanishing first Lyapunov coefficient but non-zero second Lyapunov coefficient yields a quartic bifurcation equation. Hence, the scaling laws {for the radius} are $\mu^{1/j}$ with $j=1,2$ in the non-smooth {case} and $j=2,4$ in the smooth case, respectively.
\end{remark}
Next we give the proof of Theorem \ref{t_averaging}.
\begin{proof} (Theorem \ref{t_averaging})
Taking polar coordinates $(v,w)=(r\cos{\varphi},r\sin{\varphi})$, system \eqref{General2D_AV}, cf.\ \eqref{Sys_Polar_NoNF}, becomes
\begin{align}
\begin{cases}
\dot{r} &= \mu r + r^2\chi_2(\varphi) + r^3\chi_3(\varphi), \\
\dot{\varphi} &= \omega + r\Omega_1(\varphi) + r^2\Omega_2(\varphi),
\end{cases}
\label{System2DpolarNF}
\end{align}
where $\chi_2(\varphi)$ and $\Omega_1(\varphi)$ are as in \eqref{chi} and \eqref{Omega}, respectively, but adding now the contributions of the smooth quadratic terms of $f_q, g_q$:
\begin{align*}
\chi_2(\varphi) &= c\abs{c}(a_{11}c+b_{11}s) + c\abs{s}(a_{12}c+b_{12}s) + s\abs{c}(a_{21}c+b_{21}s) + s\abs{s}(a_{22}c+b_{22}s) \\
&+ (a_1-b_2-a_3)c^3 + (b_1+a_2-b_3)sc^2 + (b_2+a_3)c + b_3s, \\
\Omega_1(\varphi) &= -\Big[c\abs{c}(a_{11}s-b_{11}c) + c\abs{s}(a_{12}s-b_{12}c) + s\abs{c}(a_{21}s-b_{21}c) + s\abs{s}(a_{22}s-b_{22}c) \Big] \\
&+ (b_1+a_2-b_3)c^3 + (-a_1+b_2+a_3)sc^2 + (-a_2+b_3)c - a_3s,
\end{align*}
and $\chi_3(\varphi)$ and $\Omega_2(\varphi)$ are smooth functions of $\varphi$ and the coefficients of $f_c, g_c$:
\begin{align*}
\chi_3(\varphi) &= (c_{a1}-c_{a2}-c_{b3}+c_{b4})c^4 + (c_{a3}-c_{a4}+c_{b1}-c_{b2})sc^3 \\
&+ (c_{a2}+c_{b3}-c_{b4})c^2 + (c_{a4}+c_{b2})sc + c_{b4}s^2,\\
\Omega_2(\varphi) &= (c_{a3}-c_{a4}+c_{b1}-c_{b2})c^4 + (c_{a2}-c_{a1}+c_{b3}-c_{b4})sc^3 - c_{a3}c^2 \\
&+ (c_{b4}-c_{a2})sc - c_{a4}s^2 + (c_{b2}+c_{a4})c^2.
\end{align*}
To simplify the notation we write, as before, $c:=\cos{\varphi}$, $s:=\sin{\varphi}$.
Analogous to \eqref{new_time}, we change parametrization such that the return time to $\varphi=0$ is equal for all orbits starting on this half-axis with initial radius $r_0>0$ to get
\begin{align*}
{r}':= \dv{r}{\varphi} &= \frac{\mu r + r^2\chi_2(\varphi) + r^3\chi_3(\varphi)}{\omega + r\Omega_1(\varphi) + r^2\Omega_2(\varphi)}.
\end{align*}
Expanding the right-hand side of $r'$ in small $r$ and $\mu$ gives
\begin{equation}
{r}' =
\frac{\mu}{\omega} r + \frac{\chi_2}{\omega} r^2 + \left( \frac{\chi_3}{\omega} - \frac{\chi_2\Omega_1}{\omega^2} \right) r^3 + \mathcal{O}\left( r^4+\abs{\mu}r^2 \right).
\label{DoNormalForm_r}
\end{equation}
In order to follow the method of averaging (e.g., \cite{Guckenheimer,Averaging}), we write $r=\epsilon x$ and $\mu=\epsilon m$ for $0<\epsilon\ll 1$, such that \eqref{DoNormalForm_r} in terms of $x$ and $m$ becomes
\begin{align}
x' &= \epsilon \left( \frac{m}{\omega} x + \frac{\chi_2}{\omega} x^2 \right) + \epsilon^2\left( \frac{\chi_3}{\omega} - \frac{\chi_2\Omega_1}{\omega^2} \right) x^3 + \epsilon^2\mathcal{O}\left( \epsilon x^4+\abs{m}x^2 \right).
\label{x_form_for_av}
\end{align}
Following \cite{Averaging}, there is a near-identity transformation which
maps solutions of the truncated averaged equation
\begin{equation}
y' = \epsilon\bar{f}_1(y) + \epsilon^2\bar{f}_{2}(y)+ \epsilon^3\bar{f}_{[3]}(y,\varphi,\epsilon)
\label{truncated_av}
\end{equation}
to solutions of \eqref{x_form_for_av}; its detailed derivation is given in Appendix~\ref{NIT}, together with an explanation of the computation of the following integrals:
\begin{equation}
\label{averaging_integrals}
\begin{aligned}
\bar{f}_1(y) &=
\frac{1}{2\pi} \int_0^{2\pi} \left( \frac{m}{\omega}y + \frac{\chi_2(\varphi)}{\omega}y^2 \right)\mathrm{d}\varphi = \frac{m}{\omega}y +\frac{2}{3\pi\omega}\sigma_{_\#}y^2, \\
\bar{f}_{2}(y) &=
\frac{1}{2\pi} \int_0^{2\pi} \left( \frac{\chi_3(\varphi)}{\omega} - \frac{\chi_2(\varphi)\Omega_1(\varphi)}{\omega^2} \right)y^3 \mathrm{d}\varphi = \left( \frac{1}{8\omega^2}S_q + \frac{1}{8\omega}S_c - \frac{1}{2\pi\omega^2}\sigma_2 \right)y^3.
\end{aligned}
\end{equation}
We obtain the averaged equation \eqref{NormalFormAV} from \eqref{DoNormalForm_r} by the change of coordinates $y=\frac{\bar{r}}{\epsilon}$ and $m=\frac{\mu}{\epsilon}$ applied to \eqref{truncated_av} {with \eqref{averaging_integrals}}; this becomes \eqref{NormalFormAV} since all terms involving $\epsilon$ cancel out.
Finally, from \cite[Thm.\ 6.3.2]{Averaging} the existence of a periodic orbit in the averaged system implies the existence of a periodic orbit in the original system.
\end{proof}
\subsection{Smoothening and the first Lyapunov coefficient}\label{s:smooth}
From Remarks \ref{1st_Lyap} and \ref{2nd_Lyap} on the first and second Lyapunov coefficients, it is natural to ask in what way the non-smooth first Lyapunov coefficient
\[
\sigma_{_\#} = 2a_{11}+a_{12}+b_{21}+2b_{22}
\]
from \eqref{sigma1} differs from the first Lyapunov coefficient of a smoothened version of \eqref{e:planar0}.
More specifically, the question is whether one can smoothen the vector field in such a way that the sign of the {resulting} first Lyapunov coefficient is the same as that of the non-smooth one, $\sigma_{_\#}$, in all cases. We shall prove that this is not possible \emph{without using the formula for $\sigma_{_\#}$} -- {with the help of} this formula we can find a suitable smoothening.
\medskip
Clearly, non-convex approximations of the absolute value $|\cdot|$ can change criticality compared to the non-smooth case (see Figure \ref{3_Cases}). More generally, we have the following.
\begin{lemma}
\label{Convex_Approx}
For any $f,g$ with a sign change in the coefficients $a_{11}, a_{12}, b_{21}, b_{22}$, there are smooth approximations $f_\varepsilon, g_\varepsilon$ with $(f_\varepsilon,g_\varepsilon)\to (f,g)$ in $L^\infty$ such that the criticality of the smoothened Hopf bifurcation is opposite to that of the non-smooth case. Moreover, $f_\varepsilon, g_\varepsilon$ can be chosen as symmetric smooth convex approximations of the absolute values in $f,g$.
\end{lemma}
\begin{proof}
Without loss of generality, we consider system \eqref{e:planar0}. For given $f,g$ we can choose a smooth approximation of $|\cdot|$ in the terms with coefficients $a_{11}, a_{12}, b_{21}, b_{22}$ that have quadratic terms with positive coefficients of the form $\varepsilon^{-1} \tilde a_{11}, \varepsilon^{-1} \tilde a_{12}, \varepsilon^{-1} \tilde b_{21}, \varepsilon^{-1} \tilde b_{22}$, respectively. Then the (smooth) first Lyapunov coefficient reads
\[
\sigma_{s,\varepsilon}:= \varepsilon^{-1}\left(3\tilde a_{11} a_{11}+\tilde a_{12} a_{12}+\tilde b_{21} b_{21}+3\tilde b_{22} b_{22} \right),
\]
which has the same structure as $S_c$ in \S\ref{AV_S}, with the coefficients of $f_c,g_c$ replaced by the corresponding products of the coefficients of $f,g$ and of the approximation.
Suppose now $\sigma_{_\#}<0$. In this case, the sign change within $(a_{11}, a_{12}, b_{21}, b_{22})$ allows to choose $(\tilde a_{11}, \tilde a_{12}, \tilde b_{21}, \tilde b_{22})>0$ such that $\sigma_{s,\varepsilon}>0$. Likewise for $\sigma_{_\#}>0$ we can arrange $\sigma_{s,\varepsilon}<0$.
\end{proof}
\begin{remark}
If all of $a_{11}, a_{12}, b_{21}, b_{22}$ have the same sign, then any convex smoothening of the absolute value with non-zero quadratic terms will yield a first Lyapunov coefficient of the same sign as $\sigma_{_\#}\neq 0$.
Moreover, having derived the formula for $\sigma_{_\#}$, we can -- a posteriori -- identify a smoothening that preserves {the} criticality for all $f,g$. With the notation of Lemma \ref{Convex_Approx} this is
$\tilde a_{11} = \tilde b_{22} = 2/3, \tilde a_{12} = \tilde b_{21} = 1$.
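Indeed, with these choices,
\[
\sigma_{s,\varepsilon} = \varepsilon^{-1}\left(3\cdot\tfrac{2}{3}\, a_{11}+ a_{12}+ b_{21}+3\cdot\tfrac{2}{3}\, b_{22} \right) = \varepsilon^{-1}\sigma_{_\#},
\]
so the sign of the smooth first Lyapunov coefficient coincides with that of $\sigma_{_\#}$ for all $f,g$.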
\end{remark}
\begin{lemma}
There is no smooth approximation of the absolute value function with non-zero quadratic term that preserves the criticality of the non-smooth case for all $f,g$.
\end{lemma}
\begin{proof}
In contrast to Lemma \ref{Convex_Approx}, here all absolute value terms in $f,g$ are approximated in the same way, so that in the notation of the proof of Lemma \ref{Convex_Approx} we have $\tilde a_{11}=\tilde a_{12}= \tilde b_{21}= \tilde b_{22} > 0$. Without loss of generality we can assume {these coefficients all equal $1$} due to the prefactor $\varepsilon^{-1}$, so that the first Lyapunov coefficient is
\[
\sigma_{s,\varepsilon}= \varepsilon^{-1}\left(3 a_{11}+ a_{12}+ b_{21}+3 b_{22} \right),
\]
and we readily find examples of $(a_{11}, a_{12}, b_{21}, b_{22})$ such that the signs of $\sigma_{_\#}$ and $\sigma_{s,\varepsilon}$ differ; for instance, $(a_{11}, a_{12}, b_{21}, b_{22})=(1,-\tfrac{5}{2},0,0)$ gives $\sigma_{_\#}=-\tfrac12<0$ but $\sigma_{s,\varepsilon}=\tfrac12\varepsilon^{-1}>0$.
\end{proof}
The discrepancies shown here for the absolute value function readily carry over to the generalized absolute value function \eqref{gen_abs_val}.
\subsection{Direct method}\label{s:direct}
In Theorem \ref{t_averaging}, the conclusion for \eqref{General2D_AV} does not cover the bifurcation point $\mu=0$ so that we cannot infer uniqueness of the branch of bifurcating periodic orbits directly.
In order to directly include $\mu=0$ in the bifurcation analysis and to facilitate the upcoming generalizations, we present a `direct' method for a general (possibly) non-smooth planar system. This does not rely on the existence of an invariant manifold as in Proposition~\ref{prop:inv_man} or results from averaging theory.
The basic result is the following bifurcation of periodic solutions for a radial equation with quadratic nonlinear terms, which cannot stem from a smooth planar vector field, but occurs in our setting as in \eqref{Sys_Polar_NoNF}.
\begin{proposition}
\label{Thm_Gen}
Consider a planar system in polar coordinates $(r,\varphi)\in \mathbb{R}_+ \times [0,2\pi)$ periodic in $\varphi$ {of the form}
\begin{align}
\begin{cases}
\dot{r} &= r\mu + r^2\chi_2(\varphi), \\
\dot{\varphi} &= \omega + r\Omega_1(\varphi),
\end{cases}
\label{System2Dpolar}
\end{align}
where $\mu\in\mathbb{R}$, $\omega\neq 0$ and continuous $\chi_2(\varphi),\Omega_1(\varphi)$ with minimal period $2\pi$.
If $\int_0^{2\pi}\chi_2(\varphi)\mathrm{d} \varphi\neq 0$, then a locally unique branch of periodic orbits bifurcates at $\mu=0$. These orbits have period $2\pi + \mathcal{O}(\mu)$ and constant radius satisfying
\begin{equation} \label{General_Result}
r_0=\frac{-2\pi}{\int_0^{2\pi}\chi_2(\varphi)\mathrm{d}\varphi}\mu+\mathcal{O}\left(\mu^2\right).
\end{equation}
In particular, since $r_0\geq 0$, the criticality of the bifurcation is determined by the sign of $\int_0^{2\pi}\chi_2(\varphi)\mathrm{d} \varphi$.
\end{proposition}
For later reference we present a rather detailed proof.
\begin{proof}
As in the proof of Theorem \ref{t_averaging}, for small $r$ the radius satisfies
\begin{align}
{r}':= & \frac{r\mu + r^2\chi_2(\varphi)}{\omega + r\Omega_1(\varphi)} =: \Psi(r,\varphi).
\label{System2Dparam}\end{align}
We fix the initial time at $\varphi_0=0$; for any initial value $r(0)=r_0$, a unique local solution is guaranteed by the Picard-Lindel\"of theorem, with continuous dependence on time, e.g., \cite{Hartman}.
This also guarantees existence on any given time interval for sufficiently small $|\mu|, r_0$. Moreover, the solution $r(\varphi;r_0)$ can be Taylor expanded with respect to $r_0$ due to the smoothness of $\Psi(r,\varphi)$ in $r$ and its continuity in the time component, using the uniform contraction principle for the derivatives, cf.\ \cite{Hartman}.
On the one hand, we may thus expand $r(\varphi)= r(\varphi; r_0)$ as
\begin{equation*}
r(\varphi)=\alpha_1(\varphi)r_0 + \alpha_2(\varphi)r_0^2 + \mathcal{O}\left(r_0^3\right),
\end{equation*}
and differentiate with respect to $\varphi$,
\begin{equation}
r'(\varphi)=\alpha_1'(\varphi)r_0 + \alpha_2'(\varphi)r_0^2 + \mathcal{O}\left(r_0^3\right),
\label{Expansion_r'}
\end{equation}
where $\alpha_1(0)=1$ and $\alpha_2(0)=0$ since $r(0)=r_0$.
On the other hand, we Taylor expand $\Psi(r,\varphi)$ in $r=0$ from \eqref{System2Dparam}, using $\Psi(0,\varphi)=0$, as
\begin{align}
r' &= \Psi(r,\varphi) = \Psi(0,\varphi) + \partial_r\Psi(0,\varphi)r + \frac{1}{2}\partial^2_r\Psi(0,\varphi) r^2 + \mathcal{O}\left(r^3\right)
= k_1 r + k_2 r^2 + \mathcal{O}\left(r^3\right),
\label{Expansion2_r'}
\end{align}
where we denote $\partial^i_r\Psi(0,\varphi)=\frac{\partial^i\Psi(r,\varphi)}{\partial r^i}\big\rvert_{r=0}$, $i\in\mathbb{N}$, and set
\begin{equation}\label{ks}
k_1 := \partial_r\Psi(0,\varphi) = \frac{\mu}{\omega}, \hspace*{4mm} k_2(\varphi) := \frac{1}{2}\partial^2_r\Psi(0,\varphi) = \frac{\omega\chi_2(\varphi) - \mu\Omega_1(\varphi)}{\omega^2}.
\end{equation}
Matching the coefficients of $r_0$ and $r_0^2$ in \eqref{Expansion_r'} and \eqref{Expansion2_r'} gives the ODEs $\alpha_1' = k_1 \alpha_1$ and $\alpha_2' = k_1 \alpha_2 +k_2\alpha_1^2$.
The solutions with $\alpha_1(0)=1$ and $\alpha_2(0)=0$ read
\begin{align*}
\alpha_1(\varphi) &= e^{k_1 \varphi},
&\alpha_2(\varphi) &=
\int_0^\varphi e^{k_1(\varphi+s)}k_2(s)\mathrm{d} s.
\end{align*}
Periodic orbits necessarily have period $2\pi m$ for some $m\in\mathbb{N}$, which yields the condition
\begin{equation}\label{per_orb_r}
0 = r(2\pi m)-r(0) = \int_0^{2\pi m} r' \mathrm{d}\varphi = r_0\int_0^{2\pi m} \alpha_1'(\varphi)\mathrm{d} \varphi + r_0^2\int_0^{2\pi m} \alpha_2'(\varphi)\mathrm{d} \varphi + \mathcal{O}\left(r_0^3\right).
\end{equation}
Using the series expansion of $e^{2\pi m k_1}$ in $\mu=0$ we have
$$ \int_0^{2\pi m} \alpha_1'(\varphi)\mathrm{d} \varphi = \alpha_1(2\pi m)-\alpha_1(0) = e^{2\pi m k_1}-1 = 2\pi m k_1 + \mathcal{O}\left(\mu^2\right), $$
and similarly,
\begin{align*}
\int_0^{2\pi m} \alpha_2'(\varphi)\mathrm{d} \varphi &= \alpha_2(2\pi m)-\alpha_2(0) = e^{2\pi m k_1}\int_0^{2\pi m} e^{k_1\varphi}k_2(\varphi)\mathrm{d} \varphi - 0 \\
&= (1+2\pi m k_1)\int_0^{2\pi m}k_2(\varphi)\mathrm{d}\varphi + k_1\int_0^{2\pi m}\varphi k_2(\varphi)\mathrm{d}\varphi + \mathcal{O}\left(\mu^2\right).
\end{align*}
For non-trivial periodic orbits, $r_0\neq 0$, we divide \eqref{per_orb_r} by $r_0$, which provides the bifurcation equation
\begin{equation*}
0 = 2\pi m k_1 + r_0\left( (1+2\pi m k_1)\int_0^{2\pi m}k_2(\varphi)\mathrm{d}\varphi + k_1\int_0^{2\pi m}\varphi k_2(\varphi)\mathrm{d}\varphi \right) + \mathcal{O}\left(\mu^2\right),
\end{equation*}
where the factor of $r_0$ is non-zero at $\mu=0$ by assumption. Hence, the implicit function theorem applies and gives a unique solution. Since the solution for $m=1$ is a solution for any $m$, this is the unique periodic solution. Solving the bifurcation equation for $m=1$ yields
\begin{equation*}
r_0 = \frac{- 2\pi \mu}{(\omega+2\pi \mu)\int_0^{2\pi }k_2(\varphi)\mathrm{d}\varphi + \mu\int_0^{2\pi}\varphi k_2(\varphi)\mathrm{d}\varphi} + \mathcal{O}\left(\mu^2\right),
\end{equation*}
whose expansion in $\mu=0$ gives the claimed \eqref{General_Result} and in particular the direction of branching.
Finally, the exchange of stability between the trivial equilibrium and the periodic orbit follows from the monotonicity of the $1$-dimensional Poincar\'e Map on an interval that contains $r=0$ and $r=r_0(\mu)$ by uniqueness of the periodic orbit.
\end{proof}
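Proposition \ref{Thm_Gen} is also easy to check numerically. The following is a minimal Python sketch (not part of the proof; it assumes \texttt{scipy}, and we reuse the example with $a_{11}=-1$ and all other coefficients zero, so that $\chi_2(\varphi)=-\cos^2\!\varphi\,\abs{\cos\varphi}$ and $\int_0^{2\pi}\chi_2\,\mathrm{d}\varphi=-8/3$), computing the fixed point of the Poincar\'e map of \eqref{System2Dparam} and comparing it with \eqref{General_Result}:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

mu, om, a11 = 0.05, 1.0, -1.0
chi2 = lambda p: a11*np.cos(p)**2*np.abs(np.cos(p))
Om1  = lambda p: -a11*np.cos(p)*np.abs(np.cos(p))*np.sin(p)

def P(r0):  # Poincare map r0 -> r(2*pi) of the radial equation
    rhs = lambda p, r: [(mu*r[0] + chi2(p)*r[0]**2)/(om + Om1(p)*r[0])]
    return solve_ivp(rhs, (0, 2*np.pi), [r0], rtol=1e-10, atol=1e-12).y[0, -1]

r_star = brentq(lambda r: P(r) - r, 1e-3, 1.0)
print(r_star, -2*np.pi*mu/(a11*8/3))  # fixed point vs. prediction, ~ 0.118
\end{verbatim}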
We next note that higher order perturbations do not change the result to leading order.
\begin{corollary}\label{hot2D}
The statement of Proposition \ref{Thm_Gen} holds for a planar system in polar coordinates $(r,\varphi)\in \mathbb{R}_+ \times [0,2\pi)$ periodic in $\varphi$, of the form
\begin{align}
\begin{cases}
\dot{r} &= r\mu + r^2\chi_2(\varphi) + r^3\chi_3(r,\varphi), \\
\dot{\varphi} &= \omega + r\Omega_1(\varphi) + r^2\Omega_2(r,\varphi),
\end{cases}
\label{System_Polar_General}
\end{align}
where $\mu\in\mathbb{R}$, $\omega\neq 0$ and $\chi_{j+1}$, $\Omega_j$, $j=1,2$, are continuous in their variables.
\end{corollary}
Note that system \eqref{System_Polar_General} is a generalization of \eqref{System2DpolarNF} in which $\chi_3$ and $\Omega_2$ depend now on $r$.
\begin{proof}
Following the proof of Proposition \ref{Thm_Gen} we write system \eqref{System_Polar_General} analogous to \eqref{System2Dparam} with $$\Psi(r,\varphi) = \frac{r\mu + r^2\chi_2(\varphi) + r^3\chi_3(r,\varphi)}{\omega + r\Omega_1(\varphi) + r^2\Omega_2(r,\varphi)}.$$
Upon subtracting the leading order part of \eqref{Expansion2_r'}, a direct computation produces a remainder term of order $\mathcal{O}(r^3)$, which leads to the claimed result.
\end{proof}
Next we show how these results can be directly used to determine the Hopf bifurcation and its super- or subcriticality.
Starting with the simplest model, we return to system \eqref{General2D_AV} with $f_q, g_q, f_c, g_c \equiv 0$, i.e., \eqref{e:planar0}.
Recall that $\sigma_{_\#}=2a_{11}+a_{12}+b_{21}+2b_{22}$ from \eqref{sigma1} was identified as determining the criticality in Corollary \ref{c_averaging}. With the direct method we obtain the following.
\begin{theorem}
\label{1stPart}
If $\sigma_{_\#}\neq 0$, then there exists an interval $I$ around $\mu=0$ such that at $\mu=0$ system \eqref{e:planar0} with $f,g$ from \eqref{e:quadnonlin} undergoes a degenerate Hopf bifurcation in $\mu$, where the leading order amplitude of the locally unique periodic orbits is given by \eqref{periodic_orbit}.
In particular, the unique bifurcating branch of periodic solutions emerges subcritically if {$\sgn(\sigma_{_\#})>0$} and supercritically if {$\sgn(\sigma_{_\#})<0$}. Moreover, the bifurcating periodic orbits have exchanged stability with the equilibrium at $r=0$, i.e., are stable if they exist for $\mu>0$ and unstable if this is for $\mu<0$.
\end{theorem}
\begin{proof}
Taking polar coordinates $(v,w)=(r\cos{\varphi},r\sin{\varphi})$ for system \eqref{e:planar0} gives \eqref{System2Dpolar}, where $\chi_2(\varphi)$ and $\Omega_1(\varphi)$ are as in \eqref{chi} and \eqref{Omega}, respectively.
Applying Proposition \ref{Thm_Gen}
and computing the integral of $\chi_2$
in each quadrant as in the proof of Theorem~\ref{t_averaging}, we obtain
\eqref{periodic_orbit}, and the criticality follows as in Corollary~\ref{c_averaging}.
Finally, the exchange of stability is due to the monotonicity of the $1$-dimensional Poincar\'e Map.
\end{proof}
We next note that, proceeding as for Corollary \ref{hot2D}, the coefficients of the smooth quadratic and cubic terms do not affect the bifurcation to leading order.
\begin{corollary}
\label{thm_cubic}
If $\sigma_{_\#}\neq 0$, then the statement of Theorem \ref{1stPart} holds for the more general system \eqref{General2D_AV}. In particular, $f_q$, $g_q$, $f_c$, $g_c$ do not affect $\sigma_{_\#}$ and the leading order bifurcation.
\end{corollary}
Having investigated $\sigma_{_\#}\neq 0$, we next consider the degenerate case $\sigma_{_\#}= 0$. For that, recall Remark~\ref{2nd_Lyap} and {$\sigma_2$ from} \eqref{sigma2}.
\begin{theorem}
\label{2ndPart}
If $\sigma_{_\#}= 0$ and $\sigma_2\neq 0$, then there exists an interval $I$ around $\mu=0$ such that at $\mu=0$ system \eqref{e:planar0} with $f,g$ from \eqref{e:quadnonlin} undergoes a degenerate Hopf bifurcation in $\mu$ where the leading order amplitude of the locally unique periodic orbit is given by
\begin{equation}\label{r_2nd_HB}
r_0=\sqrt{\frac{2\pi\omega}{\sigma_2}\mu}+\mathcal{O}\left(\mu\right).
\end{equation}
In particular, the unique bifurcating branch of periodic solutions emerges subcritically if\\
$\sgn(\omega\sigma_2)<0$ and supercritically if $\sgn(\omega\sigma_2)>0$. Moreover, the bifurcating periodic orbits have exchanged stability with the equilibrium at $r=0$, i.e., are stable if they exist for $\mu>0$ and unstable if this is for $\mu<0$.
\end{theorem}
\begin{proof}
Proceeding as before, we write \eqref{e:planar0} in polar coordinates $(v,w)=(r\cos{\varphi},r\sin{\varphi})$ and change the time parametrization to obtain the form \eqref{System2Dparam} for the radial equation.
On the one hand, we expand the derivative of the solution $r(\varphi)=r(\varphi;r_0)$ with $r(0)=r_0$ as
\begin{equation}
r'(\varphi)=\alpha_1'(\varphi)r_0 + \alpha_2'(\varphi)r_0^2 + \alpha_3'(\varphi)r_0^3 + \mathcal{O}\left(r_0^4\right),
\label{2Expansion_r}
\end{equation}
where $\alpha_1(0)=1$ and $\alpha_2(0)=\alpha_3(0)=0$.
On the other hand, we compute the Taylor expansion of $r'$, from \eqref{System2Dparam}, up to third order in $r=0$ as
\begin{equation*}
r' = \Psi(r,\varphi) = k_1 r + k_2 r^2 + k_3 r^3 + \mathcal{O}\left(r^4\right),
\end{equation*}
where we use $\Psi(0,\varphi)=0$ and the notation \eqref{ks} as well as
\begin{equation*}
k_3(\varphi) := \frac{1}{3!}\partial^3_r\Psi(0,\varphi)= \frac{-\omega\chi_2(\varphi)\Omega_1(\varphi) + \mu\Omega_1(\varphi)^2}{\omega^3}.
\end{equation*}
Analogous to the proof of Proposition \ref{Thm_Gen}, comparing the coefficients of powers of $r_0$ in \eqref{2Expansion_r} with those obtained from the expansion of $\Psi$, we obtain the ODEs
\begin{align*}
\alpha_1' &= k_1\alpha_1,
& \alpha_2' &= k_1\alpha_2 + k_2\alpha_1^2,
& \alpha_3' &= k_1\alpha_3 + 2k_2\alpha_1\alpha_2 + k_3\alpha_1^3.
\end{align*}
We solve these by variation of constants, using $\alpha_1(0)=1$ and $\alpha_2(0)=\alpha_3(0)=0$ as
\begin{align*}
\alpha_1(\varphi) &= e^{k_1 \varphi}, \\
\alpha_2(\varphi) &= \int_0^\varphi e^{k_1(\varphi+s)}k_2(s)\mathrm{d} s, \\
\alpha_3(\varphi) &= e^{k_1\varphi}\left[2\int_0^\varphi k_2(s)\alpha_2(s)\mathrm{d} s + \int_0^\varphi e^{2k_1s}k_3(s)\mathrm{d} s \right].
\end{align*}
Periodic orbits are the solutions with $r_0\neq 0$ of
\begin{equation}\label{e:2ndper}
0 = r(2\pi)-r(0) = r_0\int_0^{2\pi}\alpha_1'(\varphi)\mathrm{d}\varphi + r_0^2\int_0^{2\pi}\alpha_2'(\varphi)\mathrm{d}\varphi + r_0^3\int_0^{2\pi}\alpha_3'(\varphi)\mathrm{d}\varphi + \mathcal{O}\left(r_0^4\right).
\end{equation}
Straightforward computations give $\alpha_j(2\pi)-\alpha_j(0) = \Gamma_j + \mathcal{O}\left(\mu\right)$, $j=2,3$,
where $\Gamma_2 = \frac{4}{3\omega}\sigma_{_\#}=0$ and $\Gamma_3 = \frac{1}{\omega^2}\left( \frac{32}{9}\sigma_{_\#}^2-\sigma_2 \right)=-\frac{\sigma_2}{\omega^2}$ since $\sigma_{_\#}=0$.
Substitution into the equation \eqref{e:2ndper} for periodic orbits and dividing out $r_0\neq 0$ yields the bifurcation equation
\begin{equation}\label{Thm2_eq}
0 = \frac{2\pi}{\omega}\mu +
\Gamma_3 r_0^2 + \mathcal{O}\left(\mu^2 + \mu r_0 + r_0^3\right).
\end{equation}
Here the implicit function theorem applies a priori to provide a unique branch $\mu(r_0)$ with
\begin{equation}\label{e:bifeq2}
\mu =
\frac{\sigma_2}{2\pi\omega}r_0^2 + \mathcal{O}\left(r_0^3\right).
\end{equation}
Solving this for $r_0$ provides \eqref{r_2nd_HB}, where realness of the square root requires $\mu\omega\sigma_2 > 0$, which gives the claimed sub-/supercriticality.
\end{proof}
This last theorem readily extends to the analogue of the so-called Bautin bifurcation for smooth vector fields, also called generalized Hopf bifurcation, which unfolds from a vanishing first Lyapunov coefficient and identifies a curve of fold points. Reinstating the term $\Gamma_2 r_0$ with $\Gamma_2=\frac{4\sigma_{_\#}}{3\omega}$ in \eqref{Thm2_eq} for small $\sigma_{_\#}\neq0$, the fold occurs at $r_0=-\Gamma_2/(2\Gamma_3)$, so that we directly derive the loci of fold points in the $(\mu, \sigma_{_\#})$-parameter plane as
\[
\mu = \frac{\omega\Gamma_2^2}{8\pi\Gamma_3} = -\frac{2\omega}{9\pi\sigma_2} \sigma_{_\#}^2
\]
to leading order with respect to $\sigma_{_\#}$.
Notably, the loci of fold points for the smooth Bautin bifurcation also lie on a quadratic curve in terms of the first Lyapunov coefficient. This similarity is due to the fact that the ODE of the smooth case has no even terms in the radial component, $\dot{r} = \mu r + \sigma_s r^3 + \sigma_l r^5$, leading to $\mu = \frac{\sigma_s^2}{4\sigma_{l}}$ at the fold. In the $(\mu,\sigma_{_\#})$-parameter plane, the origin corresponds to the Bautin point and the vertical axis, $\mu=0$, to the sub- and supercritical Hopf bifurcations for positive and negative values of $\sigma_{_\#}$, respectively.
\section{Generalizations}\label{s:general}
In this section we discuss analogous bifurcation results for the generalization from the absolute value, \eqref{gen_abs_val}, and then turn to higher dimensional systems as well as general linear form of the linear part.
\subsection{Generalization from the absolute value}\label{s:appplanar}
Recall our notation for different left and right slopes \eqref{gen_abs_val}, and consider the generalized canonical equation
\begin{equation}\label{e:genscalar}
\dot{u} = \mu u+\sigma_{_\#} u^j [u]_{\pn}^{\pp},
\end{equation}
with left slope $p_-$, right slope $p_+$ and $j\in\mathbb{N}$ measuring the degree of smoothness such that the right-hand side is $C^j$ but not $C^{j+1}$ smooth. Sample bifurcation diagrams for $j=1$ and $j=2$ are plotted in Figure \ref{f:genslopes} for $\sigma_{_\#}=-1$.
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[width= 0.25\linewidth]{1D_generalization_1.pdf}\hspace*{1cm}
&\includegraphics[width= 0.25\linewidth]{1D_generalization_2.pdf}\\
(a) & (b)
\end{tabular}
\caption{Degenerate supercritical pitchfork bifurcation of \eqref{e:genscalar} for {$p_-=-1$, $p_+=5$} of degree $j=1$ (a) and $j=2$ (b).}\label{f:genslopes}
\end{figure}
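Non-trivial equilibria of \eqref{e:genscalar} solve $\mu = -\sigma_{_\#}u^{j-1}[u]_{\pn}^{\pp}$; for instance, for $j=1$ and $\sigma_{_\#}=-1$ this yields the two branches
\[
u=\frac{\mu}{p_+} \ \text{(for } \mu p_+>0\text{)}, \qquad u=\frac{\mu}{p_-} \ \text{(for } \mu p_-<0\text{)},
\]
whose differing slopes produce the asymmetry visible in Figure \ref{f:genslopes}(a).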
The case $j=2$ highlights that a lack of smoothness in the cubic terms also impacts the bifurcation in general. We do not pursue this further here, but analogous to the following discussion, it is possible to derive a modified normal form coefficient $S_c$.
For the Hopf bifurcation analysis, we analogously replace the absolute value in \eqref{e:quadnonlin} by \eqref{gen_abs_val}, and thus replace $f,g$ in \eqref{e:planar0} by
\begin{align}
f\left( v, w; \alpha \right) &= a_{11}v\ABS{v}{\alpha_1} + a_{12}v\ABS{w}{\alpha_2} + a_{21}w\ABS{v}{\alpha_3} + a_{22}w\ABS{w}{\alpha_4},\label{f_general}\\
g\left( v, w; \beta \right) &= b_{11}v\ABS{v}{\beta_1} + b_{12}v\ABS{w}{\beta_2} + b_{21}w\ABS{v}{\beta_3} + b_{22}w\ABS{w}{\beta_4},\label{g_general}
\end{align}
where $\alpha = (\alpha_{1_\pm},\alpha_{2_\pm},\alpha_{3_\pm},\alpha_{4_\pm})$, $\beta = (\beta_{1_\pm},\beta_{2_\pm},\beta_{3_\pm},\beta_{4_\pm}) \in\mathbb{R}^8$.
This generalization leads to the generalized non-smooth first Lyapunov coefficient given by
\begin{equation}\label{e:tildesig}
\widetilde\sigma_{_\#}:=a_{11}(\alpha_{1_+}-\alpha_{1_-})+\frac 1 2 a_{12}(\alpha_{2_+}-\alpha_{2_-})+
\frac 1 2 b_{21}(\beta_{3_+}-\beta_{3_-})+ b_{22}(\beta_{4_+}-\beta_{4_-}).
\end{equation}
Notably, in the smooth case, where the left- and right slopes coincide, we have $\widetilde\sigma_{_\#}=0$, and if left- and right slopes are $-1$ and $1$, respectively, we recover $\sigma_{_\#}$.
\begin{theorem}\label{Thm0_General}
If $\widetilde\sigma_{_\#}\neq 0$, then the statement of Theorem~\ref{1stPart} holds true for \eqref{e:planar0} with $f,g$ from \eqref{f_general}, \eqref{g_general}, respectively, with $\sigma_{_\#}$ replaced by $\widetilde\sigma_{_\#}$.
\end{theorem}
\begin{proof}
Taking polar coordinates
we obtain \eqref{System2Dpolar},
where
\begin{align*}
\chi_2(\varphi) =& \; c^2\left( a_{11}\ABS{c}{\alpha_1}+a_{12}\ABS{s}{\alpha_2} \right) + s^2\left( b_{21}\ABS{c}{\beta_3}+b_{22}\ABS{s}{\beta_4} \right)\\ &+ sc\left( a_{21}\ABS{c}{\alpha_3} + a_{22}\ABS{s}{\alpha_4} + b_{11}\ABS{c}{\beta_1} + b_{12}\ABS{s}{\beta_2} \right),
\end{align*}
{again with $s:=\sin(\varphi)$, $c:=\cos(\varphi)$.}
Applying Proposition \ref{Thm_Gen}
we compute $\int_0^{2\pi}\chi_2(\varphi)\mathrm{d}\varphi$, which gives $\frac{4}{3} \widetilde\sigma_{_\#} $.
Indeed, from \eqref{gen_abs_val} we obtain
\begin{align*}
\int_0^{2\pi} c^2 a_{11}\ABS{c}{\alpha_1} \mathrm{d}\varphi &= \int_0^{\frac{\pi}{2}} c^3 a_{11}\alpha_{1_+} \mathrm{d}\varphi + \int_{\frac{\pi}{2}}^{\frac{3\pi}{2}} c^3 a_{11}\alpha_{1_-} \mathrm{d}\varphi + \int_{\frac{3\pi}{2}}^{2\pi} c^3 a_{11}\alpha_{1_+} \mathrm{d}\varphi \\
&= \frac{4}{3}a_{11}\left( \alpha_{1_+}-\alpha_{1_-} \right), \\
\int_0^{2\pi} c^2 a_{12}\ABS{s}{\alpha_2} \mathrm{d}\varphi &= \int_0^\pi c^2s\ a_{12}\alpha_{2_+} \mathrm{d}\varphi + \int_\pi^{2\pi} c^2s\ a_{12}\alpha_{2_-} \mathrm{d}\varphi \\
&= \frac{2}{3}a_{12}\left( \alpha_{2_+}-\alpha_{2_-} \right),
\end{align*}
and similarly for the other terms. Note that the integral of the {third term on the} right-hand side of $\chi_2$ vanishes due to the symmetry of $sc$.
Thus, we get \eqref{periodic_orbit} with $\sigma_{_\#}$ replaced by $\widetilde\sigma_{_\#}$.
\end{proof}
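The integral computation in the proof is easily checked numerically. The following Python sketch (ours, with randomly drawn coefficients and slopes) compares a midpoint-rule quadrature of $\chi_2$ over one period with $\frac{4}{3}\widetilde\sigma_{_\#}$.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)
a = rng.normal(size=(2, 2)); b = rng.normal(size=(2, 2))    # a_{ij}, b_{ij}
al = rng.normal(size=(4, 2)); be = rng.normal(size=(4, 2))  # rows: (p_-, p_+)

def gabs(x, k):                     # generalized absolute value
    return np.where(x >= 0, k[1]*x, k[0]*x)

def chi2(p):
    c, s = np.cos(p), np.sin(p)
    return (c**2*(a[0,0]*gabs(c, al[0]) + a[0,1]*gabs(s, al[1]))
            + s**2*(b[1,0]*gabs(c, be[2]) + b[1,1]*gabs(s, be[3]))
            + s*c*(a[1,0]*gabs(c, al[2]) + a[1,1]*gabs(s, al[3])
                   + b[0,0]*gabs(c, be[0]) + b[0,1]*gabs(s, be[1])))

N = 400_000
phi = (np.arange(N) + 0.5)*2*np.pi/N                 # midpoint rule
sig = (a[0,0]*(al[0,1]-al[0,0]) + 0.5*a[0,1]*(al[1,1]-al[1,0])
       + 0.5*b[1,0]*(be[2,1]-be[2,0]) + b[1,1]*(be[3,1]-be[3,0]))
print(chi2(phi).mean()*2*np.pi, 4.0/3.0*sig)         # these agree
\end{verbatim}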
\subsection{$3$D system}\label{s:3D}
In this section we extend the previous results to higher dimensional systems. Recall that Proposition~\ref{prop:inv_man} and Theorem~\ref{t_per_orb} rely on hyperbolicity of the spectrum of $A(0)$ from \eqref{e:abstract} except for a simple pair of complex conjugate eigenvalues. Analogously, averaging theory can be used in this setting to obtain a normal form as in Theorem~\ref{t_averaging}. Here we follow the `direct method' and obtain bifurcation results even without normal hyperbolicity.
To simplify the exposition, we start with the absolute value $\abs{\cdot}$ and consider an extension of the planar quadratic case \eqref{e:planar0}, \eqref{e:quadnonlin}, motivated by the example in \cite{InitialPaper}, which is a simplification of a model used for ship maneuvering. As discussed in \S\ref{s:abstract}, we first assume the linear part is in normal form -- a general linear part will be considered in \S\ref{Gen_linear_part} -- which gives
\begin{equation}
\begin{pmatrix}
\dot{u}\\
\dot{{v}}\\
\dot{{w}}\\
\end{pmatrix} =
\begin{pmatrix}
c_1 u + c_2 u^2 + { c_{3} uv + c_{4} uw } + c_5vw + h\left( v, w \right)\\
\mu v - \omega w + c_6 uv + c_7 uw + f\left( v, w \right)\\
\omega v + \mu w + c_{8} uv + c_{9} uw + g\left( v, w \right)
\end{pmatrix},
\label{3DAbstractSystem}
\end{equation}
where $f,g$ are as in \eqref{e:quadnonlin}, $h\left( v, w \right) = h_{11}v\abs{v} + h_{12}v\abs{w} + h_{21}w\abs{v} + h_{22}w\abs{w}$
and $h_{ij}, c_k$, $\forall i,j\in\{1,2\}$, $\forall k\in\{1,\ldots 9\}$, are real constants, all viewed as parameters. Again we assume $\omega\neq 0$ and take $\mu$ as the bifurcation parameter.
With linear part in normal form in the coordinates of Lemma \ref{l:cyl}, the vector field is actually smooth in the additional variable $u$. It turns out that in the generic case $c_1\neq 0$, this additional smoothness will not be relevant for the leading order analysis, while we make use of it in the degenerate case $c_1=0$.
We define the following quantities that appear in the upcoming results:
\begin{equation}\begin{aligned}
{\overline{\gamma}}_{10} &= e^{\frac{2\pi c_1}{\omega}}-1,
&{\overline{\gamma}}_{20} &= \frac{e^{\frac{2\pi c_1}{\omega}}\left(e^{\frac{2\pi c_1}{\omega}}-1\right)}{c_1} \left( c_2-\frac{c_1\rho_2}{\omega(c_1^2+4\omega^2)} \right),\\[1em]
{\overline{\gamma}}_{02} &= \frac{1}{\omega}e^{\frac{2\pi c_1}{\omega}} \int_0^{2\pi} e^{s\frac{2\mu-c_1}{\omega}}\Upsilon(s) \mathrm{d} s,
&{\overline{\gamma}}_{11} &= \frac{2}{3\omega^2}e^{\frac{2\pi c_1}{\omega}}
\left[ c_1\left(2\tau_2 + \frac{P}
{3\omega}\mu\right) {-3\pi c_4\mu} \right]
+ \mathcal{O}(\mu^2),\\[1em]
{\overline{\delta}}_{01} &= \frac{2\pi\mu}{\omega} + \mathcal{O}(\mu^2),
&{\overline{\delta}}_{02} &= \frac{2}{3\omega}\left[ \sigma_{_\#}\left(2 + \frac{6\pi}{\omega}\mu\right) + \frac{Q
}{3\omega}\mu \right] + \mathcal{O}(\mu^2),\\[1em]
&&{\overline{\delta}}_{11} &= \frac{e^{\frac{2\pi c_1}{\omega}}-1}{c_1}\frac{1}{\omega(c_1^2+4\omega^2)} \big[ \omega\rho_1+R\mu \big] + \mathcal{O}(\mu^2),
\end{aligned}\label{ogammas_odeltas}\end{equation}
where we shorten the notation by lumping together the weighted sums of coefficients from $f,g$ and from the smooth quadratic terms, respectively, given by
\begin{align*}
\tau_1&=4a_{22}+5a_{21}-5b_{12}-4b_{11}, \quad
\tau_2=2a_{22}+a_{21}-b_{12}-2b_{11}, \quad \tau_3=a_{11}-a_{12}-b_{21}+b_{22},\\
P& =3\pi(2\tau_2-a_{11}+b_{21})+4\tau_3, \quad Q =-3\pi(b_{11}+a_{21})+2\tau_1, \quad\quad R=2\pi\rho_1-\rho_2,\\
\rho_1&=c_6c_1^2-c_7c_1\omega +2c_6\omega^2-c_8c_1\omega+2c_9\omega^2,
\quad\rho_2=c_8c_1^2-c_9c_1\omega+2c_8\omega^2+c_6c_1\omega-2c_7\omega^2,
\end{align*}
as well as the $h$-dependent
\[
\Upsilon(\varphi) = c_5cs+h_{11}c\abs{c}+h_{12}c\abs{s}+h_{21}s\abs{c}+h_{22}s\abs{s}.
\]
The explicit form of ${\overline{\gamma}}_{02}$ can be found in Appendix \ref{3D_gammas_deltas}.
\begin{theorem}\label{thm3D}
In cylindrical coordinates $(u,v,w)=(u,r\cos{\varphi},r\sin{\varphi})$, up to time shifts, periodic solutions to \eqref{3DAbstractSystem} with $r(0)=r_0, u(0)=u_0$ for $0\leq |\mu| \ll 1$ near $r=u=0$ are in 1-to-1 correspondence with solutions to the algebraic equation system
\begin{align}
0 &= {\overline{\gamma}}_{10} u_0 + {\overline{\gamma}}_{20} u_0^2 + {\overline{\gamma}}_{02} r_0^2 + {\overline{\gamma}}_{11} u_0r_0 + \mathcal{O}\left(3\right),\label{Periodic_R-a}\\
0 &= {\overline{\delta}}_{01} r_0 + {\overline{\delta}}_{02} r_0^2+ {\overline{\delta}}_{11} u_0r_0 + \mathcal{O}\left(3\right),\label{Periodic_R-b}
\end{align}
where $\mathcal{O}(3)$ are terms of at least cubic order in $u_0,r_0$.
\end{theorem}
\begin{proof}
In cylindrical coordinates $(u,v,w)=(u,r\cos{\varphi},r\sin{\varphi})$ system \eqref{3DAbstractSystem} becomes
\begin{align}
\label{polar_system_3D}
\begin{cases}
\dot{u} &= c_1 u + c_2 u^2 + { (c_3 c u + c_4 s u)r } + \Upsilon(\varphi)r^2,\\
\dot{r} &= \left(\mu+\chi_1(\varphi)u\right)r + \chi_2(\varphi) r^2,\\
\dot{\varphi} &= \omega + \Omega_0(\varphi)u + \Omega_1(\varphi) r,
\end{cases}
\end{align}
where
$\chi_1(\varphi) = c_6c^2+(c_7+c_8)cs+c_9s^2$, $\Omega_0(\varphi) = c_8c^2+(c_9-c_6)cs-c_7s^2$,
and the non-smooth functions $\chi_2(\varphi)$ and $\Omega_1(\varphi)$ are as in \eqref{chi} and \eqref{Omega}, respectively.
Upon rescaling time the equations for $u$ and $r$ of the previous system become
\begin{align}\label{eq_u_r}
\begin{cases}
{u}' &= \frac{du/dt}{d\varphi /dt} = \frac{c_1 u + c_2 u^2 + { (c_3 c u + c_4 s u)r } + \Upsilon(\varphi)r^2}{\omega + \Omega_0(\varphi)u + \Omega_1(\varphi) r} =:\Psi_u(u,r,\varphi),\\[10pt]
{r}' &= \frac{dr/dt}{d\varphi /dt} = \frac{\left(\mu+\chi_1(\varphi)u\right)r + \chi_2(\varphi) r^2}{\omega + \Omega_0(\varphi)u + \Omega_1(\varphi) r} =:\Psi_r(u,r,\varphi).
\end{cases}
\end{align}
Taylor expansion of $u'$ and $r'$ in $(u,r)=(0,0)$ up to third order gives:
\begin{subequations}
\begin{align}
\begin{split} \label{UP-a}
u' &= \Psi_u(0,0,\varphi) + \partial_u\Psi_u(0,0,\varphi)u + \partial_r\Psi_u(0,0,\varphi)r \\
&+ \frac{1}{2}\partial_u^2\Psi_u(0,0,\varphi)u^2 + \frac{1}{2}\partial_r^2\Psi_u(0,0,\varphi)r^2 + \partial_{ur}^2\Psi_u(0,0,\varphi)ur + \mathcal{O}\left(3\right),
\end{split}\\[5pt]
\begin{split} \label{UP-b}
r' &= \Psi_r(0,0,\varphi) + \partial_u\Psi_r(0,0,\varphi)u + \partial_r\Psi_r(0,0,\varphi)r \\
&+ \frac{1}{2}\partial_u^2\Psi_r(0,0,\varphi)u^2 + \frac{1}{2}\partial_r^2\Psi_r(0,0,\varphi)r^2 + \partial_{ur}^2\Psi_r(0,0,\varphi)ur + \mathcal{O}\left(3\right).
\end{split}
\end{align}
\end{subequations}
On the other hand, and similarly to the procedure of the $2$-dimensional case, we write $u(\varphi)$ and $r(\varphi)$ as the following expansions with coefficients $\gamma_{ij}, \delta_{ij}$:
\begin{equation}\begin{aligned}
u(\varphi) &= \gamma_{10}(\varphi)u_0 + \gamma_{20}(\varphi)u_0^2 + \gamma_{01}(\varphi)r_0 + \gamma_{02}(\varphi)r_0^2 + \gamma_{11}(\varphi)u_0r_0 + \mathcal{O}\left(3\right), \\
r(\varphi) &= \delta_{10}(\varphi)u_0 + \delta_{20}(\varphi)u_0^2 + \delta_{01}(\varphi)r_0 + \delta_{02}(\varphi)r_0^2 + \delta_{11}(\varphi)u_0r_0 + \mathcal{O}\left(3\right),
\label{UE_RE}
\end{aligned}\end{equation}
with the initial conditions $u(0)=u_0$ and $r(0)=r_0$, which imply $\gamma_{10}(0)=\delta_{01}(0)=1$ and the rest zero.
Substituting (\ref{UE_RE}) into (\ref{UP-a}) and (\ref{UP-b}) and matching the coefficients of the powers of $u_0$ and $r_0$, we obtain a set of ODEs whose solutions yield the expressions for $\gamma_{ij}$ and $\delta_{ij}$ (see Appendix \ref{3D_gammas_deltas} for the details). Using these, the system of boundary value problems $0 = u(2\pi) - u(0)$, $0 = r(2\pi) - r(0)$ for periodic solutions precisely yields
\eqref{Periodic_R-a}, \eqref{Periodic_R-b}, where ${\overline{\gamma}}_{ij}=\gamma_{ij}(2\pi)-\gamma_{ij}(0)$ and ${\overline{\delta}}_{ij}=\delta_{ij}(2\pi)-\delta_{ij}(0)$.
\end{proof}
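For readers who wish to experiment, the following self-contained Python sketch (ours, with arbitrarily chosen coefficients) computes a periodic orbit of \eqref{eq_u_r} directly, as a fixed point of the $\varphi$-advance map over $[0,2\pi]$, and compares $r_0$ with the leading-order value $-\frac{3\pi}{2\sigma_{_\#}}\mu$, cf.\ \eqref{e:3Dr2} below; the function $\Omega_1$ is reconstructed here from the standard polar-coordinate computation.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

om, mu = 1.0, -0.02                       # made-up example values
c1, c2, c3, c4, c5 = -0.5, 0.3, 0.1, -0.2, 0.4
c6, c7, c8, c9 = 0.2, -0.1, 0.3, 0.1
a = [[0.5, -0.3], [0.2, -0.4]]            # a11 a12 / a21 a22
b = [[-0.2, 0.1], [0.3, 0.5]]             # b11 b12 / b21 b22
h = [[0.1, -0.2], [0.3, 0.2]]             # h11 h12 / h21 h22
sig = a[0][0] + 0.5*a[0][1] + 0.5*b[1][0] + b[1][1]   # sigma_# = 1 here

def mod2(k, cp, sp):   # second order modulus form on the unit circle
    return (k[0][0]*cp*abs(cp) + k[0][1]*cp*abs(sp)
            + k[1][0]*sp*abs(cp) + k[1][1]*sp*abs(sp))

def rhs(phi, y):
    u, r = y
    cp, sp = np.cos(phi), np.sin(phi)
    f, g = mod2(a, cp, sp), mod2(b, cp, sp)
    chi1 = c6*cp**2 + (c7 + c8)*cp*sp + c9*sp**2
    chi2, Om1 = cp*f + sp*g, cp*g - sp*f
    Om0 = c8*cp**2 + (c9 - c6)*cp*sp - c7*sp**2
    Ups = c5*cp*sp + mod2(h, cp, sp)
    dphi = om + Om0*u + Om1*r
    du = c1*u + c2*u**2 + (c3*cp + c4*sp)*u*r + Ups*r**2
    dr = (mu + chi1*u)*r + chi2*r**2
    return [du/dphi, dr/dphi]

def ret_map(x):        # residual of the 2*pi boundary value problem
    sol = solve_ivp(rhs, [0.0, 2*np.pi], x, rtol=1e-10, atol=1e-12)
    return sol.y[:, -1] - x

u0, r0 = fsolve(ret_map, [0.0, -1.5*np.pi*mu/sig])
print(r0, -1.5*np.pi*mu/sig)   # agree up to higher order in mu
\end{verbatim}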
The solution structure of \eqref{Periodic_R-a}, \eqref{Periodic_R-b} strongly depends on whether $c_1=0$ or not. If not, then the transverse direction is hyperbolic and Theorem~\ref{1stPart} implies a locally unique branch of periodic solutions. In the non-hyperbolic case the situation is different and we note that if $c_1=0$, then
with
$\gamma_{_\#} := 2h_{21} + c_5 + \pi h_{22}$, we have
\begin{equation}\label{e:c20}
{\overline{\gamma}}_{10}=0, \hspace*{0.5cm} {\overline{\gamma}}_{11}={-\frac{2\pi c_4}{\omega^2}\mu+}\mathcal{O}(\mu^2), \hspace*{0.5cm} {\overline{\gamma}}_{20}=\frac{2\pi c_2}{\omega}, \hspace*{0.5cm}
{\overline{\gamma}}_{02} = -\frac{\pi\gamma_{_\#}}{\omega^2}\mu + \mathcal{O}(\mu^2).
\end{equation}
\begin{corollary}\label{c:3D}
Consider system \eqref{3DAbstractSystem} in cylindrical coordinates $(u,v,w)=(u,r\cos{\varphi},r\sin{\varphi})$. If $c_1\neq0$, then $u=u(\varphi;\mu) = \mathcal{O}\left(\mu^2\right)$ and the statement of Theorem~\ref{1stPart} holds true.
If $c_1=0$ and $\omega c_2\gamma_{_\#} \mu>0$, then precisely two curves of periodic solutions bifurcate at $\mu=0$ for $\mu\sigma_{_\#}\leq 0$, each in the sense of Theorem \ref{t_per_orb}, and their initial conditions $r(0)=r_0$, $u(0)=u_0^\pm$ satisfy
\begin{align}
u_0^\pm &= u_0^\pm(\mu) =
\mp \frac{3\pi}{2\sigma_{_\#}}\sqrt{\frac{\gamma_{_\#}}{2\omega c_2} \mu^3} + \mathcal{O}(\mu^{2}) =\mathcal{O}(|\mu|^{3/2}),\label{e:u3D} \\
r_0 &= r_0(\mu)= -\frac{3\pi}{2\sigma_{_\#}}\mu + \mathcal{O}(|\mu|^{3/2}). \label{e:3Dr2}
\end{align}
In case $c_1=0$ and $\omega c_2\gamma_{_\#} \mu<0$, no periodic solutions bifurcate at $\mu=0$.
\end{corollary}
\begin{proof}
In the (transversely) hyperbolic case $c_1\neq 0$ we have ${\overline{\gamma}}_{10}\neq 0$, and thus one may solve \eqref{Periodic_R-a} for $u_0$ by the implicit function theorem as $u_0=u_0(r_0) = \mathcal{O}(r_0^2)$.
Substitution into \eqref{Periodic_R-b} changes the higher order terms only, so that to leading order we obtain the same problem as in Theorem~\ref{1stPart}, with solution given by \eqref{periodic_orbit}. The stability statement of Theorem~\ref{1stPart} holds true due to the existence of a $2$-dimensional Lipschitz continuous invariant manifold given by Proposition \ref{prop:inv_man}.
We now consider $c_1=0$. Using \eqref{e:c20} we can cast \eqref{Periodic_R-a}, \eqref{Periodic_R-b} as
\begin{align}
0 &= \frac{2\pi c_2}{\omega} u_0^2 - \frac{\pi\gamma_{_\#}}{\omega^2} \mu r_0^2 {-\frac{2\pi c_4}{\omega^2}\mu u_0r_0} + \mathcal{O}\left(\mu^2 r_0^2\right) + \mathcal{O}\left(3\right),\label{Periodic_R-aa}\\
0 &= {\overline{\delta}}_{01} r_0 + \frac{4\sigma_{_\#}}{3\omega} r_0^2 + \mathcal{O}\big(|u_0r_0| + |\mu r_0| (|u_0|+ |r_0|)\big) + \mathcal{O}\left(3\right),\label{Periodic_R-bb}
\end{align}
so that we may solve \eqref{Periodic_R-aa} to leading order as
\begin{equation}
\label{exp_u0_of_r0}
u_0 = u_0^\pm(r_0;\mu) = \frac{c_4}{\omega c_2}\mu r_0 \pm r_0\sqrt{\frac{c_4^2}{4c_2^2}\mu^2+\frac{\gamma_{_\#}}{2\omega c_2} \mu} + \mathcal{O}(|\mu|) = \pm r_0\sqrt{\frac{\gamma_{_\#}}{2\omega c_2} \mu} + \mathcal{O}(|\mu|).
\end{equation}
Substitution into \eqref{Periodic_R-bb} gives a factor $r_0$ corresponding to the trivial solution $u_0=r_0=0$. For non-trivial solutions we divide by $r_0\neq 0$ and solve the leading order part as
$$ r_0 = - \frac{{\overline{\delta}}_{01}}{\frac{4\sigma_{_\#}}{3\omega} + \mathcal{O}(\sqrt{\mu})} = - \frac{3\pi}{2\sigma_{_\#}}\mu + \mathcal{O}(|\mu|^{3/2}).$$
Next, we substitute this into \eqref{exp_u0_of_r0} and note that perturbation by the higher order terms yields \eqref{e:u3D}, \eqref{e:3Dr2}. These give positive $r_0$ in case $\mu\sigma_{_\#}<0$ and therefore real valued $u_0$ in case $\omega c_2\gamma_{_\#} \mu>0$. However, if $\omega c_2\gamma_{_\#} \mu<0$ then for any $0<|\mu|\ll 1$ either $r_0<0$ or $u_0$ is imaginary.
\end{proof}
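The cancellation behind \eqref{exp_u0_of_r0} can also be double-checked symbolically; the following short sympy sketch substitutes the leading-order root into \eqref{Periodic_R-aa} (the $c_4$ cross term only contributes at order $\mu^{5/2}$ and is omitted).
\begin{verbatim}
import sympy as sp
mu, r0, om, c2, gam = sp.symbols('mu r0 omega c2 gamma', positive=True)
u0 = r0*sp.sqrt(gam*mu/(2*om*c2))          # leading order of (exp_u0_of_r0)
lhs = 2*sp.pi*c2/om*u0**2 - sp.pi*gam/om**2*mu*r0**2
print(sp.simplify(lhs))                    # prints 0
\end{verbatim}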
We subsequently consider the degenerate case $\sigma_{_\#}=0$, but assume $c_1\neq 0$,
which generalizes Theorem~\ref{2ndPart} to the present $3$-dimensional setting.
We will show that the generalization of $\sigma_2$ is given by $-\omega^2\widetilde{\Gamma}_3$, where
\begin{equation}\label{tilde_o}
\widetilde{\Gamma}_3 := \tilde{\delta}_{03} - \tilde{\delta}_{11}\frac{\tilde{\gamma}_{02}}{{\overline{\gamma}}_{10}},
\end{equation}
with ${\overline{\gamma}}_{10}$ from \eqref{ogammas_odeltas}, and
\begin{equation*}
\begin{aligned}
\tilde{\delta}_{11} &:= {\overline{\delta}}_{11}|_{\mu=0} =\frac{e^{\frac{2\pi c_1}{\omega}}-1}{c_1}\frac{\rho_1}{c_1^2+4\omega^2}, \hspace*{1cm} \tilde{\gamma}_{02} := {\overline{\gamma}}_{02}|_{\mu=0} =\frac{1}{\omega}e^{\frac{2\pi c_1}{\omega}} \int_0^{2\pi}e^{-s\frac{c_1}{\omega}}\Upsilon(s) \mathrm{d} s,\\
\tilde{\delta}_{03} &:= \Gamma_3 + \frac{1}{\omega^2}\int_0^{2\pi} \chi_1(s)\int_0^s e^{c_1\frac{s-\tau}{\omega}}\Upsilon(\tau) \mathrm{d}\tau\mathrm{d} s.
\end{aligned}
\end{equation*}
Here we use the same notation for $\Gamma_3$ as in \eqref{Thm2_eq}, i.e., $\Gamma_3=\frac{2}{\omega^2}\int_0^{2\pi}\chi_2(s) \int_0^{s}\chi_2(\tau)\mathrm{d}\tau \mathrm{d} s-\frac{1}{\omega^2}\int_0^{2\pi}\chi_2(s)\Omega_1\mathrm{d} s$.
Comparing $\widetilde{\Gamma}_3$ with $\Gamma_3$, we thus expect $\widetilde{\Gamma}_3 \neq \Gamma_3$ as a result of the coupling with the additional variable $u$. We omit a fully explicit expression for $\widetilde{\Gamma}_3$, since it becomes too lengthy for practical use. However, for illustration, we consider the simpler case $h=0$ in \eqref{3DAbstractSystem}, which yields
\begin{equation*}
\tilde{\delta}_{03} = \Gamma_3 + \frac{ c_5\pi(c_6+c_9)}{c_1^2+4\omega^2}\left(e^{\frac{2\pi}{\omega}c_1}-1\right), \hspace*{1cm}
\tilde{\delta}_{11}\frac{\tilde{\gamma}_{02}}{{\overline{\gamma}}_{10}} = \frac{c_5\rho_1\omega}{c_1(c_1^2+4\omega^2)^2} \left(e^{\frac{2\pi}{\omega}c_1}-1\right),
\end{equation*}
and thus,
\begin{equation*}
\widetilde{\Gamma}_3 = \Gamma_3 + \frac{c_5\left(e^{\frac{2\pi}{\omega}c_1}-1\right)}{c_1^2+4\omega^2}\left[ \pi (c_6+c_9) - \frac{\rho_1\omega}{c_1(c_1^2+4\omega^2)} \right].
\end{equation*}
\begin{corollary}\label{Second:c:3D}
Consider \eqref{3DAbstractSystem} in cylindrical coordinates $(u,v,w)=(u,r\cos{\varphi},r\sin{\varphi})$ and $\sigma_{_\#}=0$. If $c_1\neq0$, then $u=u(\varphi;\mu) = \mathcal{O}\left(\mu^2\right)$ and the statement of Theorem~\ref{2ndPart} holds true {with} $\sigma_2$ {replaced} by $-\omega^2\widetilde{\Gamma}_3$.
\end{corollary}
\begin{proof}
Upon rescaling time the equations for $u,r$ in cylindrical coordinates of \eqref{3DAbstractSystem} become \eqref{eq_u_r}. Similarly to the proof of Theorem \ref{thm3D}, we compute the Taylor expansion of $u'$ and $r'$ in $(u,r)=(0,0)$ up to fourth order (see Appendix \ref{3D_gammas_deltas_second} for the details) and we write $u(\varphi)$ and $r(\varphi)$ as the following expansions:
\begin{equation}\begin{aligned} u(\varphi) &= \gamma_{10}(\varphi)u_0 + \gamma_{20}(\varphi)u_0^2 + \gamma_{30}(\varphi)u_0^3 + \gamma_{01}(\varphi)r_0 + \gamma_{02}(\varphi)r_0^2 + \gamma_{03}(\varphi)r_0^3 \\
&+ \gamma_{11}(\varphi)u_0r_0 + \gamma_{21}(\varphi)u_0^2r_0 + \gamma_{12}(\varphi)u_0r_0^2 + \mathcal{O}\left(4\right), \\[0.5em]
r(\varphi) &= \delta_{10}(\varphi)u_0 + \delta_{20}(\varphi)u_0^2 + \delta_{30}(\varphi)u_0^3 + \delta_{01}(\varphi)r_0 + \delta_{02}(\varphi)r_0^2 + \delta_{03}(\varphi)r_0^3 \\
&+ \delta_{11}(\varphi)u_0r_0 + \delta_{21}(\varphi)u_0^2r_0 + \delta_{12}(\varphi)u_0r_0^2 + \mathcal{O}\left(4\right),
\label{UE_RE_Second}
\end{aligned}\end{equation}
with the initial conditions $u(0)=u_0$ and $r(0)=r_0$, which imply $\gamma_{10}(0)=\delta_{01}(0)=1$ and the rest zero. With these expressions we compute, as before, the functions $\gamma_{ij}$ and $\delta_{ij}$ $\forall i,j\in\mathbb{N}_0$ such that $i+j=3$. Note that the others are the same as for Theorem \ref{thm3D}. The periodic solutions with $r(0)=r_0$, $u(0)=u_0$ for $0\leq |\mu| \ll 1$ near $r=u=0$ are in 1-to-1 correspondence with solutions to the algebraic equation system
\begin{align}
0 &= {\overline{\gamma}}_{10} u_0 + {\overline{\gamma}}_{20} u_0^2 + {\overline{\gamma}}_{30} u_0^3 + {\overline{\gamma}}_{02} r_0^2 + {\overline{\gamma}}_{03} r_0^3 + {\overline{\gamma}}_{11} u_0r_0 + {\overline{\gamma}}_{21} u_0^2r_0 + {\overline{\gamma}}_{12} u_0r_0^2 + \mathcal{O}\left(4\right),\label{Periodic_R-a3}\\
0 &= {\overline{\delta}}_{01} r_0 + {\overline{\delta}}_{02} r_0^2 + {\overline{\delta}}_{03} r_0^3 + {\overline{\delta}}_{11} u_0r_0 + {\overline{\delta}}_{21} u_0^2r_0 + {\overline{\delta}}_{12} u_0r_0^2 + \mathcal{O}\left(4\right),\label{Periodic_R-b3}
\end{align}
where $\mathcal{O}(4)$ are terms of at least fourth order in $u_0$, $r_0$, and ${\overline{\gamma}}_{ij}=\gamma_{ij}(2\pi)-\gamma_{ij}(0)$, ${\overline{\delta}}_{ij}=\delta_{ij}(2\pi)-\delta_{ij}(0)$.
Moreover, since $c_1\neq 0$ we have ${\overline{\gamma}}_{10}\neq 0$. Therefore, we may solve \eqref{Periodic_R-a3} for $u_0$ by the implicit function theorem as $u_0=-\frac{{\overline{\gamma}}_{02}}{{\overline{\gamma}}_{10}}r_0^2+\mathcal{O}\left(r_0^3\right)=\mathcal{O}\left(r_0^2\right)$.
Substitution into \eqref{Periodic_R-b3} and dividing out $r_0\neq 0$ yield
\begin{equation*}
0 = {\overline{\delta}}_{01} + {\overline{\delta}}_{02} r_0 + \left({\overline{\delta}}_{03} - {\overline{\delta}}_{11}\frac{{\overline{\gamma}}_{02}}{{\overline{\gamma}}_{10}}\right)r_0^2 + \mathcal{O}\left(3\right),
\end{equation*}
which we rewrite, to leading order and similarly to \eqref{Thm2_eq} in Theorem~\ref{2ndPart}, as
\begin{equation}\label{Cor_deg}
0 = \frac{2\pi}{\omega}\mu + \widetilde{\Gamma}_2 r_0 + \widetilde{\Gamma}_3 r_0^2 + \mathcal{O}\left(\mu^2 + \mu r_0 + r_0^3\right),
\end{equation}
where $\widetilde{\Gamma}_2={\overline{\delta}}_{02}|_{\mu=0}$, which vanishes for $\sigma_{_\#}=0$ analogous to $\Gamma_2$ in \eqref{Thm2_eq}, and $\widetilde{\Gamma}_3$ is as defined in \eqref{tilde_o}; the expression for ${\overline{\delta}}_{03}$ stems from \eqref{delta_03}.
Hence, the solution of \eqref{Cor_deg} is given by \eqref{r_2nd_HB} with $\sigma_2$ replaced by $-\omega^2\widetilde{\Gamma}_3$, which is assumed to be non-zero.
The stability statement of Theorem~\ref{1stPart} holds true due to the existence of a $2$-dimensional Lipschitz continuous invariant manifold given by Proposition \ref{prop:inv_man}.
\end{proof}
Lastly, we use these results to extend system \eqref{3DAbstractSystem} to a higher order model with the generalized absolute value \eqref{gen_abs_val} as follows
\begin{equation}
\begin{pmatrix}
\dot{u}\\
\dot{{v}}\\
\dot{{w}}\\
\end{pmatrix} =
\begin{pmatrix}
c_1 u + c_2 u^2 + { c_3 uv + c_4 uw } + c_5 vw + h(v, w; \gamma)\\
\mu v - \omega w + c_6 uv + c_7 uw + f\left( v, w; \alpha \right) + f_q\left( v, w\right) + f_c\left( v, w\right) \\
\omega v + \mu w + c_8 uv + c_9 uw + g\left( v, w; \beta \right) + g_q\left( v, w\right) + g_c\left( v, w \right) \\
\end{pmatrix},
\label{3DAbstractSystem_General}
\end{equation}
where $f\left( v, w; \alpha \right)$ and $g\left( v, w; \beta \right)$ are \eqref{f_general} and \eqref{g_general}, respectively, and the functions $f_q,g_q,f_c,g_c$ are as in system \eqref{General2D_AV}. The expression of $h$ is analogous to $f,g$. {We recall also $\widetilde\sigma_{_\#}$ from \eqref{e:tildesig}.}
\begin{corollary}
\label{Generalization_3D}
If $\widetilde\sigma_{_\#}\neq 0$, the statement of Corollary \ref{c:3D} for system \eqref{3DAbstractSystem_General} holds true with $\sigma_{_\#}$ replaced by $\widetilde\sigma_{_\#}$.
\end{corollary}
\begin{proof}
The proof follows from Theorems \ref{Thm0_General} and \ref{thm3D} and Corollary \ref{thm_cubic}.
\end{proof}
This concludes our analysis of the $3$-dimensional case and paves the way for the $n$-dimensional case discussed next.
\subsection{$n$D system}\label{s:nD}
We consider the $n$-dimensional generalization of system \eqref{3DAbstractSystem} with additional component $u=(u_1,\cdots,u_{n-2})\in\mathbb{R}^{ n-2}$ given by
\begin{equation}
\begin{pmatrix}
\dot{u}\\
\dot{{v}}\\
\dot{{w}}\\
\end{pmatrix} =
\begin{pmatrix}
\tilde A u + U(u,v,w)\\
\mu v - \omega w + \sum_{i=1}^{n-2}({c_6}_i u_iv + {c_7}_i u_iw) + \tilde f\left( v, w \right)\\
\omega v + \mu w + \sum_{i=1}^{n-2}({c_8}_i u_iv + {c_9}_i u_iw) + \tilde g\left( v, w \right)
\end{pmatrix},
\label{nDAbstractSystem}
\end{equation}
where $\tilde A=({c_1}_{ij})_{1\leq i,j\leq n-2}$ is an $(n-2)\times(n-2)$ matrix
and $U: \mathbb{R}^{n-2}\times\mathbb{R}\times\mathbb{R} \longrightarrow \mathbb{R}^{n-2}$ is a nonlinear function, smooth in $u$ and possibly non-smooth in $v,w$ with absolute values as in \eqref{3DAbstractSystem}. Hence, $U(u,v,w) = \mathcal{O}(2)$,
where $\mathcal{O}(2)$ are terms of at least second order in $u_i,v,w$. The constants ${c_1}_{ij}, {c_6}_i, {c_7}_i, {c_8}_i, {c_9}_i$ are all real $\forall i,j\in\{1,\cdots, n-2\}$, and the functions $\tilde f, \tilde g$ are of the same form as the nonlinear part of system \eqref{General2D_AV}.
We now present results for this $n$-dimensional case analogous to those above; however, we refrain from explicitly determining the coefficients involved.
\begin{theorem}
\label{Thm_nD}
Consider \eqref{nDAbstractSystem} in cylindrical coordinates $(u,v,w)=(u,r\cos{\varphi},r\sin{\varphi})$
analogous to Theorem \ref{thm3D} with $u\in\mathbb{R}^{n-2}$. Up to time shifts, periodic solutions to \eqref{nDAbstractSystem} with $r(0)=r_0$, $u(0)=u_0\in\mathbb{R}^{n-2}$,
for $0\leq |\mu|, r_0, |u_0| \ll 1$ are in 1-to-1 correspondence with solutions to the algebraic $(n-1)$-dimensional system given by equations analogous to \eqref{Periodic_R-a} and \eqref{Periodic_R-b}, where ${\overline{\delta}}_{01},{\overline{\delta}}_{02}$ are scalars and ${\overline{\gamma}}_{10}, {\overline{\gamma}}_{11}, {\overline{\gamma}}_{02}, {\overline{\gamma}}_{20}, {\overline{\delta}}_{11}$ are linear maps and quadratic forms in $n-2$ dimensions.
\end{theorem}
\begin{proof}
The proof is analogous to that of Theorem \ref{thm3D}, now by setting up a boundary value problem with $n-2$ equations for $0=u(2\pi)-u(0)$ and one for $0=r(2\pi)-r(0)$. This results in a system of $n-1$ equations formed by direct analogues to \eqref{Periodic_R-a} and \eqref{Periodic_R-b}, where $\mathcal{O}(3)$ contains all terms of at least cubic order in ${u_0}_i, r_0$,
and ${\overline{\gamma}}_{20}u_0^2$ is a quadratic form in $n-2$ dimensions.
\end{proof}
Similar to the $3$-dimensional case, the solution structure of the $(n-1)$-dimensional system \eqref{Periodic_R-a}, \eqref{Periodic_R-b} depends on whether the matrix $\tilde A$ is invertible (i.e., {the full linear part} $A$ satisfies Hypothesis~\ref{h:AG}) or not, as shown in the next result.
\begin{corollary}
\label{c:nD}
Consider \eqref{nDAbstractSystem} in cylindrical coordinates $(u,v,w)=(u,r\cos{\varphi},r\sin{\varphi})$. If $\tilde A$ is invertible,
then the solution vector $u=u(\varphi;\mu)$ is of order $\mathcal{O}\left(\mu^2\right)$ and the statement of Theorem \ref{1stPart} holds true. If $\tilde A$ is not invertible and has a $1$-dimensional generalized kernel, then there are constants $c_2$, $\gamma_{\#}$ such that the statements of Corollary \ref{c:3D} for $c_1=0$ hold true.
\end{corollary}
\begin{proof}
From Theorem \ref{Thm_nD} we have the corresponding equations \eqref{Periodic_R-a}, \eqref{Periodic_R-b} for the $n$-dimensional system \eqref{nDAbstractSystem}, where ${\overline{\gamma}}_{20}u_0^2$ is a quadratic form in $n-2$ dimensions. If $\tilde A$ is invertible, so is the $(n-2)\times(n-2)$ matrix ${\overline{\gamma}}_{10}=e^{2\pi \tilde A/\omega}-\mathrm{Id}$. Solving the $(n-1)$-dimensional system gives the same as in the proof of Corollary \ref{c:3D} to leading order.
If $\tilde A$ is not invertible, then by assumption it has a $1$-dimensional generalized kernel. In this case, we {change coordinates in the analogue of} \eqref{Periodic_R-a} such that the matrix ${\overline{\gamma}}_{10}$ is block-diagonal with the kernel in the top left, and an invertible $(n-3)\times(n-3)$ block ${\overline{\gamma}}'_{10}$ on the lower right of the matrix. Thus, we split \eqref{Periodic_R-a} into a scalar equation and an $(n-3)$-dimensional system. By the implicit function theorem we solve the equations corresponding to ${\overline{\gamma}}'_{10}$ and substitute the result into the other two equations: the one with the $1$-dimensional kernel and the corresponding \eqref{Periodic_R-b} with ${\overline{\delta}}_{01} = \frac{2\pi\mu}{\omega} + \mathcal{O}(\mu^2)$, ${\overline{\delta}}_{02} = \frac{4\sigma_{_\#}}{3\omega} + \mathcal{O}(\mu)$. We then obtain two scalar equations of the same type as in Corollary \ref{c:3D} for the case $c_1=0$.
\end{proof}
We omit explicit formulas for $c_2, \gamma_{\#}$, but note that these can be provided in terms of data from $\tilde A$.
Before concluding this section, we note that these results directly extend to the more general non-smooth terms \eqref{gen_abs_val} and to additional higher order functions as in \eqref{General2D_AV}.
\begin{corollary}\label{c:nD:gen_abs}
Consider system \eqref{nDAbstractSystem} with $\tilde f, \tilde g$ as the nonlinear part of \eqref{General2D_AV}, but with $f,g$ as in \eqref{f_general}, \eqref{g_general}, respectively. If $\tilde A$ is invertible and $\widetilde\sigma_{_\#}\neq 0$, cf.\ \eqref{e:tildesig}, then the statement of Corollary \ref{c:nD} holds true with $\sigma_{_\#}$ replaced by $\widetilde\sigma_{_\#}$.
\end{corollary}
Recall from \S\ref{s:abstract} that we have presented results for systems where the linear part is in block-diagonal form and normal form for the oscillatory part, while the nonlinear part is smooth in the radial direction. For completeness, we next discuss the case of general linear part, i.e., not necessarily in normal form.
\subsection{General linear part}\label{Gen_linear_part}
Here we show that our analysis also applies to systems with general linear part. First, we consider the planar case \eqref{e:abstractplanar} with
\begin{align*}
f_1(u_1,u_2)&=a_{11}u_1|u_1|+a_{12}u_1|u_2|+a_{21}u_2|u_1|+a_{22}u_2|u_2| +\mathcal{O}(3),\\
f_2(u_1,u_2)&=b_{11}u_1|u_1|+b_{12}u_1|u_2|+b_{21}u_2|u_1|+b_{22}u_2|u_2| +\mathcal{O}(3).
\end{align*}
Under Hypothesis~\ref{h:AG}, changing the linear part of \eqref{e:abstractplanar} to normal form by the associated matrix $\textbf{T}$,
i.e., $\textbf{T}\cdot(v_1,v_2)^T=(u_1,u_2)^T$, the system becomes
\begin{equation}\label{e:abstractplanar:2}
\begin{pmatrix}
\dot v_1\\
\dot v_2
\end{pmatrix} =
\begin{pmatrix}
\mu & -\omega\\
\omega & \mu
\end{pmatrix}\begin{pmatrix}
v_1\\
v_2
\end{pmatrix}+\textbf{T}^{-1}\begin{pmatrix}
g_1\left( v_1, v_2 \right)\\
g_2\left( v_1, v_2 \right)
\end{pmatrix},
\end{equation} where $g_i(v_1,v_2)=f_i\left(\textbf{T}\cdot (v_1,v_2)^T\right)$ for $i\in\{1,2\}$ and with $\textbf{T}=(z_{ij})_{1\leq i,j\leq 2}$, {as well as the shorthand $[[\cdot]]:=\cdot|\cdot|$},
we have
\begin{align*}
g_1(v_1,v_2)&=a_{11} [[z_{11}v_1+z_{12}v_2]] +a_{12}(z_{11}v_1+z_{12}v_2)|z_{21}v_1+z_{22}v_2|\\
&+a_{21}(z_{21}v_1+z_{22}v_2)|z_{11}v_1+z_{12}v_2|+a_{22} [[z_{21}v_1+z_{22}v_2]] +\mathcal{O}(3),\\[0.5em]
g_2(v_1,v_2)&=b_{11} [[z_{11}v_1+z_{12}v_2]] +b_{12}(z_{11}v_1+z_{12}v_2)|z_{21}v_1+z_{22}v_2|\\
&+b_{21}(z_{21}v_1+z_{22}v_2)|z_{11}v_1+z_{12}v_2|+b_{22} [[z_{21}v_1+z_{22}v_2]] +\mathcal{O}(3).
\end{align*}
{We use} polar coordinates for $(v_1,v_2)=(r\cos(\varphi),r\sin(\varphi))$ {as before,} and
\[
(z_{11},z_{12})=(C\cos(\phi),C\sin(\phi)), \hspace*{1cm} (z_{21},z_{22})=(D\cos(\vartheta),D\sin(\vartheta)),
\] where $C,D\in\mathbb{R}$, $\phi,\vartheta\in[0,2\pi)$ are fixed constants.
System \eqref{e:abstractplanar:2} can be written as
\begin{equation}
\begin{cases}
\dot{r} = \mu r+\chi_2(\varphi)r^2 + \mathcal{O}(r^3),\\
\dot{\varphi} = \omega + \Omega_1(\varphi)r + \mathcal{O}(r^2),
\end{cases}
\label{e:abstract:polar}
\end{equation}
where, using trigonometric identities, we have
{\begin{align*}
\chi_2(\varphi) =& \frac{1}{\det(\textbf{T})}\Big( [[\cos(\varphi-\phi)]]C\abs{C}(a_{11}R+b_{11}S) + \cos(\varphi-\phi)|\cos(\varphi-\vartheta)|C\abs{D}(a_{12}R+b_{12}S) \\
&+ \cos(\varphi-\vartheta)|\cos(\varphi-\phi)|\abs{C}D(a_{21}R+b_{21}S) + [[\cos(\varphi-\vartheta)]]D\abs{D}(a_{22}R+b_{22}S) \Big),
\end{align*}}
{with $R:=D\sin(\vartheta-\varphi)$, $S:=C\sin(\varphi-\phi)$.}
By assumption, $\omega\neq 0$ so that
rescaling time in \eqref{e:abstract:polar} analogous to \eqref{e:absper} gives \eqref{new_time} with $M(\varphi)=\mu$ and $W(\varphi)=\omega$.
Following the approach described in \S\ref{s:abstract}, for the analogue of \eqref{r_bar} we obtain
\begin{align}
\Lambda &= \frac{1}{2\pi} \int_0^{2\pi}\frac{\mu}{\omega}\mathrm{d} \varphi= \frac{\mu}{\omega}, \\
\Sigma &= \frac{1}{2\pi} \int_0^{2\pi}\frac{\chi_2(\varphi)}{\omega} \mathrm{d}\varphi,
\label{check_sigma_nnf}
\end{align}
where we set $\mu=0$ in \eqref{check_sigma_nnf} (unlike in \eqref{check_sigma})
and the expression for $\Sigma$ can be determined explicitly. For instance, the first term of $\chi_2(\varphi)$ can be integrated as
\begin{equation*}
\frac{C{\abs{C}D}}{{\det(\textbf{T})}}a_{11}\int_0^{2\pi} [[\cos(\varphi-\phi)]] {\sin(\vartheta-\varphi)} \mathrm{d}\varphi = \frac{8C{\abs{C}D}}{3{\det(\textbf{T})}}a_{11}{\sin(\vartheta-\phi)=\frac{8}{3}\abs{C}a_{11}},
\end{equation*}
{with last equality due to $\det(\textbf{T})=CD\sin(\vartheta-\phi)$.} Computing the integral of $\chi_2(\varphi)$, {equation \eqref{check_sigma_nnf} turns into}
{\begin{equation}\begin{aligned}\label{generalized_sigma}
\Sigma
=\frac{2}{3\pi\omega}&\Big[ 2\abs{C}a_{11} + \abs{D}a_{12} +\abs{C}b_{21} + 2\abs{D}b_{22} \\
&+\cos(\vartheta-\phi)\big(\sgn(C)Da_{21}+\sgn(D)Cb_{12}\big)\Big].
\end{aligned}\end{equation}}
In case $\phi=0$ and $\vartheta=\frac{\pi}{2}$, we have $\cos(\vartheta-\phi)=0$, so that the last two terms in \eqref{generalized_sigma} vanish,
and for $C=D=1$ the same expression as in \eqref{averaging_integrals} is obtained, i.e., $\Sigma=\frac{2}{3\pi\omega}\sigma_{_\#}$. Notice that this set of parameters gives $z_{11}=z_{22}=1$, $z_{12}=z_{21}=0$, i.e., $\textbf{T}$ is the identity.
Moreover, we can
{derive} the analogue of \eqref{generalized_sigma} for the generalized non-smooth function \eqref{gen_abs_val} and compute the integrals involved in the generalized $\chi_2(\varphi)$ as in the proof of Theorem \ref{Thm0_General}. For instance, some of them read, omitting the factor $\det(\textbf{T})^{-1}$,
\begin{align*}
C{\abs{C}}\int_0^{2\pi} \cos(\varphi-\phi)&\left( \ABS{\cos(\varphi-\phi)}{\alpha_1} {D\sin(\vartheta-\varphi)}a_{11}+\ABS{\cos(\varphi-\phi)}{\beta_1}{C\sin(\varphi-\phi)}b_{11}\right) \mathrm{d}\varphi \\
=\frac{4}{3}C{\abs{C}} &{D\sin(\vartheta-\phi)}a_{11}\left(\alpha_{1_+}-\alpha_{1_-}\right),
\end{align*}
\begin{align*}
C{\abs{D}}\int_0^{2\pi} \cos(\varphi-\phi)&\left(\ABS{\cos(\varphi-\vartheta)}{\alpha_2}{D\sin(\vartheta-\varphi)}a_{12}+\ABS{\cos(\varphi-\vartheta)}{\beta_2}{C\sin(\varphi-\phi)}b_{12}\right) \mathrm{d}\varphi \\
= \frac{1}{3}C{\abs{D}}\bigg[ &2{D\sin(\vartheta-\phi)}a_{12}\left(\alpha_{2_+}-\alpha_{2_-}\right)
+{C\sin\big(2(\vartheta-\phi)\big)}b_{12}\left(\beta_{2_+}-\beta_{2_-}\right) \bigg].
\end{align*}
{The full expression can be simplified to }
{\begin{equation*}
\begin{aligned}
\widetilde\Sigma :=
\frac{2}{3\pi\omega} \bigg[&\abs{C}a_{11}\left(\alpha_{1_+}-\alpha_{1_-}\right)+\frac{\abs{D}}{2}a_{12}\left(\alpha_{2_+}-\alpha_{2_-}\right)\\
&+\frac{\abs{C}}{2}b_{21}\left(\beta_{3_+}-\beta_{3_-}\right)
+\abs{D}b_{22}\left(\beta_{4_+}-\beta_{4_-}\right) \\
&+ \frac{\cos(\vartheta-\phi)}{2}\Big(\sgn(C)Da_{21}\left(\alpha_{3_+}-\alpha_{3_-}\right)+\sgn(D)Cb_{12}\left(\beta_{2_+}-\beta_{2_-}\right)\Big) \bigg].
\end{aligned}
\end{equation*}
As above, for $\phi=0$ and $\vartheta=\frac{\pi}{2}$ the last two terms vanish, and for $C=D=1$ we have $\widetilde\Sigma = \frac{2}{3\pi\omega}\widetilde\sigma_{_\#}$ with $\widetilde\sigma_{_\#}$ from \eqref{e:tildesig}.}
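Both \eqref{generalized_sigma} and $\widetilde\Sigma$ lend themselves to numerical verification. The following Python sketch (ours, shown for the ordinary absolute value) averages $\chi_2/\omega$ over one period for random $C,D,\phi,\vartheta$ and random coefficients, and compares with the closed form \eqref{generalized_sigma}.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)
om = 1.3
C = rng.uniform(0.5, 2.0)*rng.choice([-1.0, 1.0])
D = rng.uniform(0.5, 2.0)*rng.choice([-1.0, 1.0])
p0, t0 = rng.uniform(0.0, 2*np.pi, 2)       # the angles phi, theta
assert abs(np.sin(t0 - p0)) > 1e-2          # T invertible; reseed otherwise
a = rng.normal(size=(2, 2)); b = rng.normal(size=(2, 2))
detT = C*D*np.sin(t0 - p0)

N = 400_000
phi = (np.arange(N) + 0.5)*2*np.pi/N        # midpoint rule
cp, ct = np.cos(phi - p0), np.cos(phi - t0)
R, S = D*np.sin(t0 - phi), C*np.sin(phi - p0)
chi2 = (cp*abs(cp)*C*abs(C)*(a[0,0]*R + b[0,0]*S)
        + cp*abs(ct)*C*abs(D)*(a[0,1]*R + b[0,1]*S)
        + ct*abs(cp)*abs(C)*D*(a[1,0]*R + b[1,0]*S)
        + ct*abs(ct)*D*abs(D)*(a[1,1]*R + b[1,1]*S))/detT

Sigma_num = chi2.mean()/om
Sigma_form = 2/(3*np.pi*om)*(2*abs(C)*a[0,0] + abs(D)*a[0,1]
             + abs(C)*b[1,0] + 2*abs(D)*b[1,1]
             + np.cos(t0 - p0)*(np.sign(C)*D*a[1,0] + np.sign(D)*C*b[0,1]))
print(Sigma_num, Sigma_form)                # these agree
\end{verbatim}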
\medskip
Furthermore, we can extend these results to the case $n>2$ in the form of a coupled system similar to \eqref{e:cylindrical0}, using the approach presented in the proof of Theorem~\ref{t_per_orb}. This gives an integral expression for the generalized first Lyapunov coefficient, which provides an explicit algebraic formula for an adjusted $\widetilde\sigma_{_\#}$. We summarize this in the following result.
\begin{theorem}\label{Thm_Gen_Lin_Part}
Consider system \eqref{e:abstract} with general linear part $A(\mu)$, and
satisfying the hypotheses of Theorem \ref{t:abstractnormal}.
The statement of Corollary \ref{c:nD:gen_abs}
holds true with $\widetilde\sigma_{_\#}$ replaced by $\frac{3\pi\omega}{2}\widetilde{\Sigma}$.
\end{theorem}
In particular, this theorem covers system \eqref{e:abstractplanar} with a general matrix $A=(m_{ij})_{1\leq i,j\leq 2}$. We also remark that the system considered here is of neither the form \eqref{3DAbstractSystem} nor \eqref{nDAbstractSystem} in terms of smoothness in the $u$ variable.
\begin{proof}
We proceed as before to get the analogue of system \eqref{e:abstractplanar:2}, i.e., transforming the linear part into a block-diagonal matrix and normal form in the center eigenspace $E^\mathrm{c}$. From Theorem \ref{t:abstractnormal} the nonlinear terms are second order modulus terms, which in this case are of the form $(L_i(v,w)+K_i(u))\ABS{L_j(v,w)+K_j(u)}{p}$, $i,j\in\{1,2\}$, where the functions $L_i(v,w)$ are linear combinations of $v,w$; $K_i(u)$ are linear combinations of the components of the vector $u$, i.e., $u_l$, $\forall l\in\{1,\cdots,n-2\}$; and $p_+,p_-\in\mathbb{R}$ are as in \eqref{gen_abs_val}. Note that $L_1, K_1$ are not necessarily equal to $L_2, K_2$, respectively.
The previous product can be expanded as
\begin{equation}\label{gen_expansion}
(L_i(v,w)+K_i(u))\ABS{L_j(v,w)+K_j(u)}{p} = L_i\ABS{L_j}{p} + \mathcal{O}(L_iK_j+L_jK_i+K_iK_j),
\end{equation}
since the error term $p_+ L_i L_j - L_i\ABS{L_j}{p}$ (resp. $p_- L_i L_j - L_i\ABS{L_j}{p}$ ) is of order $|u|^2$, i.e., contained in the higher order terms of \eqref{gen_expansion}. More precisely, consider the case $L_j+K_j \geq 0$. Then, the error term is $p_+L_i L_j - L_i\ABS{L_j}{p}$, which is zero for $L_j \geq 0$, and otherwise $(p_+ - p_-)L_iL_j$. However, in order to have both $L_j+K_j \geq 0$ and $L_j<0$, the signs of $L_j$ and $K_j$ have to differ, which happens only if these magnitudes are comparable. Hence, $\mathcal{O}(L_j)=\mathcal{O}(K_j)$. For the case $L_j+K_j < 0$ we proceed analogously.
In particular, $\mathcal{O}(L_iK_j+L_jK_i+K_iK_j) = \mathcal{O}(K(\check{K}+L))$, where $K, \check{K}$ are linear combinations of the components of $u$, and $L$ of $v, w$.
Following the proof of Theorem \ref{t_per_orb}, we write $u=r{\tilde u}$ and, together with the change of polar coordinates from above, $L_i=r\cos(\varphi-\zeta_i)$ (where $\zeta_i$ is either $\phi$ or $\vartheta$), so that
$$(L_i(v,w)+rK_i({\tilde u}))\ABS{L_j(v,w)+rK_j({\tilde u})}{p} = r^2\cos(\varphi-\zeta_i)\ABS{\cos(\varphi-\zeta_j)}{p} + r^2\mathcal{O}({\tilde u}).$$
From Theorem \ref{t_per_orb} we have ${\tilde u}=\mathcal{O}(r_0)$ and thus $r^2\mathcal{O}({\tilde u})$ is of higher order. We can then integrate explicitly the leading order as done for \eqref{check_sigma_nnf}.
\end{proof}
We now apply these results to a $3$-dimensional model from the field of land vehicles.
\section{A 3D example: shimmying wheel}\label{s:shim}
To illustrate the theory and its practical use, we consider as an example the model of a shimmying wheel with contact force analyzed in \cite{SBeregi}, where a towed caster with an elastic tyre is studied.
The equations of motion of the towed wheel can be written as follows:
\begin{equation}
\begin{pmatrix}
\dot{\Omega}\\
\dot{\psi}\\
\dot{q}
\end{pmatrix}=
\mathbf{J}\begin{pmatrix}
{\Omega}\\
{\psi}\\
{q}
\end{pmatrix} + {\tilde{c}_4}
\begin{pmatrix}
q\abs{q}\\
0\\
0
\end{pmatrix},\quad
\textbf{J}:=
\begin{pmatrix}
\tilde{c}_1 & \tilde{c}_2 & \tilde{c}_3\\
1 & 0 & 0\\
\tilde{c}_5 & \tilde{c}_6 & \tilde{c}_7
\end{pmatrix},
\label{System1Beregi}
\end{equation}
where $\psi$ is the yaw angle, $q$ is the deformation angle of the tyre due to the contact with the ground, $\Omega=\dot{\psi}$, and the parameters $\tilde{c}_i\in\mathbb{R}$ are constants determined by the system. We can readily see that there is only one switching surface in this case, namely $\{q=0\}$. Here $\mathbf{J}$ is the Jacobian matrix at the equilibrium point $(\Omega,\psi,q)=(0,0,0)$.
The system is of the form \eqref{e:abstract} and suitable parameter choices yield a pair of complex conjugate eigenvalues crossing the imaginary axis, as well as one non-zero real eigenvalue. The resulting bifurcations were studied in \cite{SBeregi} and termed `dynamic loss of stability'. Here we expound how our approach applies to this system.
Clearly, Theorem~\ref{t_per_orb} applies for any Hopf bifurcation eigenvalue configuration, which proves that a unique branch of periodic solutions bifurcates. In order to identify the direction of bifurcation, we first aim to apply the results of \S\ref{s:3D} and therefore attempt to bring the nonlinear part into a second order modulus form, while also bringing the linear part into Jordan normal form.
We thus suppose the parameters are such that the Jacobian matrix has a pair of complex conjugate eigenvalues $\lambda_{\pm}=\mu\pm i\omega$, where $\mu,\omega,\lambda_3\in\mathbb{R}$, $\omega,\lambda_3\neq 0$, with the corresponding eigenvectors $\textbf{s}_1=\textbf{u}+i\textbf{v}$, $\textbf{s}_2=\textbf{u}-i\textbf{v}$ and $\textbf{s}_3$, where $\textbf{u},\textbf{v},\textbf{s}_3\in\mathbb{R}^3$. Such parameter choices are possible, as can be seen by inspecting the characteristic equation with the Routh-Hurwitz criterion; we omit details and refer to \cite{SBeregi}.
The transformation $\textbf{T}=(\textbf{u} | \textbf{v} |\textbf{s}_3)$
with the new state variables
$(\xi_1,\xi_2,\xi_3)^T=\textbf{T}^{-1}(\Omega,\psi,q)^T$ turns \eqref{System1Beregi} into
\begin{equation}
\begin{pmatrix}
\dot{\xi_1}\\
\dot{\xi_2}\\
\dot{\xi_3}
\end{pmatrix}=
\mathbf{A}
\begin{pmatrix}
\xi_1\\
\xi_2\\
\xi_3
\end{pmatrix}+\textbf{h}_2(\xi_1,\xi_2,\xi_3), \quad
\mathbf{A} = \begin{pmatrix}
\mu & \omega & 0\\
-\omega & \mu & 0\\
0 & 0 & \lambda_3
\end{pmatrix},
\label{system_xi}
\end{equation}
where $\textbf{h}_2$ contains the quadratic terms and reads, using the shorthand $[[\cdot]]:=\cdot|\cdot|$,
\begin{equation}\textbf{h}_2(\xi_1,\xi_2,\xi_3)=
\left(
\tilde{T_{1}},
\tilde{T_{2}},
\tilde{T_{3}}
\right)^T
[[u_3\xi_1+v_3\xi_2+s_3\xi_3]],
\label{h_2_xi}
\end{equation}
{where $u_j, v_j, s_j$, $j\in\{1,2,3\}$ are the} components of the vectors $\textbf{u}, \textbf{v}, \textbf{s}_3$, respectively, and $$\tilde{T}_1:=\tilde{c}_4\frac{v_2s_3-v_3s_2}{\det(\textbf{T})}, \hspace*{0.5cm} \tilde{T}_2:=\tilde{c}_4\frac{s_2u_3-s_3u_2}{\det(\textbf{T})}, \hspace*{0.5cm} \tilde{T}_3:=\tilde{c}_4\frac{u_2v_3-u_3v_2}{\det(\textbf{T})}.$$
If $u_3=v_3=0$, then the nonlinear term $\mathbf{h}_{2}$ in \eqref{h_2_xi} is of second order modulus form:
\begin{equation}\textbf{h}_2(\xi_{1},\xi_{2},\xi_{3})=
s_3\abs{s_3}\left( \tilde{T}_1, \tilde{T}_2, 0\right)^T
\xi_{3}\abs{\xi_{3}},
\label{h_3_d0}
\end{equation}
where $\det(\textbf{T})\neq 0$ implies $s_3\neq 0$.
Here we need no further theory as we can directly solve {\eqref{system_xi}}: the equation for $\xi_{3}$ reads $\dot \xi_{3} = \lambda_3 \xi_{3}$ so that periodic solutions require $\xi_{3}(t)\equiv0$, i.e., $\xi_{3}(0)=0$.
The remaining system for $\xi_{1},\xi_{2}$ is then the purely linear part
\begin{equation*}
\begin{pmatrix}
\dot{\xi_{1}}\\
\dot{\xi_{2}}
\end{pmatrix}=\begin{pmatrix}
\mu & \omega\\
-\omega & \mu
\end{pmatrix}\begin{pmatrix}
\xi_{1}\\
\xi_{2}
\end{pmatrix},
\end{equation*}
and consists of periodic solutions (except the origin) for $\mu=0$. The unique branch of bifurcating periodic solutions is thus vertical, i.e., has $\mu=0$ constant.
Next, we consider the case when one of $u_{3}, v_{3}$ is non-zero. In order to simplify the nonlinear term, we apply a rotation $\mathbf{R}_{\theta}$ about the $\xi_{3}$-axis with angle $\theta$,
which keeps the Jordan normal form matrix invariant, and in the new variables {$(v,w,u)^T=\mathbf{R}_{\theta}^{-1}(\xi_1,\xi_2,\xi_3)^T$, in particular $\xi_{3}=u$,} the nonlinear term from
\eqref{h_2_xi} reads
\begin{align}
\abs{u_3(v\cos{\theta}-w\sin{\theta})+v_3(v\sin{\theta}+w\cos{\theta})+s_3u}
= \abs{\tilde{d} v +w(v_3\cos{\theta}-u_3\sin{\theta})+s_3u},
\label{abs_eqs}
\end{align}
where $\tilde{d}=u_3\cos{{\theta}}+v_3\sin{{\theta}}$.
We select $\theta$ to simplify \eqref{abs_eqs}: if $u_3\neq 0$ we choose $\theta=\tilde{\theta}=\arctan\left(\frac{v_3}{u_3}\right)$ such that the coefficient of $w$ in \eqref{abs_eqs} vanishes, i.e., $v_3\cos{\tilde{\theta}}-u_3\sin{\tilde{\theta}} = 0$. Note that $\tilde{d}\neq 0$ since otherwise $v_{3}\tan\tilde\theta=-u_{3}$, but $\tan \tilde\theta=v_{3}/u_{3}$ so together $v_{3}^2=-u_{3}^{2}$ and thus $u_{3}=v_{3}=0$ (which has been discussed above).
If $u_3=0$ and $v_3\neq 0$ we choose $\theta=\tilde{\theta}=\arctan\left(-\frac{u_3}{v_3}\right)$ such that the coefficient of $v$ vanishes, i.e., $u_3\cos{\tilde{\theta}}+v_3\sin{\tilde{\theta}} = 0$, and the following computation is analogous.
Hence, in case $u_3\neq 0$, system \eqref{system_xi} becomes
\begin{equation}
\begin{pmatrix}
\dot{v}\\
\dot{w}\\
\dot{u}
\end{pmatrix}=
\mathbf{A}
\begin{pmatrix}
v\\
w\\
u
\end{pmatrix}+\textbf{h}_3(v,w,u), \quad
\label{after_rotation}
\textbf{h}_3(v,w,u)=
\begin{pmatrix}
\tilde{T}_1\cos{\tilde{\theta}}+\tilde{T}_2\sin{\tilde{\theta}}\\
-\tilde{T}_1\sin{\tilde{\theta}}+\tilde{T}_2\cos{\tilde{\theta}}\\
\tilde{T}_3
\end{pmatrix}
[[\tilde{d}v+s_3u]].
\end{equation}
Notably, since $\tilde{d}\neq 0$, the nonlinear term is of second order modulus form for $s_3=0$, and we consider this degenerate situation first; as mentioned, the case $u_{3}=0, v_{3}\neq 0$ is analogous.
If $s_3=0$ (which means that the third component of the third column $\textbf{s}_3$ of the matrix
$\mathbf{T}$ is zero), the nonlinear term in \eqref{after_rotation} is of second order modulus form. We can write system (\ref{after_rotation}) in the notation of system (\ref{3DAbstractSystem}):
\begin{equation}
\begin{pmatrix}
\dot{u}\\
\dot{v}\\
\dot{w}
\end{pmatrix} = \begin{pmatrix}
c_1u + h_{11}v\abs{v}\\
\mu v - \omega w + a_{11}v\abs{v}\\
\omega v + \mu w + b_{11}v\abs{v}
\end{pmatrix},
\end{equation}
where we changed $\omega$ to $-\omega$ and set $c_1:=\lambda_3$, $h_{11}:=\tilde{T}_3\tilde{d}|\tilde{d}|$, $a_{11}:=\left( \tilde{T}_1\cos{\tilde{\theta}}+\tilde{T}_2\sin{\tilde{\theta}} \right)\tilde{d}|\tilde{d}|$ and $b_{11}:=\left( -\tilde{T}_1\sin{\tilde{\theta}}+\tilde{T}_2\cos{\tilde{\theta}} \right)\tilde{d}|\tilde{d}|$. Since $s_{3}=0$ we have $a_{11}=0$ by choice of $\tilde{\theta}$, which implies $\sigma_{_\#}=0$. Furthermore, $\sigma_2=0$ holds so that Theorem \ref{2ndPart} does not apply.
However, at $\mu=0$ we have
$ \ddot{v} = -\omega^2v-\omega b_{11}v\abs{v} = -\frac{\mathrm{d}}{\mathrm{d} v}P$ with potential energy
$$ P(v) = \frac{\omega^2}{2}v^2+\frac{\omega b_{11}}{3}v^2\abs{v}, $$
which is globally convex if $\omega b_{11}\geq 0$ and otherwise convex in an interval around zero and concave outside of it. In both cases there is a vertical branch of periodic solutions, which is either unbounded or bounded by heteroclinic orbits.
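This vertical branch is easy to observe numerically, since at $\mu=0$ the $v$-equation is conservative. The following Python sketch checks conservation of $E=\tfrac12\dot v^2+P(v)$ along a sample orbit.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

om, b11 = 1.0, 0.7                          # made-up example values
P = lambda v: 0.5*om**2*v**2 + om*b11/3*v**2*abs(v)
rhs = lambda t, y: [y[1], -om**2*y[0] - om*b11*y[0]*abs(y[0])]
sol = solve_ivp(rhs, [0.0, 50.0], [0.3, 0.0], rtol=1e-10, atol=1e-12)
E = 0.5*sol.y[1]**2 + P(sol.y[0])
print(E.max() - E.min())  # tiny: the orbit is closed, for any small amplitude
\end{verbatim}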
\medskip
Let us now come back to \eqref{after_rotation} for $s_{3}\neq 0$, where the nonlinearity is of the form $\mathbf{h}_3=(h_{31}, h_{32}, h_{33})^{T}[[\tilde{d}v+s_3u]]$. We first note that in the cylindrical coordinates from \eqref{e:cylindrical0} with the rescaled $u=r{\tilde u}$ {for $r\neq 0$} we have
\begin{align*}
\dot r &= \mu r + r^{2}[[\tilde d \cos(\varphi) + s_{3}{\tilde u}]](h_{31}\cos(\varphi) + h_{32}\sin(\varphi)),\\
\dot \varphi &= \omega + r[[\tilde d \cos(\varphi) + s_{3}{\tilde u}]](h_{32}\cos(\varphi) - h_{31}\sin(\varphi)),\\
\dot{\tilde u} &= \lambda_{3}{\tilde u} + \tilde{T}_{3} r[[\tilde d \cos(\varphi) + s_3{\tilde u}]].
\end{align*}
Following the notation of the proof of Theorem~\ref{t_per_orb} we have the estimate $|{\tilde u}_{\infty}| = \mathcal{O}(r_{\infty})$ and together with the expansion of the $[[\cdot]]$ terms from proof of Theorem \ref{Thm_Gen_Lin_Part}, we can write
\[
\dot r = \mu r + r^{2}[[{\tilde{d}}\cos(\varphi)]](h_{31}\cos(\varphi) + h_{32}\sin(\varphi)) + \mathcal{O}(r^{2} r_{\infty}).
\]
In the notation of Proposition~\ref{Thm_Gen}, in this case $\chi_{2}(\varphi)= [[{\tilde{d}}\cos(\varphi)]](h_{31}\cos(\varphi) + h_{32}\sin(\varphi))$, and according to Corollary \ref{hot2D} the bifurcating branch is given by \eqref{General_Result} with
\[
\int_{0}^{2\pi} \chi_{2}(\varphi) \mathrm{d} \varphi = {\tilde{d}|\tilde{d}|} h_{31} \int_{0}^{2\pi} \cos^{2}(\varphi)|\cos(\varphi)| \mathrm{d} \varphi
= \frac 8 3 {\tilde{d}|\tilde{d}|} h_{31} = \frac 8 3{|\tilde{d}|} \frac{{\tilde{d}}s_3\tilde{c}_{4}}{\det(\mathbf{T})}.
\]
Since $\tilde{d}\neq 0$ the direction of bifurcation is determined by the sign of ${\tilde{d}}s_3\tilde{c}_{4}\det(\mathbf{T})$.
Note that {$\tilde{d}$, $s_3$, $\det(\mathbf{T})$ are independent of $\tilde{c}_4$, and} ${\tilde{d}}s_3\tilde{c}_{4}\det(\mathbf{T})=0$ requires $s_{3}=0$ as discussed above, or $\tilde{c}_{4}=0$, which implies vanishing nonlinearity. Thus, in all degenerate cases the branch is vertical and we have proven the following.
\begin{theorem}\label{t:shym}
Any Hopf bifurcation in \eqref{System1Beregi} yields either a vertical branch of periodic solutions, or is super- or subcritical as in {Proposition \ref{Thm_Gen}}.
Using the above notation, the branch is vertical if and only if $\tilde{d}s_{3}\tilde{c}_{4}=0$, where $\tilde{d}=0$ means $u_{3}=v_{3}=0$. The bifurcation is supercritical if ${\tilde{d}}s_3\tilde{c}_{4}\det(\mathbf{T})<0$ and subcritical for positive sign. {In particular, reversing the sign of $\tilde{c}_4$ switches the criticality of the bifurcation.}
\end{theorem}
This conclusion is consistent with the results in \cite{SBeregi}.
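The switching of criticality with the sign of $\tilde{c}_4$ is readily observed in simulations. The following Python sketch uses hypothetical parameter values (not taken from \cite{SBeregi}), chosen so that $\mathbf{J}$ has eigenvalues $\pm i$ and $-1$ at $\tilde{c}_1=0$, namely $(\tilde{c}_1,\ldots,\tilde{c}_7)=(\tilde{c}_1,-2,1,\tilde{c}_4,1,1,-1)$; for this choice one checks $u_3\neq0$ and $s_3\neq0$, so the branch is not vertical. Slightly past the Hopf point, one sign of $\tilde{c}_4$ produces a small stable limit cycle while the other lets trajectories escape a neighborhood of the origin.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def run(c4, c1=0.02, T=600.0):   # hypothetical parameters, Hopf at c1 = 0
    J = np.array([[c1, -2.0, 1.0], [1.0, 0.0, 0.0], [1.0, 1.0, -1.0]])
    rhs = lambda t, x: J @ x + np.array([c4*x[2]*abs(x[2]), 0.0, 0.0])
    esc = lambda t, x: np.linalg.norm(x) - 2.0   # stop far from the origin
    esc.terminal = True
    sol = solve_ivp(rhs, [0.0, T], [0.05, 0.0, 0.05], events=esc,
                    rtol=1e-8, atol=1e-10)
    if sol.status == 1:
        return 'escapes (no nearby stable cycle)'
    amp = np.linalg.norm(sol.y[:, -100:], axis=0).max()
    return 'settles on a cycle of amplitude %.3f' % amp

for c4 in (1.0, -1.0):           # reversing c4 switches the criticality
    print(c4, run(c4))
\end{verbatim}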
\section{Discussion}
In this {paper} we have analyzed Hopf bifurcations in mildly non-smooth systems with piecewise smooth nonlinearity for which standard center manifold reduction and normal form computations cannot be used. By averaging and a direct approach we have derived explicit analogues of Lyapunov coefficients and have discussed some codimension-one degeneracies as well as the modified scaling laws.
In an upcoming paper we will apply these results to models for controlled ship maneuvering, where stabilization by p-control induces a Hopf bifurcation.
We believe this is an interesting class of equations from a theoretical as well as applied viewpoint, arising in a variety of models for macroscopic laws that lack smoothness in the nonlinear part. Among the perspectives, there is an analysis of normal form coefficients for other bifurcations in these models, such as
Bogdanov-Takens points. Particularly interesting is the impact on scaling laws, including exponentially small effects
for smooth vector fields.
\section{Introduction}\label{sec:IndepModelsWithSZ}
Huh \cite{huh2014} classified the varieties with rational maximum likelihood estimator
using Kapranov's Horn uniformization \cite{kapranov1991}. In spite of the classification,
it can be difficult to tell a priori whether a given model has rational MLE, or not.
Duarte, Marigliano, and Sturmfels \cite{duarte2019} have since applied Huh's
ideas to varieties that are the closure of discrete statistical models.
In the present paper, we study this problem for a family of discrete statistical models
called quasi-independence models, also commonly known as independence models with structural zeros.
Because quasi-independence models have a simple structure whose description is determined by a
bipartite graph, this is a natural test case for trying to apply Huh's theory.
Our complete classification of quasi-independence models with rational MLE
is the main result of the present paper (Theorems \ref{thm:Intro} and \ref{thm:Main}).
Let $X$ and $Y$ be two discrete random variables with $m$ and $n$ states, respectively.
Quasi-independence models describe the situation in which
some combinations of states of $X$ and $Y$ cannot occur together,
but $X$ and $Y$ are otherwise independent of one another.
This condition is known as quasi-independence in the statistics literature \cite{bishop2007}.
Quasi-independence models are basic models that arise
in data analysis with log-linear models.
For example, quasi-independence models arise
in the biomedical field as rater agreement models \cite{agresti1992, rapallo2005}
and in engineering to model system failures at nuclear plants \cite{colombo1988}.
There is a great deal of literature regarding hypothesis testing under the assumption
of quasi-independence, see, for example, \cite{bocci2019, goodman1994, smith1995}.
Results about existence and uniqueness of the maximum likelihood estimate
in quasi-independence models as well as
explicit computations in some cases can be found in \cite[Chapter~5]{bishop2007}.
In order to define quasi-independence models, let $S \subset [m] \times [n]$
be a set of indices, where $[m] = \{1,2, \ldots, m\}$.
These correspond to a matrix with structural zeros
whose observed entries are given by the indices in $S$.
We often use $S$ to refer to both the set of indices and the matrix
representation of this set and abbreviate the
ordered pairs $(i,j)$ in $S$ by $ij$. For all $r$, we denote by $\Delta_{r-1}$ the open $(r-1)$-dimensional probability simplex in $\mathbb{R}^r$,
\[
\Delta_{r-1} := \{ x \in \mathbb{R}^r \mid x_i > 0 \text{ for all } i \text{ and } \sum_{i=1}^r x_i = 1 \}.
\]
\begin{defn}
Let $S \subset [m] \times [n]$. Index the coordinates of $\mathbb{R}^{m+n}$ by $ (s_1, \dots, s_m, t_1, \dots, t_n) = (s,t)$. Let $\mathbb{R}^S$ denote the real vector space of dimension $\#S$ whose coordinates
are indexed by $S$.
Define the monomial map $\phi^S: \mathbb{R}^{m + n} \rightarrow \mathbb{R}^{S}$ by
\[
\phi^S_{ij}(s,t) = s_i t_j.
\]
The \emph{quasi-independence model} associated to $S$ is the model,
\[
\mathcal{M}_S := \phi^S(\mathbb{R}^{m+n}) \cap \Delta_{\#S -1}.
\]
\end{defn}
We note that the Zariski closure of $\mathcal{M}_S$ is a toric variety since it is parametrized by monomials.
To any quasi-independence model, we can associate a bipartite graph in the following way.
\begin{defn}
The \emph{bipartite graph associated to $S$}, denoted $G_S$, is the bipartite graph with independent sets $[m]$ and $[n]$ with an edge between $i$ and $j$ if and only if $(i,j) \in S$. The graph $G_S$ is \emph{chordal bipartite} if every cycle of length greater than or equal to 6 has a chord. The graph $G_S$ is \emph{doubly chordal bipartite} if every cycle of length greater than or equal to 6 has at least two chords. We say that $S$ is doubly chordal bipartite if $G_S$ is doubly chordal bipartite.
\end{defn}
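For small $S$, this property can be tested by brute force. The following Python sketch (ours; 0-indexed and exponential in $m,n$, so meant only for small examples) encodes a $2k$-cycle by distinct rows $r_1,\dots,r_k$ and columns $c_1,\dots,c_k$ with $(r_i,c_i),(r_{i+1},c_i)\in S$ (indices mod $k$), and counts the remaining pairs of $S$ on these rows and columns as chords.
\begin{verbatim}
from itertools import permutations

def doubly_chordal_bipartite(S, m, n):
    S = set(S)
    for k in range(3, min(m, n) + 1):          # cycles of length 2k >= 6
        for rows in permutations(range(m), k):
            for cols in permutations(range(n), k):
                if all((rows[i], cols[i]) in S and
                       (rows[(i + 1) % k], cols[i]) in S for i in range(k)):
                    chords = sum((r, c) in S
                                 for r in rows for c in cols) - 2*k
                    if chords < 2:
                        return False
    return True

# 3x3 grid with one structural zero at (2,2): True
S = [(0,0),(0,1),(0,2),(1,0),(1,1),(1,2),(2,0),(2,1)]
print(doubly_chordal_bipartite(S, 3, 3))
# a chordless 6-cycle (3x3 grid minus the diagonal): False
S6 = [(0,1),(0,2),(1,0),(1,2),(2,0),(2,1)]
print(doubly_chordal_bipartite(S6, 3, 3))
\end{verbatim}
By Theorem \ref{thm:Intro} below, the first of these models has rational MLE while the second does not.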
Let $u \in \mathbb{N}^{S}$ be a vector of counts of independent, identically distributed (iid) data.
The \emph{maximum likelihood estimate}, or MLE, for $u$ in $\mathcal{M}_S$ is the
distribution $\hat{p} \in \mathcal{M}_S$ that maximizes the probability of observing
the data $u$ over all distributions in the model.
We describe the maximum likelihood estimation problem in more detail in Section \ref{sec:LogLinModels}.
We say that $\mathcal{M}_S$ has \emph{rational MLE} if for generic choices of $u$,
the MLE for $u$ in $\mathcal{M}_S$ can be written as a rational function in the entries of $u$.
We can now state the key result of this paper.
\begin{thm}\label{thm:Intro}
Let $S \subset [m] \times [n]$ and let $\mathcal{M}_S$ be the associated quasi-independence model.
Let $G_S$ be the bipartite graph associated to $S$.
Then $\mathcal{M}_S$ has rational maximum likelihood estimate if and only if
$G_S$ is doubly chordal bipartite.
\end{thm}
Theorem \ref{thm:Main} is a strengthened version of Theorem \ref{thm:Intro}
in which we give an explicit formula for the MLE when $G_S$ is doubly chordal bipartite.
The outline of the rest of the paper is as follows.
In Section \ref{sec:LogLinModels}, we introduce general log-linear models
and their MLEs and discuss some key results on these topics.
In Section \ref{sec:FacialSubsets}, we discuss the notion of a facial submodel of a log-linear model
and prove that facial submodels of models with rational MLE also have rational MLE.
In Section \ref{sec:FacialSZ}, we apply the results of Section \ref{sec:FacialSubsets}
to show that if $G_S$ is not doubly chordal bipartite, then $\mathcal{M}_S$ does not have rational MLE.
The bulk of the paper is in Sections \ref{sec:Cliques}, \ref{sec:FixedColumn} and \ref{sec:BirchsThm},
where we show that if $G_S$ is doubly chordal bipartite, then the MLE is rational
and we give an explicit formula for it. Section \ref{sec:Cliques} covers combinatorial features
of doubly chordal bipartite graphs and gives the statement of the main Theorem \ref{thm:Main}.
Sections \ref{sec:FixedColumn} and \ref{sec:BirchsThm} are concerned with the verification
that the formula for the MLE is correct.
\section{Log-Linear Models and their Maximum Likelihood Estimates}\label{sec:LogLinModels}
In this section, we collect some results from the literature on log-linear models
and maximum likelihood estimation in these models. These
results will be important tools in the proof of Theorem \ref{thm:Main}.
Let $A \in \mathbb{Z}^{d \times r}$ with entries $a_{ij}$.
Denote by $\mathbf{1}$ the vector of all ones in $\mathbb{Z}^r$.
We assume throughout that $\mathbf{1} \in \mathrm{rowspan}(A)$.
\begin{defn}
The \emph{log-linear model} associated to $A$ is the set of probability distributions,
\[
\mathcal{M}_A := \{ p \in \Delta_{r-1} \mid \log p \in \mathrm{rowspan}(A)\}.
\]
\end{defn}
Algebraic and combinatorial tools are well-suited for the study of log-linear models
since these models have monomial parametrizations.
Define the map $\phi^A: \mathbb{R}^d \rightarrow \mathbb{R}^r$ by
\[
\phi^A_j (t_1, \dots, t_d) = \prod_{i=1}^d t_i^{a_{ij}}.
\]
Then we have that $\mathcal{M}_A = \phi^A(\mathbb{R}^d) \cap \Delta_{r-1}$.
Background on log-linear models can be found in \cite[Chapter~6.2]{sullivant2018}.
Denote by $\mathbb{C}[p] := \mathbb{C}[p_1, \dots, p_r]$ the polynomial ring in $r$ indeterminates.
Let $I_A \subset \mathbb{C}[p]$ denote the vanishing ideal of $\phi^A(\mathbb{R}^d)$
over the algebraically closed field $\mathbb{C}$.
Since $\phi^A$ is a monomial map, $I_A$ is a toric ideal.
For this reason, $\mathcal{M}_A$ is also known as a \emph{toric model}.
Some key properties of $I_A$ are summarized in the following proposition.
\begin{prop}[\cite{sullivant2018}, Proposition 6.2.4]
The toric ideal $I_A$ is a binomial ideal and
\[
I_A = \langle p^u - p^v \mid u,v \in \mathbb{N}^r \text{ and } Au = Av \rangle.
\]
If $\mathbf{1} \in \mathrm{rowspan}(A)$, then $I_A$ is homogeneous.
\end{prop}
Note that the quasi-independence model associated to a set
$S \subset [m] \times [n]$ is a log-linear model
with respect to matrix $A(S)$ constructed in the following way.
We have $A(S) \in \mathbb{Z}^{(m+n) \times \#S}$. The $ij$ column of $A(S)$, denoted $a^{ij}$,
has $k$th entry:
\[
a^{ij}_k = \begin{cases}
1, \text{ if } k = i \\
1, \text{ if } k = m + j \\
0, \text{ otherwise.}
\end{cases}
\]
In this way, $\mathcal{M}_S = \mathcal{M}_{A(S)}$.
Note that $\mathbf{1} \in \mathrm{rowspan}(A(S))$ for all $S$, since it can be written as the sum of the first $m$ rows of $A(S)$.
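For concreteness, $A(S)$ is easy to assemble; the following Python sketch (0-indexed) builds it for any $S$ and, for the $S$ of Example \ref{ex:HornMap} below, reproduces the matrix $A$ displayed there.
\begin{verbatim}
import numpy as np

def design_matrix(S, m, n):
    A = np.zeros((m + n, len(S)), dtype=int)
    for col, (i, j) in enumerate(S):
        A[i, col] = 1        # indicator of the row index i
        A[m + j, col] = 1    # indicator of the column index j
    return A

S = [(0,0),(0,1),(0,2),(1,0),(1,1),(1,2),(2,0),(2,1)]
print(design_matrix(S, 3, 3))
\end{verbatim}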
Given independent, identically distributed (iid) data $u \in \mathbb{N}^r$,
we wish to infer the distribution $p \in \mathcal{M}_A$ that is ``most likely" to have generated it.
This is the central problem of maximum likelihood estimation.
\begin{defn}
Let $\mathcal{M}$ be a discrete statistical model in $\mathbb{R}^r$ and let $u \in \mathbb{N}^r$ be an iid vector of counts.
The \emph{likelihood function} is
\[
L(p \mid u) = \prod_{i=1}^r p_i^{u_i}.
\]
The \emph{maximum likelihood estimate}, or MLE, for $u$ is the distribution in $\mathcal{M}$ that maximizes the likelihood function; that is,
it is the distribution
\[
\hat{p} =\underset{p \in \mathcal{M}}{\mathrm{argmax}} \ L(p \mid u).
\]
\end{defn}
Note that for a fixed $p \in \mathcal{M}$, $L(p \mid u)$ is, up to a multinomial coefficient that does not depend on $p$, the probability of observing
$u$ from the distribution $p$.
Hence, the MLE for $u$ is the distribution $\hat{p} \in \mathcal{M}$ that
maximizes the probability of observing $u$.
The map $u \mapsto \hat{p}$
is a function of the data known as the \emph{maximum likelihood estimator}.
We are particularly interested in the case when the
coordinate functions of the maximum likelihood estimator are rational functions of the data.
In this case, we say that $\mathcal{M}$ has \emph{rational MLE}.
The \emph{log-likelihood function} $\ell(p \mid u)$ is the natural logarithm of $L(p \mid u)$.
Note that since the natural log is strictly increasing, $\ell(p \mid u)$ and $L(p \mid u)$ have the same maximizers.
We define the \emph{maximum likelihood degree} (ML-degree) of $\mathcal{M}$ to be
the number of complex critical points of $\ell(p \mid u)$ over the Zariski closure of $\mathcal{M}$ for generic data $u$.
Huh and Sturmfels \cite{hs2014} show that the maximum likelihood degree is well-defined.
In particular, $\mathcal{M}$ has maximum likelihood degree 1 if and only if it has
rational maximum likelihood estimator \cite{huh2014}.
The following result of Huh gives a characterization of the form of this maximum likelihood estimator, when it exists.
\begin{thm}[\cite{huh2014}]\label{thm:Huh}
A discrete statistical model $\mathcal{M}$ has maximum likelihood degree 1 if and only if
there exists $h = (h_1, \dots, h_r) \in (\mathbb{C}^*)^r$, a positive integer $d$,
and a matrix $B \in \mathbb{Z}^{d \times r}$ with entries $b_{ij}$ whose column sums are zero
such that the map
\[
\Psi: \P^{r-1} \dashrightarrow (\mathbb{C}^*)^r
\]
with coordinate function
\[
\Psi_k(u_1, \dots, u_r) = h_k \prod_{i=1}^d \big(\sum_{j=1}^r b_{ij} u_j \big)^{b_{ik}}
\]
maps dominantly onto $\overline{\mathcal{M}}$. In this case, the function $\Psi$ is
the maximum likelihood estimator for $\mathcal{M}$.
\end{thm}
In this context, the pair $(B,h)$ is called the \emph{Horn pair} that defines $\Psi$, and $\Psi$ is called the \emph{Horn map}. For more details about the Horn map and its connection to the theory of $A$-discriminants, we refer the reader to \cite{duarte2019} and \cite{huh2014}.
\begin{ex}\label{ex:HornMap}
Consider the quasi-independence model
associated to
\[
S = \{ (1,1), (1,2), (1,3), (2,1), (2,2), (2,3), (3,1), (3,2) \}.
\]
This is the log-linear model whose defining matrix is
\[
A = \begin{bmatrix}
1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 \\
1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 & 1 &0 & 0 & 1 \\
0 & 0 & 1 & 0 & 0 & 1 & 0 & 0
\end{bmatrix}.
\]
We index the columns of $A$ by the ordered pairs in $S$
in the given order.
Note that we have $\mathcal{M}_S = \mathcal{M}_{A(S)}.$
Let $u \in \mathbb{N}^{S}$ be a vector of counts of iid data for the model $\mathcal{M}_S$.
According to Theorem \ref{thm:Intro}, $\mathcal{M}_S$ has rational MLE.
Theorem \ref{thm:Main} shows that the associated Horn pair is
\[
B = \begin{bmatrix}
1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 \\
1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 & 1 &0 & 0 & 1 \\
0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 \\
1 & 1 & 0 & 1 & 1 & 0 & 0 & 0 \\
-1 & -1 & -1 & -1 & -1 & -1 & 0 & 0 \\
-1 & -1 & 0 & -1 & -1 & 0 & -1 & -1 \\
-1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 \\
\end{bmatrix}
\]
with $h = (-1, -1, 1, -1, -1, 1, 1, 1)$. The columns of $B$ and the entries of $h$ are also indexed by the elements of $S$.
We can use this Horn pair to write the MLE as a rational function of the data.
Denote by $u_{++}$ the sum of all entries of $u$, and abbreviate each ordered pair $(i,j) \in S$ by $ij$.
Then for example, the $(1,3)$ coordinate of the MLE is
\begin{align*}
\hat{p}_{13} &= h_{13}(u_{11} + u_{12} + u_{13})^1(u_{13} + u_{23})^1(u_{11} + u_{12} + u_{13} + u_{21} + u_{22} + u_{23})^{-1} u_{++}^{-1} \\
&= \frac{(u_{11} + u_{12} + u_{13})(u_{13} + u_{23})}{u_{++} (u_{11} + u_{12} + u_{13} + u_{21} + u_{22} + u_{23})}.
\end{align*}
Similarly, the $(2,3)$ coordinate is
\[
\hat{p}_{23} = \frac{(u_{21} + u_{22} + u_{23})(u_{13} + u_{23})}{u_{++} (u_{11} + u_{12} + u_{13} + u_{21} + u_{22} + u_{23})}.
\]
\end{ex}
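Because the Horn map is a product of powers of linear forms in the data, it is easy to evaluate numerically. The following sketch (Python; our own verification code, not part of the paper) evaluates the Horn pair $(B, h)$ above on a vector of counts, and checks that the output is a probability distribution satisfying the sufficient-statistics condition of Birch's theorem stated below.
\begin{verbatim}
import numpy as np

# Horn pair (B, h) from the example; columns indexed by the pairs
# 11, 12, 13, 21, 22, 23, 31, 32 of S, in this order.
B = np.array([
    [ 1,  1,  1,  0,  0,  0,  0,  0],
    [ 0,  0,  0,  1,  1,  1,  0,  0],
    [ 0,  0,  0,  0,  0,  0,  1,  1],
    [ 1,  0,  0,  1,  0,  0,  1,  0],
    [ 0,  1,  0,  0,  1,  0,  0,  1],
    [ 0,  0,  1,  0,  0,  1,  0,  0],
    [ 1,  1,  0,  1,  1,  0,  0,  0],
    [-1, -1, -1, -1, -1, -1,  0,  0],
    [-1, -1,  0, -1, -1,  0, -1, -1],
    [-1, -1, -1, -1, -1, -1, -1, -1]])
h = np.array([-1., -1., 1., -1., -1., 1., 1., 1.])

def horn_map(B, h, u):
    # Psi_k(u) = h_k * prod_i (sum_j b_ij u_j)^(b_ik)
    lin = B @ u
    p = h.copy()
    for i in range(B.shape[0]):
        for k in range(B.shape[1]):
            p[k] *= lin[i] ** int(B[i, k])
    return p

u = np.array([5., 1., 2., 3., 4., 1., 2., 6.])
p = horn_map(B, h, u)
assert np.isclose(p.sum(), 1.0) and (p > 0).all()

# Birch's condition A(S)u = u_+ A(S)p for the matrix A above.
pairs = [(1,1),(1,2),(1,3),(2,1),(2,2),(2,3),(3,1),(3,2)]
A = np.array([[int(i == a) for (a, b) in pairs] for i in (1, 2, 3)]
             + [[int(j == b) for (a, b) in pairs] for j in (1, 2, 3)])
assert np.allclose(A @ u, u.sum() * (A @ p))
\end{verbatim}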
The following theorem, known as Birch's Theorem, says that the maximum likelihood estimate for $u$ in a log-linear model $\mathcal{M}_A$, if it exists, is the unique distribution $\hat{p}$ in $\mathcal{M}_A$ with the same sufficient statistics as the normalized data. A proof of this result can be found in \cite[Chapter~7]{sullivant2018}.
\begin{thm}[Birch's Theorem]\label{thm:Birch}
Let $A \in \mathbb{Z}^{n \times r}$ such that $\mathbf{1} \in \mathrm{rowspan}(A)$.
Let $u \in \mathbb{R}_{\geq 0}^r$ and let $u_+ = u_1 + \dots + u_r$.
Then the maximum likelihood estimate in the log-linear model $\mathcal{M}_A$ given data $u$
is the unique solution, if it exists, to the equations $Au = u_+Ap$ subject to $p \in \mathcal{M}_A$.
\end{thm}
\begin{ex*}[Example \ref{ex:HornMap}, continued]
Consider the last row $a_6$ of the matrix $A$.
One sufficient statistic of $\mathcal{M}_A$ is $a_6 \cdot u = u_{13} + u_{23}$.
We must check that $a_6 \cdot u = u_{++} a_6 \cdot \hat{p}$.
Indeed, we compute that
\begin{align*}
a_6 \cdot \hat{p} &= \frac{(u_{11} + u_{12} + u_{13})(u_{13} + u_{23})}{u_{++} (u_{11} + u_{12} + u_{13} + u_{21} + u_{22} + u_{23})} +
\frac{(u_{21} + u_{22} + u_{23})(u_{13} + u_{23})}{u_{++} (u_{11} + u_{12} + u_{13} + u_{21} + u_{22} + u_{23})} \\
&= (u_{13} + u_{23}) \frac{(u_{11} + u_{12} + u_{13} + u_{21} + u_{22} + u_{23})}{u_{++} (u_{11} + u_{12} + u_{13} + u_{21} + u_{22} + u_{23}) } \\
&= \frac{u_{13} + u_{23}}{u_{++}},
\end{align*}
as needed.
\end{ex*}
\section{Facial Submodels of Log-Linear Models}\label{sec:FacialSubsets}
In order to prove that a quasi-independence model with rational MLE
must have a doubly chordal bipartite associated graph $G_S$, we first prove a result
that applies to general log-linear models with rational MLE.
Let $A \in \mathbb{Z}^{n \times r}$ be the matrix defining the monomial map for the log-linear model $\mathcal{M}_A$.
Let $I_A$ denote the vanishing ideal of the Zariski closure of $\mathcal{M}_A$.
We assume throughout that $\mathbf{1} \in \mathrm{rowspan}(A)$.
Let $P_A = \mathrm{conv}(A)$, where $\mathrm{conv}(A)$ denotes the convex hull of the columns $\mathbf{a}_1, \dots, \mathbf{a}_r$ of $A$.
We assume throughout that $P_A$ has $n$ facets, $F_1, \dots, F_n$,
and that the $ij$ entry of $A$, denoted $a_{ij}$, is equal to the lattice distance
between the $j$th column of $A$ and facet $F_i$.
This is not a restriction, since one can always reparametrize a log-linear model in this way \cite[Theorem~27]{rauh2011}.
Indeed, given a polytope $Q$, a matrix $A$ that satisfies the above condition
is a \emph{slack matrix} of $Q$, and the convex hull of the columns of $A$ is
affinely isomorphic to $Q$ \cite{gouveia2013}.
Let ${\overline{A}}$ be a matrix whose columns are a subset of the columns of $A$.
Without loss of generality, assume that the columns of ${\overline{A}}$ are $\mathbf{a}_1, \dots, \mathbf{a}_s$.
\begin{defn}
The submatrix ${\overline{A}}$ is called a \emph{facial submatrix} of $A$ if $P_{{\overline{A}}}$ is a face of $P_A$. The corresponding statistical model $\mathcal{M}_{\overline{A}}$ is called a \emph{facial submodel}
of $\mathcal{M}_A$.\footnote{Note that the term ``facial submodel'' is a slight abuse of terminology because $\mathcal{M}_{\overline{A}}$
is not a submodel of $\mathcal{M}_A$. This is because the log-linear model $\mathcal{M}_A$ does
not include distributions on the boundary of the probability simplex. Technically,
$\mathcal{M}_{\overline{A}}$ is a submodel of the closure of $\mathcal{M}_A$.}
\end{defn}
Let $\mathbf{e}_i$ denote the $i$th standard basis vector in $\mathbb{R}^n$.
Then $\mathbf{e}_i \cdot \mathbf{a}_j = 0$ if $\mathbf{a}_j$ lies on $F_i$ and
$\mathbf{e}_i \cdot \mathbf{a}_j \geq 1$ otherwise.
So under our assumptions on $A$, this definition of a facial submatrix of $A$
aligns with the one given in \cite{geiger2006} and \cite{rauh2011}.
We prove the following result
concerning the maximum likelihood estimator for $\mathcal{M}_{{\overline{A}}}$
when $\bar{A}$ is a facial submatrix of $A$.
This result was used implicitly in the proof of Theorem 4.4 of \cite{geiger2006}.
\begin{thm}\label{thm:FacialSubsetMLE}
Let $A \in \mathbb{Z}^{n \times r}$ and let ${\overline{A}} \in \mathbb{Z}^{n \times s}$ consist of the first $s$ columns of $A$.
Suppose that ${\overline{A}}$ is a facial submatrix of $A$.
Let $\mathcal{M}_A$ have rational maximum likelihood estimator $\Psi$ given by the Horn pair $(B,h)$ where $B \in \mathbb{Z}^{d \times r}$
and $h \in (\mathbb{C}^*)^r$.
Let $\overline{B}$ denote the submatrix consisting of the first $s$ columns of $B$ and let $\bar{h} = (h_1, \dots, h_s)$.
Then $\mathcal{M}_{{\overline{A}}}$ has rational maximum likelihood estimator $\overline{\Psi}$ given by the Horn pair $(\overline{B},\bar{h})$.
\end{thm}
In order to prove Theorem \ref{thm:FacialSubsetMLE}, we check the conditions of Birch's theorem.
We do this using the following lemmas.
\begin{lemma}\label{lem:InVariety}
Let $\overline{\Psi}$ be as in Theorem \ref{thm:FacialSubsetMLE}.
Then for generic $\overline{u} \in \mathbb{R}_{\geq 0}^s$, $\overline{\Psi}(\overline{u})$ is defined.
In this case, $\overline{\Psi}(\overline{u})$ is in the
Zariski closure of $\mathcal{M}_{{\overline{A}}}$.
\end{lemma}
\begin{proof}
Let $u \in \mathbb{R}^r_{\geq 0}$ be given by $u_i = \overline{u}_i$ if $i \leq s$ and $u_i = 0$ if $i > s$.
We claim that when $\Psi(u)$ is defined,
$\overline{\Psi}_k(\overline{u}) = \Psi_k(u)$ for $k \leq s$.
Indeed, each factor of $\Psi_k(u)$ is of the form
\[
\big( \sum_{j =1}^r b_{ij} u_j \big)^{b_{ik}}
\]
for each $i =1, \dots, d$. If the $i$th factor of $\Psi_k$ is not
identically equal to one, then $b_{ik} \neq 0$.
So the $i$th factor has the nonzero summand $b_{ik}u_k$ and is generically nonzero
when evaluated at a point $u$ of the given form.
In particular, this implies that $\Psi_k(u)$ is defined for a generic $u$
of the given form since having $u_j = 0$ for $j > s$ does not make any factor of $\Psi_k$ identically equal to zero.
Setting each $b_{ij} = 0$ when $j > s$ gives that $\overline{\Psi}_k(\overline{u}) = \Psi_k(u)$ when $k \leq s$.
The elements of $I_{{\overline{A}}}$ are those elements of $I_A$ that belong to the polynomial ring $\mathbb{C}[p_1, \dots, p_s]$. Let $f \in I_{{\overline{A}}}$.
Since $f \in I_A$ as well, $f(\overline{\Psi}(\overline{u})) = f(\Psi(u)) = 0$, as needed.
\end{proof}
Next we check that the sufficient statistics ${\overline{A}} \overline{u} / \overline{u}_+$ are equal to those of $\overline{\Psi}(\overline{u})$.
\begin{lemma}\label{lem:SuffStats}
Let $\overline{\bfc}$ be a row of ${\overline{A}}$. Then
\[
\frac{\overline{\bfc} \cdot \overline{u}}{\overline{u}_+} = \overline{\bfc} \cdot \overline{\Psi}(\overline{u}).
\]
\end{lemma}
\begin{proof}
Let $\mathbf{c}$ be the row of $A$ corresponding to $\overline{\bfc}$.
Define a sequence $u^{(i)} \in \mathbb{R}^r_{\geq 0}$ by
\[
u_j^{(i)} = \begin{cases}
\overline{u}_j & \text{ if } j \leq s \\
\epsilon^{(i)}_j & \text{ if } j > s,
\end{cases}
\]
where $\lim_{i \rightarrow \infty} \epsilon^{(i)}_j = 0$ for each $j$.
We choose each $\epsilon^{(i)}_j > 0$ generically so that $\Psi(u^{(i)})$ is defined for all $i$.
Since $\overline{u}$ is generic, we have that $\lim_{i \rightarrow \infty} u_+^{(i)} = \overline{u}_+ \neq 0$.
Similarly, we have that $\lim_{i\rightarrow\infty} \mathbf{c} \cdot u^{(i)} = \overline{\bfc} \cdot \overline{u}$.
So
\[
\lim_{i\rightarrow\infty} \frac{\mathbf{c} \cdot u^{(i)}}{u^{(i)}_+} = \frac{\overline{\bfc} \cdot \overline{u}}{\overline{u}_+}.
\]
Since $\Psi(u^{(i)})$ is the maximum likelihood estimate in $\mathcal{M}_A$ for each $u^{(i)}$,
by Birch's theorem we have that
\begin{align*}
\frac{\mathbf{c} \cdot u^{(i)}}{u^{(i)}_+} &= \mathbf{c} \cdot \Psi(u^{(i)}) \\
&= \sum_{j=1}^s c_j \Psi_j(u^{(i)}) + \sum_{j = s+1}^r c_j \Psi_j(u^{(i)}).
\end{align*}
By the arguments in the proof of Lemma \ref{lem:InVariety}, when $k \leq s$,
no factor of $\Psi_k(u^{(i)})$ involves only summands $u_j^{(i)}$ for $j > s$.
So $\lim_{i\rightarrow\infty}\Psi_k(u^{(i)}) = \overline{\Psi}_k(\overline{u})$.
Finally, we claim that for $k > s$, $\lim_{i \rightarrow \infty} \Psi_k(u^{(i)}) = 0$.
Without loss of generality, we may assume that $P_{{\overline{A}}}$ is a facet of $P_A$.
Indeed, if it were not, we could simply iterate these arguments over a
saturated chain of faces between $P_{{\overline{A}}}$ and $P_A$ in the face lattice of $P_A$.
Let $\boldsymbol\alpha = (a_1, \dots, a_r)$ be the row of $A$ corresponding to the facet $P_{{\overline{A}}}$ of $P_A$.
Then $a_j= 0$ if $j \leq s$ and $a_j \geq 1$ if $j > s$.
Since $\Psi(u^{(i)})$ is the maximum likelihood estimate in $\mathcal{M}_A$ for $u^{(i)}$,
by Birch's theorem we have that
\begin{align*}
\boldsymbol\alpha \cdot \Psi(u^{(i)}) &= \frac{1}{u^{(i)}_+} (a_{s+1} u^{(i)}_{s+1} + \dots + a_r u^{(i)}_r) \\
&= \frac{1}{u^{(i)}_+} (a_{s+1} \epsilon^{(i)}_{s+1} + \dots + a_r \epsilon^{(i)}_r).
\end{align*}
Since $\overline{u}_+ \neq 0$, we also have that
\begin{align*}
\lim_{i \rightarrow \infty} \boldsymbol\alpha \cdot \Psi(u^{(i)})
& = \lim_{i \rightarrow \infty} \frac{1}{u^{(i)}_+} (a_{s+1} \epsilon^{(i)}_{s+1} + \dots + a_r \epsilon^{(i)}_r) \\
&= \frac{1}{\overline{u}_+} \lim_{i\rightarrow\infty}(a_{s+1} \epsilon^{(i)}_{s+1} + \dots + a_r \epsilon^{(i)}_r) \\
&= 0.
\end{align*}
Furthermore, for all $i$ and $k$, $\Psi_k(u^{(i)}) > 0$.
So $\lim_{i \rightarrow \infty}\Psi_k(u^{(i)}) \geq 0$. Since each $a_k > 0$ for $k > s$,
this implies that $\lim_{i \rightarrow \infty} \Psi_k(u^{(i)}) = 0$ for all $k > s$.
So we have that
\begin{align*}
\frac{\overline{\bfc} \cdot \overline{u}}{\overline{u}_+} &= \lim_{i \rightarrow \infty} \frac{\mathbf{c} \cdot u^{(i)}}{u^{(i)}_+} \\
&= \lim_{i \rightarrow\infty} \mathbf{c} \cdot \Psi(u^{(i)}) \\
&= \overline{\bfc} \cdot \overline{\Psi}(\overline{u}) + \sum_{j=s+1}^r c_j \big( \lim_{i \rightarrow \infty} \Psi_j(u^{(i)}) \big) \\
&= \overline{\bfc} \cdot \overline{\Psi}(\overline{u}),
\end{align*}
as needed.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:FacialSubsetMLE}]
First, note that $\overline{\Psi}$ is still a rational function of degree zero since deleting columns of $B$ does not affect the remaining column sums.
So $(\overline{B}, \bar{h})$ is a Horn pair.
By Lemma \ref{lem:InVariety}, we have that $\overline{\Psi}(\overline{u}) \in \overline{\mathcal{M}_{{\overline{A}}}}$.
Since $\mathbf{1} \in \mathrm{rowspan}({\overline{A}})$, it follows from Lemma \ref{lem:SuffStats}
that $\sum_{k=1}^s \overline{\Psi}_k(\overline{u}) = 1$.
Defining a sequence $\{u^{(i)} \}_{i=1}^{\infty}$ as in the proof of Lemma \ref{lem:SuffStats},
we have that $\overline{\Psi}_k(\overline{u}) = \lim_{i \rightarrow\infty} \Psi_k(u^{(i)})$.
So $\overline{\Psi}_k(\overline{u}) \geq 0$ since each $\Psi_k(u^{(i)}) > 0$.
Furthermore, for generic choices of $\overline{u}$, we cannot have $\overline{\Psi}_k(\overline{u}) = 0$.
Indeed, for $k \leq s$, the $i$th factor of $\overline{\Psi}_k(\overline{u})$ has nonzero summand $b_{ik} u_k$.
So none of these factors is zero for generic choices of $u$ of the given form.
Therefore $\overline{\Psi}(\overline{u}) \in \mathcal{M}_{{\overline{A}}} = \overline{\mathcal{M}_{{\overline{A}}}} \cap \Delta_{s-1}$.
By Lemma \ref{lem:SuffStats},
\[
\frac{{\overline{A}} \cdot \overline{u}}{\overline{u}_+} = {\overline{A}} \cdot \overline{\Psi}(\overline{u}).
\]
So by Birch's theorem, $\overline{\Psi}$ is the maximum likelihood estimator for $\mathcal{M}_{{\overline{A}}}$.
\end{proof}
Note that $\overline{\Psi}$ is a dominant map. Indeed, for generic $p \in \mathcal{M}_{{\overline{A}}}$,
$\overline{\Psi}(p)$ is defined. Since $p$ is a probability distribution, $p_{+} = 1$.
By Birch's Theorem, $p$ is the MLE for data vector $p$.
So $\overline{\Psi}(p) = p$.
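Theorem \ref{thm:FacialSubsetMLE} can be spot-checked numerically. In the sketch below (Python; our own code, not part of the paper), we restrict the Horn pair of Example \ref{ex:HornMap} to its first six columns; these span the face $x_3 = 0$ of $P_A$, so the facial submodel is the $2 \times 3$ independence model, and the restricted Horn map should reproduce the classical independence MLE $\bar{u}_{i+}\bar{u}_{+j}/\bar{u}_{++}^2$.
\begin{verbatim}
import numpy as np

B = np.array([
    [ 1,  1,  1,  0,  0,  0,  0,  0],
    [ 0,  0,  0,  1,  1,  1,  0,  0],
    [ 0,  0,  0,  0,  0,  0,  1,  1],
    [ 1,  0,  0,  1,  0,  0,  1,  0],
    [ 0,  1,  0,  0,  1,  0,  0,  1],
    [ 0,  0,  1,  0,  0,  1,  0,  0],
    [ 1,  1,  0,  1,  1,  0,  0,  0],
    [-1, -1, -1, -1, -1, -1,  0,  0],
    [-1, -1,  0, -1, -1,  0, -1, -1],
    [-1, -1, -1, -1, -1, -1, -1, -1]])
h = np.array([-1., -1., 1., -1., -1., 1., 1., 1.])

def horn_map(B, h, u):
    lin = B @ u
    p = h.copy()
    for i in range(B.shape[0]):
        for k in range(B.shape[1]):
            p[k] *= lin[i] ** int(B[i, k])   # note 0.0 ** 0 == 1.0
    return p

# Keep the pairs not involving row 3: the face x_3 = 0 of P_A.
Bbar, hbar = B[:, :6], h[:6]
ubar = np.array([5., 1., 2., 3., 4., 1.])
pbar = horn_map(Bbar, hbar, ubar)

# The facial submodel is 2 x 3 independence, with its classical MLE.
U = ubar.reshape(2, 3)
assert np.allclose(pbar, np.outer(U.sum(1), U.sum(0)).ravel() / U.sum() ** 2)
\end{verbatim}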
We close this section by noting that we believe that a natural
generalization of Theorem \ref{thm:FacialSubsetMLE} is also true.
\begin{conj}
Let $A \in \mathbb{Z}^{n \times r}$ and ${\overline{A}} \in \mathbb{Z}^{n \times s}$ a facial submatrix of $A$.
Then the maximum likelihood degree of $\mathcal{M}_A$ is greater than or equal to the
maximum likelihood degree of $\mathcal{M}_{\overline{A}}$.
\end{conj}
\section{Quasi-independence Models with Non-Rational MLE}\label{sec:FacialSZ}
In this section, we show that when $S$ is not doubly chordal bipartite,
the ML-degree of $\mathcal{M}_S$ is strictly greater than one.
We can apply Theorem \ref{thm:FacialSubsetMLE} to quasi-independence models
whose associated bipartite graphs are not doubly chordal bipartite using cycles and the following ``double square'' structure.
\begin{figure}
\begin{center}
\caption{The double-square graph associated to the matrix in Example \ref{ex:DoubleSquare}}
\label{Fig:DoubleSquare}
\begin{tikzpicture}
\draw(1,1) -- (0,1)--(0,0)--(2,0)--(2,1)--(1,1)--(1,0);
\draw [fill] plot [only marks, mark=square*] coordinates {(0,0) (1,1) (2,0)};
\draw [fill = white] plot [only marks, mark size=2.5, mark=*] coordinates { (0,1) (1,0) (2,1)};
\node[below] at (0,0) {1};
\node[above] at (1,1) {2};
\node[below] at (2,0) {3};
\node[above] at (0,1) {1};
\node[below] at (1,0) {2};
\node[above] at(2,1){3};
\end{tikzpicture}
\end{center}
\end{figure}
\begin{ex}\label{ex:DoubleSquare}
The minimal example of a chordal bipartite graph that is not doubly chordal bipartite is the \emph{double-square graph}.
The matrix of the double-square graph has the form
\[
\begin{bmatrix}
\star & \star & 0 \\
\star & \star & \star \\
0 & \star & \star
\end{bmatrix},
\]
or any permutation of the rows and columns of this matrix. The resulting graph, pictured in Figure \ref{Fig:DoubleSquare}, is two squares joined along an edge.
This is a 6-cycle with exactly one chord and as such, is not doubly chordal bipartite.
\end{ex}
\begin{rmk}
A bipartite graph is doubly chordal bipartite if and only if it is chordal bipartite and does not have the double-square graph as an induced subgraph.
\end{rmk}
We now compute the maximum likelihood degree of models associated to the double square and to cycles of length greater than or equal to 6.
\begin{prop}\label{prop:DSMLdeg}
The maximum likelihood degree of the quasi-independence model
whose associated graph is the double square is 2.
\end{prop}
\begin{proof}
Without loss of generality, let
\[
S = \{11, 12, 21, 22, 23, 32,33\},
\]
so that $G_S$ is a double-square graph. Then the vanishing ideal of $\mathcal{M}_S$ is the ideal
$I(\mathcal{M}_S)\subset \mathbb{C}[p_{ij} \mid ij \in S]$ given by
\[
I(\mathcal{M}_S) = \langle p_{11}p_{22} - p_{12}p_{21}, p_{22}p_{33} - p_{23}p_{32} \rangle.
\]
Define the hyperplane arrangement
\[
\mathcal{H} := \{ p \in \mathbb{C}^S \mid p_{++} \prod_{ij \in S} p_{ij} = 0 \},
\]
where $p_{++}$ denotes the sum of all the coordinates of $p$. Then Proposition 7 of \cite{amendola2019}
implies that the ML-degree of $\mathcal{M}_S$ is the number of solutions to the system
\[
I(\mathcal{M}_S) + \langle A(S) u - u_+ A(S) p \rangle
\]
that lie outside of $\mathcal{H}$ for generic $u$.
Since $A(S)$ encodes the row and column marginals of $u$,
$u_{++}$ times the MLE for $u$ can be written in matrix form as
\[
\begin{bmatrix} u_{11} + \alpha & u_{12} - \alpha & 0 \\
u_{21} - \alpha & u_{22} + \alpha + \beta & u_{23} - \beta \\
0 & u_{32} - \beta & u_{33} + \beta
\end{bmatrix}
\]
for some $\alpha$ and $\beta$.
So computing the MLE is equivalent
to solving for $\alpha$ and $\beta$ in the system
\begin{align*}
(u_{11} + \alpha)(u_{22} + \alpha + \beta) - (u_{12} - \alpha) (u_{21} - \alpha) &= 0 \\
(u_{22} + \alpha + \beta)(u_{33} + \beta) - (u_{23} - \beta) (u_{32} - \beta) &= 0.
\end{align*}
Expanding gives two equations of the form
\begin{align}\label{eqn:DoubleSquareSystem}
\alpha \beta + c_1 \alpha + c_2 \beta + c_3 &= 0 \\
\alpha \beta + d_1 \alpha + d_2 \beta + d_3 &= 0, \nonumber
\end{align}
where each $c_i, d_i$ are polynomials in the entries of $u$.
Solving for $\alpha = -(c_2 \beta + c_3)/(\beta + c_1)$ in the first equation of (\ref{eqn:DoubleSquareSystem})
and substituting into the second gives a degree 2 function of $\beta$, which can have at most two solutions.
Indeed, for generic choices of $u$, this equation has exactly two solutions, neither of which lies on $\mathcal{H}$.
For example, take $u_{11} = u_{12} = u_{21} = u_{22} = 1$ and $u_{23} = u_{32} = u_{33} = 2$.
By performing this substitution in (\ref{eqn:DoubleSquareSystem}) with these values for $u$,
we obtain the degree 2 equation
\begin{equation}\label{eqn:DoubleSquareFxn}
\frac{-\beta^2}{\beta+4} - \frac{2 \beta}{\beta + 4} + 7\beta - 2 = 0.
\end{equation}
After clearing denominators, we obtain that $3\beta^2 + 12 \beta - 4 = 0$.
This polynomial has two distinct roots, neither of which gives rise to a solution on $\mathcal{H}$,
and (\ref{eqn:DoubleSquareFxn}) is defined at both of these roots.
These are generic conditions on the data; so since there exists a $u$ for which
(\ref{eqn:DoubleSquareSystem}) has exactly two solutions,
the ML-degree of $\mathcal{M}_S$ is 2.
\end{proof}
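The critical equations above are simple enough to hand to a computer algebra system. The following sketch (Python with \texttt{sympy}; our own verification code, not part of the paper) recovers both critical points and, after eliminating $\alpha$, the quadratic $3\beta^2 + 12\beta - 4$ up to a scalar.
\begin{verbatim}
import sympy as sp

a, b = sp.symbols('alpha beta')
u11 = u12 = u21 = u22 = 1
u23 = u32 = u33 = 2
eq1 = sp.expand((u11 + a)*(u22 + a + b) - (u12 - a)*(u21 - a))
eq2 = sp.expand((u22 + a + b)*(u33 + b) - (u23 - b)*(u32 - b))
sols = sp.solve([eq1, eq2], [a, b], dict=True)
print(len(sols))            # 2 critical points: the ML-degree is 2

# Eliminating alpha recovers the univariate equation from the text,
# up to a constant: 2*(3*beta**2 + 12*beta - 4).
print(sp.factor(sp.resultant(eq1, eq2, a)))
\end{verbatim}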
\begin{prop}\label{prop:CycleMLdeg}
Let $S_k \subset [k] \times [k]$ be a collection of indices such that $G_{S_k}$ is a cycle of length $2k$.
Then the ML-degree of $\mathcal{M}_{S_k}$ is $k$ if $k$ is odd and $(k-1)$ if $k$ is even.
\end{prop}
\begin{proof}
Without loss of generality, we may assume that $S_k = \{ (i,i) \mid i \in [k] \} \cup \{ (i, i+1) \mid i \in [k-1]\} \cup \{(k,1)\}.$
Since $G_{S_k}$ consists of a single cycle, the ideal $I(\mathcal{M}_{S_k})$ is principal. Indeed, it is given by
\begin{equation}
I(\mathcal{M}_{S_k}) = \langle \prod_{i=1}^k p_{i,i} - \prod_{i=1}^{k}p_{i,i+1} \rangle,
\end{equation}
where we set $p_{k,k+1} = p_{k,1}$.
Let $\mathcal{H}$ be the hyperplane arrangement,
\[
\mathcal{H} = \{ p \mid p_{++} \prod_{ij \in S} p_{ij} = 0 \}.
\]
By Proposition 7 of \cite{amendola2019}, the ML-degree of $\mathcal{M}_{S_k}$ is the number of solutions to
\begin{equation}\label{eqn:CycleMLdeg}
I(\mathcal{M}_{S_k}) + \langle A(S_k)u - u_+ A(S_k) p \rangle
\end{equation}
that lie outside of $\mathcal{H}$ for generic $u$.
The sufficient statistics of $u$ are of the form $u_{i,i} + u_{i,i+1}$ and
$u_{i-1,i} + u_{i,i}$ where we set $u_{0,1} = u_{k,1}$.
So computing solutions to Equation (\ref{eqn:CycleMLdeg}) is equivalent to
solving for $\alpha \in \mathbb{C}$ in the equation
\begin{equation}\label{eqn:CyclePolynomial}
\prod_{i=1}^k (u_{i,i} + \alpha) - \prod_{i=1}^{k} (u_{i,i+1} - \alpha) = 0.
\end{equation}
The MLE is then of the form $p_{i,i} = (u_{i,i} + \alpha)/u_{++}$ and $p_{i,i+1} = (u_{i,i+1} - \alpha)/u_{++}$.
The degree of this polynomial is $k$ when $k$ is odd and $k-1$ when $k$ is even: the leading terms $\alpha^k$ and $(-\alpha)^k$ cancel exactly when $k$ is even, in which case the coefficient of $\alpha^{k-1}$ is $u_{++}$, which is nonzero for generic $u$.
Furthermore, we claim that for generic $u$, none of these solutions lie in $\mathcal{H}$.
Indeed, without loss of generality, suppose that $\bar{p}$ is a solution to (\ref{eqn:CycleMLdeg})
with $\bar{p}_{1,1} = 0$. Then we have that $\alpha = -u_{1,1}$.
So the first term of (\ref{eqn:CyclePolynomial}) is 0.
But then there exists an $i$ such that
\[
u_{i,i+1} - \alpha = u_{i,i+1} + u_{1,1} = 0,
\]
which is a non-generic condition on $u$.
Similarly, since $u$ is generic, we may assume that $u_{++} \neq 0$.
But if $\bar{p}_{++} = 0$, then since each $\bar{p}_{i,i} = (u_{i,i} + \alpha) / u_{++}$ and
$\bar{p}_{i,i+1} = (u_{i,i+1} - \alpha) / u_{++}$, this implies that $u_{++} = 0$, which is a contradiction.
So for generic values of $u$, the roots of (\ref{eqn:CyclePolynomial}) give rise to exactly $k$, resp. $k-1$,
solutions to (\ref{eqn:CycleMLdeg})
that lie outside of $\mathcal{H}$. So the ML-degree of $\mathcal{M}_{S_k}$ is $k$ if $k$ is odd and $k-1$ if $k$ is even.
\end{proof}
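The degree count in this proof is easy to test symbolically. A minimal sketch (Python with \texttt{sympy}; our own code, with random positive data standing in for generic $u$) checks the parity behavior of the degree of (\ref{eqn:CyclePolynomial}) for several cycle lengths.
\begin{verbatim}
import random
from functools import reduce
from operator import mul
import sympy as sp

alpha = sp.symbols('alpha')
random.seed(1)
for k in range(3, 9):
    diag = [random.randint(1, 30) for _ in range(k)]   # the u_{i,i}
    off = [random.randint(1, 30) for _ in range(k)]    # the u_{i,i+1}
    f = reduce(mul, [d + alpha for d in diag]) \
        - reduce(mul, [o - alpha for o in off])
    deg = sp.Poly(sp.expand(f), alpha).degree()
    assert deg == (k if k % 2 == 1 else k - 1)
\end{verbatim}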
\begin{thm}\label{thm:NotDCB}
Let $S$ be such that $G_S$ is not doubly chordal bipartite.
Then $\mathcal{M}_S$ does not have rational MLE.
\end{thm}
\begin{proof}
Suppose that $G_S$ is not doubly chordal bipartite.
Then it has an induced subgraph $H$ that is either a double square or a cycle of length greater than or equal to 6.
Without loss of generality, let the edge set $E(H)$ be a subset of $[k] \times [k]$.
Let $A = A(S)$ and let ${\overline{A}}$ be the submatrix of $A$ consisting of the columns indexed by elements of $E(H)$.
Let the coordinates of $P_A$ and $P_{{\overline{A}}}$ be indexed by $(x_1, \dots, x_m, y_1, \dots, y_n)$.
We claim that ${\overline{A}}$ is a facial submatrix of $A$. Indeed, ${\overline{A}}$ consists of exactly the
vertices of $P_A$ that satisfy $x_i = 0$ for $k < i \leq m$ and $y_j = 0$ for $k < j \leq n$.
Since $P_A$ is a 0/1 polytope, the inequalities $x_i \geq 0$ and $y_j \geq 0$ are valid. So this constitutes a face of $P_A$.
Therefore, by Propositions \ref{prop:DSMLdeg} and \ref{prop:CycleMLdeg},
$A$ has a facial submatrix ${\overline{A}}$ such that $\mathcal{M}_{{\overline{A}}}$ has ML-degree strictly greater than 1.
So by Theorem \ref{thm:FacialSubsetMLE}, the ML-degree of $\mathcal{M}_A = \mathcal{M}_S$ is also strictly greater than 1,
as needed.
\end{proof}
\section{The Clique Formula for the MLE}\label{sec:Cliques}
In this section we state the main result of the paper, which gives
the specific form of the rational maximum likelihood estimates for
quasi-independence models when they exist. These
are described in terms of the complete bipartite subgraphs of the associated graph $G_S$.
A complete bipartite subgraph of $G_S$ corresponds to an entirely nonzero submatrix of $S$.
This motivates our use of the word ``clique'' in the following definition.
\begin{defn}
A set of indices $C = \{i_1, \dots, i_r\} \times \{j_1, \dots, j_s\}$ is a \emph{clique} in $S$ if $(i_{\alpha},j_{\beta}) \in S$ for all $1 \leq \alpha \leq r$ and $1 \leq \beta \leq s$. A clique $C$ is maximal if it is not contained in any other clique in $S$.
\end{defn}
We now describe some important sets of cliques in $S$.
\begin{notation}\label{not:MaxInt}
For every pair of indices $(i,j) \in S$, we let $\mathrm{Max}(ij)$ be the set of all maximal cliques in $S$ that contain $(i,j)$.
We let $\mathrm{Int}(ij)$ be the set of all containment-maximal pairwise intersections of elements of $\mathrm{Max}(ij)$.
Similarly, we let $\mathrm{Max}(S)$ denote the set of all maximal cliques in $S$
and $\mathrm{Int}(S)$ denote the set of all maximal intersections of maximal cliques in $S$.
\end{notation}
\begin{ex}\label{ex:RunningExample}
Let $m = 8$ and $n = 9$. Consider the set of indices
\[
S = \{11, 12, 21, 22, 23, 28, 31,32,33,34,41,45,51,56,57,65,76,86,87, 89 \},
\]
where we replace $(i,j)$ with $ij$ for the sake of brevity. The corresponding matrix with structural zeros is
\[
\begin{bmatrix}
\star & \star & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\star & \star & \star & 0 & 0 & 0 & 0 & \star & 0 \\
\star & \star & \star & \star & 0 & 0 & 0 & 0 & 0\\
\star & 0 & 0 & 0 & \star & 0 & 0 & 0 & 0\\
\star & 0 & 0 & 0 & 0 & \star & \star & 0 & 0\\
0 & 0 & 0 & 0 & \star & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & \star & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & \star & \star & 0 & \star
\end{bmatrix}.
\]
We will use this as a running example. The bipartite graph $G_S$ associated to $S$ is pictured in Figure \ref{Fig:RunningGraph}.
In this figure, we use white circles to denote vertices corresponding to rows in $S$ and black squares to denote vertices corresponding to columns in $S$. Note that $G_S$ is doubly chordal bipartite since its only cycle of length 6 has two chords.
In this case, the set of maximal cliques in $S$ is
\begin{align*}
\mathrm{Max}(S) =& \big\{ \{11, 21, 31, 41, 51\}, \{11, 12, 21, 22, 31, 32\}, \{21, 22, 23, 31, 32, 33\}, \{21, 22, 23, 28\}, \\
& \quad \{31, 32, 33, 34\}, \{41, 45\}, \{51, 56, 57\}, \{45,65\}, \{56,76, 86\}, \{56,57,86,87\}, \{86, 87, 89\} \big\}.
\end{align*}
The set of maximal intersections of maximal cliques in $S$ is
\begin{align*}
\mathrm{Int}(S) =& \big\{ \{11, 21, 31\}, \{21, 22, 31,32\}, \{21, 22, 23\}, \{31,32,33\}, \{41\}, \{51\}, \{45\}, \\
& \quad \{56,57\}, \{56,86\}, \{86,87\} \big\}.
\end{align*}
Note, for example, that $\{31,32\}$ is the intersection of the two maximal cliques $\{11, 12, 21, 22, 31, 32\}$ and $\{31, 32, 33, 34\}$.
However it is not in $\mathrm{Int}(S)$ because it is properly contained in the intersection of maximal cliques,
\[
\{11, 12, 21, 22, 31, 32\} \cap \{21, 22, 23, 31, 32, 33\} = \{21, 22, 31, 32\}.
\]
\begin{figure}
\begin{center}
\begin{tikzpicture}
\draw (5,0) -- (5,1)--(6.5,2) -- (6.5,3)--(5,4)--(3.5,3) -- (3.5,2) -- (5,1) -- (5,4)--(7,5)--(10,5);
\draw(3.5,2)--(6.5,3);
\draw(0,5)--(3,5)--(3,4)--(1.5,4)--(1.5,5);
\draw(3,5)--(5,4);
\draw (1.5,4)--(1.5,3);
\draw(6.5,3)--(8,3);
\draw [fill] plot [only marks, mark=square*] coordinates {(5,4) (3.5,2) (6.5,2) (5,0) (8.5,5) (1.5,5) (3,4) (1.5,3) (8,3)};
\draw [fill = white] plot [only marks, mark size=2.5, mark=*] coordinates { (3.5,3) (6.5,3) (5,1) (7,5) (3,5) (10,5) (0,5) (1.5,4)};
\node[left] at (3.5,3) {1};
\node[above] at (6.5,3){2};
\node[right] at (5,1){3};
\node[above] at (7,5){4};
\node[above] at (3,5){5};
\node[above] at (10,5) {6};
\node[above] at (0,5) {7};
\node[left] at (1.5,4) {8};
\node[above] at (5,4) {1};
\node[left] at (3.5,2) {2};
\node[right] at (6.5,2){3};
\node[right] at (5,0){4};
\node[above] at (8.5,5){5};
\node[above] at (1.5,5){6};
\node[below] at (3,4){7};
\node[below] at(1.5,3){9};
\node[above] at (8,3){8};
\end{tikzpicture}
\end{center}
\caption{The bipartite graph associated to the matrix $S$ in Example \ref{ex:RunningExample}}
\label{Fig:RunningGraph}
\end{figure}
\end{ex}
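Every maximal clique of $S$ arises by closing a set of rows: intersect the supports of the rows to get a column set, then take all rows whose support contains that column set. For small examples this gives a brute-force enumeration of $\mathrm{Max}(S)$ and $\mathrm{Int}(S)$; the sketch below (Python; our own code, not part of the paper) reproduces the counts for the running example.
\begin{verbatim}
from itertools import combinations

S = {(1,1),(1,2),(2,1),(2,2),(2,3),(2,8),(3,1),(3,2),(3,3),(3,4),
     (4,1),(4,5),(5,1),(5,6),(5,7),(6,5),(7,6),(8,6),(8,7),(8,9)}
cols_of = {i: {j for (a, j) in S if a == i} for i in range(1, 9)}
rows_of = {j: {i for (i, b) in S if b == j} for j in range(1, 10)}

# Max(S): close each subset of rows to a maximal all-nonzero submatrix.
cliques = set()
for r in range(1, 9):
    for R in combinations(range(1, 9), r):
        C = set.intersection(*(cols_of[i] for i in R))
        if C:
            Rs = set.intersection(*(rows_of[j] for j in C))
            cliques.add(frozenset((i, j) for i in Rs for j in C))

# Int(S): containment-maximal pairwise intersections of maximal cliques.
ints = {D1 & D2 for D1 in cliques for D2 in cliques if D1 != D2 and D1 & D2}
ints = {C for C in ints if not any(C < D for D in ints)}
print(len(cliques), len(ints))  # 11 and 10, matching Max(S) and Int(S)
\end{verbatim}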
Let $u = (u_{ij} \mid (i,j) \in S)$ be a matrix of counts. For any $C \subset S$, we let $C^+$ denote the sum of all the entries of $u$ whose indices are in $C$. That is,
\[
C^+ = \sum_{(i,j) \in C} u_{ij}.
\]
Similarly, we denote the row and column marginals $u_{i+} = \sum_{j: (i,j) \in S} u_{ij}$ and $u_{+j} = \sum_{i: (i,j) \in S} u_{ij}$.
The sum of all entries of $u$ is $u_{++} = \sum_{(i,j) \in S} u_{ij}$.
\begin{thm}\label{thm:Main}
Let $S \subset [m] \times [n]$ be a set of indices with associated
bipartite graph $G_S$ and quasi-independence model $\mathcal{M}_S$.
Then $\mathcal{M}_S$ has rational maximum likelihood estimate if and
only if $G_S$ is doubly chordal bipartite. In particular, if
$u = (u_{ij} \mid (i,j) \in S)$ is a matrix of counts,
the maximum likelihood estimate for $u$ has $ij$th entry
\[
\hat{p}_{ij} = \frac{u_{i+}u_{+j}\displaystyle{\prod_{C \in \mathrm{Int}(ij)} C^+}}{u_{++} \displaystyle{ \prod_{D \in \mathrm{Max}(ij)}D^+}}
\]
where the sets $\mathrm{Max}(ij)$ and $\mathrm{Int}(ij)$ are as in Notation \ref{not:MaxInt}.
\end{thm}
Over the course of the next two sections, we prove various lemmas
that ultimately allow us to prove Theorem \ref{thm:Main}.
\begin{ex}
Consider the set of indices $S$ from Example \ref{ex:RunningExample}.
Let $u$ be a matrix of counts. Consider the maximum likelihood estimate for the $(2,1)$ entry, $\hat{p}_{21}$.
The maximal cliques that contain $21$ are $\{11, 21, 31, 41, 51\}, \{11, 12, 21, 22, 31, 32\}$,
$\{21, 22, 23, 28\}$ and $\{21, 22, 23, 31, 32, 33\}$.
The maximal intersections of maximal cliques that contain $21$ are $\{11, 21,31\}$, $\{21, 22, 23\}$ and $\{21, 22, 31, 32\}$.
Since $S$ is doubly chordal bipartite, we apply Theorem \ref{thm:Main} to obtain that the numerator of $\hat{p}_{21}$ is
\[
(u_{21} + u_{22} + u_{23} + u_{28})(u_{11} + u_{21} + u_{31} + u_{41} + u_{51})(u_{11} + u_{21} + u_{31})(u_{21} + u_{22} + u_{23})(u_{21} + u_{22} + u_{31} + u_{32}).
\]
The denominator of $\hat{p}_{21}$ is {\footnotesize
\[
u_{++}(u_{11} + u_{21} + u_{31} + u_{41} + u_{51})(u_{11} +u_{12} + u_{21} + u_{22} + u_{31} + u_{32})(u_{21} + u_{22} + u_{23} + u_{28})(u_{21} + u_{22} + u_{23} + u_{31} + u_{32} + u_{33}).
\]}
We note that when a maximal clique is a single row or column, as is the case with $\{21, 22, 23, 28\}$ and $\{11, 21, 31, 41, 51\}$, we have cancellation between the numerator and denominator.
\end{ex}
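A direct numerical check of Theorem \ref{thm:Main} on the running example is also possible: draw random counts, evaluate the clique formula, and verify that the result is a distribution whose marginals match the normalized data, as Birch's theorem requires. The sketch below (Python; our own code, reusing the brute-force clique enumeration shown earlier) does exactly this.
\begin{verbatim}
import numpy as np
from itertools import combinations

S = sorted({(1,1),(1,2),(2,1),(2,2),(2,3),(2,8),(3,1),(3,2),(3,3),(3,4),
            (4,1),(4,5),(5,1),(5,6),(5,7),(6,5),(7,6),(8,6),(8,7),(8,9)})
cols_of = {i: {j for (a, j) in S if a == i} for i in range(1, 9)}
rows_of = {j: {i for (i, b) in S if b == j} for j in range(1, 10)}
cliques = set()
for r in range(1, 9):
    for R in combinations(range(1, 9), r):
        C = set.intersection(*(cols_of[i] for i in R))
        if C:
            Rs = set.intersection(*(rows_of[j] for j in C))
            cliques.add(frozenset((i, j) for i in Rs for j in C))

rng = np.random.default_rng(7)
u = {ij: float(rng.integers(1, 50)) for ij in S}
upp = sum(u.values())
csum = lambda c: sum(u[ij] for ij in c)

def p_hat(i, j):
    Mij = [D for D in cliques if (i, j) in D]                 # Max(ij)
    pair = {D1 & D2 for D1 in Mij for D2 in Mij if D1 != D2 and D1 & D2}
    Iij = [C for C in pair if not any(C < D for D in pair)]   # Int(ij)
    num = sum(u[ab] for ab in S if ab[0] == i) \
        * sum(u[ab] for ab in S if ab[1] == j) \
        * np.prod([csum(C) for C in Iij])
    return num / (upp * np.prod([csum(D) for D in Mij]))

p = {ij: p_hat(*ij) for ij in S}
assert np.isclose(sum(p.values()), 1.0)
for i in range(1, 9):   # Birch: the row marginals agree
    assert np.isclose(upp * sum(p[ab] for ab in S if ab[0] == i),
                      sum(u[ab] for ab in S if ab[0] == i))
\end{verbatim}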
In order to prove Theorem \ref{thm:Main}, we show that $\hat{p}_{ij}$ satisfies the conditions of Birch's theorem.
First, we investigate the intersections of a fixed column of the matrix
with structural zeros with maximal cliques and their intersections.
We prove useful lemmas about the form that these maximal cliques have
that allow us to show that the conditions of Birch's Theorem are satisfied.
In particular, we use them to prove Corollary \ref{cor:SumEntireBlock}, which states
that the column marginal of the formula in Theorem \ref{thm:Main}
given by the fixed column is equal to that of the normalized data.
\section{Intersections of Cliques with a Fixed Column}\label{sec:FixedColumn}
In this section we prove some results that will set the stage for the proof
of Theorem \ref{thm:Main} that appears in Section \ref{sec:BirchsThm}.
To prove that our formulas satisfy Birch's theorem, we need to understand
what happens to sums of these formulas over certain sets of indices.
Let $S \subset [m] \times [n]$ and let $j_0 \in [n]$.
Without loss of generality, we assume that $(1,j_0),\dots,(r,j_0) \in S$,
and that $(i,j_0) \not\in S$ for all $i > r$.
Let
\[
N_{j_0} := \{ (1, j_0), \dots, (r, j_0) \}.
\]
We consider $j_0$ to be the index of a column in the matrix representation of $S$, and $1, \dots, r$ to be the indices of its nonzero rows.
Now let $T_0 \mid \dots \mid T_h$ be the coarsest partition of $[n]$ with the property that whenever $j,k \in T_{\ell}$,
\[
\{ i \in [r] \mid (i,j) \in S \} = \{ i \in [r] \mid (i,k) \in S\}.
\]
In the matrix representation of $S$, each $T_{\ell}$ corresponds to a set of columns whose first $r$ rows are identical. The fact that we take $T_0 \mid \dots \mid T_h$ to be the coarsest such partition ensures that the supports of the columns in distinct parts of the partition are distinct.
Define the partition $B_0 \mid \dots \mid B_h$ of $S \cap ([r] \times [n])$ by
$B_{\ell} = \{(i,j) \mid j \in T_{\ell}\}$. Note that one of the $B_{\ell}$ may be empty,
in which case we exclude it from the partition.
We call these $B_{\ell}$ the \emph{blocks} of $S$ corresponding to column $j_0$.
We fix $j_0$ and $B_0, \dots, B_h$ for the entirety of this section,
and we assume without loss of generality that $j_0 \in T_0$.
Denote by $\mathrm{rows}^{j_0}(B_{\alpha})$ the set of all $i \in [r]$ such that $(i,j) \in B_{\alpha}$ for some column index $j$.
Note that this is a subset of the rows $1, \dots, r$ of $S$,
and that in the matrix representation of $S$, the columns whose indices are in $T_{\alpha}$ may not have the same zero patterns in rows $r+1, \dots, m$.
Similarly, for each $j \in [n]$, define $\mathrm{rows}^{j_0}(j)$ to be the set of all $i \in [r]$ such that $(i,j) \in S$;
that is, the elements of $\mathrm{rows}^{j_0}(j)$ are the row indices of the nonzero entries of column $j$ in the
first $r$ rows of $S$. Note that the dependence on $j_0$ in this notation stems from the
fact that the column $j_0$ is used to obtain the partition $B_0 \mid \dots \mid B_h$.
\begin{ex}\label{ex:BlockLabels}
Consider the running example $S$ from Example \ref{ex:RunningExample}, and let $j_0 = 1$ be the first column of $S$.
In this case, $r = 5$ since only the entries in the first 5 rows of column $j_0$ are nonzero.
Then the blocks associated to $j_0$ consist of the following columns.
\begin{align*}
T_0 & = \{j_0\} = \{1\} & \qquad & T_1 = \{2\} \\
T_2 &= \{3\} & \qquad & T_3 = \{4\} \\
T_4 &= \{5\} & \qquad & T_5 = \{6,7\} \\
T_6 &= \{8\} & \qquad &T_7 = \{9\}.
\end{align*}
We note that although columns 6 and 7 are not the same over the whole matrix, their first five rows are the same.
Since these are the nonzero rows of column $j_0$, columns 6 and 7 belong to the same block.
The $B_i$ associated to each of these sets of column indices are
\begin{align*}
B_0 &= \{11, 21, 31, 41, 51\} & \qquad B_1 &= \{12, 22, 32\} \\
B_2 &= \{23,33\} & \qquad B_3 &= \{34\} \\
B_4 &= \{45\} & \qquad B_5 &= \{56, 57\} \\
B_6 &= \{28\} & \qquad B_7 &= \emptyset.
\end{align*}
For instance, $\mathrm{rows}^{j_0}(B_1) = \{1,2,3\}$ and $\mathrm{rows}^{j_0}(B_5) = \{5\}$.
\end{ex}
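The blocks depend only on the column supports within the first $r$ rows, so they are immediate to compute. A short sketch (Python; our own code, not part of the paper) reproduces the partition of Example \ref{ex:BlockLabels}:
\begin{verbatim}
S = {(1,1),(1,2),(2,1),(2,2),(2,3),(2,8),(3,1),(3,2),(3,3),(3,4),
     (4,1),(4,5),(5,1),(5,6),(5,7),(6,5),(7,6),(8,6),(8,7),(8,9)}
r = 5   # column j0 = 1 has nonzero rows 1, ..., 5

# Group the columns 1, ..., 9 by their support within the first r rows.
blocks = {}
for j in range(1, 10):
    sup = frozenset(i for i in range(1, r + 1) if (i, j) in S)
    blocks.setdefault(sup, []).append(j)
for sup, T in sorted(blocks.items(), key=lambda kv: kv[1]):
    print("columns", T, " rows", sorted(sup))
# Columns 6 and 7 share a part, and column 9 has empty support (B_7).
\end{verbatim}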
The following proposition characterizes what configurations of the rows of the $B_{\alpha}$'s
are allowable in order to avoid a cycle with exactly one chord. We call the condition outlined in Proposition \ref{prop:DSFree} the double-squarefree,
or \emph{DS-free} condition.
\begin{prop}[DS-free condition]\label{prop:DSFree}
Let $S$ be doubly chordal bipartite.
Let $\alpha, \beta \in [h]$. If $\mathrm{rows}^{j_0}(B_{\alpha}) \cap \mathrm{rows}^{j_0}(B_{\beta})$ is nonempty,
then $\mathrm{rows}^{j_0}(B_{\alpha}) \subset \mathrm{rows}^{j_0}(B_{\beta})$ or $\mathrm{rows}^{j_0}(B_{\beta}) \subset \mathrm{rows}^{j_0}(B_{\alpha})$.
\end{prop}
\begin{proof}
For the sake of contradiction, suppose without loss of generality that $\mathrm{rows}^{j_0}(B_1) \cap \mathrm{rows}^{j_0}(B_2)$ is nonempty
but neither is contained in the other.
Then let $i_0, i_1 \in \mathrm{rows}^{j_0}(B_1)$ and $i_1, i_2 \in \mathrm{rows}^{j_0}(B_2)$
so that $i_0 \not\in \mathrm{rows}^{j_0}(B_2)$ and $i_2 \not\in \mathrm{rows}^{j_0}(B_1)$.
We have $i_0, i_1, i_2 \in \mathrm{rows}^{j_0}(j_0)$ by definition.
Let $j_1 \in T_1$ and $j_2 \in T_2$.
Then the $\{i_0, i_1, i_2\} \times \{j_0, j_1, j_2\}$ submatrix of $S$ is the matrix of a double-square,
which contradicts that $S$ is doubly chordal bipartite.
\end{proof}
Proposition \ref{prop:DSFree} implies that the sets $\mathrm{rows}^{j_0}(B_{\alpha})$ over
all $\alpha$ have a tree structure ordered by containment.
In fact, we will see that this gives a tree structure on
the maximal cliques in $S$ that intersect $N_{j_0}$.
(Recall that $N_{j_0} = \{ (i, j_0) \in S \} = [r] \times \{j_0\}$).
\begin{ex}
The matrix $S$ from Example \ref{ex:RunningExample} is doubly chordal bipartite, and as such, satisfies the DS-free condition.
If we append a tenth column, $(0, \star , \star ,\star, 0, 0, 0, 0)^T$ to obtain a matrix $S'$, this introduces a new block $B_8$ which just contains column 10.
This matrix violates the DS-free condition
since $\mathrm{rows}^{j_0}(B_1) = \{1,2,3\}$ and $\mathrm{rows}^{j_0}(B_8) = \{2,3,4\}$. Their intersection is nonempty, but neither is contained in another.
Indeed, the $\{1,2,4\} \times \{1,2,10\}$ submatrix of $S'$ is the matrix of a double-square.
\end{ex}
For each pair of indices $ij$ such that $(i,j) \in S$,
let $x_{ij}$ be the polynomial obtained from $\hat{p}_{ij}$ by simultaneously clearing the denominators of all $\hat{p}_{k\ell}$.
That is, to obtain $x_{ij}$, we multiply $\hat{p}_{ij}$ by $u_{++} \prod_{D \in \mathrm{Max}(S)} D^+$ so that
\[
x_{ij} = u_{i+}u_{+j}\displaystyle{\prod_{C \in \mathrm{Int}(ij)} C^+}\displaystyle{ \prod_{D \in \mathrm{Max}(S) \setminus \mathrm{Max}(ij)}D^+}.
\]
Our main goal in this section is to derive a formula for the sum,
\[
\sum_{i \in \mathrm{rows}^{j_0}(B_{\alpha})} x_{ij_0}.
\]
This is the content of
Lemma \ref{lem:SumInBlock}. This formula allows us to verify
that the $j_0$ column marginal of $\hat{p}$ matches that of the normalized data.
In order to simplify this sum,
we must first understand how maximal cliques and their intersections intersect $N_{j_0}$.
For each $B_{\alpha}$ with $0 \leq \alpha \leq h$ and $\mathrm{rows}^{j_0}(B_{\alpha}) \neq \emptyset$, we let $D_{\alpha}$ be the clique,
\[
D_{\alpha} = \{ (i,j) \mid i \in \mathrm{rows}^{j_0}(B_{\alpha}) \text{ and } \mathrm{rows}^{j_0}(B_{\alpha}) \subset \mathrm{rows}^{j_0}(j) \}.
\]
In other words, $D_{\alpha}$ is the largest clique that contains $B_{\alpha}$ and intersects $N_{j_0}$.
We call $D_{\alpha}$ the \emph{clique induced by} $B_{\alpha}$.
\begin{ex}\label{ex:MaximalCliques}
Consider our running example $S$ with $j_0 = 1$ and blocks $B_0, \dots, B_7$ as described in Example \ref{ex:BlockLabels}.
Then $N_{j_0} = N_1 = \{11, 21, 31, 41, 51\}.$
The cliques induced by $B_0, \dots, B_7$ are
\begin{align*}
D_0 & = \{11, 21, 31, 41, 51\}, \\
D_1 &= \{11, 12, 21, 22, 31, 32\}, \\
D_2 &= \{21, 22, 23, 31, 32, 33\}, \\
D_3 &= \{31, 32, 33, 34\}, \\
D_4 &= \{41, 45\}, \\
D_5 &= \{51, 56, 57\}, \text{ and }\\
D_6 &= \{21, 22, 23, 28\}.
\end{align*}
There is no $D_7$ since the block $B_7$ is empty. Note that these are exactly the maximal cliques in $S$ that intersect $N_1$. The next proposition proves that this is the case for all DS-free matrices with structural zeros.
\end{ex}
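The claim of the next proposition can be spot-checked computationally by building the induced cliques directly from the block supports and comparing them with the maximal cliques that meet $N_{j_0}$. A sketch (Python; our own code) for the running example:
\begin{verbatim}
from itertools import combinations

S = {(1,1),(1,2),(2,1),(2,2),(2,3),(2,8),(3,1),(3,2),(3,3),(3,4),
     (4,1),(4,5),(5,1),(5,6),(5,7),(6,5),(7,6),(8,6),(8,7),(8,9)}
j0, r = 1, 5
sup = lambda j: frozenset(i for i in range(1, r + 1) if (i, j) in S)
cols = range(1, 10)

# Induced cliques: one D_alpha per nonempty block support.
induced = {frozenset((i, j) for j in cols if s <= sup(j) for i in s)
           for s in {sup(j) for j in cols} - {frozenset()}}

# Maximal cliques meeting N_{j0}, via the row-closure enumeration.
cols_of = {i: {j for (a, j) in S if a == i} for i in range(1, 9)}
rows_of = {j: {i for (i, b) in S if b == j} for j in cols}
cliques = set()
for k in range(1, 9):
    for R in combinations(range(1, 9), k):
        C = set.intersection(*(cols_of[i] for i in R))
        if C:
            Rs = set.intersection(*(rows_of[j] for j in C))
            cliques.add(frozenset((i, j) for i in Rs for j in C))
meets_N = {D for D in cliques if any((i, j0) in D for i in range(1, r + 1))}
assert induced == meets_N   # the seven cliques D_0, ..., D_6 above
\end{verbatim}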
We note that when $D_{\alpha}$ is the clique induced by $B_{\alpha}$,
all of the nonzero rows of $D_{\alpha}$ lie in $[r]$ by definition of an induced clique.
We continue to use the notation $\mathrm{rows}^{j_0}(D_{\alpha})$ since the formation
of the set $B_{\alpha}$ depends on the specified column $j_0$.
For any clique $C$, let $\mathrm{cols}(C) = \{ j \mid (i,j) \in C \text{ for some } i\}$.
\begin{prop}\label{prop:MaxCliqueCharacterization}
Every induced clique $D_{\alpha}$ is a maximal clique.
Furthermore, any maximal clique that has nonempty intersection with $N_{j_0}$ is induced by some $B_{\alpha}$.
\end{prop}
\begin{proof}
We will show that $D_{\alpha}$ is maximal by showing that we cannot add any rows or columns to it.
We cannot add any columns to $D_{\alpha}$ by definition.
We cannot add any of rows $1, \dots, r$ to $D_{\alpha}$ since all nonzero rows of $B_{\alpha}$ are already contained in $D_{\alpha}$.
We cannot add any of rows $r+1, \dots, m$ to $D_{\alpha}$ since $j_0$ is a column of $D_{\alpha}$ whose entries in rows $r+1, \dots, m$
are zero.
Note that if we can add one element $(i,j)$ to $D_{\alpha}$, then by definition of a clique,
we must either be able to add all of $\{i\} \times \mathrm{cols}(D_{\alpha})$ or $\mathrm{rows}^{j_0}(D_{\alpha}) \times \{j\}$ to the clique.
Since we cannot add any rows or columns to $D_{\alpha}$, it is a maximal clique.
Now let $D$ be a maximal clique that intersects $N_{j_0}$.
For the sake of contradiction, suppose that $D \neq D_{\alpha}$ for each $\alpha \in [h]$.
Let $j_1$ be a column in $D$ such that $\mathrm{rows}^{j_0}(j_1)$ is minimal among all columns of $D$.
We must have that $(i,j_1) \in B_{\alpha}$ for some $i \in [r]$ and $\alpha \in [h]$.
Since $D \neq D_{\alpha}$, it must be the case that column $j_1$ has a nonzero row $i_1 \in [r]$ that is not in $D$.
Since $D$ is maximal, there must exist another column $j_2$ in $D$ that has a zero in row $i_1$.
Therefore, we have that $\mathrm{rows}^{j_0}(j_1) \not\subset \mathrm{rows}^{j_0}(j_2)$.
Furthermore, $\mathrm{rows}^{j_0}(j_2) \not\subset \mathrm{rows}^{j_0}(j_1)$ by the minimality of $j_1$.
But since $D$ is nonempty, the intersection of $\mathrm{rows}^{j_0}(j_1)$ and $\mathrm{rows}^{j_0}(j_2)$ must be nonempty.
This contradicts Proposition \ref{prop:DSFree}, as needed.
\end{proof}
Proposition \ref{prop:MaxCliqueCharacterization} shows that the maximal cliques
that intersect $N_{j_0}$ are exactly the cliques that are induced by some $B_{\alpha}$.
The DS-free condition gives a poset structure on the set of these maximal cliques
$D_0, \dots, D_h$ that intersect $N_{j_0}$ nontrivially.
\begin{defn}
Let $P(j_0)$ denote the poset with ground set $\{D_0, \dots, D_h\}$
and $D_{\alpha} \leq D_{\beta}$ if and only if $\mathrm{rows}^{j_0}(D_{\alpha}) \subset \mathrm{rows}^{j_0}(D_{\beta})$.
\end{defn}
Recall that for a poset $P$ and two elements of its ground set, $p, q \in P$,
we say that $q$ \emph{covers} $p$ if $p < q$ and for any $r \in P$,
if $p \leq r \leq q$, then $r = p$ or $r = q$. We denote such a cover relation by $p \lessdot q$.
The \emph{Hasse diagram} of a poset is a directed acyclic graph on $P$ with an edge
from $p$ to $q$ whenever $p \lessdot q$.
In the case of $P(j_0)$, the Hasse diagram of this poset is a tree since the DS-free condition implies
that any $D_{\alpha}$ is covered by at most one maximal clique.
\begin{figure}
\begin{center}
\begin{tikzpicture}
\draw (0,0) -- (1.5,3) -- (3,0);
\draw(.5,1)--(1,0);
\draw (1.5,3)--(2,0);
\draw[fill] (0,0) circle [radius = .05];
\node[below] at (0,0) {$D_3$};
\draw[fill] (1,0) circle [radius = .05];
\node[below] at (1,0) {$D_6$};
\draw[fill] (2,0) circle [radius = .05];
\node[below] at (2,0) {$D_4$};
\draw[fill] (3,0) circle [radius = .05];
\node[below] at (3,0) {$D_5$};
\draw[fill] (.5,1) circle [radius = .05];
\node[left] at (.5,1) {$D_2$};
\draw[fill] (1,2) circle [radius = .05];
\node[left] at (1,2) {$D_1$};
\draw [fill] (1.5,3) circle[radius = .05];
\node[left] at (1.5,3) {$D_0$};
\end{tikzpicture}
\end{center}
\caption{The poset $P(j_0)$ for Example \ref{ex:Poset}}
\label{Fig:Poset}
\end{figure}
\begin{ex}\label{ex:Poset}
In our running example $S$ with $j_0 = 1$, blocks $B_0, \dots, B_6$, and associated cliques $D_0, \dots, D_6$, the Hasse diagram of the poset $P(j_0)$ is pictured in Figure \ref{Fig:Poset}.
\end{ex}
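The poset $P(j_0)$ and its cover relations can be computed directly from the supports $\mathrm{rows}^{j_0}(D_{\alpha})$. The sketch below (Python; our own code, with the supports hard-coded from Example \ref{ex:MaximalCliques}) prints the cover relations of Figure \ref{Fig:Poset} and confirms the tree property:
\begin{verbatim}
supports = {0: {1, 2, 3, 4, 5}, 1: {1, 2, 3}, 2: {2, 3}, 3: {3},
            4: {4}, 5: {5}, 6: {2}}   # rows^{j0}(D_alpha), alpha = 0..6
less = {(a, b) for a in supports for b in supports
        if supports[a] < supports[b]}
covers = {(a, b) for (a, b) in less
          if not any((a, c) in less and (c, b) in less for c in supports)}
for a, b in sorted(covers):
    print(f"D{a} <. D{b}")
# Each element is covered by at most one other: the Hasse diagram is a tree.
assert all(sum(x == d for (x, _) in covers) <= 1 for d in supports)
\end{verbatim}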
The next proposition shows that the cover relations in this poset, denoted $D_{\alpha} \lessdot D_{\beta}$
correspond to maximal intersections of maximal cliques that intersect $N_{j_0}$ nontrivially.
Denote by $\mathrm{cols}(D_{\alpha})$ the nonzero columns of the clique $D_{\alpha}$.
We note that if $D_{\alpha} \lessdot D_{\beta}$, then $\mathrm{cols}(D_{\beta}) \subset \mathrm{cols}(D_{\alpha})$.
In particular, this means that if $C = D_{\alpha} \cap D_{\beta}$, then
$C = \mathrm{rows}^{j_0}(D_{\alpha}) \times \mathrm{cols}(D_{\beta})$.
\begin{prop}\label{prop:MaxIntCharacterization}
Let $C = D_{\alpha} \cap D_{\beta}$. Then $C$ is maximal among all pairwise intersections of maximal cliques if and only if $D_{\alpha} \lessdot D_{\beta}$ or $D_{\beta} \lessdot D_{\alpha}$ in $P(j_0)$.
\end{prop}
\begin{proof}
Suppose without loss of generality that $D_{\alpha} \lessdot D_{\beta}$ in $P(j_0)$.
For the sake of contradiction, suppose that $D_{\alpha} \cap D_{\beta} \not\in \mathrm{Int}(S)$.
Then there exists another maximal clique that contains $D_{\alpha} \cap D_{\beta}$.
By Proposition \ref{prop:MaxCliqueCharacterization} and the fact that $D_{\alpha} \cap D_{\beta}$
intersects $N_{j_0}$ nontrivially, we can write this maximal clique as $D_{\gamma}$ for some $\gamma \in [h]$.
Note that we have $\mathrm{rows}^{j_0}(C) = \mathrm{rows}^{j_0}(D_{\alpha})$ and $\mathrm{cols}(C) = \mathrm{cols}(D_{\beta})$.
Therefore $C = \mathrm{rows}^{j_0}(D_{\alpha}) \times \mathrm{cols}(D_{\beta})$.
So $\mathrm{rows}^{j_0}(D_{\alpha}) \subsetneq \mathrm{rows}^{j_0}(D_{\gamma})$ and $\mathrm{cols}(D_{\beta}) \subsetneq \mathrm{cols}(D_{\gamma})$.
In particular, this second inclusion implies that $\mathrm{rows}^{j_0}(D_{\gamma}) \subsetneq \mathrm{rows}^{j_0}(D_{\beta})$.
Indeed, suppose that $i \leq r$ is a row of $D_{\gamma}$ that is not a row of $D_{\beta}$.
Then there exists a column $j$ of $D_{\beta}$ for which $(i,j) \not\in S$.
But since $j$ is also a column of $D_{\gamma}$, this contradicts that $D_{\gamma}$ is a clique.
So we have the proper containments
\[
\mathrm{rows}^{j_0}(D_{\alpha}) \subsetneq \mathrm{rows}^{j_0}(D_{\gamma}) \subsetneq \mathrm{rows}^{j_0}(D_{\beta}),
\]
which contradicts that $D_{\alpha} \lessdot D_{\beta}$ in $P(j_0)$.
So $D_{\alpha} \cap D_{\beta}$ must be maximal.
Now let $C = D_{\alpha} \cap D_{\beta} \in \mathrm{Int}(S)$.
For the sake of contradiction, suppose that $D_{\alpha}$ does not cover $ D_{\beta}$ or vice versa.
Since $C$ is nonempty, without loss of generality we must have
$\mathrm{rows}^{j_0}(D_{\alpha}) \subset \mathrm{rows}^{j_0}(D_{\beta})$ by the DS-free condition.
So there exists a $D_{\gamma}$ such that $D_{\alpha} < D_{\gamma} < D_{\beta}$ in $P(j_0)$.
Therefore we have that
\[
\mathrm{rows}^{j_0}(D_{\alpha}) \subsetneq \mathrm{rows}^{j_0}(D_{\gamma}) \subsetneq \mathrm{rows}^{j_0}(D_{\beta}).
\]
Let $(i,j) \in C$. Then $i$ is a row of $D_{\alpha}$, so it is a row of $D_{\gamma}$.
Furthermore, since $j$ is a column of $D_{\beta}$, $\mathrm{rows}^{j_0}(D_{\beta}) \subset \mathrm{rows}^{j_0}(j)$.
So $\mathrm{rows}^{j_0}(D_{\gamma}) \subset \mathrm{rows}^{j_0}(j)$ and $j$ is a column of $D_{\gamma}$.
Therefore, $C \subsetneq D_{\gamma} \cap D_{\beta}$.
This containment is proper since $\mathrm{rows}^{j_0}(D_{\alpha}) \subsetneq \mathrm{rows}^{j_0}(D_{\gamma})$.
So we have contradicted that $C$ is maximal.
\end{proof}
We can now state the key lemma regarding the sum of the $x_{ij}$s over
$\{ i: (i,j_0) \in D_{\alpha} \}$ for any $\alpha \in [h]$.
\begin{lemma}\label{lem:SumInBlock}
Let $S$ be DS-free and let $D_{\alpha}$ be a maximal clique that intersects $N_{j_0}$. Then
\begin{equation}\label{eqn:SumInBlock}
\sum_{i \in \mathrm{rows}^{j_0}(D_{\alpha})} x_{ij_0} = u_{+j_0}
\Big( \prod_{\substack{C \in \mathrm{Int}(S) \\ C \cap N_{j_0} \neq \emptyset \\ \mathrm{rows}^{j_0}(D_{\alpha}) \subset \mathrm{rows}^{j_0}(C) }} C^+ \Big)
\Big( \prod_{\substack{D \in \mathrm{Max}(S) \\ D \cap N_{j_0} \neq \emptyset \\ \mathrm{rows}^{j_0}(D) \subset \mathrm{rows}^{j_0}(D_{\alpha})}} D^+ \Big)
\Big( \prod_{\substack{E \in \mathrm{Max}(S) \\ N_{j_0} \cap D_{\alpha} \cap E = \emptyset}} E^+ \Big)
\end{equation}
\end{lemma}
In order to prove this, we will sum the entries $x_{ij_0}$ over all $i \in \mathrm{rows}^{j_0}( D_{\beta})$ for each $\beta$. We will do this inductively from the bottom of $P(j_0)$.
The key idea of this induction is as follows.
\begin{rmk}\label{rmk:InductionIdea}
If $D_{\alpha_1}, \dots, D_{\alpha_{\ell}}$ are covered by $D_{\beta}$ in $P(j_0)$,
then the rows of $N_{j_0} \cap D_{\beta}$ are partitioned by the sets $N_{j_0} \cap D_{\alpha_k}$
together with the set of rows that are in $D_{\beta}$ and not in any $D_{\alpha_k}$. The fact that this is a partition follows from the DS-free condition.
Therefore, summing the $x_{ij_0}$ that belong to each clique covered by $D_{\beta}$ and adding in the $x_{ij_0}$s for rows $i$ that are not in any clique covered by $D_{\beta}$
will give us the sum of $x_{ij_0}$ over all $i \in \mathrm{rows}^{j_0}(D_{\beta})$.
\end{rmk}
The next proposition focuses on the factors of the right-hand side of Equation (\ref{eqn:SumInBlock}) that correspond to elements of $\mathrm{Int}(S)$.
It will be used to show that when we perform the induction and move upwards by one cover relation from $D_{\alpha}$ to $D_{\beta}$ in the poset $P(j_0)$,
all but one of these factors stays the same. The only one that no longer appears in the product corresponds to the maximal intersection $D_{\alpha} \cap D_{\beta}$.
\begin{prop}\label{prop:IntersectionsFactorOut}
Let $D_{\alpha} \lessdot D_{\beta}$ in $P(j_0)$.
Let $C \in \mathrm{Int}(S)$ intersect $N_{j_0}$ nontrivially so that $\mathrm{rows}^{j_0}(D_{\alpha}) \subset \mathrm{rows}^{j_0}(C)$.
Then either $C = D_{\alpha} \cap D_{\beta}$ or $\mathrm{rows}^{j_0}(D_{\beta}) \subset \mathrm{rows}^{j_0}(C)$.
\end{prop}
\begin{proof}
Without loss of generality, let $C = D_1 \cap D_2$.
Proposition \ref{prop:MaxIntCharacterization} tells us that $C$ must be of this form.
By the same proposition, we may assume without loss of generality that $D_2 \lessdot D_1$,
so $\mathrm{rows}^{j_0}(C) = \mathrm{rows}^{j_0}(D_2)$.
Suppose that $\mathrm{rows}^{j_0}(D_{\beta}) \not\subset \mathrm{rows}^{j_0}(D_2)$.
Since $\mathrm{rows}^{j_0}(D_{\alpha}) \subset \mathrm{rows}^{j_0}(D_2)$ and $\mathrm{rows}^{j_0}(D_{\alpha}) \subset \mathrm{rows}^{j_0}(D_{\beta})$,
we must have that $\mathrm{rows}^{j_0}(D_2) \cap \mathrm{rows}^{j_0}(D_{\beta})$ is nonempty.
So by the DS-free condition, $\mathrm{rows}^{j_0}(D_2) \subsetneq \mathrm{rows}^{j_0}(D_{\beta})$.
So we have the chain of inclusions,
\[
\mathrm{rows}^{j_0}(D_{\alpha}) \subset \mathrm{rows}^{j_0}(D_2) \subsetneq \mathrm{rows}^{j_0}(D_{\beta}).
\]
But since $D_{\beta}$ covers $D_{\alpha}$ in $P(j_0)$, this chain of containments forces $D_{\alpha} = D_2$.
Since every element of $P(j_0)$ is covered by at most one element, and $D_2 \lessdot D_1$,
this implies that $D_{\beta} = D_1$, so $C = D_{\alpha} \cap D_{\beta}$, as needed.
\end{proof}
The following proposition focuses on the factors of the right-hand side of Equation (\ref{eqn:SumInBlock}) that correspond to elements of $\mathrm{Max}(S)$.
It gives a correspondence between the factors of this product for $D_{\alpha}$ and all but one of the factors of this product for $D_{\beta}$ when we have the cover relation $D_{\alpha} \lessdot D_{\beta}$ in $P(j_0)$.
\begin{prop}\label{prop:MaxCliquesFactorOut}
For any $D_{\alpha}, D_{\beta} \in P(j_0)$, define the following sets:
\begin{align*}
R_{\alpha} &= \{D\in \mathrm{Max}(S) \mid D \cap N_{j_0} \neq \emptyset, \mathrm{rows}^{j_0}(D) \subset \mathrm{rows}^{j_0}(D_{\alpha}) \} \cup \{E \in \mathrm{Max}(S) \mid N_{j_0} \cap D_{\alpha} \cap E = \emptyset \} \\
\overline{R}_{\beta} &= \{D \in \mathrm{Max}(S) \mid D \cap N_{j_0} \neq \emptyset, \mathrm{rows}^{j_0}(D) \subsetneq \mathrm{rows}^{j_0}(D_{\beta}) \} \cup \{E \in \mathrm{Max}(S) \mid N_{j_0} \cap D_{\beta} \cap E = \emptyset \}.
\end{align*}
If $D_{\alpha} \lessdot D_{\beta}$ in $P(j_0)$, then $R_{\alpha} = \overline{R}_{\beta}$.
\end{prop}
\begin{proof}
First let $D \in R_{\alpha}$.
If $\mathrm{rows}^{j_0}(D) \subset \mathrm{rows}^{j_0}(D_{\alpha})$ and $D \cap N_{j_0} \neq \emptyset$, then since $\mathrm{rows}^{j_0}(D_{\alpha}) \subsetneq \mathrm{rows}^{j_0}(D_{\beta})$,
we have that $\mathrm{rows}^{j_0}(D) \subsetneq \mathrm{rows}^{j_0}(D_{\beta})$.
So $D \in \overline{R}_{\beta}$.
Otherwise, we have $N_{j_0} \cap D_{\alpha} \cap D = \emptyset$. There are now two cases.
\emph{Case 1:} If $N_{j_0} \cap D = \emptyset$, then $N_{j_0} \cap D_{\beta} \cap D = \emptyset$ as well.
So $D \in \overline{R}_{\beta}$.
\emph{Case 2:} Suppose that $N_{j_0} \cap D \neq \emptyset$ and $D_{\alpha} \cap D = \emptyset$.
If $D_{\beta} \cap D$ is empty as well, then $D \in \overline{R}_{\beta}$.
Otherwise, suppose $D_{\beta} \cap D \neq \emptyset$.
Then we must have that $\mathrm{rows}^{j_0}(D) \subsetneq \mathrm{rows}^{j_0}(D_{\beta})$
by the fact that $\mathrm{rows}^{j_0}(D_{\beta}) \not\subset \mathrm{rows}^{j_0}(D)$ and the DS-free condition.
So $D \in \overline{R}_{\beta}$ in this case as well.
Note that it is never the case that $N_{j_0} \cap D \neq \emptyset$ and
$D_{\alpha} \cap D \neq \emptyset$ but $N_{j_0} \cap D \cap D_{\alpha} = \emptyset$
since $j_0$ is a column of $D_{\alpha}$.
So we have shown that $R_{\alpha} \subset \overline{R}_{\beta}$.
Now let $D \in \overline{R}_{\beta}$. We have two cases again.
\emph{Case 1:} First, consider the case in which $\mathrm{rows}^{j_0}(D) \subsetneq \mathrm{rows}^{j_0}(D_{\beta})$ and $D \cap N_{j_0} \neq \emptyset$.
If $\mathrm{rows}^{j_0}(D) \subset \mathrm{rows}^{j_0}(D_{\alpha})$, then $D \in R_{\alpha}$, as needed.
Otherwise, by the DS-free condition, there are two cases.
\emph{Case 1a:} If $\mathrm{rows}^{j_0}(D_{\alpha}) \subsetneq \mathrm{rows}^{j_0}(D)$, then we have the chain of containments,
\[
\mathrm{rows}^{j_0}(D_{\alpha}) \subsetneq \mathrm{rows}^{j_0}(D) \subsetneq \mathrm{rows}^{j_0}(D_{\beta}),
\]
which contradicts that $D_{\alpha} \lessdot D_{\beta}$ in $P(j_0)$. So this case cannot actually occur.
\emph{Case 1b:} If $\mathrm{rows}^{j_0}(D_{\alpha}) \cap \mathrm{rows}^{j_0}(D) = \emptyset $,
then we have that $N_{j_0} \cap D_{\alpha} \cap D = \emptyset $.
Therefore, $D \in R_{\alpha}$, as needed.
\emph{Case 2:} The final case is when $N_{j_0} \cap D_{\beta} \cap D = \emptyset $.
In this case, since $\mathrm{rows}^{j_0}(D_{\alpha}) \subset \mathrm{rows}^{j_0}(D_{\beta})$,
we have that $N_{j_0} \cap D_{\alpha} \cap D = \emptyset $ as well.
So $D \in R_{\alpha}$.
So we have shown that $\overline{R}_{\beta} \subset R_{\alpha}$, as needed.
\end{proof}
\begin{rmk}\label{rmk:MaxCliquesFactorOut}
Note that Proposition \ref{prop:MaxCliquesFactorOut} implies that whenever $D_{\alpha}$ and $D_{\beta}$
are covered by the same element of $P(j_0)$,
we have that $R_{\alpha} = R_{\beta}$.
This shows that the right-hand sides of Equation (\ref{eqn:SumInBlock}) for $D_{\alpha}$ and $D_{\beta}$ consist of the same factors coming from cliques in $\mathrm{Max}(S)$.
\end{rmk}
Let $D_{\alpha_1}, \dots, D_{\alpha_{\ell}} \lessdot D_{\beta}$. As we discussed in Remark \ref{rmk:InductionIdea}, in order to sum the values of $x_{ij_0}$ over $N_{j_0} \cap D_{\beta}$, we must understand the sum over $x_{ij_0}$ for those rows $i$ such that $i \in \mathrm{rows}^{j_0}(D_{\beta})$ but $i \not\in \mathrm{rows}^{j_0}(D_{\alpha_k})$ for all $k$. The following proposition concerns the sum of the $x_{ij_0}$ over these values of $i$.
\begin{prop}\label{prop:SumExtraEntries}
Let $D_{\alpha_1}, \dots, D_{\alpha_{\ell}} \lessdot D_{\beta}$. Let $r_1, \dots, r_a$ be the rows of $D_{\beta}$ that are not in any $D_{\alpha_k}$ for $k=1, \dots, \ell$. Then
\begin{equation}\label{eqn:SumExtraEntries}
\sum_{i=1}^a x_{r_i j_0} =
u_{+j_0} \Big( \prod_{\substack{C \in \mathrm{Int}(S) \\ C \cap N_{j_0} \neq \emptyset \\ \mathrm{rows}^{j_0}(D_{\beta}) \subset \mathrm{rows}^{j_0}(C)}}C^+ \Big)
\Big(\prod_{\substack{D \in \mathrm{Max}(S) \\ D \cap N_{j_0} \neq \emptyset \\ \mathrm{rows}^{j_0}(D) \subsetneq \mathrm{rows}^{j_0}(D_{\beta})}} D^+ \Big)
\Big(\prod_{\substack{ E \in \mathrm{Max}(S) \\ N_{j_0} \cap D_{\beta} \cap E = \emptyset}} E^+\Big)
\Big(\sum_{i=1}^a u_{r_i +}\Big)
\end{equation}
\end{prop}
\begin{proof}
Without loss of generality, we will let $D_2, \dots, D_{\ell} \lessdot D_{1}$,
and let rows $1, \dots, a$ be the rows of $D_{1}$ that are not rows of any $D_\alpha$ for $\alpha = 2,\dots,\ell$.
Let $i \in [a]$.
Recall that
\[
x_{ij_0} = u_{i+}u_{+j_0}\displaystyle{\prod_{C \in \mathrm{Int}(ij_0)} C^+}\displaystyle{ \prod_{D \in \mathrm{Max}(S) \setminus \mathrm{Max}(ij_0)}D^+}.
\]
We first consider the maximum cliques $D$ with $(i,j_0) \not\in D$.
If $N_{j_0} \cap D_{1} \cap D = \emptyset$, then $D^+$ is a factor of $x_{ij_0}$ for all $i = 1,\dots,a$.
So $D^+$ is a factor of both the left-hand and right-hand sides of Equation (\ref{eqn:SumExtraEntries}).
Otherwise, we have $N_{j_0} \cap D_1 \cap D \neq \emptyset$.
In particular, this means that $N_{j_0} \cap D \neq \emptyset$.
Since $\mathrm{rows}^{j_0}(D_1) \cap \mathrm{rows}^{j_0}(D) \neq \emptyset$, and $(i,j_0) \not\in D$,
we must have $\mathrm{rows}^{j_0}(D) \subsetneq \mathrm{rows}^{j_0}(D_1)$ by the DS-free condition.
Furthermore, for all maximal cliques $D$ with $\mathrm{rows}^{j_0}(D) \subsetneq \mathrm{rows}^{j_0}(D_1)$,
we have $(i,j_0) \not\in D$.
Indeed, $D \cap N_{j_0} \neq \emptyset$, so by Proposition \ref{prop:MaxCliqueCharacterization},
we have $D = D_{\gamma}$ for some $\gamma$
with $D_{\gamma} < D_1$ in $P(j_0)$.
So $\mathrm{rows}^{j_0}(D_{\gamma}) \subset \mathrm{rows}^{j_0}(D_{\alpha})$ for some $\alpha \in \{2,\dots,\ell\}$.
Since $(i,j_0) \not\in D_{\alpha}$, we have $(i,j_0) \not\in D_{\gamma}$ as well.
Therefore, the factors $D^+$ corresponding to maximal cliques in each $x_{ij_0}$ are the same for all $i \in [a]$,
and are exactly those with $\mathrm{rows}^{j_0}(D) \subsetneq \mathrm{rows}^{j_0}(D_1)$ or $N_{j_0} \cap D_1 \cap D = \emptyset$.
Now let $(i,j_0) \in C$ where $C \in \mathrm{Int}(S)$ and $C \cap N_{j_0} \neq \emptyset$.
By Proposition \ref{prop:MaxIntCharacterization}, we have $C = D_{\gamma} \cap D_{\delta}$
where $D_{\gamma} \lessdot D_{\delta}$ in $P(j_0)$.
Since $C \cap D_1$ is nonempty (indeed $(i,j_0) \in C \cap D_1$), we must have that
$\mathrm{rows}^{j_0}(D_{\gamma}) \subset \mathrm{rows}^{j_0}(D_1)$ or $\mathrm{rows}^{j_0}(D_1) \subset \mathrm{rows}^{j_0}(D_{\gamma})$,
and similarly for $D_{\delta}$.
But since $i \in [a]$, $(i,j_0) \not\in D_{\alpha}$ for any $D_{\alpha} < D_1$.
So we must have $\mathrm{rows}^{j_0}(D_1) \subset \mathrm{rows}^{j_0}(D_{\gamma}) \subset \mathrm{rows}^{j_0}(D_{\delta}).$
Therefore, $\mathrm{rows}^{j_0}(D_1) \subset \mathrm{rows}^{j_0}(C)$.
Furthermore, we have that $(i,j_0) \in C$ for all $C \in \mathrm{Int}(S)$ with $\mathrm{rows}^{j_0}(D_1) \subset \mathrm{rows}^{j_0}(C)$.
So the factors $C^+$ corresponding to maximal intersections of maximal cliques are exactly
those with $\mathrm{rows}^{j_0}(D_1) \subset \mathrm{rows}^{j_0}(C)$ in each $x_{ij_0}$.
Therefore, we have that
\begin{align*}
\sum_{i=1}^a x_{ij_0} & = \sum_{i=1}^a u_{i+}u_{+j_0}\Big(\prod_{C \in \mathrm{Int}(ij_0)} C^+ \Big) \Big(\prod_{D \in \mathrm{Max}(S) \setminus \mathrm{Max}(ij_0)}D^+\Big) \\
&= \Big( \prod_{\substack{C \in \mathrm{Int}(S) \\ C \cap N_{j_0} \neq \emptyset \\ \mathrm{rows}^{j_0}(D_1) \subset \mathrm{rows}^{j_0}(C)}}C^+ \Big)
\Big(\prod_{\substack{D \in \mathrm{Max}(S) \\ D \cap N_{j_0} \neq \emptyset \\ \mathrm{rows}^{j_0}(D) \subsetneq \mathrm{rows}^{j_0}(D_1)}} D^+ \Big)
\Big(\prod_{\substack{ E \in \mathrm{Max}(S) \\ N_{j_0} \cap D_1 \cap E = \emptyset}} E^+\Big)
\Big(\sum_{i=1}^a u_{i +} u_{+j_0}\Big) \\
&= u_{+j_0}\Big( \prod_{\substack{C \in \mathrm{Int}(S) \\ C \cap N_{j_0} \neq \emptyset \\ \mathrm{rows}^{j_0}(D_1) \subset \mathrm{rows}^{j_0}(C)}}C^+ \Big)
\Big(\prod_{\substack{D \in \mathrm{Max}(S) \\ D \cap N_{j_0} \neq \emptyset \\ \mathrm{rows}^{j_0}(D) \subsetneq \mathrm{rows}^{j_0}(D_1)}} D^+ \Big)
\Big(\prod_{\substack{ E \in \mathrm{Max}(S) \\ N_{j_0} \cap D_1 \cap E = \emptyset}} E^+\Big)
\Big(\sum_{i=1}^a u_{i+} \Big),
\end{align*}
as needed.
\end{proof}
Finally, the following proposition gives a way to write $D_{\beta}^+$ as a sum over its intersections with the elements of $P(j_0)$ that it covers, along with the rows of $D_{\beta}$ that are not rows of any clique that it covers.
\begin{prop}\label{prop:SumOfUnfactoredTerms}
Let $D_{\alpha_1}, \dots, D_{\alpha_{\ell}} \lessdot D_{\beta}$. Let $r_1, \dots, r_a$ be the rows of $D_{\beta}$ that are not in any $D_{\alpha_k}$ for $k=1, \dots, \ell$. Then
\begin{equation}\label{eqn:SumOfUnfactoredTerms}
D_{\beta}^+ = \sum_{i=1}^a u_{r_i +} + \sum_{k=1}^{\ell} (D_{\alpha_k} \cap D_{\beta})^+.
\end{equation}
\end{prop}
\begin{proof}
Without loss of generality, we will let $D_2, \dots, D_{\ell} \lessdot D_{1}$,
and let rows $1, \dots, a$ be the rows of $D_1$ that are not rows of any $D_\alpha$ for $\alpha = 2, \dots,\ell$.
First note that each $u_{ij}$ that appears on the right-hand side of Equation (\ref{eqn:SumOfUnfactoredTerms})
is a term of $D_1^+$.
Indeed, if $(i,j) \in D_{\alpha} \cap D_1$ for some $\alpha = 2,\dots,\ell$, this is clear.
Otherwise, we have $i \in [a]$.
For the sake of contradiction, suppose that there exists a column $j$ so that $(i,j) \not\in D_1$ but $(i,j) \in S$.
But then $\mathrm{rows}^{j_0}(D_1) \cap \mathrm{rows}^{j_0}(j)$ is non-empty.
So by the DS-free condition, either $\mathrm{rows}^{j_0}(D_1) \subset \mathrm{rows}^{j_0}(j)$ or $\mathrm{rows}^{j_0}(j) \subsetneq \mathrm{rows}^{j_0}(D_1)$.
If $\mathrm{rows}^{j_0}(D_1) \subset \mathrm{rows}^{j_0}(j)$, then $j$ is a column of $D_1$ by definition, which is a contradiction.
If $\mathrm{rows}^{j_0}(j) \subsetneq \mathrm{rows}^{j_0}(D_1)$, then column $j$ belongs to some block $B_{\eta}$
with $\mathrm{rows}^{j_0}(D_{\eta}) \subsetneq \mathrm{rows}^{j_0}(D_1)$.
But this contradicts that row $i$ is not in any $D_{\alpha}$ for $\alpha = 2,\dots,\ell$.
Now it remains to show that all the terms in $D_1^+$ appear in the right-hand side of Equation (\ref{eqn:SumOfUnfactoredTerms}).
Let $(i,j) \in D_1$.
If $i \in [a]$, then $u_{ij}$ is a term in the right-hand side, as needed.
Otherwise, $i \in \mathrm{rows}^{j_0}(D_{\alpha})$ for some $\alpha \in \{2,\dots,\ell\}$.
Since $\mathrm{cols}(D_1) \subset \mathrm{cols}(D_{\alpha})$ by definition,
we must have $j \in \mathrm{cols}(D_{\alpha})$. So $(i,j) \in D_{\alpha}$.
Therefore, $u_{ij}$ is a term in $(D_{\alpha} \cap D_1)^+$.
Finally, since $D_{\gamma} \cap D_{\delta} = \emptyset$ for all
$\gamma, \delta \in \{2,\dots,\ell\}$ with $\gamma \neq \delta$,
no term is repeated.
\end{proof}
We can now use these propositions to prove Lemma \ref{lem:SumInBlock}.
\begin{proof}[Proof of Lemma \ref{lem:SumInBlock}]
We will induct over the poset $P(j_0)$.
For the base case, we let $D_{\alpha}$ be minimal in $P(j_0)$.
First, by Proposition \ref{prop:SumOfUnfactoredTerms} we have that
\[
D_{\alpha}^+ = \sum_{i \in \mathrm{rows}^{j_0}(D_{\alpha})} u_{i+}
\]
since $D_{\alpha}$ does not cover any element of $P(j_0)$.
Let $C \in \mathrm{Int}(S)$ with $\mathrm{rows}^{j_0}(D_{\alpha}) \subset \mathrm{rows}^{j_0}(C)$
such that $C \cap N_{j_0} \neq \emptyset$.
Then for any $i \in \mathrm{rows}^{j_0}(D_{\alpha})$, $(i,j_0) \in C$.
So $C \in \mathrm{Int}(ij_0)$ and $C^+$ is a factor of $x_{ij_0}$.
Conversely, if $C \in \mathrm{Int}(ij_0)$, then $(i,j_0) \in C$, so $N_{j_0} \cap C \neq \emptyset$.
It remains to show that every factor $C^+$ of $x_{ij_0}$ corresponding to a maximal intersection of maximal cliques
satisfies $\mathrm{rows}^{j_0}(D_{\alpha}) \subset \mathrm{rows}^{j_0}(C)$.
Let $(i,j_0) \in D_{\alpha}$.
Let $D_{\beta}$ and $D_{\gamma}$ be maximal cliques such that $C = D_{\beta} \cap D_{\gamma} \in \mathrm{Int}(ij_0)$.
Then we have $\mathrm{rows}^{j_0}(D_{\alpha}) \cap \mathrm{rows}^{j_0}(D_{\beta})$ and $\mathrm{rows}^{j_0}(D_{\alpha}) \cap \mathrm{rows}^{j_0}(D_{\gamma})$ nonempty.
Since $D_{\alpha}$ is minimal, this implies that $\mathrm{rows}^{j_0}(D_{\alpha}) \subset \mathrm{rows}^{j_0}(D_{\beta}), \mathrm{rows}^{j_0}(D_{\gamma})$.
So $\mathrm{rows}^{j_0}(D_{\alpha}) \subset \mathrm{rows}^{j_0}(C)$.
Therefore the factors of each $x_{ij_0}$ with $(i,j_0) \in D_{\alpha}$
that correspond to maximal intersections of maximal cliques are
\[
\prod_{\substack{C \in \mathrm{Int}(S) \\ C \cap N_{j_0} \neq \emptyset \\ \mathrm{rows}^{j_0}(D_{\alpha}) \subset \mathrm{rows}^{j_0}(C)}} C^+,
\]
as needed. The other factors of each $x_{ij_0}$ for $(i,j_0) \in D_{\alpha}$ are of the form
\[
\prod_{\substack{E \in \mathrm{Max}(S) \\ (i,j_0) \not\in E}} E^+.
\]
Since all $(i,j_0) \in D_{\alpha}$ are contained in the same maximal cliques when $D_{\alpha}$ is minimal in $P(j_0)$,
the terms corresponding to maximal cliques in each $x_{ij_0}$ are of the form
\[
\prod_{\substack{E \in \mathrm{Max}(S) \\ N_{j_0} \cap D_{\alpha} \cap E= \emptyset}} E^+,
\]
as needed.
So we have that
\begin{align*}
\sum_{(i,j_0) \in D_{\alpha}} x_{ij_0} &= \sum_{(i,j_0) \in D_{\alpha}} u_{i+} u_{+j_0}
\Big(\prod_{\substack{C \in \mathrm{Int}(S) \\ C \cap N_{j_0} \neq \emptyset \\ \mathrm{rows}^{j_0}(D_{\alpha}) \subset \mathrm{rows}^{j_0}(C)}} C^+ \Big)
\Big( \prod_{\substack{E \in \mathrm{Max}(S) \\ N_{j_0} \cap D_{\alpha} \cap E= \emptyset}} E^+ \Big) \\
&= u_{+j_0} \Big( \sum_{i \in \mathrm{rows}^{j_0}(D_{\alpha})} u_{i+} \Big)
\Big(\prod_{\substack{C \in \mathrm{Int}(S) \\ C \cap N_{j_0} \neq \emptyset \\ \mathrm{rows}^{j_0}(D_{\alpha}) \subset \mathrm{rows}^{j_0}(C)}} C^+ \Big)
\Big( \prod_{\substack{E \in \mathrm{Max}(S) \\ N_{j_0} \cap D_{\alpha} \cap E= \emptyset}} E^+ \Big) \\
&= u_{+j_0} D_{\alpha}^+ \Big(\prod_{\substack{C \in \mathrm{Int}(S) \\ C \cap N_{j_0} \neq \emptyset\\ \mathrm{rows}^{j_0}(D_{\alpha}) \subset \mathrm{rows}^{j_0}(C)}} C^+ \Big)
\Big( \prod_{\substack{E \in \mathrm{Max}(S) \\ N_{j_0} \cap D_{\alpha} \cap E= \emptyset}} E^+ \Big).
\end{align*}
Since $D_{\alpha}$ is the only maximal clique meeting $N_{j_0}$ whose rows are contained in $\mathrm{rows}^{j_0}(D_{\alpha})$, we have that
\[
D_{\alpha}^+ = \prod_{\substack{D \in \mathrm{Max}(S) \\ D \cap N_{j_0} \neq \emptyset \\ \mathrm{rows}^{j_0}(D) \subset \mathrm{rows}^{j_0}(D_{\alpha})}} D^+.
\]
So the lemma holds for the base case.
Without loss of generality, let $D_2, \dots, D_{\ell} \lessdot D_1$ in $P(j_0)$. Let rows $1, \dots, a$ be the rows of $D_1$ that are not in any $D_{\alpha}$ with $\alpha = 2, \dots,\ell$. We have the following chain of equalities.
\begin{align*} \allowdisplaybreaks[4]
\sum_{(i,j_0) \in D_1} x_{ij_0} &= \sum_{i=1}^a x_{ij_0} + \sum_{\alpha=2}^{\ell} \sum_{(i,j_0) \in D_{\alpha}} x_{ij_0} \\
&= u_{+j_0} \Big( \prod_{\substack{C \in \mathrm{Int}(S) \\ C \cap N_{j_0} \neq \emptyset \\ \mathrm{rows}^{j_0}(D_1) \subset \mathrm{rows}^{j_0}(C)}}C^+ \Big)
\Big(\prod_{\substack{D \in \mathrm{Max}(S) \\ D \cap N_{j_0} \neq \emptyset \\ \mathrm{rows}^{j_0}(D) \subsetneq \mathrm{rows}^{j_0}(D_1)}} D^+ \Big)
\Big(\prod_{\substack{ E \in \mathrm{Max}(S) \\ N_{j_0} \cap D_1 \cap E = \emptyset}} E^+\Big)
\Big(\sum_{i=1}^a u_{i+} \Big) \\
& \qquad \qquad + \sum_{\alpha=2}^{\ell} \sum_{(i,j_0) \in D_{\alpha}} x_{ij_0}\\
&= u_{+j_0}\Big( \prod_{\substack{C \in \mathrm{Int}(S) \\ C \cap N_{j_0} \neq \emptyset \\ \mathrm{rows}^{j_0}(D_1) \subset \mathrm{rows}^{j_0}(C)}}C^+ \Big)
\Big(\prod_{\substack{D \in \mathrm{Max}(S) \\ D \cap N_{j_0} \neq \emptyset \\ \mathrm{rows}^{j_0}(D) \subsetneq \mathrm{rows}^{j_0}(D_1)}} D^+ \Big)
\Big(\prod_{\substack{ E \in \mathrm{Max}(S) \\ N_{j_0} \cap D_1 \cap E = \emptyset}} E^+\Big)
\Big(\sum_{i=1}^a u_{i+} \Big) \\
& \qquad \qquad + \sum_{\alpha =2}^{\ell} u_{+j_0}
\Big( \prod_{\substack{C \in \mathrm{Int}(S) \\ C \cap N_{j_0} \neq \emptyset \\ \mathrm{rows}^{j_0}(D_{\alpha}) \subset \mathrm{rows}^{j_0}(C)}} C^+ \Big)
\Big( \prod_{\substack{D \in \mathrm{Max}(S) \\ D \cap N_{j_0} \neq \emptyset \\ \mathrm{rows}^{j_0}(D) \subset \mathrm{rows}^{j_0}(D_{\alpha})}} D^+ \Big)
\Big( \prod_{\substack{E \in \mathrm{Max}(S) \\ N_{j_0} \cap D_{\alpha} \cap E = \emptyset}} E^+ \Big) \\
&= u_{+j_0} \Big(\prod_{\substack{D \in \mathrm{Max}(S) \\ D \cap N_{j_0} \neq \emptyset \\ \mathrm{rows}^{j_0}(D) \subsetneq \mathrm{rows}^{j_0}(D_1)}} D^+ \Big)
\Big( \prod_{\substack{E \in \mathrm{Max}(S) \\ N_{j_0} \cap D_1 \cap E = \emptyset}} E^+ \Big) \\
& \qquad \qquad \times
\Big(\Big( \sum_{i=1}^a u_{i+} \times \prod_{\substack{C \in \mathrm{Int}(S) \\ C \cap N_{j_0} \neq \emptyset \\ \mathrm{rows}^{j_0}(D_1) \subset \mathrm{rows}^{j_0}(C)}} C^+ \Big)
+ \Big(\sum_{\alpha =2}^{\ell} \prod_{\substack{C \in \mathrm{Int}(S) \\ C \cap N_{j_0} \neq \emptyset \\ \mathrm{rows}^{j_0}(D_{\alpha}) \subset \mathrm{rows}^{j_0}(C)}} C^+ \Big) \Big) \\
&= u_{+j_0} \Big(\prod_{\substack{D \in \mathrm{Max}(S) \\ D \cap N_{j_0} \neq \emptyset \\ \mathrm{rows}^{j_0}(D) \subsetneq \mathrm{rows}^{j_0}(D_1)}} D^+ \Big)
\Big( \prod_{\substack{E \in \mathrm{Max}(S) \\ N_{j_0} \cap D_1 \cap E = \emptyset}} E^+ \Big) \\
& \qquad \qquad \times
\Big( \prod_{\substack{C \in \mathrm{Int}(S) \\ C \cap N_{j_0} \neq \emptyset \\ \mathrm{rows}^{j_0}(D_1) \subset \mathrm{rows}^{j_0}(C)}} C^+ \Big)
\Big( \sum_{i=1}^a u_{i+} + \sum_{\alpha =2}^{\ell} (D_{\alpha} \cap D_1)^+ \Big) \\
&=u_{+j_0} \Big(\prod_{\substack{D \in \mathrm{Max}(S) \\ D \cap N_{j_0} \neq \emptyset \\ \mathrm{rows}^{j_0}(D) \subsetneq \mathrm{rows}^{j_0}(D_1)}} D^+ \Big)
\Big( \prod_{\substack{E \in \mathrm{Max}(S) \\ N_{j_0} \cap D_1 \cap E = \emptyset}} E^+ \Big)
\Big( \prod_{\substack{C \in \mathrm{Int}(S) \\ C \cap N_{j_0} \neq \emptyset \\ \mathrm{rows}^{j_0}(D_1) \subset \mathrm{rows}^{j_0}(C)}} C^+ \Big)
\Big( D_1^+ \Big) \\
&= u_{+j_0} \Big(\prod_{\substack{D \in \mathrm{Max}(S) \\ D \cap N_{j_0} \neq \emptyset \\ \mathrm{rows}^{j_0}(D) \subset \mathrm{rows}^{j_0}(D_1)}} D^+ \Big)
\Big( \prod_{\substack{E \in \mathrm{Max}(S) \\ N_{j_0} \cap D_1 \cap E = \emptyset}} E^+ \Big)
\Big( \prod_{\substack{C \in \mathrm{Int}(S) \\ C \cap N_{j_0} \neq \emptyset \\ \mathrm{rows}^{j_0}(D_1) \subset \mathrm{rows}^{j_0}(C)}} C^+ \Big)
\end{align*}
The second equality follows from Proposition \ref{prop:SumExtraEntries}. The third equality is an application of the inductive hypothesis. The fourth equality follows from Proposition \ref{prop:MaxCliquesFactorOut} along with Remark \ref{rmk:MaxCliquesFactorOut}. The fifth equality follows from Proposition \ref{prop:IntersectionsFactorOut}. The sixth equality follows from Proposition \ref{prop:SumOfUnfactoredTerms}. The seventh equality follows from the fact that $D_1$ is the only maximal clique whose rows are equal to $\mathrm{rows}^{j_0}(D_1)$. This completes our proof by induction.
\end{proof}
\section{Checking the Conditions of Birch's Theorem}\label{sec:BirchsThm}
In the previous section, we wrote a formula for the sum of $x_{ij_0}$ where
$i$ ranges over the rows of some maximal clique $D_{\alpha}$.
Since the block $B_0$ induces its own maximal clique,
Lemma \ref{lem:SumInBlock} allows us to write the sum of the $x_{ij_0}$s for $1 \leq i \leq r$ in the following concise way.
This in turn verifies that the proposed maximum likelihood estimate $\hat{p}$
has the same sufficient statistics as the normalized data $u / u_{++}$,
which is one of the conditions of Birch's theorem.
\begin{cor}\label{cor:SumEntireBlock}
Let $S$ be DS-free. Then for any column $j_0$,
\[
\sum_{i=1}^r x_{ij_0} = u_{+j_0} \prod_{D \in \mathrm{Max}(S)} D^+.
\]
\end{cor}
\begin{proof}
The poset $P(j_0)$ has a unique maximal element $D_0$ with $\mathrm{rows}^{j_0}(D_0) = \mathrm{rows}^{j_0}(j_0)$.
Note that $D_0$ may include more columns than $j_0$
since it may have columns whose nonzero rows are the same as or contain those of $j_0$.
By Proposition \ref{prop:MaxIntCharacterization}, there are no maximal intersections of maximal cliques $C$
with $\mathrm{rows}^{j_0}(D_0) \subset \mathrm{rows}^{j_0}(C)$, since $D_0$ is maximal in $P(j_0)$.
It follows from Proposition \ref{prop:MaxCliqueCharacterization}
that a maximal clique $D$ intersects $N_{j_0}$ if and only if it has $\mathrm{rows}^{j_0}(D) \subset \mathrm{rows}^{j_0}(D_0)$.
Since $N_{j_0} \subset D_0$, we have that $N_{j_0} \cap D_0 \cap E = N_{j_0} \cap E$ for any clique $E$.
By Lemma \ref{lem:SumInBlock}, we have
\begin{align*}
\sum_{i=1}^r x_{ij_0} &= u_{+j_0} \Big( \prod_{\substack{D \in \mathrm{Max}(S) \\ D \cap N_{j_0} \neq \emptyset \\ \mathrm{rows}^{j_0}(D) \subset \mathrm{rows}^{j_0}(D_0)}}D^+ \Big)
\Big( \prod_{\substack{E \in \mathrm{Max}(S) \\ N_{j_0} \cap D_0 \cap E = \emptyset}} E^+ \Big)\\
&= u_{+j_0} \Big( \prod_{\substack{D \in \mathrm{Max}(S) \\ D \cap N_{j_0} \neq \emptyset}} D^+ \Big)
\Big(\prod_{\substack{E \in \mathrm{Max}(S) \\ E \cap N_{j_0} = \emptyset}} E^+ \Big) \\
&= u_{+j_0} \prod_{D \in \mathrm{Max}(S)} D^+,
\end{align*}
as needed.
\end{proof}
Now we will address the condition of Birch's theorem which states that the maximum likelihood estimate must satisfy the equations defining $\mathcal{M}_S$.
\begin{lemma}\label{lem:MLEinModel}
Let $S$ be doubly chordal bipartite. Let $u \in \mathbb{R}^{S}$ be a generic matrix of counts. Then the point $(\hat{p}_{ij} \mid (i,j) \in S)$ specified in Theorem \ref{thm:Main} is in the Zariski closure of $\mathcal{M}_S$.
\end{lemma}
In order to prove this lemma, we must first describe the vanishing ideal of $\mathcal{M}_S$. We denote this ideal $\mathcal{I}(\mathcal{M}_S)$. It is a subset of the polynomial ring in $\#S$ variables,
\[R = \mathbb{C}[p_{ij} \mid (i,j) \in S].\]
\begin{prop}\label{prop:DefiningEquations}
Let $S$ be chordal bipartite. Then $\mathcal{I}(\mathcal{M}_S)$ is generated by the $2 \times 2$ minors of the matrix form of $S$ that contain no zeros. That is, $\mathcal{I}(\mathcal{M}_S)$ is generated by all binomials of the form
\[
p_{ij} p_{k\ell} - p_{i \ell} p_{kj},
\]
such that $(i,j), (k,\ell), (i,\ell), (k,j) \in S$.
\end{prop}
\begin{proof}
This follows from results in \cite[Chapter~10.1]{aoki2012}. The \emph{loops} on $S$ correspond to cycles in $G_S$. The \emph{df 1 loops} as defined in \cite[Chapter~10.1]{aoki2012} are those whose support does not properly contain the support of any other loop; that is, they correspond to cycles in $G_S$ with no chords. Since $G_S$ is chordal bipartite, each of these cycles contains exactly four edges. Therefore the df 1 loops on $S$ all have degree two, and each corresponds to a $2 \times 2$ minor of $S$ by definition. Theorem 10.1 of \cite{aoki2012} states that the df 1 loops form a Markov basis for $\mathcal{M}_S$. Therefore, by the Fundamental Theorem of Markov Bases \cite[Theorem~3.1]{diaconis1998}, the $2 \times 2$ minors of $S$ that contain no zeros form a generating set for $\mathcal{I}(\mathcal{M}_S)$.
\end{proof}
\begin{ex}
Consider the matrix $S$ from Example \ref{ex:RunningExample}.
In Figure \ref{Fig:RunningGraph}, we see that $G_S$ has exactly one cycle. This cycle corresponds to the only $2\times2$ minor in $S$ that contains no zeros,
which is the $\{2,3\} \times \{1,2\}$ submatrix.
Therefore the (complex) Zariski closure of $\mathcal{M}_S$ is the variety of the ideal generated by the polynomial $p_{21}p_{32} - p_{31}p_{22}$.
\end{ex}
\begin{prop}\label{prop:ModelEquationsSatisfied}
Let $S$ be a set of indices such that $G_S$ is doubly chordal bipartite. Let $\{i_1,i_2\} \times \{j_1,j_2\}$ be a set of indices that corresponds to a $2 \times 2$ minor of $S$ that contains no zeros. Let $\hat{p}_{i_1j_1}, \hat{p}_{i_2 j_2}, \hat{p}_{i_1j_2}, \hat{p}_{i_2 j_1}$ be as defined in Theorem \ref{thm:Main}. Then
\begin{equation}\label{eqn:2by2minor}
\hat{p}_{i_1j_1}\hat{p}_{i_2j_2} = \hat{p}_{i_1j_2} \hat{p}_{i_2 j_1}
\end{equation}
\end{prop}
\begin{proof}
The terms $u_{i_1+}, u_{i_2+}, u_{+j_1}$ and $u_{+j_2}$ each appear once in the numerator on each side of Equation (\ref{eqn:2by2minor}), and $u_{++}^2$ appears in both denominators. Furthermore if $(i_1,j_1)$ and $(i_2,j_2)$ are both contained in any clique in $S$, then $(i_1,j_2)$ and $(i_2,j_1)$ are also in the clique by definition. So any term that is squared in the numerator or denominator on one side of Equation (\ref{eqn:2by2minor}) is also squared on the other side. Therefore it suffices to show that $\mathrm{Max}(i_1j_1) \cup \mathrm{Max}(i_2j_2) = \mathrm{Max}(i_1j_2) \cup \mathrm{Max}(i_2j_1)$ and $\mathrm{Int}(i_1j_1) \cup \mathrm{Int}(i_2j_2) = \mathrm{Int}(i_1j_2) \cup \mathrm{Int}(i_2j_1)$.
First, we will show that $\mathrm{Max}(i_1j_1) \cup \mathrm{Max}(i_2j_2) = \mathrm{Max}(i_1j_2) \cup \mathrm{Max}(i_2j_1)$.
Let $D \in \mathrm{Max}(i_1j_1)$. If $(i_2,j_1) \in D$, then we are done.
Now suppose that $(i_2,j_1) \not\in D$.
Since $D$ intersects column $j_1$, by Proposition \ref{prop:MaxCliqueCharacterization} we know that $D$ has the form $D_{\alpha}$
for some block of columns $B_{\alpha}$ that are identical on $\mathrm{rows}^{j_1}(j_1)$.
Let $\mathrm{rows}^{j_1}(D_{\alpha})$ denote the set of nonzero rows of $D_{\alpha}$ that are also nonzero rows of $j_1$.
Since $(i_2,j_1) \not\in D$, we have that $i_2 \not\in \mathrm{rows}^{j_1}(D_{\alpha})$ while $i_1 \in \mathrm{rows}^{j_1}(D_{\alpha})$.
Since $\mathrm{rows}^{j_1}(j_2) \cap \mathrm{rows}^{j_1}(D_{\alpha})$ is nonempty, and since $\mathrm{rows}^{j_1}(j_2) \not\subset \mathrm{rows}^{j_1}(D_{\alpha})$,
we must have that $\mathrm{rows}^{j_1}(D_{\alpha}) \subset \mathrm{rows}^{j_1}(j_2)$ by the DS-free condition.
Therefore $(i_1,j_2) \in D_{\alpha}$ by definition of $D_{\alpha}$.
So $D_{\alpha} = D \in \mathrm{Max}(i_1j_2)$, as needed.
Switching the roles of $i_1$ and $i_2$ or the roles of $j_1$ and $j_2$ yields the desired equality.
Now let $C \in \mathrm{Int}(i_1j_1)$. Then $C = D_{\alpha} \cap D_{\beta}$ where $D_{\beta} \lessdot D_{\alpha}$ in the poset $P(j_1)$ by Proposition \ref{prop:MaxIntCharacterization}.
If $(i_2, j_1) \in C$, then we are done.
Now suppose that $(i_2,j_1) \not\in C$.
Then we have that $i_2 \not\in \mathrm{rows}^{j_1}(D_{\beta})$, whereas $i_1 \in \mathrm{rows}^{j_1}(D_{\alpha})$ and $i_1 \in \mathrm{rows}^{j_1}(D_{\beta})$.
So we must have that $\mathrm{rows}^{j_1}(D_{\beta}) \subsetneq \mathrm{rows}^{j_1}(j_2)$ by the DS-free condition.
Since $\mathrm{rows}^{j_1}(D_{\alpha}) \cap \mathrm{rows}^{j_1}(j_2)$ is nonempty, we must have that
$\mathrm{rows}^{j_1}(D_{\alpha}) \subset \mathrm{rows}^{j_1}(j_2)$. This follows from the DS-free condition and the fact that $D_{\alpha}$ covers $D_{\beta}$ in the poset $P(j_1)$.
Therefore $(i_1,j_2) \in D_{\alpha}, D_{\beta}$ by definition of these cliques.
So $C \in \mathrm{Int}(i_1j_2)$, as needed.
Again, switching the roles of $i_1$ and $i_2$ or the roles of $j_1$ and $j_2$ in the above proof yields the desired equality.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:MLEinModel}]
By Proposition \ref{prop:DefiningEquations}, the vanishing ideal of $\mathcal{M}_S$ is generated by the fully-observed $2 \times 2$ minors of $S$.
By Proposition \ref{prop:ModelEquationsSatisfied}, each of these $2 \times 2$ minors vanishes when evaluated on $\hat{p}$.
\end{proof}
We can now prove Theorem \ref{thm:Main}.
\begin{proof}[Proof of Theorem \ref{thm:Main}]
Let $G_S$ be doubly chordal bipartite.
Let $u \in \mathbb{R}_+^S$ be a matrix of counts.
By Corollary \ref{cor:SumEntireBlock}, the column marginals of $u_{++} \hat{p}$ are equal to those of $u$.
Switching the roles of rows and columns in all of the proofs used to obtain this corollary shows that the row marginals are also equal.
Corollary \ref{cor:SumEntireBlock} also implies that $\hat{p}_{++} = 1$ since the vector of all ones is in the rowspan of $A(S)$.
So by Lemma \ref{lem:MLEinModel} and the fact that each $\hat{p}_{ij}$ is positive, $\hat{p} \in \mathcal{M}_S$.
Hence by Birch's theorem, $\hat{p}$ is the maximum likelihood estimate for $u$.
The other direction is exactly the contrapositive of Theorem \ref{thm:NotDCB}.
\end{proof}
\section*{Acknowledgments}
Jane Coons was partially supported by the US National Science Foundation (DGE 1746939).
Seth Sullivant was partially supported by the US National Science Foundation (DMS 1615660).
\bibliographystyle{acm}
| 2024-02-18T23:39:56.855Z | 2020-10-28T01:26:45.000Z | algebraic_stack_train_0000 | 911 | 15,998 |
|
The Bombieri-Vinogradov Theorem \cite{Bombieri,Vinogradov} states that for every $A>0$ and $B=B(A)$ sufficiently large in terms of $A$ we have
\begin{equation}
\sum_{q\le x^{1/2}/(\log{x})^B}\sup_{(a,q)=1}\Bigl|\pi(x;q,a)-\frac{\pi(x)}{\phi(q)}\Bigr|\ll_A\frac{x}{(\log{x})^A},
\label{eq:BV}
\end{equation}
where $\pi(x)$ is the number of primes less than $x$, and $\pi(x;q,a)$ is the number of primes less than $x$ congruent to $a\Mod{q}$. This implies that primes of size $x$ are roughly equidistributed in residue classes to moduli of size up to $x^{1/2-\epsilon}$, on average over the moduli. For many applications in analytic number theory (particularly sieve methods) this estimate is very important, and serves as an adequate substitute for the Generalized Riemann Hypothesis (which would imply a similar statement for each \textit{individual} arithmetic progression).
We believe that one should be able to improve \eqref{eq:BV} to allow for larger moduli, but unfortunately we do not know how to establish \eqref{eq:BV} with the summation extended to $q\le x^{1/2+\delta}$ for any fixed $\delta>0$. The Elliott-Halberstam Conjecture \cite{ElliottHalberstam} is the strongest statement of this type, and asserts that for any $\epsilon,A>0$
\begin{equation}
\sum_{q\le x^{1-\epsilon} }\sup_{(a,q)=1}\Bigl|\pi(x;q,a)-\frac{\pi(x)}{\phi(q)}\Bigr|\ll_{\epsilon,A}\frac{x}{(\log{x})^A}.
\label{eq:EH}
\end{equation}
Quantitatively stronger variants of \eqref{eq:BV} such as \eqref{eq:EH} would naturally give quantitatively stronger estimates of various quantities in analytic number theory relying on \eqref{eq:BV}.
In many applications, particularly those coming from sieve methods, one does not quite need the full strength of an estimate of the type \eqref{eq:BV}. It is often sufficient to measure the difference between $\pi(x;q,a)$ and $\pi(x)/\phi(q)$ only for a fixed bounded integer $a$ (such as $a=1$ or $a=2$) rather than taking the worst residue class for each modulus. Moreover, it is also often sufficient to weight the differences $\pi(x;q,a)-\pi(x)/\phi(q)$ by `well-factorable' coefficients (which naturally appear in sieve problems) rather than taking absolute values. With these technical weakenings we \textit{can} produce estimates analogous to \eqref{eq:BV} which involve moduli larger than $x^{1/2}$. Formally, we define `well-factorable' weights as follows.
\begin{dfntn}[Well factorable]
Let $Q\in\mathbb{R}$. We say a sequence $\lambda_q$ is \textbf{well factorable of level $Q$} if, for any choice of factorization $Q=Q_1Q_2$ with $Q_1,Q_2\ge 1$, there exist two sequences $\gamma^{(1)}_{q_1},\gamma^{(2)}_{q_2}$ such that:
\begin{enumerate}
\item $|\gamma^{(1)}_{q_1}|,|\gamma^{(2)}_{q_2}|\le 1$ for all $q_1,q_2$.
\item $\gamma^{(i)}_{q}$ is supported on $1\le q\le Q_i$ for $i\in\{1,2\}$.
\item We have
\[
\lambda_q=\sum_{q=q_1q_2}\gamma^{(1)}_{q_1}\gamma^{(2)}_{q_2}.
\]
\end{enumerate}
\end{dfntn}
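To illustrate that this is a genuine restriction (the example plays no role in what follows), consider $\lambda_q=\mathbf{1}_{q=q_0}$ for a fixed prime $q_0\in(Q^{1/2},Q]$. Taking the factorization $Q=Q_1Q_2$ with $Q_1=Q_2=Q^{1/2}$, any representation
\[
\lambda_{q_0}=\sum_{q_0=q_1q_2}\gamma^{(1)}_{q_1}\gamma^{(2)}_{q_2}
\]
would require a factorization $q_0=q_1q_2$ with $q_1,q_2\le Q^{1/2}$, which is impossible since $q_0$ is a prime exceeding $Q^{1/2}$. In particular, any sequence which is well factorable of level $Q$ must vanish on the primes in $(Q^{1/2},Q]$.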
The following celebrated result of Bombieri-Friedlander-Iwaniec \cite[Theorem 10]{BFI1} then gives a bound allowing for moduli as large as $x^{4/7-\epsilon}$ in this setting.
\begin{nthrm}[Bombieri, Friedlander, Iwaniec]\label{nthrm:BFI1}
Let $a\in\mathbb{Z}$ and $A,\epsilon>0$. Let $\lambda_q$ be a sequence which is well-factorable of level $Q\le x^{4/7-\epsilon}$. Then we have
\[
\sum_{\substack{q\le Q\\(q,a)=1}}\lambda_q\Bigl(\pi(x;q,a)-\frac{\pi(x)}{\phi(q)}\Bigr)\ll_{a,A,\epsilon}\frac{x}{(\log{x})^A}.
\]
\end{nthrm}
In this paper we consider weights satisfying a slightly stronger condition of being `triply well factorable'. For these weights we can improve on the range of moduli.
\begin{dfntn}[Triply well factorable]\label{dfntn:WellFactorable}
Let $Q\in\mathbb{R}$. We say a sequence $\lambda_q$ is \textbf{triply well factorable of level $Q$} if, for any choice of factorization $Q=Q_1Q_2Q_3$ with $Q_1,Q_2,Q_3\ge 1$, there exist three sequences $\gamma^{(1)}_{q_1},\gamma^{(2)}_{q_2},\gamma^{(3)}_{q_3}$ such that:
\begin{enumerate}
\item $|\gamma^{(1)}_{q_1}|,|\gamma^{(2)}_{q_2}|,|\gamma^{(3)}_{q_3}|\le 1$ for all $q_1,q_2,q_3$.
\item $\gamma^{(i)}_{q}$ is supported on $1\le q\le Q_i$ for $i\in\{1,2,3\}$.
\item We have
\[
\lambda_q=\sum_{q=q_1q_2q_3}\gamma^{(1)}_{q_1}\gamma^{(2)}_{q_2}\gamma^{(3)}_{q_3}.
\]
\end{enumerate}
\end{dfntn}
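We note in passing that triple well-factorability implies well-factorability: given any factorization $Q=Q_1Q_2$, we may apply Definition \ref{dfntn:WellFactorable} with the factorization $Q=Q_1\cdot Q_2\cdot 1$, which forces $\gamma^{(3)}$ to be supported on $q_3=1$, so that
\[
\lambda_q=\sum_{q=q_1q_2}\gamma^{(1)}_{q_1}\bigl(\gamma^{(2)}_{q_2}\gamma^{(3)}_{1}\bigr)
\]
with $|\gamma^{(2)}_{q_2}\gamma^{(3)}_{1}|\le 1$. Thus every sequence which is triply well factorable of level $Q$ is also well factorable of level $Q$, and the hypothesis of Theorem \ref{thrm:Factorable} below is correspondingly more restrictive than that of Theorem \ref{nthrm:BFI1}.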
With this definition, we are able to state our main result.
\begin{thrm}\label{thrm:Factorable}
Let $a\in\mathbb{Z}$ and $A,\epsilon>0$. Let $\lambda_q$ be triply well factorable of level $Q\le x^{3/5-\epsilon}$. Then we have
\[
\sum_{\substack{q\le Q\\ (a,q)=1}}\lambda_q\Bigl(\pi(x;q,a)-\frac{\pi(x)}{\phi(q)}\Bigr)\ll_{a,A,\epsilon} \frac{x}{(\log{x})^A}.
\]
\end{thrm}
The main point of this theorem is the quantitative improvement over Theorem \ref{nthrm:BFI1}, allowing us to handle moduli as large as $x^{3/5-\epsilon}$ (instead of $x^{4/7-\epsilon}$). Theorem \ref{thrm:Factorable} has the disadvantage of imposing the stronger requirement that the weights be triply well factorable rather than merely well-factorable, but we expect that Theorem \ref{thrm:Factorable} (or the ideas underlying it) will enable us to obtain quantitative improvements to several problems in analytic number theory where the best estimates currently rely on Theorem \ref{nthrm:BFI1}.
It appears that handling moduli of size $x^{3/5-\epsilon}$ is the limit of the current method. In particular, there appears to be no further benefit in imposing stronger constraints on the coefficients, such as being `quadruply well factorable'.
As mentioned above, the main applications of such results come when using sieves. Standard sieve weights are not well-factorable (and so not triply well factorable), but Iwaniec \cite{IwaniecFactorable} showed that a slight variant of the upper bound $\beta$-sieve weights of level $D$ (which produces essentially identical results to the standard $\beta$-sieve weights) is a linear combination of sequences which are well-factorable of level $D$ provided $\beta\ge 1$. In particular, Theorem \ref{nthrm:BFI1} applies to the factorable variant of the upper bound sieve weights for the linear ($\beta=1$) sieve, for example.
The factorable variant of the $\beta$-sieve weights of level $D$ is a linear combination of triply well factorable sequences of level $D$ provided $\beta\ge 2$, and so Theorem \ref{thrm:Factorable} automatically applies to these weights. Unfortunately it is the linear ($\beta=1$) sieve weights which are most important for many applications, and these are not triply well factorable of level $D$ (despite essentially being well-factorable). Nevertheless, the linear sieve weights have good factorization properties, and it turns out that the linear sieve weights of level $x^{7/12}$ are very close to being triply well factorable of level $x^{3/5}$. In particular, we have the following result.
\begin{thrm}\label{thrm:Linear}
Let $a\in\mathbb{Z}$ and $A,\epsilon>0$. Let $\lambda^+_q$ be the well-factorable upper bound sieve weights for the linear sieve of level $D\le x^{7/12-\epsilon}$. Then we have
\[
\sum_{\substack{q\le x^{7/12-\epsilon}\\ (q,a)=1} }\lambda_q^+\Bigl(\pi(x;q,a)-\frac{\pi(x)}{\phi(q)}\Bigr)\ll_{a,A,\epsilon} \frac{x}{(\log{x})^A}.
\]
\end{thrm}
This enables us to obtain good savings in the error term weighted by the linear sieve for larger moduli than was previously known. In particular, Theorem \ref{thrm:Linear} extends the range of moduli we are able to handle from $x^{4/7-\epsilon}$, coming from the Bombieri-Friedlander-Iwaniec result \cite[Theorem 10]{BFI1}, to $x^{7/12-\epsilon}$.
It is likely that Theorem \ref{thrm:Linear} directly improves several results based on sieves. It doesn't directly improve upon estimates such as the upper bound for the number of twin primes, but we expect the underlying methods to give a suitable improvement for several such applications when combined with techniques such as Chen's switching principle or Harman's sieve (see \cite{Harman,Chen,FouvryGrupp,FouvryGrupp2,Wu}). We intend to address this and related results in future work. Moreover, we expect that there are other upper bound sieves closely related to the linear sieve which are much closer to being triply well factorable, and so we expect technical variants of Theorem \ref{thrm:Linear} adapted to these sieve weights to give additional improvements.
\begin{rmk}
Drappeau \cite{Drappeau} proved equidistribution for smooth numbers in arithmetic progressions to moduli of size up to $x^{3/5-\epsilon}$. The flexible factorizations available for smooth numbers allow one to use the most efficient estimates on convolutions (but one has to overcome additional difficulties in secondary main terms). Since our work essentially reduces to the original estimates of Bombieri--Friedlander--Iwaniec in these cases, it provides no benefit in this setting, but it partially explains why we have the same limitation of $x^{3/5-\epsilon}$.
\end{rmk}
\section{Proof outline}
The proof of Theorem \ref{thrm:Factorable} is a generalization of the method used to prove Theorem \ref{nthrm:BFI1}, and essentially includes the proof of Theorem \ref{nthrm:BFI1} as a special case. As with previous approaches, we use a combinatorial decomposition for the primes (Heath-Brown's identity) to reduce the problem to estimating bilinear quantities in arithmetic progressions. By Fourier expansion and several intermediate manipulations this reduces to estimating certain multidimensional exponential sums, which are ultimately bounded using the work of Deshouillers-Iwaniec \cite{DeshouillersIwaniec} coming from the spectral theory of automorphic forms via the Kuznetsov trace formula.
To obtain an improvement over the previous works we exploit the additional flexibility of factorizations of the moduli to benefit from the fact that now the weights can be factored into three pieces rather than two. This gives us enough room to balance the sums appearing from diagonal and off-diagonal terms perfectly in a wide range.
More specifically, let us recall the main ideas behind Theorem \ref{nthrm:BFI1}. A combinatorial decomposition leaves us to estimate for various ranges of $N,M,Q,R$
\[
\sum_{q\sim Q}\gamma_q \sum_{r\sim R}\lambda_r \sum_{m\sim M}\beta_m\sum_{n\sim N}\alpha_n\Bigl(\mathbf{1}_{n m\equiv a\Mod{qr}}-\frac{\mathbf{1}_{(n m,qr)=1}}{\phi(qr)}\Bigr),
\]
for essentially arbitrary 1-bounded sequences $\gamma_q,\lambda_r,\beta_m,\alpha_n$. Applying Cauchy-Schwarz in the $m$, $q$ variables and then Fourier expanding the $m$-summation and using Bezout's identity reduces this to bounding something like
\[
\sum_{q\sim Q}\sum_{\substack{n_1,n_2\sim N\\ n_1\equiv n_2\Mod{q}}}\alpha_{n_1}\overline{\alpha_{n_2}}\sum_{r_1,r_2\sim R}\lambda_{r_1}\overline{\lambda_{r_2}}\sum_{h\sim H}e\Bigl(\frac{ah \overline{n_2 q r_1}(n_1-n_2)}{n_1r_2}\Bigr),
\]
where $H\approx NQR^2/x$; up to logarithmic factors this is the dual range $QR^2/M$ arising from completing the sum over $m\sim M\asymp x/N$ to the modulus $qr_1r_2\asymp QR^2$. Writing $n_1-n_2=qf$ and switching the $q$-summation to an $f$-summation, then applying Cauchy-Schwarz in the $n_1,n_2,f,r_2$ variables, leaves us to bound
\[
\sum_{f\sim N/Q}\sum_{\substack{n_1,n_2\sim N\\ n_1\equiv n_2\Mod{f}}}\sum_{r_2\sim R}\Bigl|\sum_{r_1\sim R}\gamma_{r_1}\sum_{h\sim H}e\Bigl(\frac{a h f \overline{r_1 n_2}}{n_1r_2}\Bigr)\Bigr|^2.
\]
Bombieri-Friedlander-Iwaniec then drop the congruence condition on $n_1,n_2$, combine $n_1,r_2$ into a new variable $c$ and then estimate the resulting exponential sums via the bounds of Deshouillers-Iwaniec. This involves applying the Kuznetsov trace formula for the congruence subgroup $\Gamma_0(r_1r_1')$ (where $r_1,r_1'\sim R$ are the two copies of the $r_1$ variable arising from expanding the square). This is a large level (of size $R^2$), which means that the resulting bounds deteriorate rapidly with $R$. We make use of the fact that if the moduli factorize suitably, then we can reduce this level at the cost of worsening the diagonal terms slightly.
In particular, if the above $\lambda_r$ coefficients were of the form $\kappa_s\star\nu_t$, then instead we could apply the final Cauchy-Schwarz in $f$, $n_1$, $n_2$, $r_2$ and $s_1$, leaving us instead to bound
\[
\sum_{f\sim N/Q}\sum_{\substack{n_1,n_2\sim N\\ n_1\equiv n_2\Mod{f}}}\sum_{r_2\sim R}\sum_{s_1\sim S}\Bigl|\sum_{t_1\sim T}\nu_{t_1}\sum_{h\sim H}e\Bigl(\frac{a h f \overline{s_1 t_1 n_2}}{n_1r_2}\Bigr)\Bigr|^2.
\]
(Here $ST\approx R$.) We have now increased the diagonal contribution by a factor of $S$, but the level of the relevant congruence subgroup has dropped from $R^2$ to $T^2$. By dropping the congruence condition and combining $c=n_1r_2$ and $d=n_2s_1$, we can then apply the Deshouillers-Iwaniec estimates in a more efficient manner, giving an additional saving over the previous approach in all the important regimes. This ultimately allows us to handle moduli as large as $x^{3/5-\epsilon}$ in Theorem \ref{thrm:Factorable}. On its own this approach doesn't quite cover all the relevant ranges for $N$, but combining it with known estimates for the divisor function in arithmetic progressions (based on the Weil bound) allows us to cover the remaining ranges.
We view the main interest of Theorem \ref{thrm:Factorable} and Theorem \ref{thrm:Linear} as their applicability to sieve problems. It is therefore unfortunate that Theorem \ref{thrm:Factorable} doesn't apply directly to the (well-factorable variant of the) linear sieve weights. To overcome this limitation, we exploit the fact that our main technical result on convolutions (Proposition \ref{prpstn:MainProp}) actually gives a stronger estimate than what is captured by Theorem \ref{thrm:Factorable}. Moreover, it is necessary to study the precise construction of the linear sieve weights to show that they enjoy good factorization properties. Indeed, we recall that the support set for the upper bound linear sieve weights of level $D$ is
\[
\mathcal{D}^+(D)=\Bigl\{p_1\cdots p_r:\, p_1\ge p_2\ge \dots \ge p_r,\,\,p_1\cdots p_{2j}p_{2j+1}^3\le D\text{ for $0\le j<r/2$}\Bigr\}.
\]
If $p_1\cdots p_r$ is close to $D$, then most of the $p_i$ must be very small, and so the weights are supported on very well factorable numbers. It is really only the largest few prime factors $p_i$ which obstruct finding factors in given ranges, and so by handling them explicitly we can exploit this structure much more fully.
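For instance (a direct check from the definition, recorded here only for orientation), taking $j=0$ in the defining inequalities shows that every $n=p_1\cdots p_r\in\mathcal{D}^+(D)$ with $r\ge 1$ satisfies
\[
P^+(n)=p_1\le D^{1/3},
\]
while for $r\ge 3$ taking $j=1$ gives $p_3\le (D/(p_1p_2))^{1/3}$, and more generally $p_{2j+1}\le (D/(p_1\cdots p_{2j}))^{1/3}$ for $0\le j<r/2$. Thus once the few largest prime factors have been accounted for, the remaining part of $n$ is forced to factor very flexibly.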
Proposition \ref{prpstn:Factorization} is a technical combinatorial proposition showing that the linear sieve weights enjoy rather stronger factorization properties than simply what is captured by their being well-factorable. Although these are not sufficient for triple well-factorability, they are sufficient for the more technical conditions coming from Proposition \ref{prpstn:MainProp}. This ultimately leads to Theorem \ref{thrm:Linear}.
\section{Acknowledgements}
I would like to thank John Friedlander, Ben Green, Henryk Iwaniec and Kyle Pratt for useful discussions and suggestions. JM is supported by a Royal Society Wolfson Merit Award, and this project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 851318).
\section{Notation}
We will use the Vinogradov $\ll$ and $\gg$ asymptotic notation, and the big oh $O(\cdot)$ and $o(\cdot)$ asymptotic notation. $f\asymp g$ will denote the conditions $f\ll g$ and $g\ll f$ both hold. Dependence on a parameter will be denoted by a subscript. We will view $a$ (the residue class $\Mod{q}$) as a fixed positive integer throughout the paper, and any constants implied by asymptotic notation will be allowed to depend on $a$ from this point onwards. Similarly, throughout the paper, we will let $\epsilon$ be a single fixed small real number; $\epsilon=10^{-100}$ would probably suffice. Any bounds in our asymptotic notation will also be allowed to depend on $\epsilon$.
The letter $p$ will always be reserved to denote a prime number. We use $\phi$ to denote the Euler totient function, $e(x):=e^{2\pi i x}$ the complex exponential, $\tau_k(n)$ the $k$-fold divisor function, $\mu(n)$ the M\"obius function. We let $P^-(n)$, $P^+(n)$ denote the smallest and largest prime factors of $n$ respectively, and $\hat{f}$ denote the Fourier transform of $f$ over $\mathbb{R}$ - i.e. $\hat{f}(\xi)=\int_{-\infty}^{\infty}f(t)e(-\xi t)dt$. We use $\mathbf{1}$ to denote the indicator function of a statement. For example,
\[
\mathbf{1}_{n\equiv a\Mod{q}}=\begin{cases}1,\qquad &\text{if }n\equiv a\Mod{q},\\
0,&\text{otherwise}.
\end{cases}
\]
For $(n,q)=1$, we will use $\overline{n}$ to denote the inverse of the integer $n$ modulo $q$; the modulus will be clear from the context. For example, we may write $e(a\overline{n}/q)$ - here $\overline{n}$ is interpreted as the integer $m\in \{0,\dots,q-1\}$ such that $m n\equiv 1\Mod{q}$. Occasionally we will also use $\overline{\lambda}$ to denote complex conjugation; the distinction of the usage should be clear from the context. For a complex sequence $\alpha_{n_1,\dots,n_k}$, $\|\alpha\|_2$ will denote the $\ell^2$ norm $\|\alpha\|_2=(\sum_{n_1,\dots,n_k}|\alpha_{n_1,\dots,n_k}|^2)^{1/2}$.
Summations are assumed to be over all positive integers unless noted otherwise. We use the notation $n\sim N$ to denote the conditions $N<n\le 2N$.
We will let $z_0:=x^{1/(\log\log{x})^3}$ and $y_0:=x^{1/\log\log{x}}$ be two parameters depending on $x$, which we will think of as a large quantity. We will let $\psi_0:\mathbb{R}\rightarrow\mathbb{R}$ denote a fixed smooth function supported on $[1/2,5/2]$ which is identically equal to $1$ on the interval $[1,2]$ and satisfies the derivative bounds $\|\psi_0^{(j)}\|_\infty\ll (4^j j!)^2$ for all $j\ge 0$. (See \cite[Page 368, Corollary]{BFI2} for the construction of such a function.)
We will repeatedly make use of the following condition.
\begin{dfntn}[Siegel-Walfisz condition]
We say that a complex sequence $\alpha_n$ satisfies the \textbf{Siegel-Walfisz condition} if for every $d\ge 1$, $q\ge 1$ and $(a,q)=1$ and every $A>1$ we have
\begin{equation}
\Bigl|\sum_{\substack{n\sim N\\ n\equiv a\Mod{q}\\ (n,d)=1}}\alpha_n-\frac{1}{\phi(q)}\sum_{\substack{n\sim N\\ (n,d q)=1}}\alpha_n\Bigr|\ll_A \frac{N\tau(d)^{O(1)}}{(\log{N})^A}.
\label{eq:SiegelWalfisz}
\end{equation}
\end{dfntn}
We note that $\alpha_n$ satisfies the Siegel-Walfisz condition if $\alpha_n=1$ or if $\alpha_n=\mu(n)$.
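For orientation, we recall why the case $\alpha_n=\mu(n)$ holds: the Siegel-Walfisz theorem gives, for any fixed $A>0$, any $(a,q)=1$ and uniformly in $q$,
\[
\sum_{\substack{n\le N\\ n\equiv a\Mod{q}}}\mu(n)\ll_A \frac{N}{(\log{N})^A},
\]
and the coprimality condition $(n,d)=1$ in \eqref{eq:SiegelWalfisz} may then be removed, at the cost of the $\tau(d)^{O(1)}$ factor, by splitting the sum according to the common divisor of $n$ and $d$.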
\section{Proof of Theorem \ref{thrm:Factorable}}\label{sec:Factorable}
In this section we establish Theorem \ref{thrm:Factorable} assuming two propositions, namely Proposition \ref{prpstn:WellFactorable} and Proposition \ref{prpstn:DoubleDivisor}, given below.
\begin{prpstn}[Well-factorable Type II estimate]\label{prpstn:WellFactorable}
Let $\lambda_q$ be triply well factorable of level $Q\le x^{3/5-10\epsilon}$, let $NM\asymp x$ with
\[
x^\epsilon\le N\le x^{2/5}.
\]
Let $\alpha_n,\beta_m$ be complex sequences such that $|\alpha_n|,|\beta_n|\le \tau(n)^{B_0}$ and $\alpha_n$ satisfies the Siegel-Walfisz condition \eqref{eq:SiegelWalfisz} and is supported on $P^-(n)\ge z_0$. Then we have that for every choice of $A>0$ and every interval $\mathcal{I}\subseteq[x,2x]$
\[
\sum_{q\le Q}\lambda_q\sum_{n\sim N}\alpha_n\sum_{\substack{m\sim M\\ mn\in\mathcal{I}}}\beta_m\Bigl(\mathbf{1}_{nm\equiv a\Mod{q}}-\frac{\mathbf{1}_{(nm,q)=1}}{\phi(q)}\Bigr)\ll_{A,B_0}\frac{x}{(\log{x})^A}.
\]
\end{prpstn}
Proposition \ref{prpstn:WellFactorable} is our key new ingredient behind the proof, and will be established in Section \ref{sec:WellFactorable}.
\begin{prpstn}[Divisor function in arithmetic progressions]\label{prpstn:DoubleDivisor}
Let $N_1,N_2\ge x^{3\epsilon}$ and $N_1N_2M\asymp x$ and
\begin{align*}
Q&\le \Bigl(\frac{x}{M}\Bigr)^{2/3-3\epsilon}.
\end{align*}
Let $\mathcal{I}\subset[x,2x]$ be an interval, and let $\alpha_m$ a complex sequence with $|\alpha_m|\le \tau(m)^{B_0}$. Then we have that for every $A>0$
\[
\sum_{q\sim Q}\Bigl|\sum_{\substack{n_1\sim N_1\\ P^-(n)\ge z_0}}\sum_{\substack{n_2\sim N_2\\ P^-(n)\ge z_0}}\sum_{\substack{m\sim M\\ m n_1n_2\in\mathcal{I} }}\alpha_m\Bigl(\mathbf{1}_{m n_1 n_2\equiv a\Mod{q}}-\frac{\mathbf{1}_{(m n_1 n_2,q)=1}}{\phi(q)}\Bigr)\Bigr|\ll_{A,B_0} \frac{x}{(\log{x})^A}.
\]
Moreover, the same result holds when the summand is multiplied by $\log{n_1}$.
\end{prpstn}
Proposition \ref{prpstn:DoubleDivisor} is essentially a known result (due to independent unpublished work of Selberg and Hooley, but following quickly from the Weil bound for Kloosterman sums), but for concreteness we give a proof in Section \ref{sec:DoubleDivisor}.
Finally, we require a suitable combinatorial decomposition of the primes.
\begin{lmm}[Heath-Brown identity]\label{lmm:HeathBrown}
Let $k\ge 1$ and $n\le 2x$. Then we have
\[
\Lambda(n)=\sum_{j=1}^k (-1)^{j-1} \binom{k}{j}\sum_{\substack{n=n_1\cdots n_{j}m_1\cdots m_j\\ m_1,\dots,m_j\le 2x^{1/k}}}\mu(m_1)\cdots \mu(m_j)\log{n_{1}}.
\]
\end{lmm}
\begin{proof}
See \cite{HBVaughan}.
\end{proof}
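As a sanity check on the signs (not needed in what follows), the case $k=1$ of Lemma \ref{lmm:HeathBrown} reads
\[
\Lambda(n)=\sum_{\substack{n=n_1m_1\\ m_1\le 2x}}\mu(m_1)\log{n_1}=\sum_{d|n}\mu(d)\log\frac{n}{d},
\]
which is the classical identity $\Lambda=\mu\star\log$; here the constraint $m_1\le 2x$ is vacuous since $n\le 2x$.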
\begin{lmm}[Consequence of the fundamental lemma of the sieve]\label{lmm:Fundamental}
Let $q,t,x\ge 2$ satisfy $q x^\epsilon \le t$ and let $(b,q)=1$. Recall $z_0=x^{1/(\log\log{x})^3}$. Then we have
\[
\sum_{\substack{n\le t\\ n\equiv b\Mod{q}\\ P^-(n)\ge z_0}}1=\frac{1}{\phi(q)}\sum_{\substack{n\le t\\ P^-(n)\ge z_0}}1+O_A\Bigl(\frac{t}{q(\log{x})^A}\Bigr).
\]
\end{lmm}
\begin{proof}
This is an immediate consequence of the fundamental lemma of sieve methods - see, for example, \cite[Theorem 6.12]{Opera}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thrm:Factorable} assuming Proposition \ref{prpstn:WellFactorable} and \ref{prpstn:DoubleDivisor}]
By partial summation (noting that prime powers contribute negligibly and retaining the condition $P^-(n)\ge z_0$), it suffices to show that for all $t\in [x,2x]$
\[
\sum_{q\le x^{3/5-\epsilon} }\lambda_q\sum_{\substack{x\le n\le t\\ P^-(n)\ge z_0}}\Lambda(n)\Bigl(\mathbf{1}_{n\equiv a\Mod{q}}-\frac{\mathbf{1}_{(n,q)=1}}{\phi(q)}\Bigr)\ll_A \frac{x}{(\log{x})^A}.
\]
We now apply Lemma \ref{lmm:HeathBrown} with $k=3$ to expand $\Lambda(n)$ into various subsums, and put each variable into one of $O(\log^6{x})$ dyadic intervals. Thus it suffices to show that for all choices of $N_1,N_2,N_3,M_1,M_2,M_3$ with $M_1M_2M_3N_1N_2N_3\asymp x$ and $M_i\le x^{1/3}$ we have
\begin{align*}
\sum_{q\le x^{3/5-\epsilon} }\lambda_q\sum_{\substack{m_1,m_2,m_3,n_1,n_2,n_3\\ n_i\sim N_i\,\forall i\\ m_i\sim M_i\,\forall i\\ x\le n \le t\\ P^-(n_i),P^-(m_i)\ge z_0\,\forall i}}\mu(m_1)\mu(m_2)\mu(m_3)(\log{n_1})\Bigl(\mathbf{1}_{n\equiv a\Mod{q}}-\frac{\mathbf{1}_{(n,q)=1}}{\phi(q)}\Bigr)\\
\ll_A \frac{x}{(\log{x})^{A+6}},
\end{align*}
where we have written $n=n_1n_2n_3m_1m_2m_3$ in the expression above for convenience.
By grouping all but one variable together, Proposition \ref{prpstn:WellFactorable} gives this if any of the $N_i$ or $M_i$ lie in the interval $[x^\epsilon,x^{2/5}]$, and so we may assume all are either smaller than $x^\epsilon$ or larger than $x^{2/5}$. Since $M_i\le x^{1/3}\le x^{2/5}$, we may assume that $M_1,M_2,M_3\le x^\epsilon$. There can be at most two of the $N_i$'s which are larger than $x^{2/5}$ since $M_1M_2M_3N_1N_2N_3\asymp x$.
If only one of the $N_i$ is greater than $x^{2/5}$, then it must be of size $\gg x^{1-5\epsilon}>x^\epsilon q$, and so the result is trivial by summing over this variable first and using Lemma \ref{lmm:Fundamental}.
If two of the $N_i$'s are larger than $x^{2/5}$ and all the other variables are less than $x^\epsilon$, then the result follows immediately from Proposition \ref{prpstn:DoubleDivisor}. This gives the result.
\end{proof}
To complete the proof of Theorem \ref{thrm:Factorable}, we are left to establish Propositions \ref{prpstn:WellFactorable} and \ref{prpstn:DoubleDivisor}, which we will ultimately do in Sections \ref{sec:WellFactorable} and \ref{sec:DoubleDivisor} respectively.
\section{Preparatory lemmas}\label{sec:Lemmas}
\begin{lmm}[Divisor function bounds]\label{lmm:Divisor}
Let $|b|< x-y$ and $y\ge q x^\epsilon$. Then we have
\[
\sum_{\substack{x-y\le n\le x\\ n\equiv a\Mod{q}}}\tau(n)^C\tau(n-b)^C\ll \frac{y}{q} (\tau(q)\log{x})^{O_{C}(1)}.
\]
\end{lmm}
\begin{proof}
This follows from Shiu's Theorem \cite{Shiu}, and is given in \cite[Lemma 7.7]{May1}.
\end{proof}
\begin{lmm}[Separation of variables from inequalities]\label{lmm:Separation}
Let $Q_1Q_2\le x^{1-\epsilon}$. Let $N_1,\dots, N_r\ge z_0$ satisfy $N_1\cdots N_r\asymp x$. Let $\alpha_{n_1,\dots,n_r}$ be a complex sequence with $|\alpha_{n_1,\dots,n_r}|\le (\tau(n_1)\cdots \tau(n_r))^{B_0}$. Then, for any choice of $A>0$ there is a constant $C=C(A,B_0,r)$ and intervals $\mathcal{I}_1,\dots,\mathcal{I}_r$ with $\mathcal{I}_j\subseteq [P_j,2P_j]$ of length $\le P_j(\log{x})^{-C}$ such that
\begin{align*}
\sum_{q_1\sim Q_1}\sum_{\substack{q_2\sim Q_2\\ (q_1q_2,a)=1}}&\Bigl|\,\sideset{}{^*}\sum_{\substack{n_1,\dots,n_r\\ n_i\sim N_i\forall i}}\alpha_{n_1,\dots,n_r}S_{n_1\cdots n_r}\Bigr|\\
&\ll_r \frac{x}{(\log{x})^A}+(\log{x})^{r C}\sum_{q_1\sim Q_1}\sum_{\substack{q_2\sim Q_2\\ (q_1q_2,a)=1}}\Bigl|\sum_{\substack{n_1,\dots,n_r\\ n_i\in \mathcal{I}_i\forall i}}\alpha_{n_1,\dots,n_r}S_{n_1\cdots n_r}\Bigr|.
\end{align*}
Here $\sum^*$ means that the summation is restricted to $O(1)$ inequalities of the form $n_1^{\alpha_1}\cdots n_r^{\alpha_r}\le B$ for some constants $\alpha_1,\dots \alpha_r$ and some quantity $B$. The implied constant may depend on all such exponents $\alpha_i$, but none of the quantities $B$.
\end{lmm}
\begin{proof}
This is \cite[Lemma 7.10]{May1}.
\end{proof}
\begin{lmm}[Poisson Summation]\label{lmm:Completion}
Let $C>0$ and $f:\mathbb{R}\rightarrow\mathbb{R}$ be a smooth function which is supported on $[-10,10]$ and satisfies $\|f^{(j)}\|_\infty\ll_j (\log{x})^{j C}$ for all $j\ge 0$, and let $M,q\le x$. Then we have
\[
\sum_{m\equiv a\Mod{q}} f\Bigl(\frac{m}{M}\Bigr)=\frac{M}{q}\hat{f}(0)+\frac{M}{q}\sum_{1\le |h|\le H}\hat{f}\Bigl(\frac{h M}{q}\Bigr)e\Bigl(\frac{ah}{q}\Bigr)+O_C(x^{-100}),
\]
for any choice of $H>q x^\epsilon/M$.
\end{lmm}
\begin{proof}
This follows from \cite[Lemma 12.4]{May1}.
\end{proof}
\begin{lmm}[Summation with coprimality constraint]\label{lmm:TrivialCompletion}
Let $C>0$ and $f:\mathbb{R}\rightarrow\mathbb{R}$ be a smooth function which is supported on $[-10,10]$ and satisfies $\|f^{(j)}\|_\infty\ll_j (\log{x})^{j C}$ for all $j\ge 0$. Then we have
\[
\sum_{(m,q)=1}f\Bigl(\frac{m}{M}\Bigr)=\frac{\phi(q)}{q}M+O(\tau(q)(\log{x})^{2C}).
\]
\end{lmm}
\begin{proof}
This is \cite[Lemma 12.6]{May1}.
\end{proof}
\begin{lmm}\label{lmm:SiegelWalfiszMaintain}
Let $C,B>0$ be constants and let $\alpha_n$ be a sequence satisfying the Siegel-Walfisz condition \eqref{eq:SiegelWalfisz}, supported on $n\le 2x$ with $P^-(n)\ge z_0=x^{1/(\log\log{x})^3}$ and satisfying $|\alpha_n|\le \tau(n)^B$. Then $\mathbf{1}_{\tau(n)\le (\log{x})^C}\alpha_n$ also satisfies the Siegel-Walfisz condition.
\end{lmm}
\begin{proof}
This is \cite[Lemma 12.7]{May1}.
\end{proof}
\begin{lmm}[Most moduli have small square-full part]\label{lmm:Squarefree}
Let $\gamma_b,c_q$ be complex sequences satisfying $|\gamma_b|\le \tau(b)^{B_0}$ and $|c_q|\le \tau(q)^{B_0}$, and recall $z_0:=x^{1/(\log\log{x})^3}$. Let $sq(n)$ denote the square-full part of $n$ (i.e. $sq(n)=\prod_{p:p^2|n}p^{\nu_p(n)}$). Then for every $A>0$ we have that
\[
\sum_{\substack{q\sim Q\\ sq(q)\ge z_0}}c_q\sum_{b\le B}\gamma_b\Bigl(\mathbf{1}_{b\equiv a\Mod{q}}-\frac{\mathbf{1}_{(b,q)=1}}{\phi(q)}\Bigr)\ll_{A,B_0} \frac{x}{(\log{x})^A}.
\]
\end{lmm}
\begin{proof}
This is \cite[Lemma 12.9]{May1}.
\end{proof}
\begin{lmm}[Most moduli have small $z_0$-smooth part]\label{lmm:RoughModuli}
Let $Q<x^{1-\epsilon}$. Let $\gamma_b,c_q$ be complex sequences with $|\gamma_b|\le \tau(b)^{B_0}$ and $|c_q|\le \tau(q)^{B_0}$, and recall $z_0:=x^{1/(\log\log{x})^3}$ and $y_0:=x^{1/\log\log{x}}$. Let $sm(n;z)$ denote the $z$-smooth part of $n$ (i.e. $sm(n;z)=\prod_{p\le z}p^{\nu_p(n)}$). Then for every $A>0$ we have that
\[
\sum_{\substack{q\sim Q\\ sm(q;z_0)\ge y_0}}c_q\sum_{b\le x}\gamma_b\Bigl(\mathbf{1}_{b\equiv a\Mod{q}}-\frac{\mathbf{1}_{(b,q)=1}}{\phi(q)}\Bigr)\ll_{A,B_0} \frac{x}{(\log{x})^A}.
\]
\end{lmm}
\begin{proof}
This is \cite[Lemma 12.10]{May1}.
\end{proof}
\begin{prpstn}[Reduction to exponential sums]\label{prpstn:GeneralDispersion}
Let $\alpha_n,\beta_m,\gamma_{q,d},\lambda_{q,d,r}$ be complex sequences with $|\alpha_n|,|\beta_n|\le \tau(n)^{B_0}$ and $|\gamma_{q,d}|\le \tau(q d)^{B_0}$ and $|\lambda_{q,d,r}|\le \tau(q d r)^{B_0}$. Let $\alpha_n$ and $\lambda_{q,d,r}$ be supported on integers with $P^-(n)\ge z_0$ and $P^-(r)\ge z_0$, and let $\alpha_n$ satisfy the Siegel-Walfisz condition \eqref{eq:SiegelWalfisz}. Let
\[
\mathscr{S}:=\sum_{\substack{d\sim D\\ (d,a)=1}}\sum_{\substack{q\sim Q\\ (q,a)=1}}\sum_{\substack{r\sim R\\ (r,a)=1}}\lambda_{q,d,r}\gamma_{q,d}\sum_{m\sim M}\beta_m\sum_{n\sim N}\alpha_n\Bigl(\mathbf{1}_{m n\equiv a\Mod{q r d}}-\frac{\mathbf{1}_{(m n,q r d)=1}}{\phi(q r d)}\Bigr).
\]
Let $A>0$ and $C=C(A,B_0)$ be sufficiently large in terms of $A,B_0$, and let $N,M$ satisfy
\[
N>Q D (\log{x})^{C},\qquad M>(\log{x})^C.
\]
Then we have
\[
|\mathscr{S}|\ll_{A,B_0} \frac{x}{(\log{x})^A}+M D^{1/2}Q^{1/2}(\log{x})^{O_{B_0}(1)}\Bigl(|\mathscr{E}_1|^{1/2}+|\mathscr{E}_2|^{1/2}\Bigr),
\]
where
\begin{align*}
\mathscr{E}_{1}&:=\sum_{\substack{q\\ (q,a)=1}}\sum_{\substack{d\sim D\\ (d,a)=1}}\sum_{\substack{r_1,r_2\sim R\\ (r_1r_2,a)=1}}\psi_0\Bigl(\frac{q}{Q}\Bigr)\frac{\lambda_{q,d,r_1}\overline{\lambda_{q,d,r_2}} }{\phi(q d r_2)q d r_1}\sum_{\substack{n_1,n_2\sim N\\ (n_1,q d r_1)=1\\(n_2,q d r_2)=1}}\alpha_{n_1}\overline{\alpha_{n_2}}\\
&\qquad \times\sum_{1\le |h|\le H_1}\hat{\psi}_0\Bigl(\frac{h M}{q d r_1}\Bigr)e\Bigl( \frac{a h \overline{ n_1}}{q d r_1}\Bigr),\\
\mathscr{E}_2&:=\sum_{\substack{q\\ (q,a)=1}}\psi_0\Bigl(\frac{q}{Q}\Bigr)\sum_{\substack{d\sim D\\ (d,a)=1}}\sum_{\substack{r_1,r_2\sim R\\ (r_1,a r_2)=1\\ (r_2,a q d r_1)=1}}\frac{\lambda_{q,d,r_1}\overline{\lambda_{q,d,r_2}}}{q d r_1 r_2}\sum_{\substack{n_1,n_2\sim N\\ n_1\equiv n_2\Mod{q d}\\ (n_1,n_2 q d r_1)=1\\(n_2,n_1 q d r_2)=1\\ |n_1-n_2|\ge N/(\log{x})^C}}\alpha_{n_1}\overline{\alpha_{n_2}}\\
&\qquad \times\sum_{1\le |h|\le H_2}\hat{\psi}_0\Bigl(\frac{h M}{q d r_1 r_2}\Bigr)e\Bigl(\frac{ah\overline{n_1r_2}}{q d r_1}+\frac{ah\overline{n_2 q d r_1}}{r_2}\Bigr),\\
H_1&:=\frac{Q D R}{M}\log^5{x},\\
H_2&:=\frac{Q D R^2}{M}\log^5{x}.
\end{align*}
\end{prpstn}
\begin{proof}
This is \cite[Proposition 13.4]{May1} with $E=1$.
\end{proof}
\begin{lmm}[Simplification of exponential sum]\label{lmm:Simplification}
Let $N,M,Q,R \le x$ with $NM\asymp x$ and
\begin{align}
Q R&<x^{2/3},\label{eq:CrudeSize}\\
Q R^2&< M x^{1-2\epsilon}.\label{eq:CrudeSize2}
\end{align}
Let $\lambda_{q,r}$ and $\alpha_n$ be complex sequences supported on $P^-(n),P^-(r)\ge z_0$ with $|\lambda_{q,r}|\le \tau(qr)^{B_0}$ and $|\alpha_n|\le \tau(n)^{B_0}$. Let $H:=\frac{Q R^2}{M}\log^5{x}$ and let
\begin{align*}
\mathscr{E}&:=\sum_{\substack{(q,a)=1}}\psi_0\Bigl(\frac{q}{Q}\Bigr)\sum_{\substack{r_1,r_2\sim R\\ (r_1,a r_2)=1\\ (r_2,a q r_1)=1}}\frac{\lambda_{q,r_1}\overline{\lambda_{q,r_2}}}{q r_1 r_2}\sum_{\substack{n_1,n_2\sim N\\ n_1\equiv n_2\Mod{q}\\ (n_1,n_2qr_1)=1\\(n_2,n_1qr_2)=1\\ |n_1-n_2|\ge N/(\log{x})^C}}\alpha_{n_1}\overline{\alpha_{n_2}}\\
&\qquad\qquad \times\sum_{1\le |h|\le H}\hat{\psi}_0\Bigl(\frac{h M}{q r_1 r_2}\Bigr)e\Bigl(\frac{ah\overline{n_1 r_2}}{q r_1}+\frac{ah\overline{n_2 q r_1}}{r_2}\Bigr).
\end{align*}
Then we have (uniformly in $C$)
\[
\mathscr{E}\ll_{B_0}\exp((\log\log{x})^5)\sup_{\substack{H'\le H\\ Q'\le 2Q\\ R_1,R_2\le 2R}}|\mathscr{E}'|+\frac{N^2}{Qx^\epsilon},
\]
where
\[
\mathscr{E}'=\sum_{\substack{Q\le q\le Q'\\ (q,a)=1}}\sum_{\substack{R\le r_1\le R_1\\ R\le r_2\le R_2\\ (r_1,a r_2)=1\\ (r_2,a q r_1)=1}}\frac{\lambda_{q,r_1}\overline{\lambda_{q,r_2}}}{q r_1 r_2}\sum_{\substack{n_1,n_2\sim N\\ n_1\equiv n_2\Mod{q}\\ (n_1,qr_1n_2)=1\\ (n_2,qr_2n_1)=1\\ (n_1r_2,n_2)\in\mathcal{N}\\ |n_1-n_2|\ge N/(\log{x})^C}}\alpha_{n_1}\overline{\alpha_{n_2}}\sum_{1\le |h| \le H'} e\Bigl(\frac{ ah\overline{n_2 q r_1}(n_1-n_2)}{n_1 r_2}\Bigr),
\]
and $\mathcal{N}$ is a set with the property that if $(a,b)\in\mathcal{N}$ and $(a',b')\in\mathcal{N}$ then we have $\gcd(a,b')=\gcd(a',b)=1$.
\end{lmm}
\begin{proof}
This is \cite[Lemma 13.5]{May1}.
\end{proof}
\begin{lmm}[Second exponential sum estimate]\label{lmm:BFI2}
Let
\begin{align}
D R N^{3/2}&< x^{1-2\epsilon},\\
Q D R&< x^{1-2\epsilon}.
\end{align}
Let $\alpha_n$, $\lambda_{d,r}$ be complex sequences with $|\lambda_{d,r}|,|\alpha_n|\le x^{o(1)}$. Let $H_1:=N Q D R(\log{x})^5/x$ and let
\[
\widetilde{\mathscr{B}}:=\sum_{\substack{q\\ (q,a)=1}}\sum_{\substack{d\sim D\\ (d,a)=1}}\sum_{\substack{r_1,r_2\sim R\\ (r_1r_2,a)=1}}\psi_0\Bigl(\frac{q}{Q}\Bigr)\frac{\lambda_{d,r_1}\overline{\lambda_{d,r_2}} }{\phi(q d r_2)q d r_1}\sum_{\substack{n_1,n_2\sim N\\ (n_1,q d r_1)=1\\(n_2,q d r_2)=1}}\alpha_{n_1}\overline{\alpha_{n_2}}\sum_{1\le |h|\le H_1}\hat{\psi}_0\Bigl(\frac{h M}{q d r_1}\Bigr)e\Bigl( \frac{a h \overline{ n_1}}{q d r_1}\Bigr).
\]
Then we have
\[
\widetilde{\mathscr{B}}\ll\frac{N^2}{Q D x^\epsilon}.
\]
\end{lmm}
\begin{proof}
This follows from the same argument used to prove \cite[Lemma 17.3]{May1}.
\end{proof}
\begin{lmm}[Reduction to smoothed sums]\label{lmm:SmoothReduction}
Let $N\ge x^\epsilon$ and $z\le z_0$ and let $\alpha_m$, $c_q$ be 1-bounded complex sequences.
Suppose that for every choice of $N',D,A,C>0$ with $N' D\asymp N$ and $D\le y_0$, every smooth function $f$ supported on $[1/2,5/2]$ satisfying $f^{(j)}\ll_j (\log{x})^{C j}$, and every $1$-bounded complex sequence $\beta_d$ we have the estimate
\[
\sum_{q\sim Q} c_q\sum_{m\sim M}\alpha_m\sum_{d\sim D}\beta_d\sum_{n'}f\Bigl(\frac{n'}{N'}\Bigr)\Bigl(\mathbf{1}_{m n' d\equiv a\Mod{q}}-\frac{\mathbf{1}_{(m n' d,q)=1}}{\phi(q)}\Bigr)\ll_{A,C} \frac{x}{(\log{x})^A}.
\]
Then for any $B>0$ and every interval $\mathcal{I}\subseteq [N,2N]$ we have
\[
\sum_{q\sim Q}c_q \sum_{m\sim M}\alpha_m\sum_{\substack{n\in\mathcal{I}\\ P^-(n)>z}}\Bigl(\mathbf{1}_{mn\equiv a\Mod{q}}-\frac{\mathbf{1}_{(m n,q)=1}}{\phi(q)}\Bigr)\ll_{B} \frac{x}{(\log{x})^B}.
\]
\end{lmm}
\begin{proof}
This is \cite[Lemma 18.2]{May1}.
\end{proof}
\begin{lmm}[Deshouillers-Iwaniec estimate]\label{lmm:DeshouillersIwaniec}
Let $b_{n,r,s}$ be a 1-bounded sequence and $R,S,N,D,C\ll x^{O(1)}$. Let $g(c,d)=g_0(c/C,d/D)$ where $g_0$ is a smooth function supported on $[1/2,5/2]\times [1/2,5/2]$. Then we have
\[
\sum_{r\sim R} \sum_{\substack{s\sim S\\ (r,s)=1}}\sum_{n\sim N}b_{n,r,s}\sum_{d\sim D}\sum_{\substack{c\sim C\\ (rd,sc)=1}}g(c,d) e\Bigl(\frac{n\overline{dr}}{cs}\Bigr)\ll_{g_0} x^\epsilon \Bigl(\sum_{r\sim R}\sum_{s\sim S}\sum_{n\sim N}|b_{n,r,s}|^2\Bigr)^{1/2}\mathscr{J},
\]
where
\[
\mathscr{J}^2=CS(RS+N)(C+DR)+C^2 D S\sqrt{(RS+N)R}+D^2NR.
\]
\end{lmm}
\begin{proof}
This is \cite[Theorem 12]{DeshouillersIwaniec} (correcting a minor typo in the last term of $\mathscr{J}^2$).
\end{proof}
\section{Double divisor function estimates}\label{sec:DoubleDivisor}
In this section we establish Proposition \ref{prpstn:DoubleDivisor}, which is a quick consequence of the Weil bound and the fundamental lemma of sieve theory. Although the result is well known, we give a full argument for completeness (it may also help to motivate \cite[Section 19]{May1} on the triple divisor function for the reader). These estimates are not a bottleneck for our results, and in fact several much stronger results could be used here (see, for example, \cite{FouvryIwaniecDivisor}).
\begin{lmm}[Smoothed divisor function estimate]\label{lmm:DoubleDivisor}
Let $N_1,N_2,M,Q\ge 1$ satisfy $x^{2\epsilon}\le N_1\le N_2$, $N_1N_2M\asymp x$ and
\begin{align*}
Q&\le \frac{x^{2/3-2\epsilon} }{M^{2/3}}.
\end{align*}
Let $\psi_1$ and $\psi_2$ be smooth functions supported on $[1/2,5/2]$ satisfying $\psi_1^{(j)},\psi_2^{(j)}\ll_j (\log{x})^{j C}$ and let $\alpha_m$ be a 1-bounded complex sequence. Let
\[
\mathscr{K}:=\sup_{\substack{(a,q)=1\\ q\sim Q}}\Bigl|\sum_{m\sim M}\alpha_m\sum_{n_1,n_2}\psi_1\Bigl(\frac{n_1}{N_1}\Bigr)\psi_2\Bigl(\frac{n_2}{N_2}\Bigr)\Bigl(\mathbf{1}_{mn_1n_2\equiv a\Mod{q}}-\frac{\mathbf{1}_{(m n_1n_2,q)=1}}{\phi(q)}\Bigr)\Bigr|.
\]
Then we have
\[
\mathscr{K}\ll_C \frac{x^{1-\epsilon}}{Q}.
\]
\end{lmm}
(It is unimportant for this paper that Proposition \ref{prpstn:DoubleDivisor} holds pointwise for $q$ and uniformly over all $(a,q)=1$, but the proof is no harder.)
\begin{proof}
Let the supremum occur at $a$ and $q$. We have that $\mathscr{K}=|\mathscr{K}_{2}-\mathscr{K}_{1}|$, where
\begin{align*}
\mathscr{K}_{1}&:=\frac{1}{\phi(q)}\sum_{\substack{m\sim M\\ (m,q)=1}}\alpha_m\sum_{\substack{n_1,n_2\\ (m n_1 n_2,q)=1}}\psi_1\Bigl(\frac{n_1}{N_1}\Bigr)\psi_2\Bigl(\frac{n_2}{N_2}\Bigr),\\
\mathscr{K}_{2}&:=\sum_{\substack{m\sim M\\ (m,q)=1}}\alpha_m\sum_{(n_2,q)=1}\psi_2\Bigl(\frac{n_2}{N_2}\Bigr)\sum_{\substack{n_1\\ n_1\equiv a\overline{m n_2}\Mod{q}}}\psi_1\Bigl(\frac{n_1}{N_1}\Bigr).
\end{align*}
By Lemma \ref{lmm:TrivialCompletion}, since $N_1\le N_2$ we have
\[
\sum_{\substack{n_1,n_2\\ (m n_1n_2,q)=1}}\psi_1\Bigl(\frac{n_1}{N_1}\Bigr)\psi_2\Bigl(\frac{n_2}{N_2}\Bigr)=\frac{\phi(q)^2}{q^2}N_1 N_2 \hat{\psi_1}(0)\hat{\psi_2}(0)+O(N_2 x^{o(1)}).
\]
This implies that
\[
\mathscr{K}_{1}=\mathscr{K}_{MT}+O\Bigl(\frac{x^{1+o(1)}}{Q N_1}\Bigr),
\]
where
\[
\mathscr{K}_{MT}:=N_1 N_2 \hat{\psi_1}(0)\hat{\psi_2}(0)\frac{\phi(q)}{q^2}\sum_{\substack{m\sim M\\ (m,q)=1}}\alpha_m.
\]
By Lemma \ref{lmm:Completion} we have that for $H_1:=x^\epsilon Q/N_1$
\[
\sum_{\substack{n_1\\ n_1\equiv a\overline{m n_2}\Mod{q}}}\psi_1\Bigl(\frac{n_1}{N_1}\Bigr)=\frac{N_1}{q}\hat{\psi_1}(0)+\frac{N_1}{q}\sum_{1\le |h_1|\le H_1}\hat{\psi_1}\Bigl(\frac{h_1 N_1}{q}\Bigr)e\Bigl(\frac{a h_1 \overline{m n_2}}{q}\Bigr)+O(x^{-10}).
\]
The final term makes a negligible contribution to $\mathscr{K}_{2}$. By Lemma \ref{lmm:TrivialCompletion}, the first term contributes to $\mathscr{K}_{2}$ a total
\[
\frac{N_1 \hat{\psi_1}(0)}{q}\sum_{\substack{m\sim M\\ (m,q)=1}}\alpha_m \sum_{(n_2,q)=1}\psi_2\Bigl(\frac{n_2}{N_2}\Bigr)=\mathscr{K}_{MT}+O\Bigl(\frac{x^{1+o(1)}}{Q N_2}\Bigr).
\]
Finally, by another application of Lemma \ref{lmm:Completion}, for $H_2:=x^\epsilon Q/N_2$ we have
\begin{align*}
\sum_{(n_2,q)=1}\psi_2\Bigl(\frac{n_2}{N_2}\Bigr)&e\Bigl(\frac{a h_1 \overline{m n_2}}{q}\Bigr)=\frac{N_2}{q}\hat{\psi_2}(0)\sum_{(b,q)=1}e\Bigl(\frac{ah_1 \overline{m b}}{q}\Bigr)\\
&+\frac{N_2}{q}\sum_{1\le |h_2|\le H_2}\hat{\psi_2}\Bigl(\frac{ h_2 N_2}{q}\Bigr)\sum_{(b,q)=1}e\Bigl(\frac{a h_1 \overline{m b}+h_2 b}{q}\Bigr)+O(x^{-10}).
\end{align*}
The inner sum in the first term is a Ramanujan sum and so of size $O((h_1,q))$. The inner sum in the second term is a Kloosterman sum, and so of size $O(q^{1/2+o(1)}(h_1,h_2,q))$. The final term contributes a negligible amount. Thus we see that these terms contribute a total
\begin{align*}
&\ll \frac{N_1 N_2}{Q^2}\sum_{m\sim M}\sum_{\substack{1\le |h_1|\le H_1}}(h_1,q)+\frac{x^{o(1)}N_1 N_2}{Q^{3/2}}\sum_{m\sim M}\sum_{\substack{1\le |h_1|\le H_1\\ 1\le |h_2|\le H_2}}(h_1,h_2,q)\\
&\ll \frac{x^{o(1)} N_1 N_2 M H_1}{Q^2}+\frac{x^{o(1)} N_1 N_2 M H_1 H_2}{Q^{3/2}}\\
&\ll \frac{x^{1+o(1)}}{Q N_1}+x^{o(1)}M Q^{1/2}.
\end{align*}
Putting this together, we obtain
\[
\mathscr{K}\ll \frac{x^{1+o(1)}}{Q N_1}+x^{o(1)} M Q^{1/2}.
\]
This gives the result provided
\begin{align}
x^{2\epsilon}&\le N_1,\\
Q&\le \frac{x^{2/3-2\epsilon}}{M^{2/3}}.
\end{align}
Both conditions hold by the hypotheses of the lemma, and so the result follows.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prpstn:DoubleDivisor}]
First we note that by Lemma \ref{lmm:Divisor} and the trivial bound, those $m$ with $|\alpha_m|\ge(\log{x})^C$ contribute a total $\ll x(\log{x})^{O_{B_0}(1)-C}$. This is negligible if $C=C(A,B_0)$ is large enough so, by dividing through by $(\log{x})^{C}$ and considering $A+C$ in place of $A$, it suffices to show the result when $|\alpha_m|\le 1$.
We apply Lemma \ref{lmm:Separation} to remove the condition $m n_1n_2\in\mathcal{I}$. Thus it suffices to show for every $B>0$ and every choice of interval $\mathcal{I}_M\subseteq[M,2M]$, $\mathcal{I}_1\subseteq[N_1,2N_1]$ and $\mathcal{I}_2\subseteq[N_2,2N_2]$ that we have
\[
\sum_{q\sim Q}\Bigl|\sum_{\substack{n_1\in \mathcal{I}_1\\ P^-(n_1)\ge z_0}}\sum_{\substack{n_2\in \mathcal{I}_2\\ P^-(n_2)\ge z_0}}\sum_{\substack{m\in\mathcal{I}_M }}\alpha_m\Bigl(\mathbf{1}_{m n_1n_2\equiv a\Mod{q}}-\frac{\mathbf{1}_{(mn_1n_2,q)=1}}{\phi(q)}\Bigr)\Bigr|\ll_B \frac{x}{(\log{x})^B}.
\]
We now remove the absolute values by inserting 1-bounded coefficients $c_q$. By two applications of Lemma \ref{lmm:SmoothReduction} with $z=z_0$, we then see that it is sufficient to show that for every $A,C>0$, every choice of smooth functions $f_1,f_2$ supported on $[1/2,5/2]$ with $f_i^{(j)}\ll_j (\log{x})^{C j}$, every 1-bounded sequence $\beta_{d_1,d_2}$, and every choice of $D_1,D_2,N_1',N_2'$ with $D_1,D_2\le y_0$ and $N_1'D_1\asymp N_1$, $N_2'D_2\asymp N_2$, we have that
\begin{align*}
&\sum_{q\sim Q} c_q\sum_{\substack{d_1\sim D_1\\ d_2\sim D_2}}\beta_{d_1,d_2}\sum_{n'_1,n_2'}f_1\Bigl(\frac{n_1'}{N'_1}\Bigr)f_2\Bigl(\frac{n_2'}{N_2'}\Bigr)\\
&\qquad\times\sum_{m\in \mathcal{I}_M}\alpha_m \Bigl(\mathbf{1}_{m n'_1 n_2' d_1 d_2\equiv a\Mod{q}}-\frac{\mathbf{1}_{(m n_1' n_2' d_1 d_2,q)=1}}{\phi(q)}\Bigr)\ll_{A,C} \frac{x}{(\log{x})^A}.
\end{align*}
Grouping together $m,d_1,d_2$, we see that Lemma \ref{lmm:DoubleDivisor} now gives the result, recalling that $D_1,D_2\le y_0=x^{o(1)}$ so $N_1'=N_1x^{-o(1)}\ge x^{2\epsilon}$ and $N_2'=N_2x^{-o(1)}\ge x^{2\epsilon}$ and $Q\le x^{2/3-3\epsilon}/M^{2/3}\le x^{2/3-2\epsilon}/(D_1D_2M)^{2/3}$.
An identical argument works if the summand is multiplied by $\log{n_1}$, since this just slightly adjusts the smooth functions appearing.
\end{proof}
\section{Well-factorable estimates}\label{sec:WellFactorable}
In this section we establish Proposition \ref{prpstn:WellFactorable}, which is the key result behind Theorem \ref{thrm:Factorable}. This can be viewed as a refinement of \cite[Theorem 1]{BFI1}. Indeed, Proposition \ref{prpstn:WellFactorable} essentially includes \cite[Theorem 1]{BFI1} as the special case $R=1$. The key advantage in our setup is to make use of the additional flexibility afforded by having a third factor available when manipulating the exponential sums. The argument does not have a specific regime where it is weakest; the critical case for Theorem \ref{thrm:Factorable} is the whole range $x^{1/10}\le N\le x^{1/3}$. (The terms with $N\le x^{1/10}$ or $N>x^{1/3}$ can be handled by a combination of the result for $N\in[x^{1/10},x^{1/3}]$ and Proposition \ref{prpstn:DoubleDivisor}.)
\begin{lmm}[Well-factorable exponential sum estimate]\label{lmm:Factorable}
Let $Q'\le 2Q$, $H'\le x^{o(1)} QR^2 S^2/M$, $NM\asymp x$ and
\begin{align}
N^2 R^2 S&< x^{1-7\epsilon},\\
N^2 R^3 S^4 Q&<x^{2-14\epsilon},\\
N R^2 S^5 Q&<x^{2-14\epsilon}.
\end{align}
Let $\gamma_r,\lambda_s,\alpha_n$ be 1-bounded complex coefficients, and let
\begin{align*}
\mathscr{W}&:=\sum_{\substack{Q\le q\le Q'\\ (q,a)=1}}\sum_{\substack{r_1,r_2\sim R}}\sum_{\substack{s_1,s_2\sim S \\ (r_1s_1,a r_2s_2)=1\\ (r_2s_2,a q d r_1 s_1)=1\\ r_1s_1\le B_1\\ r_2s_2\le B_2}}\frac{\gamma_{r_1}\lambda_{s_1}\overline{\gamma_{r_2}\lambda_{s_2}}}{r_1r_2s_1s_2q}\sum_{\substack{n_1,n_2\sim N \\ n_1\equiv n_2\Mod{q d}\\ (n_1,n_2 q d r_1 s_1)=1\\ (n_2,n_1 q d r_2 s_2)=1\\ (n_1r_2s_2,n_2)\in\mathcal{N}\\ |n_1-n_2|\ge N/(\log{x})^C }}\alpha_{n_1}\overline{\alpha_{n_2}}\\
&\qquad\times\sum_{1\le |h| \le H'}e\Bigl(\frac{ah(n_1-n_2)\overline{n_2 r_1 s_1 d q}}{ n_1 r_2s_2}\Bigr)
\end{align*}
for some $(d,a)=1$ where $\mathcal{N}$ is a set with the property that if $(a,b)\in\mathcal{N}$ and $(a',b')\in\mathcal{N}$ then $\gcd(a,b')=\gcd(a',b)=1$.
Then we have
\[
\mathscr{W}\ll \frac{N^2}{Q x^\epsilon}.
\]
\end{lmm}
\begin{proof}
We first make a change of variables. Since we have $n_1\equiv n_2\Mod{q d}$, we let $f d q=n_1-n_2$ for some integer $|f|\le 2N/d Q\le 2N/Q$, and we wish to replace $q$ with $(n_1-n_2)/d f$. We see that
\[
(n_1-n_2)\overline{d q}\equiv f\Mod{n_1 r_2 s_2}.
\]
Thus the exponential simplifies to
\[
e\Bigl(\frac{ah f\overline{r_1s_1n_2}}{n_1r_2s_2}\Bigr).
\]
The conditions $(n_1,n_2)=1$ and $n_1\equiv n_2\Mod{d q}$ automatically imply $(n_1n_2,d q)=1$, and so we find
\begin{align*}
\mathscr{W}&=\sum_{1\le |f|\le 2N/Q}\sum_{\substack{r_1,r_2\sim R\\ (r_1r_2,a)=1}}\sum_{\substack{s_2\sim S\\ (r_2s_2,a d r_1)=1\\ r_2s_2\le B_2}}\sideset{}{'}\sum_{\substack{n_1,n_2\sim N\\ n_1\equiv n_2\Mod{d f}}}\frac{\gamma_{r_1}\overline{\gamma_{r_2}\lambda_{s_2}}d f}{r_1r_2 s_2(n_1-n_2)}\\
&\qquad\times\sum_{\substack{s_1\sim S\\ (s_1,a n_1 r_2 s_2)=1\\ r_1s_1\le B_1}}\frac{\lambda_{s_1}}{s_1}\sum_{1\le |h|\le H'}\alpha_{n_1}\overline{\alpha_{n_2}}e\Bigl(\frac{a h f\overline{r_1s_1n_2}}{n_1r_2s_2}\Bigr).
\end{align*}
Here we have used $\sum'$ to denote the fact that we have suppressed the conditions
\begin{align*} &(n_1,n_2 r_1 s_1)=1,&\quad &(n_2,n_1 r_2s_2)=1,&\quad& (n_1r_2s_2,n_2)\in\mathcal{N},&\\
&|n_1-n_2|\ge N/(\log{x})^C,& \quad& ((n_1-n_2)/d f,a r_2 s_2)=1,&\quad &Q d f\le n_1-n_2\le Q' d f. &
\end{align*}
We first remove the dependency between $r_1$ and $s_1$ from the constraint $r_1s_1\le B_1$ by noting
\begin{align*}
\mathbf{1}_{r_1s_1\le B_1}&=\int_0^1\Bigl(\sum_{j\le B_1/r_1}e(-j\theta)\Bigr)e(s_1\theta)d\theta\\
&=\int_0^1 c_{r_1,\theta} \min\Bigl(\frac{B_1}{R},|\theta|^{-1}\Bigr)e(s_1\theta)d\theta
\end{align*}
for some 1-bounded coefficients $c_{r_1,\theta}$. Thus
\begin{align*}
\mathscr{W}&=\int_0^1\min\Bigl(\frac{B_1}{R},|\theta|^{-1}\Bigr)\mathscr{W}_2(\theta)d\theta\ll (\log{x}) \sup_{\theta}|\mathscr{W}_2(\theta)|,
\end{align*}
where $\mathscr{W}_2=\mathscr{W}_2(\theta)$ is given by
\begin{align*}
\mathscr{W}_2&:=\sum_{1\le |f|\le 2N/Q}\sum_{\substack{r_1,r_2\sim R\\ (r_1r_2,a)=1}}\sum_{\substack{s_2\sim S\\ (r_2s_2,a d r_1)=1\\ r_2s_2\le B_2}}\sideset{}{'}\sum_{\substack{n_1,n_2\sim N\\ n_1\equiv n_2\Mod{d f}}}\frac{\gamma_{r_1}c_{r_1,\theta}\overline{\gamma_{r_2}\lambda_{s_2}}d f}{r_1r_2 s_2(n_1-n_2)}\\
&\qquad\times\sum_{\substack{s_1\sim S\\ (s_1,an_1r_2s_2)=1}}\frac{e(s_1\theta)\lambda_{s_1}}{s_1}\sum_{1\le |h|\le H'}\alpha_{n_1}\overline{\alpha_{n_2}}e\Bigl(\frac{a h f\overline{r_1s_1n_2}}{n_1r_2s_2}\Bigr).
\end{align*}
In order to show $\mathscr{W}\ll N^2/(Qx^\epsilon)$ we see it is sufficient to show $\mathscr{W}_2\ll N^2/(Q x^{2\epsilon})$. We now apply Cauchy-Schwarz in the $f$, $n_1$, $n_2$, $r_1$, $r_2$ and $s_2$ variables. This gives
\begin{align*}
\mathscr{W}_2\ll \frac{N R S^{1/2}(\log{x})^2}{Q R^2 S^2}\mathscr{W}_3^{1/2},
\end{align*}
where
\begin{align*}
\mathscr{W}_3&:=\sum_{1\le |f|\le 2N/Q}\sum_{\substack{n_1,n_2\sim N\\ n_1\equiv n_2\Mod{d f}}}\sum_{r_1,r_2\sim R}\\
&\qquad\times\sum_{\substack{s_2\sim S\\ (n_2r_1,n_1r_2s_2)=1}}\Bigl|\sum_{\substack{s_1\sim S\\ (s_1,an_1r_2s_2)=1}}\sum_{1\le|h|\le H'}\lambda_{s_1}' e\Bigl(\frac{ah f\overline{r_1s_1n_2}}{n_1r_2s_2}\Bigr)\Bigr|^2,
\end{align*}
and where
\[
\lambda_s':=\frac{S}{s}\lambda_{s}e(s\theta)
\]
are 1-bounded coefficients. Note that we have dropped many of the constraints on the summation for an upper bound. In order to show that $\mathscr{W}_2\ll N^2/(Q x^{2\epsilon})$ we see it is sufficient to show that $\mathscr{W}_3\ll N^2 R^2 S^3/x^{5\epsilon}$. We first drop the congruence condition on $n_1,n_2\Mod{d f}$ for an upper bound, and then we combine $n_2r_1$ into a single variable $b$ and $n_1 r_2 s_2$ into a single variable $c$. Using the divisor bound to control the number of representations of $c$ and $b$, and inserting a smooth majorant, this gives
\begin{align*}
\mathscr{W}_3&\le x^{o(1)}\sup_{\substack{ B \ll N R\\ C\ll N R S \\ F\ll N/Q}} \mathscr{W}_4,
\end{align*}
where
\begin{align*}
\mathscr{W}_4&:=\sum_{b}\sum_{\substack{c\\ (b,c)=1}}g(b,c)\sum_{f\sim F}\Bigl|\sum_{\substack{s_1\sim S\\ (s_1,a c)=1}}\sum_{1\le|h|\le H'}\lambda_{s_1}' e\Bigl(\frac{a h f\overline{b s_1}}{c}\Bigr)\Bigr|^2\\
g(b,c)&:=\psi_0\Bigl(\frac{b}{B}\Bigr)\psi_0\Bigl(\frac{c}{C}\Bigr).
\end{align*}
In order to show $\mathscr{W}_3\ll N^2 R^2 S^3/x^{5\epsilon}$, it is sufficient to show that
\begin{equation}
\mathscr{W}_4\ll \frac{N^2R^2 S^3}{x^{6\epsilon}}.\label{eq:W4Target}
\end{equation}
We expand the square and swap the order of summation, giving
\[
\mathscr{W}_4=\sum_{\substack{s_1,s_2\sim S\\ (s_1s_2,a)=1}}\sum_{1\le|h_1|,|h_2|\le H'}\lambda_{s_1}'\overline{\lambda_{s_2}'}\sum_{b}\sum_{f\sim F}\sum_{\substack{c\\ (c,b s_1s_2)=1}}g(b,c)e\Bigl(a f\ell\frac{\overline{b s_1s_2}}{c}\Bigr),
\]
where
\[
\ell=h_1s_1-h_2s_2.
\]
We now split the sum according to whether $\ell=0$ or not.
\[
\mathscr{W}_4=\mathscr{W}_{\ell=0}+\mathscr{W}_{\ell\ne 0}.
\]
To show \eqref{eq:W4Target} it is sufficient to show
\begin{equation}
\mathscr{W}_{\ell=0}\ll \frac{N^2R^2 S^3}{x^{6\epsilon}}\qquad\text{and}\qquad \mathscr{W}_{\ell\ne 0}\ll \frac{N^2R^2 S^3}{x^{6\epsilon}}.\label{eq:WTargets}
\end{equation}
We first consider $\mathscr{W}_{\ell=0}$, and so terms with $h_1s_1=h_2s_2$. Given $h_1,s_1$ there are at most $x^{o(1)}$ choices of $h_2,s_2$, and so at most $x^{o(1)}HS$ choices of $h_1,h_2,s_1,s_2$, where we write $H:=QR^2S^2/M$ for the quantity bounding $H'$ up to $x^{o(1)}$. Thus we see that
\begin{align*}
\mathscr{W}_{\ell=0} \ll x^{o(1)}H S B F C&\ll x^{o(1)}\frac{R^2 S^2 Q}{M}\cdot S\cdot N R\cdot \frac{N}{Q}\cdot N R S\\
&\ll \frac{N^4 R^4 S^4}{x^{1-\epsilon}}.\label{eq:WDiag}
\end{align*}
This gives an acceptably small contribution for \eqref{eq:WTargets} provided
\[
\frac{N^4 R^4 S^4}{x^{1-\epsilon}}\ll \frac{N^2 R^2 S^3}{x^{6\epsilon}},
\]
which rearranges to
\begin{equation}
N^2 R^2 S\ll x^{1-7\epsilon}.\label{eq:FactorCond1}
\end{equation}
We now consider $\mathscr{W}_{\ell\ne 0}$. We let $y=a f(h_1 s_1-h_2s_2)\ll x^{o(1)} N R^2 S^3/M$ and $z=s_1 s_2\ll S^2$. Putting these variables in dyadic intervals and using the symmetry between $y$ and $-y$, we see that
\[
\mathscr{W}_{\ell\ne 0}\ll \log{x}\sum_{z\sim Z}\sum_{y\sim Y}b_{z,y}\Bigl|\sum_{b}\sum_{\substack{c\\ (c,z b)=1}}g(b,c)e\Bigl(\frac{y\overline{z b}}{c}\Bigr)\Bigr|,
\]
where $Z\asymp S^2$, $Y\ll x^{o(1)} N R^2 S^3/M$ and
\[
b_{z,y}=\mathop{\sum_{s_1,s_2\sim S}\sum_{1\le |h_1|,|h_2|\le H'}\sum_{f\sim F}}\limits_{\substack{s_1 s_2=z\\ a f(h_1s_1-h_2s_2)=y}}1.
\]
By Lemma \ref{lmm:DeshouillersIwaniec} we have that
\begin{equation}
\mathscr{W}_{\ell\ne 0}\ll x^\epsilon \Bigl(\sum_{z\sim Z}\sum_{y\sim Y}b_{z,y}^2\Bigr)^{1/2}\mathscr{J},
\label{eq:WOffDiag1}
\end{equation}
where
\begin{equation}
\mathscr{J}^2\ll C(Z+Y)(C+B Z)+C^2B\sqrt{ (Z+Y)Z}+B^2 Y Z.
\label{eq:JBound}
\end{equation}
We first consider the $b_{z,y}$ terms. We note that given a choice of $z,y$ there are $x^{o(1)}$ choices of $s_1,s_2,k,f$ with $z=s_1s_2$ and $y=a k f$ by the divisor bound. Thus by Cauchy-Schwarz, we see that
\begin{align*}
\sum_{z\sim Z}\sum_{y\sim Y}b_{z,y}^2&=\sum_{z\sim Z}\sum_{y\sim Y}\Bigl(\sum_{\substack{s_1,s_2\sim S\\ s_1 s_2=z}}\sum_{\substack{f\sim F}}\sum_{\substack{ k\\ ak f=y}}\sum_{\substack{1\le |h_1|,|h_2|\ll H \\ h_1 s_1-h_2s_2=k}}1\Bigr)^2\\
&\ll x^{o(1)}\sum_{s_1,s_2\sim S}\sum_{f\sim F}\sum_{k}\Bigl(\sum_{\substack{1\le |h_1|,|h_2|\ll H\\ h_1s_1-h_2s_2=k}}1\Bigr)^2\\
&\ll x^{o(1)} F \sum_{s_1,s_2\sim S}\sum_{\substack{ 1\le |h_1|,|h_1'|,|h_2|,|h_2'|\ll H \\ (h_1-h_1')s_1=(h_2-h_2')s_2}}1.
\end{align*}
We consider the inner sum. If $(h_1-h_1')s_1=0=(h_2-h_2')s_2$ we must have $h_1=h_1'$, $h_2=h_2'$, so there are $O(H^2S^2)$ choices of $h_1,h_1',h_2,h_2',s_1,s_2$. If instead $(h_1-h_1')s_1=(h_2-h_2')s_2\ne 0$ there are $O(H S)$ choices of $t=(h_1-h_1')s_1\ne 0$. Given a choice of $t$, there are $x^{o(1)}$ choices of $s_1,s_2,h_1-h_1',h_2-h_2'$. Thus there are $O(H^3S)$ choices of $h_1,h_1',h_2,h_2',s_1,s_2$ with $(h_1-h_1')s_1=(h_2-h_2')s_2\ne 0$. Therefore
\begin{align*}
\sum_{z\sim Z}\sum_{y\sim Y}b_{z,y}^2&\ll x^{o(1)} \frac{N}{Q} (H^2 S^2+H^3 S)\\
&\ll x^\epsilon\Bigl(\frac{N R^4 S^6 Q}{M^2}+\frac{N R^6 S^7 Q^2}{M^3}\Bigr).
\end{align*}
In particular, since we are assuming that $N^2R^2S<x^{1-7\epsilon}$ with $NM\asymp x$, and since $N>Q$, we have
\[
M>R^2S N>R^2 S Q.
\]
Thus this simplifies to give
\begin{equation}
\sum_{z\sim Z}\sum_{y\sim Y}b_{z,y}^2\ll x^\epsilon \frac{N R^4 S^6 Q}{M^2}.
\label{eq:OffDiagSeq}
\end{equation}
We now consider $\mathscr{J}$. Since the bound \eqref{eq:JBound} is increasing and polynomial in $C,B,Z,Y$, the maximal value is at most $x^{o(1)}$ times the value when $C=N R S$, $Z=S^2$, $Y=N R^2 S^3/M$ and $B=N R$, and so it suffices to consider this case. We note that our bound $M>R^2 S N$ from \eqref{eq:FactorCond1} then implies that $Z>Y$, and so, noting that $B Z>C$ and $C^2 B Z > B^2 Y Z$, this simplifies our bound for $\mathscr{J}$ to
\begin{align}
\mathscr{J}^2&\ll x^{o(1)}(C B Z^2+C^2 B Z+B^2 Y Z)\nonumber\\
&\ll x^{\epsilon}(C B Z^2+C^2 B Z)\nonumber\\
&=x^{\epsilon} N^2 R^2 S^5+x^\epsilon N^3 R^3 S^4.
\label{eq:FactorJBound}
\end{align}
Putting together \eqref{eq:WOffDiag1}, \eqref{eq:OffDiagSeq} and \eqref{eq:FactorJBound}, we obtain
\begin{align*}
\mathscr{W}_{\ell\ne 0}&\ll x^\epsilon \Bigl(x^\epsilon \frac{N R^4 S^6 Q}{M^2}\Bigr)^{1/2}\Bigl(x^\epsilon N^2 R^2 S^5+ x^\epsilon N^3 R^3 S^4\Bigr)^{1/2}\\
&\ll x^{2\epsilon}\Bigl(\frac{N^3 R^6 S^{11} Q}{M^2}+\frac{N^4 R^7 S^{10}Q}{M^2}\Bigr)^{1/2}.
\end{align*}
Thus we obtain \eqref{eq:WTargets} if
\[
N^3 R^6 S^{11}Q+N^4 R^7 S^{10}Q<x^{-14\epsilon} N^4 R^4 S^6 M^2.
\]
Dividing through by $N^4R^4S^6M^2$ and recalling $NM\asymp x$ (so that $M^2\asymp x^2/N^2$), we see that this occurs if we have
\begin{align}
N R^2 S^5 Q&<x^{2-14\epsilon},\\
N^2 R^3 S^4 Q&<x^{2-14\epsilon}.
\end{align}
This gives the result.
\end{proof}
\begin{prpstn}[Well-factorable estimate for convolutions]\label{prpstn:MainProp}
Let $NM\asymp x$ and $Q_1,Q_2,Q_3$ satisfy
\begin{align*}
Q_1&<\frac{N}{x^\epsilon},\\
N^2 Q_2 Q_3^2&<x^{1-8\epsilon},\\
N^2 Q_1 Q_2^4 Q_3^3&<x^{2-15\epsilon},\\
N Q_1 Q_2^5 Q_3^2&<x^{2-15\epsilon}.
\end{align*}
Let $\alpha_n,\beta_m$ be $1$-bounded complex sequences such that $\alpha_n$ satisfies the Siegel-Walfisz condition \eqref{eq:SiegelWalfisz} and $\alpha_n$ is supported on $n$ with all prime factors bigger than $z_0=x^{1/(\log\log{x})^3}$. Let $\gamma_{q_1},\lambda_{q_2},\nu_{q_3}$ be 1-bounded complex coefficients supported on $(q_i,a)=1$ for $i\in\{1,2,3\}$. Let
\[
\Delta(q):=\sum_{n\sim N}\alpha_n\sum_{m\sim M}\beta_m\Bigl(\mathbf{1}_{nm\equiv a\Mod{q}}-\frac{\mathbf{1}_{(n m,q)=1}}{\phi(q)}\Bigr).
\]
Then for every $A>0$ we have
\[
\sum_{q_1\sim Q_1}\sum_{q_2\sim Q_2}\sum_{q_3\sim Q_3}\gamma_{q_1}\lambda_{q_2}\nu_{q_3}\Delta(q_1q_2q_3)\ll_A\frac{x}{(\log{x})^A}.
\]
\end{prpstn}
\begin{proof}
First we factor $q_2=q_2'q_2''$ and $q_3=q_3'q_3''$ where $P^-(q_2'),P^-(q_3')>z_0\ge P^+(q_2''),P^+(q_3'')$ into parts with large and small prime factors. By putting these in dyadic intervals, we see that it suffices to show for every $A>0$ and every choice of $Q_2'Q_2''\asymp Q_2$, $Q_3'Q_3''\asymp Q_3$ that
\begin{align*}
&\sum_{q_1\sim Q_1}\sum_{\substack{q_2'\sim Q_2'\\ P^-(q_2')>z_0}}\sum_{\substack{q_2''\sim Q_2''\\ P^+(q_2'')\le z_0}}\sum_{\substack{q_3'\sim Q_3'\\ P^-(q_3')\ge z_0}}\sum_{\substack{q_3''\sim Q_3''\\ P^+(q_3'')\le z_0}}\gamma_{q_1}\lambda_{q_2'q_2''}\nu_{q_3'q_3''}\Delta(q_1q_2'q_2''q_3'q_3'')\ll_A\frac{x}{(\log{x})^A}.
\end{align*}
By Lemma \ref{lmm:RoughModuli} we have the result unless $Q_2'',Q_3''\le y_0=x^{1/\log\log{x}}$. We let $d=q_2'' q_3''$ and define
\[
\lambda_{q,d,r}:=\mathbf{1}_{P^-(r)> z_0}\sum_{\substack{q_2''q_3''=d\\ q_2''\sim Q_2''\\ q_3''\sim Q_3''\\ P^+(q_2''q_3'')\le z_0}}\,\,\sum_{\substack{q_2'q_3'=r\\ q_2'\sim Q_2'\\ q_3'\sim Q_3'}}\lambda_{q_2'q_2''}\nu_{q_3'q_3''}.
\]
We note that $\lambda_{q,d,r}$ does not depend on $q$. With this definition we see that it suffices to show that for every $A>0$ and every choice of $D,R$ with $D R\asymp Q_2 Q_3$ and $D\le y_0^2$ we have that
\[
\sum_{q\sim Q_1}\sum_{d\sim D}\sum_{r\sim R}\gamma_{q}\lambda_{q,d,r}\Delta(q d r)\ll_A \frac{x}{(\log{x})^A}.
\]
We now apply Proposition \ref{prpstn:GeneralDispersion} (we may apply this since $N>Q_1 x^\epsilon>Q_1D(\log{x})^C$ and $N<x^{1-\epsilon}$ by assumption of the lemma). This shows that it suffices to show that
\[
|\mathscr{E}_1|+|\mathscr{E}_2|\ll \frac{N^2}{D Q_1 y_0},
\]
where
\begin{align*}
\mathscr{E}_{1}&:=\sum_{\substack{q\\ (q,a)=1}}\sum_{\substack{d\sim D\\ (d,a)=1}}\sum_{\substack{r_1,r_2\sim R\\ (r_1r_2,a)=1}}\psi_0\Bigl(\frac{q}{Q_1}\Bigr)\frac{\lambda_{q,d,r_1}\overline{\lambda_{q,d,r_2}} }{\phi(q d r_2)q d r_1}\sum_{\substack{n_1,n_2\sim N\\ (n_1,q d r_1)=1\\(n_2,q d r_2)=1}}\alpha_{n_1}\overline{\alpha_{n_2}}\\
&\qquad \times\sum_{1\le |h|\le H_1}\hat{\psi}_0\Bigl(\frac{h M}{q d r_1}\Bigr)e\Bigl( \frac{a h \overline{ n_1}}{q d r_1}\Bigr),\\
\mathscr{E}_2&:=\sum_{\substack{q\\ (q,a)=1}}\psi_0\Bigl(\frac{q}{Q_1}\Bigr)\sum_{\substack{d\sim D\\ (d,a)=1}}\sum_{\substack{r_1,r_2\sim R\\ (r_1,a r_2)=1\\ (r_2,a q d r_1)=1}}\frac{\lambda_{q,d,r_1}\overline{\lambda_{q,d,r_2}}}{q d r_1 r_2}\sum_{\substack{n_1,n_2\sim N\\ n_1\equiv n_2\Mod{q d}\\ (n_1,n_2 q d r_1)=1\\(n_2,n_1 q d r_2)=1\\ |n_1-n_2|\ge N/(\log{x})^C}}\alpha_{n_1}\overline{\alpha_{n_2}}\\
&\qquad \times\sum_{1\le |h|\le H_2}\hat{\psi}_0\Bigl(\frac{h M}{q d r_1 r_2}\Bigr)e\Bigl(\frac{ah\overline{n_1r_2}}{q d r_1}+\frac{ah\overline{n_2 q d r_1}}{r_2}\Bigr),\\
H_1&:=\frac{Q D R}{M}\log^5{x},\\
H_2&:=\frac{Q D R^2}{M}\log^5{x}.
\end{align*}
Since $\lambda_{q,d,r}$ is independent of $q$, we may apply Lemma \ref{lmm:BFI2} to conclude that
\[
\mathscr{E}_1\ll \frac{N^2}{Q_1 D x^\epsilon},
\]
provided we have
\begin{align}
D R N^{3/2}&<x^{1-2\epsilon},\\
Q_1 D R&<x^{1-2\epsilon}.
\end{align}
These are both implied by the conditions of the lemma, recalling that $DR\asymp Q_2Q_3$. Thus it suffices to bound $\mathscr{E}_2$. Since $D\le y_0^2=x^{o(1)}$, it suffices to show
\[
\mathscr{E}_3\ll \frac{N^2}{Q_1 x^{\epsilon/10}},
\]
for each $d\le y_0^2$, where $\mathscr{E}_3=\mathscr{E}_3(d)$ is given by
\begin{align*}
\mathscr{E}_3&:=\sum_{\substack{(q,a)=1}}\psi_0\Bigl(\frac{q}{Q_1}\Bigr)\sum_{\substack{r_1,r_2\sim R\\ (r_1,a r_2)=1\\ (r_2,a q d r_1)=1}}\frac{\lambda_{q,d,r_1}\overline{\lambda_{q,d,r_2}}}{q r_1 r_2}\sum_{\substack{n_1,n_2\sim N\\ n_1\equiv n_2\Mod{q d}\\ (n_1,n_2 q d r_1)=1\\(n_2,n_1 q d r_2)=1\\ |n_1-n_2|\ge N/(\log{x})^C}}\alpha_{n_1}\overline{\alpha_{n_2}}\\
&\qquad\qquad \times\sum_{1\le |h|\le H_2}\hat{\psi}_0\Bigl(\frac{h M}{q d r_1 r_2}\Bigr)e\Bigl(\frac{ah\overline{n_1r_2}}{q d r_1}+\frac{ah\overline{n_2 q d r_1}}{r_2}\Bigr).
\end{align*}
Since $\lambda_{q,d,r}$ is independent of $q$ and we treat each $d$ separately, we may suppress the $q,d$ dependence by writing $\lambda_{r}$ in place of $\lambda_{q,d,r}$. We now apply Lemma \ref{lmm:Simplification}. This shows it suffices to show that
\[
\mathscr{E}'\ll \frac{N^2}{Q_1 x^{\epsilon/2}},
\]
where
\[
\mathscr{E}':=\sum_{\substack{Q_1\le q\le Q_1'\\ (q,a)=1}}\sum_{\substack{R\le r_1\le R_1\\ R\le r_2\le R_2\\ (r_1, a r_2)=1\\ (r_2,a q d r_1)=1}}\frac{\lambda_{r_1}\overline{\lambda_{r_2}}}{q d r_1 r_2}\sum_{\substack{n_1,n_2\sim N\\ n_1\equiv n_2\Mod{q d}\\ (n_1,q d r_1n_2)=1\\ (n_2,q d r_2n_1)=1\\ (n_1r_2,n_2)\in\mathcal{N}\\ |n_1-n_2|\ge N/(\log{x})^C}}\alpha_{n_1}\overline{\alpha_{n_2}}\sum_{1\le |h| \le H'} e\Bigl(\frac{ ah\overline{n_2 q d r_1}(n_1-n_2)}{n_1 r_2}\Bigr),
\]
and where $Q_1'\le 2Q_1$ and $R_1,R_2\le 2R$ and $H'\le H_2$.
We recall the definition of $\lambda_{q,d,r}$ and expand it as a sum. Since $d$ is fixed, there are $x^{o(1)}$ possible choices of $q_2'',q_3''$. Fixing one such choice, we then see $\mathscr{E}'$ is precisely of the form considered in Lemma \ref{lmm:Factorable}. This then gives the result, provided
\begin{align*}
Q_1&<\frac{N}{x^\epsilon},\\
N^2 Q_2' Q_3'{}^2 &<x^{1-7\epsilon},\\
N^2 Q_1 Q_2'{}^4 Q_3'{}^3&<x^{2-14\epsilon},\\
N Q_1 Q_2'{}^5 Q_3'{}^2&<x^{2-14\epsilon}.
\end{align*}
Since $Q_2'\le Q_2$ and $Q_3'\le Q_3$, these bounds follow from the assumptions of the lemma.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prpstn:WellFactorable}]
First we note that by Lemma \ref{lmm:Divisor} the set of $n,m$ with $\max(|\alpha_n|,|\beta_m|)\ge(\log{x})^C$ and $n m\equiv a\Mod{q}$ has size $\ll x(\log{x})^{O_{B_0}(1)-C}/q$, so these terms contribute negligibly if $C=C(A,B_0)$ is large enough. Thus, by dividing through by $(\log{x})^{2C}$ and considering $A+2C$ in place of $A$, it suffices to show the result when all the sequences are 1-bounded. ($\alpha_n$ still satisfies \eqref{eq:SiegelWalfisz} by Lemma \ref{lmm:SiegelWalfiszMaintain}.) The result follows from the Bombieri-Vinogradov Theorem if $Q\le x^{1/2-\epsilon}$, so we may assume that $Q\in[x^{1/2-\epsilon},x^{3/5-10\epsilon}]$.
We use Lemma \ref{lmm:Separation} to remove the condition $n m\in\mathcal{I}$, and see that it suffices to show, for $B=B(A)$ sufficiently large in terms of $A$, that
\begin{equation}
\sum_{q\le x^{3/5-\epsilon}}\lambda_q\sum_{n\in\mathcal{I}_N}\alpha_n\sum_{\substack{m\in \mathcal{I}_M}}\beta_m\Bigl(\mathbf{1}_{n m\equiv a\Mod{q}}-\frac{\mathbf{1}_{(nm,q)=1}}{\phi(q)}\Bigr)\ll_B\frac{x}{(\log{x})^B}
\label{eq:FactorableTarget}
\end{equation}
uniformly over all intervals $\mathcal{I}_N\subseteq[N,2N]$ and $\mathcal{I}_M\subseteq[M,2M]$.
Let us define for $x^\epsilon\le N\le x^{2/5}$
\[
Q_1:=\frac{N}{x^\epsilon},\qquad Q_2:=\frac{Q}{x^{2/5-\epsilon}},\qquad Q_3:=\frac{x^{2/5}}{N}.
\]
We note that $Q_1Q_2Q_3=Q$ and $Q_1,Q_2,Q_3\ge 1$. Since $\lambda_q$ is triply well factorable of level $Q$, we can write
\begin{equation}
\lambda_{q}=\sum_{q_1q_2q_3=q}\gamma^{(1)}_{q_1}\gamma^{(2)}_{q_2}\gamma^{(3)}_{q_3},
\label{eq:LambdaFact}
\end{equation}
for some 1-bounded sequences $\gamma^{(1)},\gamma^{(2)},\gamma^{(3)}$ with $\gamma^{(i)}_q$ supported on $q\le Q_i$ for $i\in\{1,2,3\}$.
We now substitute \eqref{eq:LambdaFact} into \eqref{eq:FactorableTarget} and put each of $q_1,q_2,q_3$ into one of $O(\log^3{x})$ dyadic intervals $(Q_1',2Q_1']$, $(Q_2',2Q_2']$ and $(Q_3',2Q_3']$ respectively. Since $Q_1'\le Q_1$, $Q_2'\le Q_2$ and $Q_3'\le Q_3$ and $Q_1Q_2Q_3=Q\le x^{3/5-10\epsilon}$ we have
\begin{align*}
Q_1'&\le\frac{N}{x^\epsilon},\\
N^2 Q_2' Q_3'{}^2 &\le Qx^{2/5+\epsilon} <x^{1-8\epsilon},\\
N^2 Q_1' Q_2'{}^4 Q_3'{}^3&\le \frac{Q^4}{x^{2/5-3\epsilon}} <x^{2-15\epsilon},\\
N Q_1' Q_2'{}^5 Q_3'{}^2&\le \frac{Q^5}{x^{6/5-4\epsilon}}<x^{2-15\epsilon}.
\end{align*}
Thus, we see that Proposition \ref{prpstn:MainProp} now gives the result.
\end{proof}
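The exponent bookkeeping in the final display of the proof above can be checked mechanically. The following Python sketch (ours, not part of the argument) treats exponents of $x$ additively, takes the extreme case $Q=x^{3/5-10\epsilon}$, and verifies the three inequalities over the whole range $x^{\epsilon}\le N\le x^{2/5}$:
\begin{verbatim}
# exponents of x; products of powers of x become sums of exponents
eps = 0.001
Q = 3/5 - 10*eps                       # the largest allowed level
for k in range(1, 401):
    Nv = k / 1000                      # N = x^Nv with eps <= Nv <= 2/5
    Q1 = Nv - eps                      # Q_1 = N / x^eps
    Q2 = Q - (2/5 - eps)               # Q_2 = Q / x^(2/5 - eps)
    Q3 = 2/5 - Nv                      # Q_3 = x^(2/5) / N
    assert 2*Nv + Q2 + 2*Q3 < 1 - 8*eps
    assert 2*Nv + Q1 + 4*Q2 + 3*Q3 < 2 - 15*eps
    assert Nv + Q1 + 5*Q2 + 2*Q3 < 2 - 15*eps
\end{verbatim}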
We have now established both Proposition \ref{prpstn:DoubleDivisor} and Proposition \ref{prpstn:WellFactorable}, and so completed the proof of Theorem \ref{thrm:Factorable}.
\section{Proof of Theorem \ref{thrm:Linear}}\label{sec:Corollary}
\allowdisplaybreaks
First we recall some details of the construction of sieve weights associated to the linear sieve. We refer the reader to \cite{IwaniecFactorable} or \cite[Chapter 12.7]{Opera} for more details. The standard upper bound sieve weights $\lambda^+_d$ for the linear sieve of level $D$ are given by
\[
\lambda^+_d:=
\begin{cases}
\mu(d),\qquad &d\in\mathcal{D}^+(D),\\
0,&\text{otherwise,}
\end{cases}
\]
where
\[
\mathcal{D}^+(D):=\Bigl\{p_1\cdots p_r:\, p_1\ge p_2\ge \dots \ge p_r,\,\,p_1\cdots p_{2j}p_{2j+1}^3\le D\text{ for $0\le j<r/2$}\Bigr\}.
\]
Moreover, we recall the variant $\tilde{\lambda}_d^+$ of these sieve weights where one does not distinguish between the sizes of primes $p_j\in [D_j,D_j^{1+\eta}]$ with $D_j>x^\epsilon$ for some small constant $\eta>0$. (I.e.\ if $d=p_1\cdots p_r$ with $p_j\in [D_j,D_j^{1+\eta}]$ and $D_1\ge \dots \ge D_r\ge x^\epsilon$, then $\tilde{\lambda}^+_d=(-1)^r$ if $D_1\cdots D_{2j}D_{2j+1}^3\le D^{1/(1+\eta)}$ for all $0\le j<r/2$, and otherwise $\tilde{\lambda}^+_d=0$.) This variant is a well-factorable function in the sense that for any choice of $D_1D_2=D$ we can write $\tilde{\lambda}^+=\sum_{1\le j\le \epsilon^{-1}}\alpha^{(j)}\star\beta^{(j)}$ where $\alpha^{(j)}_n$ is a sequence supported on $n\le D_1$ and $\beta^{(j)}_m$ is supported on $m\le D_2$. The construction of the sequence $\tilde{\lambda}_d^+$ follows from the fact that if $d\in\mathcal{D}^+(D)$ and $D=D_1D_2$ then $d=d_1d_2$ with $d_1\le D_1$ and $d_2\le D_2$. This produces essentially the same results as the original weights when combined with a fundamental lemma type sieve to remove prime factors less than $x^\epsilon$.
In view of Proposition \ref{prpstn:MainProp} and Proposition \ref{prpstn:DoubleDivisor}, in order to prove Theorem \ref{thrm:Linear} it suffices to construct a similar variant $\hat{\lambda}_d^+$ such that for every $N\in [x^\epsilon,x^{1/3+\epsilon}]$ we can write $\hat{\lambda}_d^+=\sum_{1\le j\le \epsilon^{-1}}\alpha^{(j)}\star\beta^{(j)}\star\gamma^{(j)}$ with $\alpha^{(j)}_n$ supported on $n\le D_1$ and $\beta^{(j)}_n$ supported on $n\le D_2$ and $\gamma^{(j)}_n$ supported on $n\le D_3$ for some choice of $D_1,D_2,D_3$ satisfying
\begin{align*}
D_1<\frac{N}{x^\epsilon},\quad
N^2 D_2 D_3^2<x^{1-8\epsilon},\quad
N^2 D_1 D_2^4 D_3^3<x^{2-15\epsilon},\quad
N D_1 D_2^5 D_3^2<x^{2-15\epsilon}.
\end{align*}
An identical argument to the construction of $\tilde{\lambda}_d^+$ shows that we can construct such a sequence $\hat{\lambda}_d^+$ if every $d\in \mathcal{D}^+(D)$ can be written as $d=d_1d_2d_3$ with $d_1\le D_1$, $d_2\le D_2$ and $d_3\le D_3$ satisfying the above constraints. Thus, in order to prove Theorem \ref{thrm:Linear} it suffices to establish the following result.
\begin{prpstn}[Factorization of elements of $\mathcal{D}^+(D)$]\label{prpstn:Factorization}
Let $0<\delta<1/1000$ and let $D=x^{7/12-50\delta}$, $x^{2\delta} \le N\le x^{1/3+\delta/2}$ and $d\in\mathcal{D}^+(D)$. Then there is a factorization $d=d_1d_2d_3$ such that
\begin{align*}
d_1&\le\frac{N}{x^\delta},\\
N^2d_2d_3^2&\le x^{1-\delta},\\
N^2d_1d_2^4d_3^3&\le x^{2-\delta},\\
N d_1d_2^5d_3^2&\le x^{2-\delta}.
\end{align*}
\end{prpstn}
\begin{proof}
Let $d=p_1\cdots p_r\in\mathcal{D}^+$. We split the argument into several cases depending on the size of the factors.
\textbf{Case 1: $p_1\ge D^2/x^{1-3\delta}$.}
Let $D_1:=N x^{-\delta}$, $d_2:=D_2:=p_1$ and $D_3:=D x^\delta/(Np_1)$. We note that $D_1,D_2,D_3\ge 1$ from our bounds on $N$ and $p_1^3\le D$. Since $p_2\le p_1\le D^{1/3}$ and $D_1D_3=D/p_1$ we see that $p_2^2\le D_1D_3$, so either $p_2\le D_1$ or $p_2\le D_3$. Moreover, we see that since $p_1\cdots p_{j-1}p_j^2\le D$ for all $j\le r$, we have that $p_j^2\le D_1D_3/(p_2\cdots p_{j-1})$ for all $j\ge 3$. Thus, by considering $p_2$, $p_3$, $\dots$ in turn, we can greedily form products $d_1$ and $d_3$ with $d_1\le D_1$ and $d_3\le D_3$ and $d_1d_3=p_2\cdots p_r$. We now see that since $D^2/x^{1-3\delta}\le p_1\le D^{1/3}$, we have
\begin{align*}
d_1&\le D_1\le \frac{N}{x^\delta},\\
N^2d_2d_3^2&\le N^2D_2D_3^2\le x^{2\delta}D^2/p_1<x^{1-\delta},\\
N^2d_1d_2^4d_3^3&\le N^2D_1D_2^4D_3^3\le x^{2\delta}D^3p_1<x^{2-\delta},\\
N d_1d_2^5d_3^2&\le ND_1D_2^5D_3^2\le x^\delta D^2 p_1^3<x^{2-\delta},
\end{align*}
so this factorization satisfies the conditions.
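To illustrate the greedy assignment in Case 1, the following toy Python sketch (ours; the prime-factor exponents are hypothetical, chosen only to satisfy the Case 1 hypotheses) tracks products of powers of $x$ as sums of exponents and confirms the four required inequalities:
\begin{verbatim}
delta = 1/2000
D  = 7/12 - 50*delta            # exponent of the level D
Nv = 1/3                        # exponent of N
# Case 1: p_1 at the lower threshold, remaining factors small
e  = [2*D - (1 - 3*delta)] + [0.01] * 30
D1 = Nv - delta                 # cap for d_1
D3 = D + delta - Nv - e[0]      # cap for d_3 (so D1 + D3 = D - e[0])
d1 = d3 = 0.0
for ej in e[1:]:                # greedily place each prime factor
    if d1 + ej <= D1:
        d1 += ej
    else:
        assert d3 + ej <= D3    # guaranteed by p_j^2 <= D1*D3/(...)
        d3 += ej
d2 = e[0]
assert d1 <= Nv - delta                        # d_1 <= N / x^delta
assert 2*Nv + d2 + 2*d3 <= 1 - delta           # N^2 d_2 d_3^2
assert 2*Nv + d1 + 4*d2 + 3*d3 <= 2 - delta    # N^2 d_1 d_2^4 d_3^3
assert Nv + d1 + 5*d2 + 2*d3 <= 2 - delta      # N d_1 d_2^5 d_3^2
\end{verbatim}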
\textbf{Case 2: $p_2p_3\ge D^2/x^{1-3\delta}$.}
This is similar to the case above. We may assume that we are not in Case 1, so $p_1,p_2,p_3,p_4<D^2/x^{1-3\delta}$. We now set $D_1:=N x^{-2\delta}$, $d_2:=D_2:=p_2p_3$ and $D_3:=x^{2\delta} D/(Np_2p_3)$. Note that
\[
p_2p_3\le p_2^{1/3}(p_1p_2p_3^3)^{1/3}\le \frac{D^{2/3}}{x^{1/3-\delta}}D^{1/3}= \frac{D}{x^{1/3-\delta}}.
\]
In particular, $D_1,D_2,D_3\ge 1$ and we have that $D_1D_3=D/p_2p_3\ge x^{1/3-\delta}$. Thus $p_1^2<D^4/x^{2-6\delta}< x^{1/3-\delta}\le D_1D_3$, and $p_4^2\le D_1D_3/p_1$ since $p_1p_2p_3p_4^2\le D$. Moreover, for $j\ge 5$ we have $p_1\cdots p_{j-1}p_j^2\le D$, so $p_j^2\le D_1D_3/(p_1p_4\dots p_{j-1})$. We can greedily form products $d_1\le D_1$ and $d_3\le D_3$ out of $p_1p_4\cdots p_r$, by considering each prime in turn. We now see that since $D^2/x^{1-3\delta}\le p_2p_3<x^{1/4}$, we have
\begin{align*}
d_1&\le D_1\le\frac{N}{x^\delta},\\
N^2d_2d_3^2&\le N^2D_2D_3^2\le x^{2\delta} D^2/(p_2p_3)\le x^{1-\delta},\\
N^2d_1d_2^4d_3^3&\le N^2D_1D_2^4D_3^3\le x^{2\delta}D^3p_2p_3\le x^{2-\delta},\\
Nd_1d_2^5d_3^2&\le ND_1D_2^5D_3^2\le x^\delta D^2(p_2p_3)^3\le x^{2-\delta},
\end{align*}
so this gives a suitable factorization.
\textbf{Case 3: $p_1p_4\ge D^2/x^{1-3\delta}$.}
We may assume we are not in Case 1 or 2. In particular $\max(p_1,p_2p_3)< D^2/x^{1-3\delta}$, so $p_1p_4\le p_1(p_2p_3)^{1/2}<D^3/x^{3/2-9\delta/2}<D/x^{1/3-\delta/2}$, and the argument is completely analogous to the case above, choosing $D_1:=N x^{-2\delta}$, $d_2:=D_2:=p_1p_4$ and $D_3:=x^{2\delta} D/(Np_1p_4)$, using the fact that $D^2/x^{1-3\delta}\le p_1p_4<x^{1/4}$.
\textbf{Case 4: $p_1p_4<D^2/x^{1-3\delta}$ and $p_2p_3<D^2/x^{1-3\delta}$. }
We set $D_1:=N x^{-\delta}$, $D_2:=D^2/x^{1-3\delta}$ and $D_3:=x^{1-2\delta}/(D N)$, noting that these are all at least 1. We see that one of $D_1$ or $D_3$ is also at least $D^2/x^{1-3\delta}$, since their product is $x^{1-3\delta}/D>x^{9/24}>D^4/x^{2-6\delta}$. We now wish to greedily form products $d_1\le D_1$, $d_2\le D_2$ and $d_3\le D_3$ by considering primes in turn. We start with $d_2=p_1p_4<D_2$ and either $d_1=1$ and $d_3=p_2p_3$ or $d_1=p_2p_3$ and $d_3=1$ depending on whether $p_2p_3>D_1$ or not. We now greedily form a sequence, where at the $j^{th}$ step we replace one of the $d_i$ with $d_ip_j$ provided $d_i p_j<D_i$ (the choice of $i\in\{1,2,3\}$ does not matter if there are multiple possibilities with $d_i p_j<D_i$), and we start with $j=5$. We stop if either we have included our final prime $p_r$ in one of the $d_i$, or there is a stage $j$ when $p_j d_1>D_1$, $p_j d_2>D_2$ and $p_j d_3>D_3$. If we stop because we have exhausted all our primes, then we see that we have found $d_1\le D_1$, $d_2\le D_2$ and $d_3\le D_3$ such that $d_1d_2d_3=p_1\cdots p_r$. It is then easy to verify that
\begin{align*}
d_1&\le D_1\le \frac{N}{x^\delta},\\
N^2d_2d_3^2&\le N^2D_2D_3^2\le x^{1-\delta},\\
N^2d_1d_2^4d_3^3&\le N^2 D_1D_2^4D_3^3\le \frac{D^5}{x^{1-5\delta}}<x^{2-\delta},\\
Nd_1d_2^5d_3^2&\le ND_1D_2^5D_3^2\le \frac{D^8}{x^{3-10\delta}}<x^{2-\delta}.
\end{align*}
Thus we just need to consider the situation when at some stage $j$ we have $p_j d_1>D_1$, $p_j d_2>D_2$ and $p_j d_3>D_3$. We see that this must first occur when $j$ is even, since for odd $j$ we have $p_j^3\le D/(p_1\cdots p_{j-1})=D_1D_2D_3/(d_1d_2d_3)$ and so $p_j\le \max(D_1/d_1,D_2/d_2,D_3/d_3)$. We must also have $j\ge 6$ since $j>4$ and is even. This implies $(p_j)^7\le p_1\cdots p_4p_5^3\le D$, so $p_j\le D^{1/7}\le x^{1/12-6\delta}$.
We now set $d_2':=d_2p_j$ and $D_2':=D_2x^{1/12-6\delta}$, so that $D_2\le d_2'\le D_2'$. We set $D_3':=D_2 D_3/d_2'$. For all $\ell>j$ we have $p_{\ell}^2<D_1D_3'/(d_1d_3p_{j+1}\cdots p_{\ell-1})$, so we can greedily make products $d_1'\le D_1$ and $d_3'\le D_3'$ with $d_1'd_3'=d_1d_3p_{j+1}\cdots p_r$. In particular, we then have $d=d_1'd_2'd_3'$. We then verify
\begin{align*}
d_1'&\le D_1\le \frac{N}{x^\delta},\\
N^2 d_2'(d_3')^2&\le N^2d_2'(D_3')^2=\frac{N^2 D_2^2 D_3^2}{d_2'}\le N^2 D_2 D_3^2=x^{1-\delta},\\
N^2 d_1' (d_2')^4(d_3')^3&\le N^2 D_1 (d_2')^4(D_3')^3\le N^2D_1 D_2^4D_3^3 x^{1/12-6\delta}\le \frac{D^5}{x^{11/12+\delta}}<x^{2-\delta},\\
N d_1' (d_2')^5 (d_3')^2 &\le N D_1 (d_2')^5 (D_3')^2\le ND_1D_2^5D_3^2 x^{1/4-18\delta} \le \frac{D^8}{x^{11/4+8\delta}}\le x^{2-\delta}.
\end{align*}
We have now covered all cases, and so completed the proof of Proposition \ref{prpstn:Factorization}.
\end{proof}
\begin{rmk}
By considering the situation when $N=x^{1/3}$, $p_1\approx p_2\approx D^{2/7}$, $p_3\approx p_4\approx D^{1/7}$, and $p_j$ for $j\ge 5$ are small but satisfy $p_1\cdots p_r\approx D$, we see that Proposition \ref{prpstn:Factorization} cannot be extended to $D=x^{7/12+\delta}$ unless we impose further restrictions on $N$ or the $p_i$.
\end{rmk}
\bibliographystyle{plain}
\section{Introduction}
Imagine you are interested in learning an accurate estimate of the probability that the United States unemployment rate for a particular month will rise above 10\%. You could choose to spend hours digging through news articles, reading financial reports, and weighing various opinions against each other, eventually coming up with a reasonably informed estimate. However, you could potentially save yourself a lot of hassle (and obtain a better estimate!) by appealing to the wisdom of crowds.
A \emph{prediction market} is a financial market designed for information aggregation.
For example, in a cost function based prediction market~\cite{CP07}, the organizer (or \emph{market maker}) trades a set of securities corresponding to each potential outcome of an event. The market maker might offer a security that pays \$1 if and only if the United States unemployment rate for January 2010 is above 10\%. A risk neutral trader who believes that the true probability that the unemployment rate will be above 10\% is $p$ should be willing to buy a share of this security at any price below $\$p$. Similarly, he should be willing to sell a share of this security at any price above $\$p$. For this reason, the current market price of this security can be viewed as the population's collective estimate of how likely it is that the unemployment rate will be above 10\%.
These estimates have proved quite accurate in practice in a wide variety of domains. (See \citet{LHI09} for an impressive assortment of examples.) The theory of rational expectations equilibria offers some insight into why prediction markets in general should converge to accurate prices, but is plagued by strong assumptions and no-trade theorems~\cite{PS07}. Furthermore, this theory says nothing about why particular prediction market mechanisms, such as Hanson's increasingly popular Logarithmic Market Scoring Rule (LMSR)~\cite{H03,H07}, might produce more accurate estimates than others in practice. In this work, we aim to provide additional insight into the learning power of particular market mechanisms by highlighting the deep mathematical connections between prediction markets and no-regret learning.
It should come as no surprise that there is a connection between prediction markets and learning. The theories of markets and learning are built upon many of the same fundamental concepts, such as proper scoring rules (called proper losses in the learning community) and Bregman divergences. To our knowledge, \citet{CFLPW08} were the first to formally demonstrate a connection, showing that the standard Randomized Weighted Majority regret bound~\cite{FS97} can be used as a starting point to rederive the well-known bound on the worst-case loss of an LMSR market maker. (They went on to show that PermELearn, an extension of Weighted Majority to permutation learning~\cite{HW09}, can be used to efficiently run LMSR over combinatorial outcome spaces for betting on rankings.) As we show in Section~\ref{sec:connection}, the converse is also true; the Weighted Majority regret bound can be derived directly from the bound on the worst-case loss of a market maker using LMSR. However, the connection goes much deeper.
In Section~\ref{sec:connection}, we show how \emph{any} cost function based prediction market with bounded loss can be interpreted as a no-regret learning algorithm. Furthermore, if the loss of the market maker is bounded, this bound can be used to derive an $O(\sqrt{T})$ regret bound for the corresponding learning algorithm. The key idea is to view the \emph{trades} made in the market as \emph{losses} observed by the learning algorithm. We can then think of the market maker as learning a probability distribution over outcomes by treating each observed trade as a training instance.
In Section~\ref{sec:connections}, we go on to show that the class of \emph{convex} cost function based markets exactly corresponds to the class of Follow the Regularized Leader learning algorithms~\cite{SS07,HK08,H09} in which weights are chosen at each time step to minimize a combination of empirical loss and a convex regularization term. This allows us to interpret the selection of a cost function for the market as the selection of a regularizer for the learning problem. Furthermore, we prove an equivalence between another common class of prediction markets, \emph{market scoring rules}, and convex cost function based markets,\footnote{A similar but weaker correspondence between market scoring rules and cost function based markets was discussed in \citet{CP07} and \citet{ADPWY09}.} which immediately implies that market scoring rules can be interpreted as Follow the Regularized Leader algorithms too. These connections provide insight into why it is that prediction markets tend to yield such accurate estimates in practice.
Before describing our results in more detail, we review the relevant concepts and results from the literature on prediction markets and no-regret learning in Sections~\ref{sec:predmarkets} and~\ref{sec:experts}.
\section{Prediction Markets}
\label{sec:predmarkets}
In recent years, a variety of compelling prediction market mechanisms have been proposed and studied, including standard call market mechanisms and Pennock's dynamic parimutuel markets~\cite{P04}. In this work we focus on two broad classes of mechanisms: Hanson's market scoring rules~\cite{H03,H07} and cost function based prediction markets as described in \citet{CP07}. We also briefly discuss the related class of Sequential Convex Parimutuel Mechanisms~\cite{ADPWY09} in Section~\ref{sec:scpm}.
\subsection{Market Scoring Rules}
\emph{Scoring rules} have long been used in the evaluation of probabilistic forecasts. In the context of prediction markets and elicitation, scoring rules are used to encourage individuals to make careful assessments and truthfully report their beliefs~\cite{Savage:71,GKO05,LPS08}. In the context of machine learning, scoring rules are used as loss functions to evaluate and compare the performance of different algorithms~\cite{BSS05,RW09}.
Formally, let $\{1,\cdots,N\}$ be a set of mutually exclusive and exhaustive outcomes of a future event. A scoring rule $\vec{s}$ maps a probability distribution ${\vec{p}}$ to a score $s_i({\vec{p}})$ for each outcome $i$, with $s_i({\vec{p}})$ taking values in the extended real line $[-\infty, \infty]$. Intuitively, this score represents the reward of a forecaster might receive for predicting the distribution ${\vec{p}}$ if the outcome turns out to be $i$. A scoring rule is said to be \emph{regular} relative to the probability simplex ${\Delta}_N$ if $\sum_{i=1}^N p_i s_i({\vec{p}}\,') \in [-\infty, \infty)$ for all ${\vec{p}}, {\vec{p}}\,' \in {\Delta}_N$, with $\sum_{i=1}^N p_i s_i({\vec{p}}) \in (-\infty, \infty)$. This implies that $s_i(\vec{p})$ is finite whenever $p_i > 0$. A scoring rule is said to be \emph{proper} if a risk-neutral forecaster who believes the true distribution over outcomes to be ${\vec{p}}$ has no incentive to report any alternate distribution ${\vec{p}}\,'$, that is, if $\sum_{i=1}^N p_i s_i({\vec{p}}) \geq \sum_{i=1}^N p_i s_i({\vec{p}}\,')$ for all distributions ${\vec{p}}\,'$. The rule is \emph{strictly proper} if this inequality holds with equality only when ${\vec{p}} = {\vec{p}}\,'$.
Two examples of regular, strictly proper scoring rules commonly used in both elicitation and in machine learning are the quadratic scoring rule~\cite{B50}:
\begin{equation}
s_i({\vec{p}}) = a_i + b\left(2p_i - \sum_{i=1}^N p_i^2 \right)
\label{eqn:qsr}
\end{equation}
and the logarithmic scoring rule~\cite{G52}:
\begin{equation}
s_i({\vec{p}}) = a_i + b \log(p_i)
\label{eqn:lsr}
\end{equation}
with arbitrary parameters $a_1, \cdots, a_N$ and parameter $b > 0$.
The uses and properties of scoring rules are too extensive to cover in detail here. For a nice survey, see \citet{Gneiting:07}.
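For concreteness, the following Python sketch (ours, not taken from the literature) implements both rules with $a_i=0$ and $b=1$ and numerically illustrates properness: the expected score under a belief $\vec{r}$ is maximized by reporting $\vec{r}$ itself.
\begin{verbatim}
import numpy as np

def quadratic_score(p, i):
    # s_i(p) = 2 p_i - sum_j p_j^2  (taking a_i = 0, b = 1)
    return 2.0 * p[i] - np.dot(p, p)

def log_score(p, i):
    # s_i(p) = log p_i  (taking a_i = 0, b = 1)
    return np.log(p[i])

def expected_score(score, r, p):
    # expected score of reporting p under the belief r
    return sum(r[i] * score(p, i) for i in range(len(r)))

rng = np.random.default_rng(0)
r = np.array([0.2, 0.5, 0.3])          # the forecaster's true belief
for score in (quadratic_score, log_score):
    truthful = expected_score(score, r, r)
    for _ in range(1000):              # random alternative reports
        p = rng.dirichlet(np.ones(3))
        assert expected_score(score, r, p) <= truthful + 1e-12
\end{verbatim}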
\emph{Market scoring rules} were developed by \citet{H03,H07} as a method of using scoring rules to pool opinions from many different forecasters. Market scoring rules are sequentially shared scoring rules. Formally, the market maintains a current probability distribution ${\vec{p}}$. At any time, a trader can enter the market and change this distribution to an arbitrary distribution ${\vec{p}}\,'$ of her choice.\footnote{While ${\vec{p}}\,'$ may be arbitrary, in some market scoring rules, such as the LMSR, distributions that place a weight of 0 on any outcome are not allowed because it requires the trader to pay infinite amount of money if the outcome with reported probability 0 actually happens.} If the outcome turns out to be $i$, she receives a (possibly negative) payoff of $s_i({\vec{p}}\,') - s_i({\vec{p}})$. For example, in the popular Logarithmic Market Scoring Rule (LMSR), which is based on the logarithmic scoring rule in Equation~\ref{eqn:lsr}, a trader who changes the distribution from ${\vec{p}}$ to ${\vec{p}}\,'$ receives a payoff of $b \log (p'_i / p_i)$.
Since the trader has no control over ${\vec{p}}$, a myopic trader who believes the true distribution to be $\vec{r}$ maximizes her expected payoff by maximizing $\sum_i r_i s_i ({\vec{p}}\,')$. Thus if $\vec{s}$ is a strictly proper scoring rule, traders have an incentive to change the market's distribution to match their true beliefs. The idea is that if traders update their own beliefs over time based on market activity, the market's distribution should eventually converge to the collective beliefs of the population.
Each trader in a market scoring rule is essentially responsible for paying the previous trader's score. Thus the market maker is responsible only for paying the score of the final trader. Let ${\vec{p}}_0$ be the initial probability distribution of the market. The worst case loss of the market maker is then
\[
\max_{i \in \{1,\cdots,N\}} \max_{{\vec{p}} \in {\Delta}_N} \left(s_i({\vec{p}}) - s_i({\vec{p}}_0) \right) .
\]
The worst case loss of the market maker running an LMSR initialized to the uniform distribution is $b \log N$.
Note that the parameters $a_1, \cdots, a_N$ of the logarithmic scoring rule do not affect either the payoff of traders or the loss of the market maker in the LMSR. For simplicity, in the remainder of this paper when discussing the LMSR we assume that $a_i = 0$ for all $i$.
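The telescoping of payments is easy to check in simulation. The following sketch (ours) runs a sequence of LMSR trades starting from the uniform distribution and confirms that the market maker's total payout on each outcome $i$ is $b\log(p_{T,i}/p_{0,i})$, and hence at most $b\log N$:
\begin{verbatim}
import numpy as np

b, N, T = 1.0, 4, 50
rng = np.random.default_rng(1)
p = np.full(N, 1.0 / N)               # uniform initial distribution
payout = np.zeros(N)                  # market maker's payout, per outcome
for _ in range(T):
    p_new = rng.dirichlet(np.ones(N)) # next trader's reported distribution
    payout += b * np.log(p_new / p)   # s_i(p') - s_i(p) for every i
    p = p_new
# telescoping: the total payout equals the final score minus the initial
# score, and is at most b log N when starting from the uniform distribution
assert np.allclose(payout, b * np.log(p * N))
assert payout.max() <= b * np.log(N) + 1e-9
\end{verbatim}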
\subsection{Cost Function Based Markets}
As before, let $\{1,\cdots,N\}$ be a set of mutually exclusive and exhaustive outcomes of an event. In a cost function based market, a market maker offers a security corresponding to each outcome $i$. The security associated with outcome $i$ pays off \$1 if $i$ happens, and \$0 otherwise.\footnote{The dynamic parimutuel market falls outside this framework since the winning payoff depends on future trades.}
Different mechanisms can be used to determine how these securities are priced. Each mechanism is specified using a differentiable \emph{cost function} $C: \mathbb{R}^N \rightarrow \mathbb{R}$. This cost function is simply a potential function describing the amount of money currently wagered in the market as a function of the quantity of shares purchased. If $q_i$ is the number of shares of security $i$ currently held by traders, and a trader would like to purchase $r_i$ shares of each security (where $r_i$ could be zero or even negative, representing the sale of shares), the trader must pay $C({\vec{q}} + \r) - C({\vec{q}})$ to the market maker. The instantaneous price of security $i$ (that is, the price per share of an infinitely small number of shares) is then
$p_i = \partial C({\vec{q}})/\partial q_i$.
We say that a cost function is \emph{valid} if the associated prices satisfy two simple conditions:
\begin{enumerate}
\item For every $i \in \{1,\cdots,N\}$ and every $\vec{q} \in \mathbb{R}^N$, $p_i(\vec{q}) \geq 0$.
\item For every $\vec{q} \in \mathbb{R}^N$, $\sum_{i=1}^N p_i(\vec{q}) =1$ ~.
\end{enumerate}
The first condition ensures that the price of a security is never negative. If the current price of the security associated with an outcome $i$ were negative, a trader could purchase shares of this security at a guaranteed profit. The second condition ensures that the prices of all securities sum to 1. If the prices summed to something less than (respectively, greater than) 1, then a trader could purchase (respectively, sell) small equal quantities of each security for a guaranteed profit. Together, these conditions ensure that there are no arbitrage opportunities in the market.
These conditions also ensure that the current prices can always be viewed as a valid probability distribution over the outcome space. In fact, these prices represent the market's current estimate of the probability that outcome $i$ will occur.
The following theorem gives sufficient and necessary conditions for the cost function $C$ to be valid. While these properties of cost functions have been discussed elsewhere~\cite{CP07,ADPWY09}, the fact that they are both sufficient and necessary for any valid cost function $C$ is important for our later analysis. As such, we state the full proof here for completeness.
\begin{theorem}
A cost function $C$ is valid if and only if it satisfies the following three properties:
\begin{enumerate}
\item {\sc Differentiability:} The partial derivatives $\partial C({\vec{q}})/\partial q_i$ exist for all $\vec{q}\in \mathbb{R}^N$ and $i\in\{1, \dots, N\}$.
\item {\sc Increasing Monotonicity:} For any $\vec{q}$ and $\vec{q}\,'$, if $\vec{q} \geq \vec{q}\,'$, then $C(\vec{q}) \geq C(\vec{q}\,')$.
\item {\sc Positive Translation Invariance:} For any $\vec{q}$ and any constant $k$, $C(\vec{q} + k\vec{1}) = C(\vec{q}) + k$.
\end{enumerate}
\label{thm:validcostfunc}
\end{theorem}
\begin{proof}
Differentiability is necessary and sufficient for the price functions to be well-defined at all points. It is easy to see that requiring the cost function to be monotonic is equivalent to requiring that $p_i(\vec{q}) \geq 0$ for all $i$ and $\vec{q}$. We will show that requiring positive translation invariance is equivalent to requiring that the prices always sum to one.
First, assume that $\sum_{i=1}^N p_i(\vec{q}) = 1$ for all $\vec{q}$. For any fixed value of $\vec{q}$, define $\vec{u} = \vec{u}(a) = {\vec{q}} + a \vec{1}$ and let $u_i$ be the $i$th component of $\vec{u}$. Then for any $k$,
\begin{eqnarray*}
C(\vec{q} + k \vec{1}) - C(\vec{q})
&=& \int_{0}^k \frac{d C(\vec{q} + a \vec{1})}{d a} da
\\
&=& \int_{0}^k \sum_{i=1}^N \frac{\partial C(\vec{u})}{\partial u_i} \frac{\partial u_i}{\partial a} da
\\
&=& \int_{0}^k \sum_{i=1}^N p_i(\vec{u}) da
= k~.
\end{eqnarray*}
This is precisely translation invariance.
Now assume instead that positive translation invariance holds. Fix any arbitrary $\vec{q}\,'$ and $k$ and define $\vec{q} = \vec{q}\,' + k\vec{1}$. Notice that by setting $\vec{q}\,'$ and $k$ appropriately, we can make $\vec{q}$ take on any arbitrary values. We have,
\[
\frac{\partial C(\vec{q})}{\partial k}
= \sum_{i=1}^N\frac{\partial C(\vec{q})}{\partial q_i} \frac{\partial q_i}{\partial k}
= \sum_{i=1}^N p_i(\vec{q}).
\]
By translation invariance, $C(\vec{q}~'+k\vec{1}) = C(\vec{q}~') + k$. Thus,
\[
\frac{\partial C(\vec{q})}{\partial k} = \frac{\partial (C(\vec{q}~') + k) }{\partial k}=1.
\]
Combining the two equations, we have $\sum_{i=1}^N p_i(\vec{q}) = 1$.
\end{proof}
One quantity that is useful for comparing different market mechanisms is the worst-case loss of the market maker,
\[
\max_{{\vec{q}} \in \mathbb{R}^N} \left( \max_{i \in \{1,\cdots,N\}} q_i - (C({\vec{q}}) - C(\vec{0})) \right) ~.
\]
This is simply the difference between the maximum amount that the market maker might have to pay the winners and the amount of money collected by the market maker.
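For example, anticipating the LMSR cost function given below, one can check directly that this quantity recovers the $b\log N$ bound of the market scoring rule formulation: since $C({\vec{q}}) = b \log \sum_{j=1}^N {\textrm{e}}^{q_j/b} \geq q_i$ for every $i$, while $C(\vec{0}) = b\log N$, we have
\[
\max_{{\vec{q}} \in \mathbb{R}^N} \left( \max_{i \in \{1,\cdots,N\}} q_i - (C({\vec{q}}) - C(\vec{0})) \right) \le b \log N ~.
\]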
The Logarithmic Market Scoring Rule described above can be specified as a cost function based prediction market~\cite{H03,CP07}. The cost function of the LMSR is
\[
C(\vec{q}) = b \log \sum_{i=1}^N {\textrm{e}}^{q_{i}/b} ~,
\]
and the corresponding prices are
\[
p_{i}(\vec{q})
= \frac{\partial C(\vec{q})}{\partial q_{i}}
= \frac{ {\textrm{e}}^{q_{i}/b}}{\sum_{j=1}^N {\textrm{e}}^{q_{j}/b}} ~.
\]
This formulation is equivalent to the market scoring rule formulation in the sense that a trader who changes the market probabilities from $\r$ to $\r\,'$ in the MSR formulation receives the same payoff for every outcome $i$ as a trader who changes the quantity vectors from any ${\vec{q}}$ to ${\vec{q}}\,'$ such that $p({\vec{q}}) = \r$ and $p({\vec{q}}\,') = \r\,'$ in the cost function formulation.
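This equivalence is also easy to verify numerically. The sketch below (ours) implements the LMSR cost function, checks the validity conditions of Theorem~\ref{thm:validcostfunc} at a random point, and confirms that the net payoff on each outcome from moving the quantity vector from ${\vec{q}}$ to ${\vec{q}}\,'$ equals the market scoring rule payment $b \log (p_i({\vec{q}}\,')/p_i({\vec{q}}))$:
\begin{verbatim}
import numpy as np

b = 1.0

def cost(q):
    # C(q) = b log sum_j exp(q_j / b)
    return b * np.log(np.sum(np.exp(q / b)))

def prices(q):
    # p_i(q) = exp(q_i/b) / sum_j exp(q_j/b), the gradient of C
    e = np.exp(q / b)
    return e / e.sum()

rng = np.random.default_rng(2)
q = rng.normal(size=5)
q_new = q + rng.normal(size=5)       # an arbitrary trade r = q_new - q
p, p_new = prices(q), prices(q_new)
# validity: nonnegative prices summing to one, translation invariance
assert np.all(p >= 0) and np.isclose(p.sum(), 1.0)
assert np.isclose(cost(q + 3.0), cost(q) + 3.0)
# net payoff on outcome i: shares won minus the amount paid
net = (q_new - q) - (cost(q_new) - cost(q))
assert np.allclose(net, b * np.log(p_new / p))
\end{verbatim}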
\section{Learning from Expert Advice}
\label{sec:experts}
We now briefly review the problem of learning from expert advice. In this framework, an algorithm makes a sequence of predictions based on the advice of a set of $N$ \emph{experts} and receives a corresponding sequence of \emph{losses}.\footnote{This framework could be formalized equally well in terms of \emph{gains}, but losses are more common in the literature.} The goal of the algorithm is to achieve a \emph{cumulative loss} that is ``almost as low'' as the cumulative loss of the best performing expert in hindsight. No statistical assumptions are made about these losses. Indeed, algorithms are expected to perform well even if the sequence of losses is chosen by an adversary.
Formally, at every time step $t \in \{1,\cdots,T\}$, every expert $i \in \{1,\cdots, N\}$ receives a loss $\ell_{i,t}\in [0,1]$. The cumulative loss of expert $i$ at time $T$ is then defined as ${L}_{i,T} = \sum_{t=1}^T \ell_{i,t}$. An algorithm ${\mathcal{A}}$ maintains a weight $w_{i,t}$ for each expert $i$ at time $t$, where $\sum_{i=1}^N w_{i,t} = 1$. These weights can be viewed as a distribution over the experts. The algorithm then receives its own instantaneous loss $\ell_{{\mathcal{A}},t} =\sum_{i=1}^N w_{i,t} \ell_{i,t}$, which can be interpreted as the expected loss the algorithm would receive if it always chose an expert to follow according to the current distribution. The cumulative loss of ${\mathcal{A}}$ up to time $T$ is defined in the natural way as ${L}_{{\mathcal{A}},T} = \sum_{t=1}^T \ell_{{\mathcal{A}},t}=\sum_{t=1}^T \sum_{i=1}^N w_{i,t} \ell_{i,t}$.
It is unreasonable to expect the algorithm to achieve a small cumulative loss if none of the experts perform well. As such, it is typical to measure the performance of an algorithm in terms of its \emph{regret}, defined to be the difference between the cumulative loss of the algorithm and the loss of the best performing expert, that is,
\[
{L}_{{\mathcal{A}},T} - \min_{i \in \{1,\cdots,N\}} {L}_{i,T} .
\]
An algorithm is said to have \emph{no regret} if the average per time step regret approaches $0$ as $T$ approaches infinity.
The popular Randomized Weighted Majority (WM) algorithm~\cite{LW94,FS97} is an example of a no-regret algorithm. Weighted Majority uses weights
\[
w_{i,t} = \frac{{\textrm{e}}^{-\eta {L}_{i,t-1}}}{\sum_{j=1}^N {\textrm{e}}^{-\eta{L}_{j,t-1}}} ,
\]
where $\eta > 0$ is a tunable parameter known as the \emph{learning rate}. It is well known that the regret of WM after $T$ trials can be bounded as
\[
{L}_{WM(\eta),T} - \min_{i \in \{1,\cdots,N\}} {L}_{i,T} \leq \eta T + \frac{\log N}{\eta}.
\]
When $T$ is known in advance, setting $\eta=\sqrt{\log N /T}$ yields the standard $O(\sqrt{T \log N})$ regret bound.
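As a sanity check of this bound (a hedged numerical sketch of our own; the random losses and the seed are arbitrary choices), one can simulate Weighted Majority and compare its regret to $\eta T + \log(N)/\eta$:
\begin{verbatim}
import numpy as np

def weighted_majority_regret(losses, eta):
    # losses: T x N array with entries in [0,1].
    T, N = losses.shape
    L = np.zeros(N)          # cumulative expert losses L_{i,t-1}
    alg_loss = 0.0
    for t in range(T):
        w = np.exp(-eta * (L - L.min()))  # stabilized exponential weights
        w /= w.sum()
        alg_loss += w @ losses[t]
        L += losses[t]
    return alg_loss - L.min()

rng = np.random.default_rng(0)
T, N = 2000, 10
losses = rng.random((T, N))
eta = np.sqrt(np.log(N) / T)
print(weighted_majority_regret(losses, eta) <= eta * T + np.log(N) / eta)
\end{verbatim}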
It has been shown that the weights chosen by Weighted Majority are precisely those that minimize a combination of empirical loss and an entropic regularization term~\cite{KW97,KW99,HW09}. More specifically, the weights at time $t$ are precisely those that minimize
\[
\sum_{i=1}^N w_{i} {L}_{i,t-1} - \frac{1}{\eta} {\textrm{H}}(\vec{w})
\]
among all $\vec{w} \in {\Delta}_N$, where ${\textrm{H}}$ is the entropy. This makes Weighted Majority an example of a broader class of algorithms collectively known as \emph{Follow the Regularized Leader} algorithms~\cite{SS07,HK08,H09}. This class of algorithms grew out of the following fundamental insight of \citet{KV05}.
Consider first the aptly named \emph{Follow the Leader} (FTL) algorithm, which chooses weights at time $t$ to minimize $\sum_{i=1}^N w_{i,t} {L}_{i,t-1}$. This algorithm simply places all of its weight on the single expert (or set of experts) with the best performance on previous examples. As such, this algorithm can be highly unstable, dramatically changing its weights from one time step to the next. It is easy to see that FTL suffers $\Omega(T)$ regret in the worst case when the best expert changes frequently. For example, if there are only two experts with losses starting at $\angles{1/2,0}$ and then alternating $\angles{0,1}, \angles{1,0}, \angles{0,1}, \angles{1,0}, \cdots$, then FTL places a weight of 1 on the losing expert at every point in time.
To overcome this instability, \citet{KV05} suggested adding a random perturbation to the empirical loss of each expert, and choosing the expert that minimizes this perturbed loss.\footnote{A very similar algorithm was originally developed and analyzed by Hannan in the 1950s~\cite{H57}.} However, in general this perturbation need not be random. Instead of adding a random perturbation, it is possible to gain the necessary stability by adding a \emph{regularizer} ${\mathcal{R}}$ and choosing weights to minimize
\begin{equation}
\sum_{i=1}^N w_{i,t} {L}_{i,t-1} + \frac{1}{\eta} {\mathcal{R}}(\vec{w}_t) ~.
\label{eqn:ftrl}
\end{equation}
This Follow the Regularized Leader (FTRL) approach gets around the instability of FTL and guarantees low regret for a wide variety of regularizers, as evidenced by the following bound of \citet{HK08}.
\begin{lemma}[\citet{HK08}] For any regularizer ${\mathcal{R}}$, the regret of FTRL can be bounded as
\begin{eqnarray*}
\lefteqn{{L}_{FTRL({\mathcal{R}},\eta),T} - \min_{i \in \{1,\cdots,N\}} {L}_{i,T} }
\\
&\leq&
\sum_{t=1}^T \sum_{i=1}^N \ell_{i,t} (w_{i,t} - w_{i,t+1})
+ \frac{1}{\eta} \left({\mathcal{R}}(\vec{w}_{T}) - {\mathcal{R}}(\vec{w}_0)\right) ~.
\end{eqnarray*}
\label{lem:ftrlbound}
\end{lemma}
This lemma quantifies the trade-off that must be considered when choosing a regularizer. If the range of the regularizer is too small, the weights will change dramatically from one round to the next, and the first term in the bound will be large. On the other hand, if the range of the regularizer is too big, the weights that are chosen will be too far from the true loss minimizers and the second term will blow up.
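Returning to the entropic case, one can check numerically that the Follow the Regularized Leader weights with regularizer $-{\textrm{H}}(\vec{w})$ coincide with the closed-form Weighted Majority weights. The sketch below is our own illustration (the solver, clipping constant, and tolerance are arbitrary choices):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def entropic_ftrl(L, eta):
    # minimize  w.L - (1/eta) H(w)  over the simplex,
    # where H(w) = -sum_i w_i log w_i.
    N = len(L)
    obj = lambda w: w @ L + (1.0 / eta) * np.sum(
        w * np.log(np.clip(w, 1e-12, None)))
    cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
    res = minimize(obj, np.full(N, 1.0 / N),
                   bounds=[(1e-12, 1.0)] * N, constraints=cons)
    return res.x

L, eta = np.array([3.0, 1.0, 2.0]), 0.5
w_wm = np.exp(-eta * L) / np.exp(-eta * L).sum()  # Weighted Majority
print(np.allclose(entropic_ftrl(L, eta), w_wm, atol=1e-4))
\end{verbatim}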
It is generally assumed that the regularizer ${\mathcal{R}}$ is strictly convex. This assumption ensures that Equation~\ref{eqn:ftrl} has a unique minimizer and that this minimizer can be computed efficiently. \citet{H09} shows that if ${\mathcal{R}}$ is strictly convex then it is possible to achieve a regret of $O(\sqrt{T})$. In particular, by optimizing $\eta$ appropriately the regret bound in Lemma~\ref{lem:ftrlbound} can be upper bounded by
\begin{equation}
2 \sqrt{2 \lambda \max_{\vec{w},\vec{w}\,' \in {\Delta}_N} ({\mathcal{R}}(\vec{w}) - {\mathcal{R}}(\vec{w}\,')) T}
\label{eqn:lambdabound}
\end{equation}
where $\lambda = \max_{\ell \in [0,1]^N, \vec{w} \in {\Delta}_N} \ell^T [\nabla^2 {\mathcal{R}}(\vec{w})]^{-1} \ell$.
\section{Interpreting Prediction Markets as No-Regret Learners}
\label{sec:connection}
With this foundation in place, we are ready to describe how any bounded loss market maker can be interpreted as an algorithm for learning from expert advice. The key idea is to equate the \emph{trades} made in the market with the \emph{losses} observed by the learning algorithm. We can then view the market maker as essentially learning a probability distribution over outcomes by treating each observed trade as a training instance.
More formally, consider any cost function based market maker with instantaneous price functions $p_i$ for each outcome $i$. We convert such a market maker to an algorithm for learning from expert advice by setting the weight of expert $i$ at time $t$ using
\begin{equation}
w_{i,t} = p_i(-\epsilon \vec{{L}}_{t-1}) ,
\label{eqn:weights}
\end{equation}
where $\epsilon > 0$ is a tunable parameter and $\vec{{L}}_{t-1} = \langle {L}_{1,t-1},\cdots,{L}_{N,t-1} \rangle$ is the vector of cumulative losses at time $t-1$. In other words, the weight on expert $i$ at time $t$ in the learning algorithm is the instantaneous price of security $i$ in the market when $-\epsilon {L}_{j,t-1}$ shares have been purchased (or $\epsilon {L}_{j,t-1}$ shares have been sold) of each security $j$. We discuss the role of the parameter $\epsilon$ in more detail below.
First note that for any valid cost function based prediction market, setting the weights as in Equation~\ref{eqn:weights} yields a valid expert learning algorithm. Since the prices of any valid prediction market must be non-negative and sum to one, the weights of the resulting algorithm are guaranteed to satisfy these properties too. Furthermore, the weights are a function of only the past losses of each expert, which the algorithm is permitted to observe.
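The conversion itself is mechanical, as the following sketch shows (our own illustration; \texttt{price\_fn} stands for any valid instantaneous price function, for instance the LMSR prices sketched earlier):
\begin{verbatim}
import numpy as np

def market_to_learner(price_fn, losses, eps):
    # w_t = price_fn(-eps * L_{t-1}), per Equation (weights) above.
    T, N = losses.shape
    L = np.zeros(N)
    weights = np.empty((T, N))
    for t in range(T):
        weights[t] = price_fn(-eps * L)
        L += losses[t]
    return weights

# With price_fn = lmsr_prices (parameter b), these are exactly the
# Weighted Majority weights at learning rate eta = eps / b.
\end{verbatim}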
Below we show that applying this conversion to any bounded-loss market maker with slowly changing prices yields a learning algorithm with $O(\sqrt{T})$ regret. The quality of the regret bound obtained depends on the trade-off between market maker loss and how quickly the prices change. We then show how this bound can be used to rederive the standard regret bound of Weighted Majority, the converse of the result of \citet{CFLPW08}.
\subsection{A Bound on Regret}
\label{sec:regretbound}
In order to derive a regret bound for the learning algorithm defined in Equation~\ref{eqn:weights}, it is necessary to make some restrictions on how quickly the prices in the market change. If market prices change too quickly, the resulting learning algorithm will be unstable and will suffer high worst-case regret, as was the case with the naive Follow The Leader algorithm described in Section~\ref{sec:experts}. To capture this idea, we introduce the notion of $\phi$-stability, defined as follows.
\begin{definition}
We say that a set of price functions $\vec{p}$ is \emph{$\phi$-stable} for a constant $\phi$ if $p_i$ is continuous and piecewise differentiable for all $i \in \{1,\cdots,N\}$ and $\sum_{i=1}^N \sum_{j=1}^N \abs{D_{i,j}(\vec{t})} \leq \phi$ for all $\vec{t}$, where
\[
D_{i,j}(\vec{t}) =
\begin{cases}
\left. \frac{\partial p_i({\vec{q}})}{\partial q_j} \right|_{{\vec{q}} = \vec{t}}
& \textrm{if $\frac{\partial p_i({\vec{q}})}{\partial q_j}$ is defined at $\vec{t}$,}
\\
0
& \textrm{otherwise.}
\end{cases}
\]
\end{definition}
Defining $\phi$-stability in terms of the $D_{i,j}$ allows us to quantify how slowly the prices change even when the price functions are not differentiable at all points. We can then derive a regret bound for the resulting learning algorithm using the following simple lemma. This lemma states that when the quantity vector in the market is ${\vec{q}}$, if the price functions are $\phi$-stable, then the amount of money that the market maker would collect for the purchase of a small quantity $r_i$ of each security $i$ is not too far from the amount that the market maker would have collected had he instead priced the shares according to the fixed price $\vec{p}({\vec{q}})$.
\begin{lemma}
Let $C$ be any valid cost function yielding $\phi$-stable prices. For any $\epsilon > 0$, any ${\vec{q}} \in \mathbb{R}^N$, and any $\r \in \mathbb{R}^N$ such that $\abs{r_i} \leq \epsilon$ for $i \in
\{1,\cdots,N\}$,
\[
\abs{
\left(C({\vec{q}}+\r) - C({\vec{q}}) \right)
- \sum_{i=1}^N p_i({\vec{q}}) r_i
}
\leq \frac{\epsilon^2 \phi}{2} ~.
\]
\label{lem:pricingdiffbound}
\label{LEM:PRICINGDIFFBOUND}
\end{lemma}
The proof is in Appendix~\ref{app:pricingdiffbound}.
With this lemma in place, we are ready to derive the regret bound. In the following theorem, it is assumed that $T$ is known a priori and therefore can be used to set $\epsilon$. If $T$ is not known in advance, a standard ``doubling trick'' can be applied~\cite{C-B+97}. The idea behind the doubling trick is to partition time into periods of exponentially increasing length, restarting the algorithm each period. This leads to similar bounds with only an extra factor of $\log(T)$.
\begin{theorem}
Let $C$ be any valid cost function yielding $\phi$-stable prices. Let $B$ be a bound on the worst-case loss of the market maker mechanism associated with $C$. Let ${\mathcal{A}}$ be the expert learning algorithm with weights as in Equation~\ref{eqn:weights} with $\epsilon = \sqrt{2B/(\phi T)}$. Then for any sequence of expert losses $\ell_{i,t} \in [0,1]$ over $T$ time steps, \[ {L}_{{\mathcal{A}},T} - \min_{i \in \{1,\cdots,N\}} {L}_{i,T} \leq \sqrt{2 B \phi T} ~. \]
\label{thm:mainreduction}
\end{theorem}
\begin{proof}
By setting the weights as in Equation~\ref{eqn:weights}, we are essentially simulating a market over $N$ outcomes. Let $r_{i,t}$ denote the number of shares of outcome $i$ purchased at time step $t$ in this simulated market, and denote by $\r_t$ the vector of these quantities for all $i$. Note that $r_{i,t}$ is completely in our control since we are simply simulating a market, thus we can choose to set $r_{i,t} = -\epsilon \ell_{i,t}$ for all $i$ and $t$. We have that $r_{i,t} \in [-\epsilon,0]$ for all $i$ and $t$ since $\ell_{i,t} \in [0,1]$. Let $q_{i,t} = \sum_{t'=1}^t r_{i,t'}$ be the total number of outstanding shares of security $i$ after time $t$, with ${\vec{q}}_t$ denoting the vector over all $i$. The weight assigned to expert $i$ at round $t$ of the learning algorithm corresponds to the instantaneous price of security $i$ in the simulated market immediately before round $t$, that is, $w_{i,t} = p_i(-\epsilon \vec{{L}}_{t-1}) = p_i({\vec{q}}_{t-1})$.
By the definition of worst-case market maker loss, $\max_{i} q_{i,T} - (C({\vec{q}}_T) - C(\vec{0})) \leq B$. It is easy to see that we can rewrite the left-hand side of this inequality to obtain
\[
\max_{i \in \{1,\cdots,N\}} \sum_{t=1}^T r_{i,t}
- \sum_{t=1}^T \left(C({\vec{q}}_{t}) - C({\vec{q}}_{t-1}) \right)
\leq B ~.
\]
From Lemma~\ref{lem:pricingdiffbound}, this gives us that
\[
\max_{i \in \{1,\cdots,N\}} \sum_{t=1}^T r_{i,t}
- \sum_{t=1}^T
\left( \sum_{i=1}^N p_i({\vec{q}}_{t-1}) r_{i,t}
+ \frac{\epsilon^2 \phi}{2} \right)
\leq B .
\]
Substituting $p_i({\vec{q}}_{t-1}) = w_{i,t}$ and $r_{i,t} = - \epsilon \ell_{i,t}$, we get
\[
\max_{i \in \{1,\cdots,N\}} \sum_{t=1}^T \left(- \epsilon \ell_{i,t} \right)
- \sum_{t=1}^T \sum_{i=1}^N w_{i,t} \left( - \epsilon \ell_{i,t} \right)
\leq B + \frac{\epsilon^2 \phi T}{2}
\]
and so
\begin{eqnarray*}
{L}_{{\mathcal{A}},T} - \min_{i \in \{1,\cdots,N\}} {L}_{i,T}
&=& \sum_{t=1}^T \sum_{i=1}^N w_{i,t} \ell_{i,t}
- \min_i \sum_{t=1}^T \ell_{i,t}
\\
&\leq& \frac{B}{\epsilon} + \frac{\epsilon \phi T}{2} .
\end{eqnarray*}
Setting $\epsilon = \sqrt{2B/(\phi T)}$ yields the bound.
\end{proof}
\subsection{Rederiving the Weighted Majority Bound}
\label{sec:wmbound}
\citet{CFLPW08} showed that the Weighted Majority regret bound can be used as a starting point to rederive the worst case loss of $b \log N$ of an LMSR market maker. Here we show that the converse is also true; by applying Theorem~\ref{thm:mainreduction}, we can rederive the Weighted Majority bound from the bounded market maker loss of LMSR.
In order to apply Theorem~\ref{thm:mainreduction}, we must provide a bound on how quickly LMSR prices can change. This is given in the following lemma, the proof of which is in Appendix~\ref{app:lmsrderiv}.
\begin{lemma}
Let ${\vec{p}}$ be the pricing function of a LMSR with parameter $b > 0$. Then
\[
\sum_{i=1}^N \sum_{j=1}^N \abs{\frac{\partial p_i({\vec{q}})}{\partial q_j}}
\leq \frac{2}{b} ~.
\]
\label{lem:lmsrderiv}
\label{LEM:LMSRDERIV}
\end{lemma}
Using Equation~\ref{eqn:weights} to transform the LMSR into a learning algorithm, we end up with weights
\[
w_{i,t}
= \frac{ {\textrm{e}}^{- \epsilon{L}_{i,t-1}/b}}{\sum_{j=1}^N {\textrm{e}}^{- \epsilon {L}_{j,t-1}/b}} ~.
\]
Setting $\epsilon = \sqrt{2B/(\phi T)} = b \sqrt{\log N/T}$, we see that these weights are equivalent to those used by Weighted Majority with the learning rate $\eta = \epsilon / b = \sqrt{\log N/T}$. As mentioned above, this is the optimal setting of $\eta$. Notice that these weights do not depend on the value of the parameter $b$ in the prediction market.
We can now apply Theorem~\ref{thm:mainreduction} to rederive the standard Weighted Majority regret bound stated in Section~\ref{sec:experts}. In particular, setting $B = b \log N$ and $\phi = 2/b$, we get that when $\eta = \sqrt{\log(N)/T}$,
\[
{L}_{WM,T} - \min_{i \in \{1,\cdots,N\}} {L}_{i,T}
\leq 2 \sqrt{T \log N} ~.
\]
\section{Connections Between Market Scoring Rules, Cost Functions, and Regularization}
\label{sec:connections}
In this section, we establish the formal connections among market scoring rules, cost function based markets, and the class of Follow the Regularized Leaders algorithms. We start with a representation theorem for cost function based markets, which is crucial in our later analysis.
\subsection{A Representation Theorem for Convex Cost Functions}
\label{sec:riskmeasures}
In this section we show a representation theorem for convex cost functions. The proof of this theorem relies on the connection between convex cost functions and a class of functions known in the finance literature as convex risk measures, which was first noted by \citet{ADPWY09}. Convex risk measures were originally introduced by \citet{FS02} to model different attitudes towards risk in financial markets. A \emph{risk measure} $\rho$ can be viewed as a mapping from a vector of returns (corresponding to each possible outcome of an event) to a real number. The interpretation is that a vector of returns $\vec{x}$ is ``preferred to'' the vector $\vec{x}\,'$ under a risk measure $\rho$ if and only if $\rho(\vec{x}) < \rho(\vec{x}\,')$.
Formally, a function $\rho$ is a \emph{convex risk measure} if it satisfies the following three properties:
\begin{enumerate}
\item {\sc Convexity:} $\rho(\vec{x})$ is a convex function of $\vec{x}$.
\item {\sc Decreasing Monotonicity:} For any $\vec{x}$ and $\vec{x}\,'$, if $\vec{x} \geq \vec{x}\,'$, then $\rho(\vec{x}) \leq \rho(\vec{x}\,')$.
\item {\sc Negative Translation Invariance:} For any $\vec{x}$ and value $k$, $\rho(\vec{x} + k\vec{1}) = \rho(\vec{x}) - k$.
\end{enumerate}
The financial interpretations of these properties are not important in our setting. More interesting for us is that \citet{FS02} provide a representation theorem that states that a function $\rho$ is a convex risk measure if and only if it can be represented as
\[
\rho(\vec{x}) = \sup_{\vec{p}\in {\Delta}_N} \left( - \sum_{i=1}^N p_i x_i - \alpha (\vec{p})\right)
\]
where $\alpha:{\Delta}_N \to (-\infty, \infty]$ is a convex, lower semi-continuous function referred to as a \emph{penalty function}.
This fact is useful because it allows us to obtain the following result, which was alluded to informally by \citet{ADPWY09}. The full proof is included here for completeness.
\begin{lemma}
A function $C$ is a valid convex cost function if and only if it is differentiable and can be represented as
\begin{equation}
C(\vec{q}) = \sup_{\vec{p}\in {\Delta}_N} \left(\sum_{i=1}^N p_i q_i - \alpha (\vec{p})\right)
\label{eqn:riskrep}
\end{equation}
for a convex and lower semi-continuous function $\alpha$. Furthermore, for any quantity vector ${\vec{q}}$, the price vector $\vec{p}({\vec{q}})$ corresponding to $C$ is the distribution $\vec{p}$ maximizing $\sum_{i=1}^N p_i q_i - \alpha (\vec{p})$.
\label{lem:costrepresentation}
\end{lemma}
\begin{proof}
Consider any differentiable function $C : \mathbb{R}^N \rightarrow \mathbb{R}$. Let $\rho({\vec{q}}) = C(-{\vec{q}})$. Clearly by definition, $\rho$ satisfies decreasing monotonicity if and only if $C$ satisfies increasing monotonicity, and $\rho$ satisfies negative translation invariance if and only if $C$ satisfies positive translation invariance. Furthermore, $\rho$ is convex if and only if $C$ is convex. By Theorem~\ref{thm:validcostfunc}, this implies that $C$ is a valid convex cost function if and only if $\rho$ is a convex risk measure. The first half of the lemma then follows immediately from the representation theorem of \citet{FS02}.
Now, because $\alpha (\vec{p})$ is guaranteed to be convex, $\sum_{i=1}^N p_i q_i - \alpha (\vec{p})$ is a concave function of $\vec{p}$. The constraints $\sum_{i=1}^N p_i = 1$ and $p_i \geq 0$ define a closed convex feasible set. Thus, the problem of maximizing $\sum_{i=1}^N p_i q_i - \alpha (\vec{p})$ with respect to $\vec{p}$ has a global optimal solution and first-order KKT conditions are both necessary and sufficient. Let $\vec{p}\,^*(\vec{q})$ denote an optimal $\vec{p}$ for this optimization problem. Then, $C(\vec{q}) = \sum_{i=1}^N p_i^*({\vec{q}}) q_i - \alpha(\vec{p}\,^*(\vec{q}))$. By the envelope theorem~\cite{MS02}, if $C(\vec{q})$ is differentiable, we have that for any $i$, $p_i^*(\vec{q}) = \partial C(\vec{q})/\partial q_i =p_i(\vec{q})$. Thus the market prices are precisely those which maximize the inner expression of the cost function.
\end{proof}
Furthermore, by a version of the envelope theorem~\cite{K93}, to ensure that $C$ is differentiable, it is sufficient to show that $\alpha$ is strictly convex and differentiable.
\begin{corollary}
A function $C$ is a valid convex cost function if it can be represented as in Equation~\ref{eqn:riskrep} for a strictly convex and differentiable function $\alpha$. For any ${\vec{q}}$, the price vector $\vec{p}({\vec{q}})$ is the distribution $\vec{p}$ maximizing $\sum_{i=1}^N p_i q_i - \alpha (\vec{p})$.
\label{cor:costrepresentation}
\end{corollary}
The ability to represent any valid cost function in this form allows us to define a bound on the worst-case loss of the market maker in terms of the penalty function of the corresponding convex risk measure.
\begin{lemma}
The worst-case loss of the market maker defined by the cost function in Equation~\ref{eqn:riskrep} is no more than
\[
\sup_{\vec{p},\vec{p}\,' \in {\Delta}_N} \left(\alpha(\vec{p}) - \alpha(\vec{p}\,')\right).
\]
\label{lem:worstcaseloss}
\end{lemma}
\begin{proof}
The worst-case loss of the market maker is
\begin{eqnarray*}
\lefteqn{\max_{{\vec{q}} \in \mathbb{R}^N} \left( \max_{i\in \{1,\cdots,N\}} q_i - C({\vec{q}})\right) + C(\vec{0}) }
\\
&=& \max_{{\vec{q}} \in \mathbb{R}^N}
\left( \max_{i \in \{1,\cdots,N\}} q_i
- \sup_{\vec{p}\in {\Delta}_N} \left(\sum_{i=1}^N p_i q_i - \alpha (\vec{p})\right) \right)
\\
&& + \sup_{\vec{p}\,'\in {\Delta}_N} \left(-\alpha (\vec{p}\,')\right)
\\
&\leq& \max_{{\vec{q}} \in \mathbb{R}^N}
\!\left(\!\max_{i \in \{1,\cdots,N\}} q_i
- \!\left(\!\sup_{\vec{p}\in {\Delta}_N} \sum_{i=1}^N p_i q_i
- \sup_{\vec{p}\in {\Delta}_N} \left(\alpha (\vec{p})\right)\!\right)\!\right)
\\
&& + \sup_{\vec{p}\,'\in {\Delta}_N} \left(-\alpha (\vec{p}\,')\right)
\\
&=&
\max_{{\vec{q}} \in \mathbb{R}^N} \left( \max_{i \in \{1,\cdots,N\}} q_i -\max_{i \in \{1,\cdots,N\}} q_i \right)
+ \sup_{\vec{p}\in {\Delta}_N} \left(\alpha (\vec{p})\right)
\\
&& + \sup_{\vec{p}\,'\in {\Delta}_N} \left(-\alpha (\vec{p}\,')\right)
\\
&=& \sup_{\vec{p},\vec{p}\,' \in {\Delta}_N} \left(\alpha(\vec{p}) - \alpha(\vec{p}\,')\right).
\end{eqnarray*}
The inequality follows from the fact that for any functions $f$ and $g$ over any domain $\mathcal{X}$, $\sup_{x \in \mathcal{X}} (f(x) - g(x)) \geq \sup_{x \in \mathcal{X}} f(x) - \sup_{x' \in \mathcal{X}} g(x').$
\end{proof}
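As a concrete instance of this representation (a standard computation that we record here for illustration), the LMSR arises from the penalty function $\alpha(\vec{p}) = b \sum_{i=1}^N p_i \log p_i$, a scaled negative entropy: the supremum in Equation~\ref{eqn:riskrep} is attained at $p_i = {\textrm{e}}^{q_i/b}/\sum_{j=1}^N {\textrm{e}}^{q_j/b}$, which gives
\[
C(\vec{q}) = \sup_{\vec{p}\in {\Delta}_N} \left(\sum_{i=1}^N p_i q_i - b \sum_{i=1}^N p_i \log p_i\right) = b \log \sum_{i=1}^N {\textrm{e}}^{q_i/b} ~,
\]
recovering the LMSR cost function and prices. Since $\alpha$ ranges from $-b\log N$ (at the uniform distribution) to $0$ (at a point mass), Lemma~\ref{lem:worstcaseloss} bounds the worst-case loss by $\sup_{\vec{p},\vec{p}\,' \in {\Delta}_N} \left(\alpha(\vec{p}) - \alpha(\vec{p}\,')\right) = b \log N$, matching the well-known bound for the LMSR.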
\subsection{Convex Cost Functions and Market Scoring Rules}
As described in Section~\ref{sec:predmarkets}, the Logarithmic Market Scoring Rule market maker can be defined as either a market scoring rule or a cost function based market. The LMSR is not unique in this regard. As we show in this section, any regular, strictly proper market scoring rule with differentiable scoring functions can be represented as a cost function based market. Likewise, any convex cost function satisfying a few mild conditions corresponds to a market scoring rule. As long as the market probabilities are nonzero, the market scoring rule and corresponding cost function based market are equivalent. More precisely, a trader who changes the market probabilities from $\r$ to $\r\,'$ in the market scoring rule is guaranteed to receive the same payoff for every outcome $i$ as a trader who changes the quantity vectors from any ${\vec{q}}$ to ${\vec{q}}\,'$ such that $p({\vec{q}}) = \r$ and $p({\vec{q}}\,') = \r\,'$ in the cost function formulation as long as every component of $\r$ and $\r\,'$ is nonzero. Moreover, any price vector that is achievable in the market scoring rule (that is, any ${\vec{p}}$ for which $s_i(\vec{p})$ is finite for all $i$) is achievable by the cost function based market.
The fact that there exists a correspondence between certain market scoring rules and certain cost function based markets was noted by \citet{CP07}. They pointed out that the MSR with scoring function $\vec{s}$ and the cost function based market with cost function $C$ are equivalent if for all ${\vec{q}}$ and all outcomes $i$, $C({\vec{q}}) = q_i - s_i(\vec{p})$. However, they did not provide any guarantees about the circumstances under which this condition can be satisfied. \citet{ADPWY09} also made use of the equivalence between markets when this strong condition holds. Our result gives precise and quite general conditions under which an MSR is equivalent to a cost function based market.
Recall from Lemma~\ref{lem:costrepresentation} that any convex cost function $C$ can be represented as $C({\vec{q}}) = \sup_{\vec{p}\in {\Delta}_N} \left(\sum_{i=1}^N p_i q_i - \alpha (\vec{p})\right)$ for a convex function $\alpha$. Let $\alpha_C$ denote the function $\alpha$ corresponding to the cost function $C$. In the following, we consider cost functions derived from scoring rules $\vec{s}$ by setting
\begin{equation}
\label{eqn:derivedcost}
\alpha_C(\vec{p}) = \sum_{i=1}^N p_i s_i (\vec{p})
\end{equation}
and scoring rules derived from convex cost functions with
\begin{equation}
\label{eqn:scoringrule}
s_i(\vec{p}) = \alpha_C(\vec{p}) - \sum_{j=1}^N \frac{\partial \alpha_C(\vec{p})}{\partial p_j} p_j +\frac{\partial \alpha_C(\vec{p})}{\partial p_i}.
\end{equation}
We show that there is a mapping between a mildly restricted class of convex cost function based markets and a mildly restricted class of strictly proper market scoring rules such that for every pair in the mapping, Equations~\ref{eqn:derivedcost} and~\ref{eqn:scoringrule} both hold. Furthermore, we show that the markets satisfying these equations are equivalent in the sense described above.
\begin{theorem}
There is a one-to-one and onto mapping between the set of convex cost function based markets with strictly convex and differentiable penalty functions $\alpha_C$ and the class of strictly proper, regular market scoring rules with differentiable scoring functions $\vec{s}$ such that for each pair in the mapping, Equations~\ref{eqn:derivedcost} and~\ref{eqn:scoringrule} hold.
Furthermore, each pair of markets in this mapping are equivalent when prices for all outcomes are positive, that is, the profit of a trade is the same in the two markets if the trade starts with the same market prices and results in the same market prices and the prices for all outcomes are positive before and after the trade. Additionally, every price vector $\vec{p}$ achievable in the market scoring rule is achievable in the cost function based market.
\label{thm:msrequivalence}
\end{theorem}
\begin{proof}
We first show that the function $\alpha_C$ in Equation~\ref{eqn:derivedcost} is strictly convex and differentiable and the scoring rule in Equation~\ref{eqn:scoringrule} is regular, strictly proper and differentiable. We then show that Equations~\ref{eqn:derivedcost} and~\ref{eqn:scoringrule} are equivalent. Finally, we show the equivalence between the two markets.
Consider the function $\alpha_C$ in Equation~\ref{eqn:derivedcost}. Since we have assumed that $s_i$ is differentiable for all $i$, $\alpha_C$ is differentiable too. Additionally, it is known that a scoring rule is strictly proper only if its expected value is strictly convex~\cite{Gneiting:07}, so $\alpha_C$ is strictly convex.
Consider the scoring rule defined in Equation~\ref{eqn:scoringrule}. By Theorem 1 of~\citet{Gneiting:07}, a regular scoring rule $s_i(\vec{p})$ is strictly proper if and only if there exists a strictly convex function $G(\vec{p})$ such that
\begin{equation}
s_i(\vec{p}) = G(\vec{p}) - \sum_{j=1}^N p_j \dot{G}_j(\vec{p}) + \dot{G}_i(\vec{p}) ,
\label{eqn:grform}
\end{equation}
where $\dot{G}_j(\vec{p})$ is any subderivative of $G$ with respect to $p_j$ (if $G$ is differentiable, $\dot{G}_j = \partial G(\vec{p}) / \partial p_j$). This immediately implies that the scoring rule defined in Equation~\ref{eqn:scoringrule} is a regular strictly proper scoring rule since $\alpha_C(\vec{p})$ is strictly convex. We will see below that $s_i$ is also differentiable.
It is easy to see that Equation~\ref{eqn:scoringrule} implies Equation~\ref{eqn:derivedcost}. Suppose Equation~\ref{eqn:scoringrule} holds. Then
\begin{eqnarray*}
\sum_{i=1}^N p_i s_i (\vec{p})
&=& \sum_{i=1}^N p_i \!\left(\!\alpha_C(\vec{p}) - \sum_{j=1}^N \frac{\partial \alpha_C(\vec{p})}{\partial p_j} p_j +\frac{\partial \alpha_C(\vec{p})}{\partial p_i}\!\right)\!
\\
&=& \alpha_C(\vec{p}) ~.
\end{eqnarray*}
This also shows that $s_i$ is differentiable for all $i$, since the derivative of $\alpha_C$ is well-defined at all points and
\[
\frac{\partial \alpha_C(\vec{p})}{\partial p_i}
= s_i(\vec{p}) + \sum_{j=1}^N p_j \frac{\partial s_j(\vec{p})}{\partial p_i} ~.
\]
To see that Equation~\ref{eqn:derivedcost} implies Equation~\ref{eqn:scoringrule}, suppose that Equation~\ref{eqn:derivedcost} holds. We know that the scoring rule $\vec{s}$ can be expressed as in Equation~\ref{eqn:grform} for some function $G$. For this particular $G$,
\[
\alpha_C(\vec{p})
= \sum_{i=1}^N p_i \left(G(\vec{p}) -\sum_{j=1}^N p_j \dot{G}_j(\vec{p}) + \dot{G}_i(\vec{p}) \right)
= G(\vec{p}) ~.
\]
Since $G(\vec{p}) = \alpha_C(\vec{p})$ and $\alpha_C$ is differentiable (meaning that $\partial \alpha_C / \partial p_i$ is the only subderivative of $\alpha_C$ with respect to $p_i$), this implies Equation~\ref{eqn:scoringrule}.
We have established the equivalence between Equations~\ref{eqn:derivedcost} and~\ref{eqn:scoringrule}. We now show that a trader gets exactly the same profit for any realized outcome in the two markets if the market prices are positive.
Suppose in the cost function based market a trader changes the outstanding shares from $\vec{q}$ to $\vec{q}\,'$. This trade changes the market price from $\vec{p}(\vec{q})$ to $\vec{p}(\vec{q}\,')$. If outcome $i$ occurs, the trader's profit is
\begin{eqnarray}
\lefteqn{(q_i' -q_i) - \left(C(\vec{q}\,')-C(\vec{q})\right)}
\nonumber\\
&=& (q_i' -q_i) - \left(\sum_{j=1}^N p_j(\vec{q}\,') q_j' - \alpha_C(\vec{p}(\vec{q}\,'))\right)
\nonumber\\
&& + \left(\sum_{j=1}^N p_j(\vec{q}) q_j - \alpha_C(\vec{p}(\vec{q}))\right)
\nonumber\\
&=& \left(q_i'- \sum_{j=1}^N p_j(\vec{q}\,') q_j' +\alpha_C(\vec{p}(\vec{q}\,'))\right)
\nonumber\\
&& -\left(q_i- \sum_{j=1}^N p_j(\vec{q}) q_j +\alpha_C(\vec{p}(\vec{q}))\right).
\label{eqn:profit}
\end{eqnarray}
From Lemma~\ref{lem:costrepresentation}, we know that $\vec{p}(\vec{q})$ is the optimal solution to the convex optimization $\max_{\vec{p} \in {\Delta}_N} \left(\sum_{i=1}^N p_i q_i - \alpha_C(\vec{p})\right)$.
The Lagrange function of this optimization problem is
\[
L = \left(\sum_{i=1}^N p_i q_i - \alpha_C(\vec{p})\right) - \lambda (\sum_{i=1}^N p_i -1)+\sum_{i=1}^N \mu_i p_i.
\]
Since $\vec{p}(\vec{q})$ is optimal, the KKT conditions require that $\partial L / \partial p_i =0$, which implies that for all $i$,
\begin{equation}
\label{eqn:foc}
q_i = \frac{\partial \alpha_C(\vec{p}(\vec{q}))}{\partial p_i(\vec{q})}+\lambda(\vec{q}) - \mu_i(\vec{q}) ,
\end{equation}
where $\mu_i(\vec{q}) \geq 0$ and $\mu_i(\vec{q})p_i(\vec{q}) =0$. Plugging (\ref{eqn:foc}) into (\ref{eqn:profit}), we have
\begin{align}
&(q_i' -q_i) - \left(C(\vec{q}\,')-C(\vec{q})\right)
\nonumber \\
&=\left(\! \frac{\partial \alpha_C(\vec{p}(\vec{q}\,'))}{\partial p_i(\vec{q}\,')} -\sum_{j=1}^N p_j(\vec{q}\,')\frac{\partial \alpha_C(\vec{p}(\vec{q}\,'))}{\partial p_j(\vec{q}\,')} + \alpha_C(\vec{p}(\vec{q}\,')) - \mu_i(\vec{q}\,')\!\right)
\nonumber \\
&\quad -\left(\frac{\partial \alpha_C(\vec{p}(\vec{q}))}{\partial p_i(\vec{q})} -\sum_{j=1}^N p_j(\vec{q})\frac{\partial \alpha_C(\vec{p}(\vec{q}))}{\partial p_j(\vec{q})} + \alpha_C(\vec{p}(\vec{q})) - \mu_i(\vec{q})\right)
\nonumber \\
&=\left(s_i(\vec{p}(\vec{q}\,')) - \mu_i(\vec{q}\,')\right) - \left(s_i(\vec{p}(\vec{q})) - \mu_i(\vec{q})\right).
\label{eqn:costfuncprofit}
\end{align}
When $p_i(\vec{q}) >0$ and $p_i(\vec{q}\,') >0$, $\mu_i(\vec{q}) = \mu_i(\vec{q}\,') = 0$. In this case, the profit of the trader in the cost function based market is the same as that in the market scoring rule market when he changes the market probability from $\vec{p}(\vec{q})$ to $\vec{p}(\vec{q}\,')$.
Finally, observe that using the cost function based market it is possible to achieve any price vector $\vec{r}$ with finite scores $s_i(\vec{r})$ by setting $q_i = s_i(\vec{r})$ for all $i$. By Lemma~\ref{lem:costrepresentation}, for this setting of ${\vec{q}}$, $p({\vec{q}})$ is the vector ${\vec{p}}$ that maximizes $\sum_{i=1}^N p_i s_i(\vec{r}) - \sum_{i=1}^N p_i s_i(\vec{p})$. Since $\vec{s}$ is strictly proper, this is maximized at $\vec{p} = \vec{r}$. Since $\vec{s}$ is regular, this implies that it is possible to achieve any prices in the interior of the probability simplex using the cost function based market (and any prices ${\vec{p}}$ on the boundary as long as $s_i({\vec{p}})$ is finite for all $i$).
\end{proof}
\subsection{Convex Cost Functions and FTRL}
\label{sec:ftrlconnection}
Consider a prediction market with a convex cost function represented as $C({\vec{q}}) = \sup_{\vec{p}\in {\Delta}_N} \left(\sum_{i=1}^N p_i q_i - \alpha (\vec{p})\right)$ and the corresponding learning algorithm with weights $w_{i,t} = p_i(-\epsilon \vec{{L}}_{t-1})$. (Recall that $\vec{{L}}_{t-1} = \langle {L}_{1,t-1},\cdots,{L}_{N,t-1} \rangle$ is the vector of cumulative losses at time $t-1$.) By Lemma~\ref{lem:costrepresentation}, the weights chosen at time $t$ are those that maximize the expression $- \epsilon \sum_{i=1}^N w_i {L}_{i,t-1} - \alpha (\vec{w})$, or equivalently, those that minimize the expression
\[
\sum_{i=1}^N w_i {L}_{i,t-1} + \frac{1}{\epsilon} \alpha (\vec{w}) ~.
\]
This expression is of precisely the same form as Equation~\ref{eqn:ftrl}, with $\alpha$ playing the role of the regularizer and $\epsilon$ controlling the trade-off between the regularizer and the empirical loss. This implies that every convex cost function based prediction market can be interpreted as a Follow the Regularized Leader algorithm with a convex regularizer! By applying Theorem~\ref{thm:mainreduction} and Lemma~\ref{lem:worstcaseloss}, we can easily bound the regret of the resulting algorithm as follows.
\begin{theorem}
Let $C$ be any valid convex cost function yielding $\phi$-stable prices, and let $\alpha_C$ be the penalty function associated with $C$. Let ${\mathcal{A}}$ be the expert learning algorithm with weights as in Equation~\ref{eqn:weights} with $\epsilon = \sqrt{2 \sup_{\vec{p},\vec{p}\,' \in {\Delta}_N} (\alpha_C(\vec{p}) - \alpha_C(\vec{p}\,'))/(\phi T)}$. Then for any sequence of expert losses $\ell_{i,t} \in [0,1]$ over $T$ time steps,
\[
{L}_{{\mathcal{A}},T} - \min_{i \in \{1,\cdots,N\}} {L}_{i,T}
\leq
\sqrt{2 T \phi \sup_{\vec{p},\vec{p}\,' \in {\Delta}_N} \left(\alpha_C(\vec{p}) - \alpha_C(\vec{p}\,')\right)} ~.
\]
\label{thm:ftrlreduction}
\end{theorem}
This bound is very similar to the bound for FTRL given in Equation~\ref{eqn:lambdabound}, with $\phi$ playing the role of $\lambda$.
The connections we established in the previous section imply that every strictly proper market scoring rule can also be interpreted as a FTRL algorithm, now with a \emph{strictly} convex regularizer. Conversely, any FTRL algorithm with a differentiable and strictly convex regularizer can be viewed as choosing weights at time $t$ to minimize the quantity
\[
\sum_{i=1}^N w_i \left( \epsilon {L}_{i,t-1} + s_i (\vec{w})\right)
\]
for a strictly proper scoring rule $\vec{s}$. Perhaps it is no surprise that the weight updates of FTRL algorithms can be framed in terms of proper scoring rules given that proper scoring rules are commonly used as loss functions in machine learning~\cite{BSS05,RW09} and FTRL has previously been connected to Bregman divergences~\cite{SS07,HK08,H09} which are known to be related to scoring rules~\cite{Gneiting:07}.
This connection hints at why market scoring rules and convex cost function based markets may be able to obtain accurate estimates of probability distributions in practice. Both types of markets are essentially \emph{learning} the distributions by treating market trades as training data. Beyond that, both markets correspond to well-understood learning algorithms with stable weights and guarantees of no regret.
\subsection{Relation to the SCPM}
\label{sec:scpm}
\citet{ADPWY09} present another way of describing convex cost function based prediction markets, which they call the Sequential Convex Pari-Mutuel Mechanism (SCPM). The SCPM is defined in terms of limit orders instead of market prices, but the underlying mathematics are essentially the same. In the SCPM, traders specify a maximum quantity of shares that they would like to purchase and a maximum price per share that they are willing to spend. The market then decides how many shares of the trade to accept by solving a convex optimization problem.
\citet{ADPWY09} show that for every SCPM, there is an equivalent convex cost function based market. For each limit order, the number of shares accepted by the market maker in the SCPM is the minimum of the number of shares requested by the trader and the number of shares that it would take to drive the market price of the shares in the corresponding cost function based market to the limit price of the trader. Thus our results imply that any SCPM mechanism can also be interpreted as a Follow the Regularized Leader algorithm for learning from expert advice.
We remark that \citet{ADPWY09} also describe an interpretation of the SCPM in terms of convex risk measures and suggest that the associated penalty function is related to the underlying problem of learning the distribution over outcomes. However, their interpretation is very different from ours. They view the penalty function as characterizing ``the market maker's commitment to learning the true distribution'' since it impacts both the worst-case market maker loss and the willingness of the market maker to accept limit orders. In contrast, we view the penalty function as a regularizer necessary to make the market prices stable.
\section{Example: The Quadratic MSR and Online Gradient Descent}
In the previous section we described the relationship between market scoring rules, cost function based markets with convex cost functions, and Follow the Regularized Leader algorithms. We discussed how the Logarithmic Market Scoring Rule can be represented equivalently as a cost function based market, and how it corresponds to Weighted Majority in the expert learning setting. In this section, we illustrate the relationship through another example. In particular, we show that the Quadratic Market Scoring Rule can be written equivalently as a cost function based market (namely the Quad-SCPM of \citet{ADPWY09}). We then show that this market corresponds to the well-studied online gradient descent algorithm in the learning setting and give a bound on the regret of this algorithm using Theorem~\ref{thm:ftrlreduction}.
The Quadratic Market Scoring Rule (QMSR) is the market scoring rule corresponding to the quadratic scoring function in Equation~\ref{eqn:qsr}. As was the case in the LMSR, the parameters $a_1, \cdots, a_N$ do not affect the prices or payments of this market. As such, we assume that $a_i = 0$ for all $i$.
Theorem~\ref{thm:msrequivalence} implies that we can construct a cost function based market with equivalent payoffs to the QMSR whenever prices are nonzero using the cost function
\begin{eqnarray*}
C(\vec{q})
&=& \sup_{\vec{p}\in {\Delta}_N} \left(\sum_{i=1}^N p_i q_i - \sum_{i=1}^N p_i
b\left(2p_i - \sum_{j=1}^N p_j^2 \right) \right)
\\
&=& \sup_{\vec{p}\in {\Delta}_N} \left(\sum_{i=1}^N p_i q_i - b \sum_{i=1}^N p_i^2 \right) ~.
\end{eqnarray*}
This is precisely the cost function associated with the Quad-SCPM market with a uniform prior, which was previously known to be equivalent to the QMSR when prices are nonzero~\cite{ADPWY09}. The worst-case loss of the market maker in both markets is $b (N-1)/N$.
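Computationally, the supremum defining this cost function is a Euclidean projection in disguise: completing the square shows that the maximizing $\vec{p}$ is the projection of ${\vec{q}}/(2b)$ onto ${\Delta}_N$. The following sketch (our own illustration, using the standard sorting-based projection) computes the Quad-SCPM prices and cost:
\begin{verbatim}
import numpy as np

def project_simplex(v):
    # Euclidean projection of v onto the probability simplex.
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + theta, 0.0)

def qmsr_prices(q, b=1.0):
    # argmax_{p in simplex} p.q - b ||p||^2 = Proj(q / (2b)).
    return project_simplex(np.asarray(q, dtype=float) / (2.0 * b))

def qmsr_cost(q, b=1.0):
    p = qmsr_prices(q, b)
    return p @ q - b * (p @ p)

q = np.array([1.0, -0.5, 0.2])
print(qmsr_prices(q), qmsr_cost(q))
\end{verbatim}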
Following the argument in Section~\ref{sec:ftrlconnection}, this market corresponds to the FTRL algorithm with regularizer ${\mathcal{R}}(\vec{w}) = \sum_{i=1}^N w_i^2$ (with $1/\eta = b/\epsilon$). It has been observed that using FTRL with a regularizer of this form is equivalent to online gradient descent~\cite{HAK07,H09}. Thus we can use Theorem~\ref{thm:ftrlreduction} to show a regret bound for gradient descent.
We first show that the Quad-SCPM prices are $\phi$-stable for $\phi = (N^2-1)/(2b) < N^2/(2b)$. (See Appendix~\ref{app:quad} for details.) We can therefore apply Theorem~\ref{thm:ftrlreduction} using $\phi = N^2/(2b)$ and $\sup_{\vec{p},\vec{p}\,' \in {\Delta}_N} \left(\alpha(\vec{p}) - \alpha(\vec{p}\,')\right) = b(N-1)/N < b$ to see that for gradient descent,
\[
{L}_{GD,T} - \min_{i \in \{1,\cdots,N\}} {L}_{i,T}
\leq N\sqrt{T} ~.
\]
This matches the known regret bound for general gradient descent applied to the experts setting~\cite{Z03}.
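Equivalently, in the learner obtained from this market via Equation~\ref{eqn:weights}, the weights at time $t$ are the Euclidean projection of $-\epsilon \vec{{L}}_{t-1}/(2b)$ onto ${\Delta}_N$, i.e.\ ``lazy'' projected gradient descent on the cumulative losses (a sketch reusing the projection above; the constants are our illustrative choices):
\begin{verbatim}
def quad_learner_weights(L_cum, eps, b=1.0):
    # w_t = qmsr_prices(-eps * L_{t-1}, b)
    #     = Proj_simplex(-eps * L_{t-1} / (2b)).
    return qmsr_prices(-eps * L_cum, b)
\end{verbatim}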
\section{Discussion}
We have demonstrated the elegant mathematical connection between market scoring rules, cost function based prediction markets, and no-regret learning. This connection is thought-provoking on its own, as it yields new interpretations of well-known prediction market mechanisms. The interpretation of the penalty function as a regularizer can shed some light on which market scoring rule or cost function based market is best to run under different assumptions about traders.
Additionally, this connection has the potential to be of use in the design of new prediction market mechanisms and learning algorithms. In recent years there has been an interest in finding ways to tractably run market scoring rules over combinatorial or infinite outcome spaces~\cite{CGP08,GCP09,CFLPW08}. For example, a market maker might wish to accept bets over permutations (``horse A will finish the race ahead of horse B''), Boolean spaces (``either a Democrat will win the 2010 senate race in Delaware or a Democrat will win in North Dakota''), or real numbers (``Google's revenue in the first quarter of 2010 will be between $\$x$ and $\$y$''), in which case simply running a naive implementation of an LMSR (for example) would be infeasible. As mentioned above, by exploiting the connection between Weighted Majority and the LMSR, \citet{CFLPW08} showed that an extension of the Weighted Majority algorithm to permutation learning~\cite{HW09} could be used to approximate prices in an LMSR over permutations. Given our new understanding of the connection between markets and learning and the growing literature on no-regret algorithms for large or infinite sets of experts~\cite{HP05}, it seems likely that similar learning-based techniques could be developed to calculate market prices for other types of large outcome spaces too.
\bibliographystyle{plainnat}
\section{\textbf{Introduction}}
Limit cycles are isolated closed curves of an autonomous system in the phase
plane. Determination of the shape and number of limit cycles has been a
challenging problem in the theory of autonomous systems.\ The Lienard system
has been a field of active interest in the recent past because of its
relevance to various physical and mathematical problems $\left[ \cite{Jordan Smith}%
-\cite{Chen Llibre Zhang}\right] $. Recently, non-smooth Lienard systems, even
those allowing discontinuities $\cite{Llibre Ponce Torres}$, have also been studied.
Here we consider the Lienard equation of the type%
\begin{equation}
\ddot{x}+f\left( x\right) \dot{x}+g\left( x\right) =0. \label{Lienard Eq}%
\end{equation}
The Lienard equation $\left( \ref{Lienard Eq}\right) $ can be written as a
non-standard autonomous system%
\begin{equation}
\dot{x}=y-F\left( x\right) ,\quad\dot{y}=-g\left( x\right)
\label{Lienard System}%
\end{equation}
where $F\left( x\right) =\int_{0}^{x}f\left( u\right) du$. The phase plane
defined by $\left( \ref{Lienard System}\right) $ is known as Lienard plane.
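As a concrete illustration (our own numerical sketch, not part of the analysis below), the classical Van der Pol choice $f\left( x\right) =\mu\left( x^{2}-1\right) $, $g\left( x\right) =x$ gives $F\left( x\right) =\mu\left( x^{3}/3-x\right) $, and a direct integration of $\left( \ref{Lienard System}\right) $ exhibits the unique limit cycle in the Lienard plane:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

mu = 1.0
F = lambda x: mu * (x**3 / 3.0 - x)   # F(x) for f(x) = mu (x^2 - 1)
g = lambda x: x

def lienard(t, z):
    x, y = z
    return [y - F(x), -g(x)]

sol = solve_ivp(lienard, (0.0, 60.0), [0.1, 0.0], max_step=0.01)
x = sol.y[0]
print("approximate amplitude:", np.abs(x[sol.t > 40.0]).max())  # about 2
\end{verbatim}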
Lienard gave a uniqueness theorem $\left[ \cite{Jordan Smith},\cite{Zhing
Tongren Wenzao}\right] $ for periodic cycles for a general class of equations
when $F\left( x\right) $ is an odd function and satisfies a monotonicity
condition as $x\rightarrow\infty$. A challenging problem is the determination
of the number of limit cycles for a given polynomial $F\left( x\right) $ of
degree $m$ for the system $\left( \ref{Lienard System}%
\right) $ $\left[ \cite{Palit Datta},\cite{Llibre Ponce Torres},\cite{Chen
Llibre Zhang}\right] $. Recently we have presented a new method for proving
the existence of exactly two limit cycles of a Lienard system \cite{Palit
Datta}. Recall that the proof of Lienard theorem depends on the existence of
an odd function $F\left( x\right) $ with zeros at $x=0$ and $x=\pm a$
$\left( a>0\right) $ and that $F\left( x\right) >0$ for $x>a$ and tends to
$\infty$ as $x\rightarrow\infty$. To weaken this assumption, we note at first
that the existence of a limit cycle is still assured if there exists a value
$\bar{\alpha}>a$ $($called an efficient upper estimate of the amplitude of the
limit cycle$)$ such that $F\left( x\right) $ is increasing for $a\leq
x<\bar{\alpha}<L_{1}$, where $L_{1}$ is the first extremum of $F\left(
x\right) ,~x>a.$ Based on this observation we are then able to generalize the
standard theorem for the existence of exactly two limit cycles. Our theorem
not only extends the class of $F\left( x\right) $ considered by Odani
$\left[ \cite{Odani N},\cite{Odani}\right] $, but also that of the more
recent work of Chen et al \cite{Chen Llibre Zhang} [See \cite{Palit Datta} for
more details].
In the present paper we prove a theorem on the existence of exactly $N$
limit cycles for the system $\left( \ref{Lienard System}\right) $. In the second
part of the paper we present an algorithm to generate any desired number of
limit cycles around the origin, which is the only critical point of the
system $\left( \ref{Lienard System}\right) $. Limit cycles represent an
important class of nonlinear periodic oscillations. The existence of such
nonlinear periodic cycles has been established in various natural and
biological systems $\left[ \cite{Jordan Smith},\cite{Zhing Tongren
Wenzao},\cite{Goldberger}\right] $. It is well known that mammalian
heartbeats may follow a non-linear oscillatory pattern under certain
$($physiological$)$ constraints $\cite{Goldberger}$. However, sometimes it
becomes very difficult to obtain complete information about a nonlinear system
due to various natural constraints, as a result of which we obtain only
partial or incomplete data \cite{Donaho}. Our objective is to fill up those
gaps and construct a Lienard system that may be considered to model the
dynamics of the missing part of the phenomena in an efficient manner.
To state this in another way, let us suppose that the Lienard system is
defined only on a bounded region $\left[ -a_{1},a_{1}\right] $, $a_{1}>0$
having one $($or at most a finite number of$)$ limit cycles in that region.
Our aim is to develop an algorithm to extend the Lienard system minimally
throughout the plane accommodating a given number of limit cycles in the
extended region. By minimal extension we mean that the graph $\left(
x,F\left( x\right) \right) $, of the function $F$ which is initially
defined only in $\left\vert x\right\vert <a_{1}$ is extended beyond the line
$x=a_{1}$ iteratively as an action induced by two suitably chosen functions
$\phi\left( x\right) $ and $H\left( x\right) $ so that $\phi$ acts on the
abscissa $x$ and $H$ acts on the ordinate $F\left( x\right) $ respectively.
Accordingly the desired extension $\tilde{F}\left( x\right) $ of $F\left(
x\right) $, $x>a_{1}$ is realized as $H\circ F\left( x\right) =\tilde
{F}\circ\phi\left( x\right) $. The choice of $\phi$ and $H$ is motivated by
theorem \ref{Th n Limit Cycle} so that the extension $\tilde{F}$ satisfies the
conditions of the said theorem. It turns out that $\phi$ can simply be a
bijective function, while $H$ may be any monotonic function admitting
$\bar{\alpha}<L$ $($c.f. equation $\left( \ref{Definition of alfaBar}\right)
)$, $L$ being the unique extremum of $\tilde{F}\left( x\right) $,
$x\in\left[ a_{1},a_{2}\right] $, $\tilde{F}\left( a_{1}\right) =\tilde
{F}\left( a_{2}\right) =0$.
The paper is organized as follows. In section \ref{Preli} we introduce our
notation. In section \ref{Existence of N limit Cycles} we prove an
extension of the theorem in $\cite{Palit Datta}$ on the existence of exactly $N$
limit cycles for the Lienard equation. In section \ref{Construction} we present
the construction by which we can get a system of the form $\left(
\ref{Lienard Eq}\right) $ having any desired number of limit cycles around a
single critical point. Examples in support of this algorithm are studied in
section \ref{Examples}.
\section{\textbf{Notations}\label{Preli}}
We recall \cite{Jordan Smith} that, by the symmetry of paths, a typical phase path
$YQY^{\prime}$ of the system $\left( \ref{Lienard System}\right) $ becomes a
limit cycle iff $OY=OY^{\prime}$.
We consider,%
\begin{equation}
v\left( x,y\right) =\int_{0}^{x}g\left( u\right) du+\frac{1}{2}y^{2}
\label{Potential Function}%
\end{equation}
and%
\begin{equation}
V_{YQY^{\prime}}=v_{Y^{\prime}}-v_{Y}=\int\limits_{YQY^{\prime}}dv
\label{Potential Integral in v}%
\end{equation}
\begin{figure}[h]
\begin{center}
\includegraphics[height=6cm,width=4.5cm]{Figure0301}
\end{center}
\caption{Typical path for the Lienard theorem}%
\end{figure}It follows that%
\begin{equation}
dv=ydy+gdx=Fdy \label{Potential Differential}%
\end{equation}
so that%
\begin{equation}
OY=OY^{\prime}\Longleftrightarrow V_{YQY^{\prime}}=0\text{.}
\label{Potential Zero}%
\end{equation}
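This criterion can be tested numerically. The sketch below (our own illustration for the Van der Pol case $F\left( x\right) =x^{3}/3-x$, $g\left( x\right) =x$) integrates a half-path from $Y=\left( 0,y_{0}\right) $ to its next crossing $Y^{\prime}$ of the $y$-axis and evaluates $V_{YQY^{\prime}}$; a root of $V$ in $y_{0}$ then locates the limit cycle:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

F = lambda x: x**3 / 3.0 - x
g = lambda x: x

def V(y0):
    # V = v(Y') - v(Y) = (y'^2 - y0^2)/2, since G(0) = 0 at both ends.
    hit = lambda t, z: z[0]            # event: the path returns to x = 0
    hit.terminal, hit.direction = True, -1.0
    sol = solve_ivp(lambda t, z: [z[1] - F(z[0]), -g(z[0])],
                    (0.0, 100.0), [1e-6, y0], events=hit, max_step=0.01)
    y_end = sol.y[1, -1]
    return 0.5 * (y_end**2 - y0**2)

print("OY on the limit cycle:", brentq(V, 0.5, 5.0))
\end{verbatim}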
We define,%
\begin{gather*}
G\left( x\right) =\int_{0}^{x}g\left( u\right) du\\
y_{+}\left( 0\right) =OY,\quad y_{-}\left( 0\right) =OY^{\prime}%
\end{gather*}
and let $Q$ have coordinates $\left( \alpha,F\left( \alpha\right) \right)
$. Let $\alpha^{\prime}$ and $\alpha^{\prime\prime}$ be, respectively, two
positive roots of the equations%
\begin{align}
G\left( \alpha\right) & =\frac{1}{2}y_{+}^{2}\left( 0\right) -\frac
{1}{2}F^{2}\left( \alpha\right) \label{Definition of alfaDash1}\\
\text{and }G\left( \alpha\right) & =\frac{1}{2}y_{-}^{2}\left( 0\right)
-\frac{1}{2}F^{2}\left( \alpha\right) \label{Definition of alfaDash2}%
\end{align}
Also let,%
\begin{equation}
\bar{\alpha}=\max\left\{ \alpha^{\prime},\alpha^{\prime\prime}\right\}
\label{Definition of alfaBar}%
\end{equation}
In \cite{Palit Datta} we show that $V_{YQY^{\prime}}$ has a simple zero at some $\alpha\leq\bar
{\alpha}$ for the system $\left( \ref{Lienard System}%
\right) $ under the hypotheses of the Lienard theorem. It turns out that $\bar{\alpha}$ provides an
efficient estimate of the amplitude of the unique limit cycle of the Van der
Pol equation \cite{Palit Datta}. This result has been extended in \cite{Palit
Datta} for the existence of exactly two limit cycles as stated in the
following theorem.
\begin{theorem}
\label{Th 2 Limit Cycle}Let $f$ and $g$ be two functions satisfying the
following \linebreak properties.\newline%
\begin{tabular}
[c]{rl}%
$\left( i\right) $ & $f$ and $g$ are continuous;\\
$\left( ii\right) $ & $F$ and $g$ are odd functions and $g\left( x\right)
>0$ for $x>0$.;\\
$\left( iii\right) $ & $F$ has $+ve$ simple zeros only at $x=a_{1}$,
$x=a_{2}$ for some $a_{1}>0$ and\\
& some $a_{2}>\bar{\alpha}$, $\bar{\alpha}$ being defined by $\left(
\text{\ref{Definition of alfaBar}}\right) $ and $\bar{\alpha}<L$, where $L$
is the first\\
& local maxima of $F\left( x\right) $ in $\left[ a_{1},a_{2}\right] ;$\\
$\left( iv\right) $ & $F$ is monotonic increasing in $a_{1}<x\leq\bar
{\alpha}$ and $F\left( x\right) \rightarrow-\infty$ as $x\rightarrow\infty
$\\
& monotonically for $x>a_{2};$%
\end{tabular}
\newline Then the equation $\left( \text{\ref{Lienard Eq}}\right) $ has
exactly two limit cycles around the origin.
\end{theorem}
It has been shown \cite{Palit Datta} that these two limit cycles are simple in
the sense that neither can bifurcate under any small $C^{1}$ perturbation
satisfying the conditions of theorem \ref{Th 2 Limit Cycle}. The existence of
$\bar{\alpha}$ satisfying an equation of the form $\left(
\ref{Definition of alfaBar}\right) $ ensures the existence of two distinct
limit cycles.
\section{\textbf{Existence of Exactly }$N$\textbf{ limit cycles for
\protect\linebreak Lienard System}\label{Existence of N limit Cycles}}
We generalize theorem \ref{Th 2 Limit Cycle} as follows.
\begin{theorem}
\label{Th n Limit Cycle}Let $f$ and $g$ be two functions satisfying the
following \linebreak properties.\newline%
\begin{tabular}
[c]{rl}%
$\left( i\right) $ & $f$ and $g$ are continuous;\\
$\left( ii\right) $ & $F$ and $g$ are odd functions and $g\left( x\right)
>0$ for $x>0$;\\
$\left( iii\right) $ & $F$ has $N$ number of $+ve$ simple zeros only at
$x=a_{i}$, $i=1,2,\ldots,N$\\
& where $0<a_{1}<a_{2}<\ldots<a_{N}$ such that in each interval $I_{i}=\left[
a_{i},a_{i+1}\right] $,\\
& $i=1,2,\ldots,N-1$, there exists $\bar{\alpha}_{i}$, satisfying properties
given by $\left( \text{\ref{Definition of alfaBar}}\right) $,\\
& such that $\bar{\alpha}_{i}<L_{i}$ where $L_{i}$ is the unique extremum in
$I_{i}$,\\
& $i=1,\ldots,N-2$ and $L_{N-1}$, the first local extremum in $\left[
a_{N-1},a_{N}\right] $.\\
$\left( iv\right) $ & $F$ is monotonic in $a_{i}<x\leq\bar{\alpha}_{i}$
$\forall$ $i$ and $\left\vert F\left( x\right) \right\vert \rightarrow
\infty$ as $x\rightarrow\infty$\\
& monotonically for $x>a_{N}$.
\end{tabular}
\newline Then the equation $\left( \text{\ref{Lienard Eq}}\right) $ has
exactly $N$ limit cycles around the origin, all of which are simple.
\end{theorem}
\begin{figure}[h]
\begin{center}
\includegraphics[height=8cm,width=10cm]{Figure0302}
\end{center}
\caption{{}}%
\label{Typical n Path}%
\end{figure}
\textit{Proof.} We shall prove the theorem by showing that each
limit cycle intersects the $x$-axis at a point lying in the interval
$\left( \bar{\alpha}_{i},\bar{\alpha}_{i+1}\right] $, $i=0,1,2,\ldots,N-1$,
where $\bar{\alpha}_{0}=L_{0}$ is the local minimum of $F\left( x\right) $ in
$\left[ 0,a_{1}\right] $. By the Lienard theorem and theorem
$\ref{Th 2 Limit Cycle}$ it follows that the result is true for $N=1$ and
$N=2$. We shall now prove the theorem by induction. We assume that the theorem
is true for $N=n-1$ and we shall prove that it is true for $N=n$. We prove the
theorem by taking $n$ as an odd $+ve$ integer so that $\left( n-1\right) $
is even. The case for which $n$ is even can similarly be proved and so is
omitted. It can be shown \cite{Jordan Smith} that $V_{YQY^{\prime}}$ changes
its sign from $+ve$ to $-ve$ as $Q$ moves out of $A_{1}\left( a_{1},0\right)
$ along the curve $y=F\left( x\right) $ and hence, being continuous, vanishes
in between, generating the first limit cycle around the origin. Next, in
\cite{Palit Datta} we see that $V_{YQY^{\prime}}$ again changes its sign from $-ve$
to $+ve$ and generates the second limit cycle around the first. Also, we see
that for the existence of the second limit cycle we need the existence of the point
$\bar{\alpha}$, which we denote here as $\bar{\alpha}_{1}$.
Since by the induction hypothesis the theorem is true for $N=n-1$, it follows
that in each interval $\left( \bar{\alpha}_{k},\bar{\alpha}%
_{k+1}\right] $, $k=0,1,2,\ldots,n-2$ the system $\left(
\ref{Lienard System}\right) $ has a limit cycle and the outermost limit cycle
cuts the $x$-axis somewhere in $\left( \bar{\alpha}_{n-1},\infty\right) $.
Also $V_{YQY^{\prime}}$ changes its sign alternately as the point $Q$ moves
out of $a_{i}$'s, $i=1,2,\ldots,n-1$. Since $\left( n-1\right) $ is even, it
follows that $V_{YQY^{\prime}}$ changes its sign from $+ve$ to $-ve$ as $Q$
moves out of $a_{n-2}$ along the curve $y=F\left( x\right) $. Since there is
only one limit cycle in the region $\left( \bar{\alpha}_{n-1},\infty\right)
$, it is clear that $V_{YQY^{\prime}}$ must change its sign from $-ve$ to
$+ve$ once and only once as $Q$ moves out of $A_{n-1}\left( a_{n-1},0\right)
$ along the curve $y=F\left( x\right) $. It also follows that once
$V_{YQY^{\prime}}$ becomes $+ve$ it does not vanish again; otherwise
we would get one more limit cycle, making the total number of limit cycles
$n$ and contradicting the hypothesis. We now try to find an estimate of $\alpha$
for which $V_{YQY^{\prime}}$ vanishes for the last time.
We shall now prove that the result is true for $N=n$, and so we assume that all
the hypotheses of this theorem hold for $N=n$. Thus we get
one more point $\bar{\alpha}_{n}$ and another root $a_{n}$, ensuring
that $V_{YQY^{\prime}}$ vanishes as $Q$ moves out of $A_{n-1}$ along the
curve $y=F\left( x\right) $, thus accommodating a unique limit cycle in the
interval $\left( \bar{\alpha}_{n-1},\bar{\alpha}_{n}\right] $.
By the results discussed so far it follows that $V_{YQY^{\prime}}>0$ when
$\alpha$ lies in a suitably small right neighbourhood of $\bar{\alpha
}_{n-1}$. We shall prove that $V_{YQY^{\prime}}$ ultimately becomes $-ve$ and
remains $-ve$ as $Q$ moves out of $A_{n}\left( a_{n},0\right) $ along the
curve $y=F\left( x\right) $ generating the unique limit cycle and hence
proving the required result for $N=n$.
We draw straight line segments $X_{k}X_{k}^{\prime}$, $k=1,2,3,\ldots,n$,
passing through $A_{k}$ and parallel to the $y$-axis, as shown in figure
\ref{Typical n Path}. For convenience, we shall call the points $X_{n}%
,X_{n}^{\prime},Y,Y^{\prime}$ as $B,B^{\prime},X_{0},X_{0}^{\prime}$
respectively. We write the curves\newline$\left. {}\right. $\hfill
$\Gamma_{k}=X_{k-1}X_{k},\quad\Gamma_{k}^{\prime}=X_{k}^{\prime}%
X_{k-1}^{\prime},\quad k=1,2,3,\ldots,n$\hfill$\left. {}\right. $\newline so
that\newline$\left. {}\right. $\hfill$YQY^{\prime}=X_{0}QX_{0}^{\prime}=%
{\textstyle\sum\limits_{k=1}^{n}}
\Gamma_{k}+X_{n}QX_{n}^{\prime}+%
{\textstyle\sum\limits_{k=1}^{n}}
\Gamma_{k}^{\prime}=%
{\textstyle\sum\limits_{k=1}^{n}}
\left( \Gamma_{k}+\Gamma_{k}^{\prime}\right) +BQB^{\prime}$\hfill$\left.
{}\right. $\newline and%
\begin{equation}
V_{YQY^{\prime}}=%
{\textstyle\sum\limits_{k=1}^{n}}
\left( V_{\Gamma_{k}}+V_{\Gamma_{k}^{\prime}}\right) +V_{BQB^{\prime}%
}\text{.} \label{Potential Sum}%
\end{equation}
We shall prove the result through the following steps.
\subsubsection*{Step $\left( A\right) :$ As $Q$ moves out of $A_{n}$ along
$A_{n}C$, $V_{\Gamma_{k}}+V_{\Gamma_{k}^{\prime}}$ is $+ve$ and monotone
decreasing for odd $k$.}
$\left. {}\right. \hspace{18pt}$We choose two points $Q\left(
\alpha,F\left( \alpha\right) \right) $ and $Q_{1}\left( \alpha
_{1},F\left( \alpha_{1}\right) \right) $ on the curve of $F\left(
x\right) $, where $\alpha_{1}>\alpha>a_{n}$. Let $YQY^{\prime}$ and
$Y_{1}Q_{1}Y_{1}^{\prime}$ be two phase paths through $Q$ and $Q_{1}$
respectively. We have already taken $Y=X_{0}$, $Y^{\prime}=X_{0}^{\prime}$,
$B=X_{n}$ and $B^{\prime}=X_{n}^{\prime}$. We now take $Y_{1}=Z_{0}$,
$Y_{1}^{\prime}=Z_{0}^{\prime}$, $B_{1}=Z_{n}$, $B_{1}^{\prime}=Z_{n}^{\prime
}$ and $Z_{k}Z_{k}^{\prime}$ as the extension of the line segment $X_{k}%
X_{k}^{\prime}~\forall~k$. Also we write $Z_{k-1}Z_{k}=\Lambda_{k}$ and
$Z_{k}^{\prime}Z_{k-1}^{\prime}=\Lambda_{k}^{\prime}$. If $k$ is odd, then on
the segments $\Gamma_{k}$ and $\Lambda_{k}$ we have $y>0$, $F\left( x\right)
<0$ and $y-F\left( x\right) >0$. Now,\newline$\left. {}\right. $%
\hfill$0<\left[ y-F\left( x\right) \right] _{\Gamma_{k}}<\left[
y-F\left( x\right) \right] _{\Lambda_{k}}$.\hfill$\left. {}\right.
$\newline Since $g\left( x\right) >0$ for $x>0$, we have\newline$\left.
{}\right. $\hfill$\left[ \dfrac{-g\left( x\right) }{y-F\left( x\right)
}\right] _{\Gamma_{k}}<\left[ \dfrac{-g\left( x\right) }{y-F\left(
x\right) }\right] _{\Lambda_{k}}<0$.\hfill$\left. {}\right. $\newline So,
by $\left( \ref{Lienard System}\right) $ we get%
\begin{equation}
\left[ \frac{dy}{dx}\right] _{\Gamma_{k}}<\left[ \frac{dy}{dx}\right]
_{\Lambda_{k}}<0\text{.} \label{Gradient Comparison1}%
\end{equation}
Therefore by $\left( \ref{Gradient Comparison1}\right) $ we have\newline%
$\left. {}\right. $\hfill$V_{\Gamma_{k}}=%
{\displaystyle\int\limits_{\Gamma_{k}}}
F~dy=%
{\displaystyle\int\limits_{\Gamma_{k}}}
\left( -F\right) \left( -\dfrac{dy}{dx}\right) dx>%
{\displaystyle\int\limits_{\Lambda_{k}}}
\left( -F\right) \left( -\dfrac{dy}{dx}\right) dx=%
{\displaystyle\int\limits_{\Lambda_{k}}}
F~dy=V_{\Lambda_{k}}$.\hfill$\left. {}\right. $\newline Since $F\left(
x\right) $ and $dy=\dot{y}dt=-g\left( x\right) dt$ are both $-ve$ along
$\Lambda_{k}$ for odd $k$, we have%
\begin{equation}
V_{\Gamma_{k}}>V_{\Lambda_{k}}=\int\limits_{\Lambda_{k}}F~dy>0\text{.}
\label{Potential Comparison1}%
\end{equation}
Next, on the segments $\Gamma_{k}^{\prime}$ and $\Lambda_{k}^{\prime}$ we have
$y<0$, $F\left( x\right) <0$ and $y-F\left( x\right) <0$. Now,\newline%
$\left. {}\right. $\hfill$0>\left[ y-F\left( x\right) \right]
_{\Gamma_{k}^{\prime}}>\left[ y-F\left( x\right) \right] _{\Lambda
_{k}^{\prime}}$.\hfill$\left. {}\right. $\newline So, by $\left(
\ref{Lienard System}\right) $ we get%
\begin{equation}
\left[ \frac{dy}{dx}\right] _{\Gamma_{k}^{\prime}}>\left[ \frac{dy}%
{dx}\right] _{\Lambda_{k}^{\prime}}>0\text{.} \label{Gradient Comparison2}%
\end{equation}
Therefore by $\left( \ref{Gradient Comparison2}\right) $ we have%
\[
V_{\Gamma_{k}^{\prime}}=\int\limits_{\Gamma_{k}^{\prime}}F~dy=\int
\limits_{-\Gamma_{k}^{\prime}}\left( -F\right) \frac{dy}{dx}dx>\int
\limits_{-\Lambda_{k}^{\prime}}\left( -F\right) \frac{dy}{dx}dx=\int
\limits_{\Lambda_{k}^{\prime}}F~dy=V_{\Lambda_{k}^{\prime}}\text{.}%
\]
Since $F\left( x\right) $ and $dy=\dot{y}dt=-g\left( x\right) dt$ are both
$-ve$ along $\Lambda_{k}^{\prime}$ for odd $k$, we have%
\begin{equation}
V_{\Gamma_{k}^{\prime}}>V_{\Lambda_{k}^{\prime}}=\int\limits_{\Lambda
_{k}^{\prime}}F~dy>0\text{.} \label{Potential Comparison2}%
\end{equation}
From $\left( \ref{Potential Comparison1}\right) $ and $\left(
\ref{Potential Comparison2}\right) $ we have%
\[
V_{\Gamma_{k}}+V_{\Gamma_{k}^{\prime}}>V_{\Lambda_{k}}+V_{\Lambda_{k}^{\prime
}}>0\text{.}%
\]
Therefore $V_{\Gamma_{k}}+V_{\Gamma_{k}^{\prime}}$ is $+ve$ and monotone
decreasing as the point $Q$ moves out of $A_{n}$ along $A_{n}C$.
\subsubsection*{Step $\left( B\right) :$ As $Q$ moves out of $A_{n}$ along
$A_{n}C$, $V_{\Gamma_{k}}+V_{\Gamma_{k}^{\prime}}$ is $-ve$ and monotone
increasing for even $k$.}
$\left. {}\right. \hspace{18pt}$On the segments $\Gamma_{k}$ and
$\Lambda_{k}$ we have $y>0$, $F\left( x\right) >0$ and $y-F\left( x\right)
>0$. Now,%
\[
0<\left[ y-F\left( x\right) \right] _{\Gamma_{k}}<\left[ y-F\left(
x\right) \right] _{\Lambda_{k}}\text{.}%
\]
Since $g\left( x\right) >0$ for $x>0$, we have%
\[
\left[ \frac{-g\left( x\right) }{y-F\left( x\right) }\right]
_{\Gamma_{k}}<\left[ \frac{-g\left( x\right) }{y-F\left( x\right)
}\right] _{\Lambda_{k}}<0\text{.}%
\]
So, by $\left( \ref{Lienard System}\right) $ we get%
\begin{equation}
\left[ \frac{dy}{dx}\right] _{\Gamma_{k}}<\left[ \frac{dy}{dx}\right]
_{\Lambda_{k}}<0\text{.} \label{Gradient Comparison3}%
\end{equation}
Therefore by $\left( \ref{Gradient Comparison3}\right) $ we have%
\[
V_{\Gamma_{k}}=\int\limits_{\Gamma_{k}}F~dy=\int\limits_{\Gamma_{k}}F\frac
{dy}{dx}dx<\int\limits_{\Lambda_{k}}F\frac{dy}{dx}dx=\int\limits_{\Lambda_{k}%
}F~dy=V_{\Lambda_{k}}\text{.}%
\]
Since $F\left( x\right) >0$ and $dy=\dot{y}dt=-g\left( x\right) dt<0$
along $\Lambda_{k}$ for even $k$, we have%
\begin{equation}
V_{\Gamma_{k}}<V_{\Lambda_{k}}=\int\limits_{\Lambda_{k}}F~dy<0\text{.}
\label{Potential Comparison3}%
\end{equation}
Next, on the segments $\Gamma_{k}^{\prime}$ and $\Lambda_{k}^{\prime}$ we have
$y<0$, $F\left( x\right) >0$ and $y-F\left( x\right) <0$. Now,%
\[
0>\left[ y-F\left( x\right) \right] _{\Gamma_{k}^{\prime}}>\left[
y-F\left( x\right) \right] _{\Lambda_{k}^{\prime}}%
\]
so that by $\left( \ref{Lienard System}\right) $ we get%
\begin{equation}
\left[ \frac{dy}{dx}\right] _{\Gamma_{k}^{\prime}}>\left[ \frac{dy}%
{dx}\right] _{\Lambda_{k}^{\prime}}>0\text{.} \label{Gradient Comparison4}%
\end{equation}
Therefore by $\left( \ref{Gradient Comparison4}\right) $ we have%
\[
V_{\Gamma_{k}^{\prime}}=\int\limits_{\Gamma_{k}^{\prime}}F~dy=\int
\limits_{-\Gamma_{k}^{\prime}}F\left( -\frac{dy}{dx}\right) dx<\int
\limits_{-\Lambda_{k}^{\prime}}F\left( -\frac{dy}{dx}\right) dx=\int
\limits_{\Lambda_{k}^{\prime}}F~dy=V_{\Lambda_{k}^{\prime}}\text{.}%
\]
Since $F\left( x\right) >0$ and $dy=\dot{y}dt=-g\left( x\right) dt<0$
along $\Lambda_{k}^{\prime}$ for even $k$, we have%
\begin{equation}
V_{\Gamma_{k}^{\prime}}<V_{\Lambda_{k}^{\prime}}=\int\limits_{\Lambda
_{k}^{\prime}}F~dy<0\text{.} \label{Potential Comparison4}%
\end{equation}
From $\left( \ref{Potential Comparison3}\right) $ and $\left(
\ref{Potential Comparison4}\right) $ we have%
\[
V_{\Gamma_{k}}+V_{\Gamma_{k}^{\prime}}<V_{\Lambda_{k}}+V_{\Lambda_{k}^{\prime
}}<0\text{.}%
\]
Therefore $V_{\Gamma_{k}}+V_{\Gamma_{k}^{\prime}}$ is $-ve$ and monotone
increasing as the point $Q$ moves out of $A_{n}$ along $A_{n}C$.
\subsubsection*{Step $\left( C\right) :$ $V_{BQB^{\prime}}$ is $-ve$ and
monotone decreasing and tends to $-\infty$ as $Q$ tends to infinity along
$A_{n}C$.}
$\left. {}\right. \hspace{18pt}$On $BQB^{\prime}$ and $B_{1}Q_{1}%
B_{1}^{\prime}$ we have $F\left( x\right) >0$. We draw $BH_{1}$ and
$B^{\prime}H_{1}^{\prime}$ parallel to the $x$-axis. Since $F\left( x\right)
>0$ and $dy=\dot{y}dt=-g\left( x\right) dt<0$ along $B_{1}Q_{1}B_{1}^{\prime
}$, we have%
\[
V_{B_{1}Q_{1}B_{1}^{\prime}}=\int\limits_{B_{1}Q_{1}B_{1}^{\prime}}%
F~dy\leq\int\limits_{H_{1}Q_{1}H_{1}^{\prime}}F~dy\text{.}%
\]
Since $\left[ F\left( x\right) \right] _{H_{1}Q_{1}H_{1}^{\prime}}%
\geq\left[ F\left( x\right) \right] _{BQB^{\prime}}$ and $F\left(
x\right) >0$, $dy=\dot{y}dt=-g\left( x\right) dt<0$ along $BQB^{\prime}$,
we have%
\begin{equation}
V_{B_{1}Q_{1}B_{1}^{\prime}}\leq\int\limits_{H_{1}Q_{1}H_{1}^{\prime}}%
F~dy\leq\int\limits_{BQB^{\prime}}F~dy=V_{BQB^{\prime}}<0\text{.}
\label{Potential Comparison5}%
\end{equation}
Let $S$ be a point on $y=F\left( x\right) $, to the right of $A_{n}$ and let
$BQB^{\prime}$ be an arbitrary path, with $Q$ to the right of $S$. The
straight line $PSNP^{\prime}$ is parallel to the $y$-axis. Since $F\left(
x\right) >0$ and $dy=\dot{y}dt=-g\left( x\right) dt<0$ along $BQB^{\prime}$,
and $PQP^{\prime}$ is a part of $BQB^{\prime}$, we have%
\begin{equation}
V_{BQB^{\prime}}=\int\limits_{BQB^{\prime}}F~dy=-\int\limits_{B^{\prime}%
QB}F~dy\leq-\int\limits_{P^{\prime}QP}F~dy\text{.}
\label{Potential Comparison5 in y}%
\end{equation}
By hypothesis $F$ is monotone increasing on $A_{n}C$ and so $F\left(
x\right) \geq NS$ on $PQP^{\prime}$ and hence $\left(
\ref{Potential Comparison5 in y}\right) $ gives%
\[
V_{BQB^{\prime}}\leq-\int\limits_{P^{\prime}QP}NS\cdot dy=-NS\int
\limits_{P^{\prime}QP}dy=-NS\cdot PP^{\prime}\leq-NS\cdot NP\text{.}%
\]
As $Q$ goes to infinity towards the right, $NP\rightarrow\infty$, and
hence by the above relation it follows that $V_{BQB^{\prime}}\rightarrow
-\infty$.
\subsubsection*{Step $\left( D\right) :$}
$\left. {}\right. \hspace{18pt}$From steps $\left( A\right) $ and $\left(
B\right) $ it follows that $%
{\textstyle\sum\limits_{k=1}^{n}}
\left( V_{\Gamma_{k}}+V_{\Gamma_{k}^{\prime}}\right) $ in $\left(
\ref{Potential Sum}\right) $ is bounded. Therefore, as $Q$ moves to infinity
to the right of $A_{n}$, the quantity $V_{BQB^{\prime}}$ ultimately dominates,
and hence $V_{YQY^{\prime}}$ monotonically decreases to $-\infty$ to the right
of $A_{n}$. The monotone decreasing nature of $V_{BQB^{\prime}}$ is thus
inherited by $V_{YQY^{\prime}}$ as $Q$ moves out of $A_{n}$ along the curve
$y=F\left( x\right) $.
By the construction of $\bar{\alpha}_{n}$ it is clear that $V_{YQY^{\prime}%
}>0$ at a point to the left of $A_{n}$, and it ultimately becomes $-ve$ when
the point $Q$ is to the right of $A_{n}$. So, by the monotone decreasing
nature of $V_{YQY^{\prime}}$, it can vanish only once as the point $Q$ moves
out of $A_{n}$ along the curve $y=F\left( x\right) $. Thus, there is a unique
path for which $V_{YQY^{\prime}}=0$. By $\left( \ref{Potential Zero}\right) $
and the symmetry of the path it follows that the path is closed, and the proof
is complete.
\section{\textbf{Construction of a Lienard System with\protect\linebreak
Desired Number of Limit Cycles}\label{Construction}}
We now present an algorithm by which we can form a Lienard system with as many
limit cycles as required. We present the technique for two limit cycles around
a single critical point; it can similarly be extended to $n$ limit cycles. As
stated in the introduction, this algorithm is expected to become relevant in a
physical model with partial or incomplete data.
Suppose that in a given physical or dynamical problem the function $F$ of the
Lienard equation $\left( \ref{Lienard Eq}\right) $ is well defined only within
a finite interval $\left[ -a_{1},a_{1}\right] $; we write $F\left( x\right)
=f_{1}\left( x\right) $ for $x\in\left[ -a_{1},a_{1}\right] $, with $f_{1}$
satisfying the conditions:\newline%
\begin{tabular}
[c]{ll}%
$\left( i\right) $ & $f_{1}$ is a continuous odd function having only one
$+ve$ zero $a_{1}$\\
$\left( ii\right) $ & $xf_{1}\left( x\right) <0$~$\forall~x\in\left[
-a_{1},a_{1}\right] $\\
$\left( iii\right) $ & $f_{1}$ has a unique local minimum at the point
$L_{0}$ within $\left( a_{0},a_{1}\right) $, $a_{0}=0$.
\end{tabular}
\newline Suppose it is also known that the system has a limit cycle just
outside the interval $\left[ -a_{1},a_{1}\right] $. We have no information
about $F\left( x\right) $ beyond this interval. Our aim is to develop an
algorithm to determine a function $f_{2}$ as a restriction of $F$ to an
interval of the form $\left[ a_{1},a_{2}\right] $ so that it satisfies the
conditions of theorem \ref{Th n Limit Cycle}, ensuring a second limit
cycle outside $\left[ a_{1},a_{2}\right] $. To determine $f_{2}$
precisely from the information on $f_{1}$ in $\left[ a_{0},a_{1}\right] $ we
need to define two functions $\phi_{1}$ and $H_{1}$ which supply the
abscissae and ordinates, respectively, of $f_{2}$ in the interval $\left[
a_{1},a_{2}\right] $. The choice of $\phi_{1}$ is motivated by Odani's choice
function \cite{Odani N} $($cf. remark \ref{Choice Function Remark} for
further details$)$. The functions $\phi_{1}$ and $H_{1}$ are defined as
follows:\newline the functions $\phi_{1L}$ and $\phi_{1R}$ are bijective such
that%
\begin{gather*}%
\begin{array}
[c]{c}%
\phi_{1L}:\left[ a_{0},L_{0}\right] \rightarrow\left[ a_{1},L_{1}\right]
,\quad\phi_{1L}\left( a_{0}\right) =a_{1},\phi_{1L}\left( L_{0}\right)
=L_{1}\\
\phi_{1R}:\left[ L_{0},a_{1}\right] \rightarrow\left[ L_{1},a_{2}\right]
,\quad\phi_{1R}\left( L_{0}\right) =L_{1},\phi_{1R}\left( a_{1}\right)
=a_{2}%
\end{array}
\\%
\begin{array}
[c]{c}%
\phi_{1}\left( s\right) =\left\{
\begin{array}
[c]{c}%
\phi_{1L}\left( s\right) ,\quad s\in\left[ a_{0},L_{0}\right] \\
\phi_{1R}\left( s\right) ,\quad s\in\left[ L_{0},a_{1}\right]
\end{array}
\right.
\end{array}
\end{gather*}
and $H_{1}$ is monotone decreasing on $\left[ f_{1}\left( L_{0}\right)
,0\right] $ such that%
\begin{equation}
H_{1}\circ f_{1}:=f_{2}\circ\phi_{1}\text{.} \label{Objective of f2}%
\end{equation}
To make the definition $\left( \ref{Objective of f2}\right) $ explicit we
define at first two monotone functions $f_{2L}^{\ast}$ and $f_{2R}^{\ast}$ and
then introduce $H_{1}$ parametrically with the help of two monotone decreasing
functions $H_{1L}$ and $H_{1R}$ on $\left[ f_{1}\left( L_{0}\right)
,0\right] $ as%
\begin{align*}
H_{1L} & :f_{1L}\left( s\right) \rightarrow f_{2L}^{\ast}\left( \phi
_{1L}\left( s\right) \right) ,\quad s\in\left[ a_{0},L_{0}\right] \\
H_{1R} & :f_{1R}\left( s\right) \rightarrow f_{2R}^{\ast}\left( \phi
_{1R}\left( s\right) \right) ,\quad s\in\left[ L_{0},a_{1}\right] \\
H_{1}\left( x\right) & =\left\{
\begin{array}
[c]{c}%
H_{1L}\left( x\right) ,\quad\text{if }x=f_{1L}\left( s\right) \text{,
}s\in\left[ a_{0},L_{0}\right] \\
H_{1R}\left( x\right) ,\quad\text{if }x=f_{1R}\left( s\right) \text{,
}s\in\left[ L_{0},a_{1}\right]
\end{array}
\right. \text{.}%
\end{align*}
The choice of $f_{2}^{\ast}$ is made on the basis of $f_{1}\left( x\right) $
defined on $\left[ -a_{1},a_{1}\right] $ and the second zero $a_{2}$ of
$F\left( x\right) $, which must lie close to, but nevertheless below, the
expected amplitude of the second limit cycle. We define the functions $f_{2L}$
and $f_{2R}$ as%
\begin{align*}
f_{2L} & :\phi_{1L}\left( s\right) \rightarrow H_{1L}\left( f_{1L}\left(
s\right) \right) ,\quad s\in\left[ a_{0},L_{0}\right] \\
f_{2R} & :\phi_{1R}\left( s\right) \rightarrow H_{1R}\left( f_{1R}\left(
s\right) \right) ,\quad s\in\left[ L_{0},a_{1}\right] \text{.}%
\end{align*}
We should note that in the definition of $\phi_{1L}$ and $\phi_{1R}$ we have
used the conditions $\phi_{1L}\left( a_{0}\right) =a_{1},\phi_{1L}\left(
L_{0}\right) =L_{1}$ and $\phi_{1R}\left( L_{0}\right) =L_{1},\phi
_{1R}\left( a_{1}\right) =a_{2}$. We could also have used the conditions
$\phi_{1L}\left( a_{0}\right) =L_{1},\phi_{1L}\left( L_{0}\right) =a_{1}$
and $\phi_{1R}\left( L_{0}\right) =a_{2},\phi_{1R}\left( a_{1}\right)
=L_{1}$ instead, but in that case the functions $H_{1}$ and $H_{2}$ must be
monotone increasing.\newline If $x\in\left[ a_{1},L_{1}\right] $, then
$x=\phi_{1L}\left( s\right) $ for some $s\in\left[ a_{0},L_{0}\right] $.
Therefore,%
\[
f_{2L}\left( x\right) =f_{2L}\left( \phi_{1L}\left( s\right) \right)
=H_{1L}\left( f_{1L}\left( s\right) \right) =f_{2L}^{\ast}\left(
\phi_{1L}\left( s\right) \right) =f_{2L}^{\ast}\left( x\right) \text{.}%
\]
So,%
\[
f_{2L}=f_{2L}^{\ast}\text{.}%
\]
Next, if $x\in\left[ L_{1},a_{2}\right] $, then $x=\phi_{1R}\left(
s\right) $ for some $s\in\left[ L_{0},a_{1}\right] $. Therefore,%
\[
f_{2R}\left( x\right) =f_{2R}\left( \phi_{1R}\left( s\right) \right)
=H_{1R}\left( f_{1R}\left( s\right) \right) =f_{2R}^{\ast}\left(
\phi_{1R}\left( s\right) \right) =f_{2R}^{\ast}\left( x\right) \text{.}%
\]
So,%
\[
f_{2R}=f_{2R}^{\ast}\text{.}%
\]
Thus the unknown functions $f_{2L}$ and $f_{2R}$ can be expressed by known
functions $f_{2L}^{\ast}$ and $f_{2R}^{\ast}$ so that we have%
\begin{align*}
f_{2}\left( x\right) & =\left\{
\begin{array}
[c]{c}%
f_{2L}\left( x\right) ,\quad x\in\left[ a_{1},L_{1}\right] \\
f_{2R}\left( x\right) ,\quad x\in\left[ L_{1},a_{2}\right]
\end{array}
\right. \\
& =\left\{
\begin{array}
[c]{c}%
f_{2L}^{\ast}\left( x\right) ,\quad x\in\left[ a_{1},L_{1}\right] \\
f_{2R}^{\ast}\left( x\right) ,\quad x\in\left[ L_{1},a_{2}\right]
\end{array}
\right.
\end{align*}
Next we construct the restriction $f_{3}$ of the function $F$ in $\left[
a_{2},a_{3}\right] $ having a unique local minimum at $L_{2}$ $($say$)$ in
$\left( a_{2},a_{3}\right) $. We assume two bijective functions%
\begin{align*}
\phi_{2L} & :\left[ a_{1},L_{1}\right] \rightarrow\left[ a_{2}%
,L_{2}\right] ,\quad\phi_{2L}\left( a_{1}\right) =a_{2},\phi_{2L}\left(
L_{1}\right) =L_{2}\\
\text{and }\phi_{2R} & :\left[ L_{1},a_{2}\right] \rightarrow\left[
L_{2},a_{3}\right] ,\quad\phi_{2R}\left( L_{1}\right) =L_{2},\phi
_{2R}\left( a_{2}\right) =a_{3}%
\end{align*}
and two more functions $f_{3L}^{\ast}$ and $f_{3R}^{\ast}$. We define two
monotone decreasing functions $H_{2L}$ and $H_{2R}$ on $\left[ 0,f_{2}\left(
L_{1}\right) \right] $ parametrically as%
\begin{align*}
H_{2L} & :f_{2L}\left( s\right) \rightarrow f_{3L}^{\ast}\left( \phi
_{2L}\left( s\right) \right) ,\quad s\in\left[ a_{1},L_{1}\right] \\
H_{2R} & :f_{2R}\left( s\right) \rightarrow f_{3R}^{\ast}\left( \phi
_{2R}\left( s\right) \right) ,\quad s\in\left[ L_{1},a_{2}\right]
\text{.}%
\end{align*}
We define%
\begin{align*}
f_{3L} & :\phi_{2L}\left( s\right) \rightarrow H_{2L}\left( f_{2L}\left(
s\right) \right) ,\quad s\in\left[ a_{1},L_{1}\right] \\
f_{3R} & :\phi_{2R}\left( s\right) \rightarrow H_{2R}\left( f_{2R}\left(
s\right) \right) ,\quad s\in\left[ L_{1},a_{2}\right]
\end{align*}
so that as shown above we have%
\[
f_{3L}=f_{3L}^{\ast}\text{ and }f_{3R}=f_{3R}^{\ast}\text{.}%
\]
Therefore,%
\begin{align*}
f_{3}\left( x\right) & =\left\{
\begin{array}
[c]{c}%
f_{3L}\left( x\right) ,\quad x\in\left[ a_{2},L_{2}\right] \\
f_{3R}\left( x\right) ,\quad x\in\left[ L_{2},a_{3}\right]
\end{array}
\right. \\
& =\left\{
\begin{array}
[c]{c}%
f_{3L}^{\ast}\left( x\right) ,\quad x\in\left[ a_{2},L_{2}\right] \\
f_{3R}^{\ast}\left( x\right) ,\quad x\in\left[ L_{2},a_{3}\right]
\end{array}
\right.
\end{align*}
We observe that%
\begin{align*}
f_{3L} & :\phi_{2L}\left( \phi_{1L}\left( s\right) \right) \rightarrow
H_{2L}\left( f_{2L}\left( \phi_{1L}\left( s\right) \right) \right)
,\quad s\in\left[ a_{0},L_{0}\right] \\
f_{3R} & :\phi_{2R}\left( \phi_{1R}\left( s\right) \right) \rightarrow
H_{2R}\left( f_{2R}\left( \phi_{1R}\left( s\right) \right) \right)
,\quad s\in\left[ L_{0},a_{1}\right] \text{.}%
\end{align*}
We can proceed similarly and construct all the restrictions $f_{k}$ of the
function $F$ in $\left[ a_{k-1},a_{k}\right] $ for $k=4,5,6,\ldots,N$ so
that the corresponding Lienard system has exactly $N$ limit cycles. Thus an
incomplete Lienard system can be extended iteratively over larger and larger
intervals of $x$, having as many $($simple$)$ limit cycles as desired. We
note, however, that the choice of the iterated functions carries a large
degree of arbitrariness, constrained only by the minimal monotonicity
conditions required by theorem \ref{Th n Limit Cycle}. The number of limit
cycles for each such choice remains invariant. The problem of reconstructing
data with a given number of limit cycles and having specified shapes is left
for future study.
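To make the above recipe concrete, the following sketch (ours, in Python; not
part of the construction itself) tabulates a restriction $f_{2}$ on $\left[
a_{1},a_{2}\right] $ from a given $f_{1}$ through the defining relation
$f_{2}\circ\phi_{1}=H_{1}\circ f_{1}$. The numerical data of example
\ref{Ex Construction 1} below are assumed, and $H_{1}$ is taken to be a plain
sign flip -- one admissible monotone decreasing choice, not necessarily the
$H_{1}$ realised in that example.
\begin{verbatim}
import numpy as np

# Data of Example 1 (assumed): restriction f1 on [a0, a1], minimum at L0
a0, L0, a1, L1, a2 = 0.0, 0.1, 0.2, 0.3, 0.5
f1 = lambda s: 0.15 - 0.25*np.sqrt(1.0 - (s - 0.1)**2/0.125**2)

# Bijections phi_1L : [a0, L0] -> [a1, L1], phi_1R : [L0, a1] -> [L1, a2]
phi_1L = lambda s: np.sqrt(5.0*s**2 + 0.04)
phi_1R = lambda s: np.sqrt((16.0/3.0)*s**2 + 11.0/300.0)

# One admissible monotone decreasing H1 (a sign flip); other choices work too
H1 = lambda y: -y

# Tabulate f2 on [a1, a2] via f2(phi_1(s)) = H1(f1(s))
sL, sR = np.linspace(a0, L0, 200), np.linspace(L0, a1, 200)
x2 = np.concatenate([phi_1L(sL), phi_1R(sR)])
y2 = np.concatenate([H1(f1(sL)), H1(f1(sR))])
f2 = lambda x: np.interp(x, x2, y2)

print(f2(a1), f2(L1), f2(a2))   # ~ 0.0, 0.1, 0.0
\end{verbatim}
By construction, the tabulated $f_{2}$ vanishes at $a_{1}$ and $a_{2}$ and
attains its $+ve$ maximum at $L_{1}$, as required by the alternation of signs
in theorem \ref{Th n Limit Cycle}.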
We now illustrate the above construction by the following examples.
\section{\textbf{Examples}\label{Examples}}
We now present some examples following the construction described in section
\ref{Construction}. The figures have been drawn using Mathematica 5.1.
\begin{example}
\label{Ex Construction 1}Let $a_{1}=0.2$, $a_{2}=0.5$ and%
\[
f_{1}\left( x\right) =0.15-0.25\sqrt{1-\frac{\left( x-0.1\right) ^{2}%
}{0.125^{2}}},\quad0\leq x\leq0.2\text{,}%
\]
with $f_{1}$ continued to $\left[ -0.2,0\right] $ as an odd function. Here,
$L_{0}=0.1$. Let $L_{1}=0.3$. Let us choose%
\begin{align*}
f_{2L}^{\ast}\left( x\right) & =-0.15+0.25\sqrt{1-\frac{\left(
x-0.3\right) ^{2}}{\left( 0.125\right) ^{2}}}\\
f_{2R}^{\ast}\left( x\right) & =-0.15+0.25\sqrt{1-\frac{\left(
x-0.3\right) ^{2}}{\left( 0.25\right) ^{2}}}\text{.}%
\end{align*}
Also, let%
\[
\phi_{1L}\left( s\right) =\sqrt{As^{2}+B}\text{.}%
\]
To determine the unknown parameters $A$ and $B$ we assume that $\phi
_{1L}\left( a_{0}\right) =a_{1}$, $\phi_{1L}\left( L_{0}\right) =L_{1}$.
Then $A=5$ and $B=0.04$. Next, let%
\[
\phi_{1R}\left( s\right) =\sqrt{A^{\prime}s^{2}+B^{\prime}}%
\]
with $\phi_{1R}\left( L_{0}\right) =L_{1}$ and $\phi_{1R}\left(
a_{1}\right) =a_{2}$. Then, $A^{\prime}=\dfrac{16}{3}$ and $B^{\prime}%
=\dfrac{11}{300}$. Then following the algorithm in section $\ref{Construction}%
$ we have%
\begin{align*}
f_{2L} & =f_{2L}^{\ast}\text{ in }\left[ a_{1},L_{1}\right] \\
\text{and }f_{2R} & =f_{2R}^{\ast}\text{ in }\left[ L_{1},a_{2}\right]
\end{align*}
so that%
\[
f_{2}\left( x\right) =\left\{
\begin{array}
[c]{c}%
f_{2L}\left( x\right) ,\quad x\in\left[ a_{1},L_{1}\right] \\
f_{2R}\left( x\right) ,\quad x\in\left[ L_{1},a_{2}\right]
\end{array}
\right.
\]
We now define%
\[
F_{+}\left( x\right) =\left\{
\begin{array}
[c]{l}%
f_{1}\left( x\right) ,\quad0\leq x<a_{1}\\
f_{2}\left( x\right) ,\quad a_{1}\leq x<a_{2}\\
-\dfrac{4}{3}\left( x-0.5\right) ,\quad x\geq a_{2}%
\end{array}
\right.
\]
to make $F_{+}$ continuously differentiable in $\left[ 0,\infty\right) $.
The last part of the function $F_{+}$ is taken to make $F_{+}$ monotone
decreasing for $x\geq a_{2}$, so that the function $F$ defined below satisfies
the condition that $\left\vert F\left( x\right) \right\vert \rightarrow
\infty$ as $x\rightarrow\infty$ monotonically for $x\geq a_{2}$. We take%
\[
F\left( x\right) =\left\{
\begin{array}
[c]{c}%
F_{+}\left( x\right) ,\quad x\geq0\\
F_{-}\left( x\right) ,\quad x<0
\end{array}
\right.
\]
We find two limit cycles which cross the $+ve$ $y$-axis at the points $\left(
0,0.26731065\right) $ and $\left( 0,0.5749823\right) $, respectively. So,
$y_{+}\left( 0\right) =y_{-}\left( 0\right) =0.26731065$ and $\bar{\alpha
}_{1}=0.254219124$, and the condition $\bar{\alpha}_{1}\leq L_{1}$ is
satisfied in this example. Thus the existence of the limit cycles is ensured
by theorem $\ref{Th n Limit Cycle}$ with $g\left( x\right) =x$, establishing
the construction in section $\ref{Construction}$. The limit cycles, along with
the curve of $F\left( x\right) $, are shown in figure
$\ref{Ex Construction 1 Fig}$.\begin{figure}[h]
\begin{center}
\includegraphics[height=6cm]{Figure0303}
\end{center}
\caption{Limit cycles for the system in Example \ref{Ex Construction 1} with
the curve of $F\left( x\right) .$}%
\label{Ex Construction 1 Fig}%
\end{figure}
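As an independent cross-check (ours; the phase-plane form $\dot{x}=y-F\left(
x\right) $, $\dot{y}=-g\left( x\right) $ of the Lienard system with $g\left(
x\right) =x$ is assumed), the following Python sketch locates the two limit
cycles as fixed points of the first-return map to the $+ve$ $y$-axis; the
crossing ordinates it finds are close to the values $0.26731$ and $0.57498$
quoted above. The bisection brackets rely on the alternating stability of the
two cycles.
\begin{verbatim}
import numpy as np

def F(x):   # odd extension of F_+ from this example
    a = abs(x)
    if a < 0.2:
        v = 0.15 - 0.25*np.sqrt(1.0 - (a - 0.1)**2/0.125**2)
    elif a < 0.3:
        v = -0.15 + 0.25*np.sqrt(1.0 - (a - 0.3)**2/0.125**2)
    elif a < 0.5:
        v = -0.15 + 0.25*np.sqrt(1.0 - (a - 0.3)**2/0.25**2)
    else:
        v = -(4.0/3.0)*(a - 0.5)
    return v if x >= 0 else -v

def rhs(u):
    x, y = u
    return np.array([y - F(x), -x])          # Lienard system, g(x) = x

def rk4(u, h):
    k1 = rhs(u); k2 = rhs(u + 0.5*h*k1)
    k3 = rhs(u + 0.5*h*k2); k4 = rhs(u + h*k3)
    return u + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)

def return_map(y0, h=2e-3):
    """First return to the positive y-axis, starting from (0, y0)."""
    u, been_left = np.array([0.0, y0]), False
    for _ in range(100000):
        v = rk4(u, h)
        if been_left and u[0] < 0.0 <= v[0]:
            s = -u[0]/(v[0] - u[0])          # interpolate the crossing x = 0
            return u[1] + s*(v[1] - u[1])
        been_left = been_left or v[0] < 0.0
        u = v
    raise RuntimeError("no return detected")

def fixed_point(lo, hi, it=25):              # bisection on R(y0) - y0
    d = lambda y: return_map(y) - y
    dlo = d(lo)
    for _ in range(it):
        mid = 0.5*(lo + hi)
        dm = d(mid)
        if dlo*dm > 0.0:
            lo, dlo = mid, dm
        else:
            hi = mid
    return 0.5*(lo + hi)

print(fixed_point(0.10, 0.40))               # ~ 0.26731 (inner cycle)
print(fixed_point(0.40, 0.70))               # ~ 0.57498 (outer cycle)
\end{verbatim}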
\end{example}
\begin{remark}
\label{Choice Function Remark}From condition $\left( C2\right) $ in
$\cite{Odani N}$ we see that%
\[
g\left( \phi_{k}\left( x\right) \right) \phi_{k}^{\prime}\left( x\right)
\geq g\left( x\right) \text{.}%
\]
If $g\left( x\right) =x$, then it gives
\[
\phi_{k}\left( x\right) \phi_{k}^{\prime}\left( x\right) \geq x\text{.}%
\]
Thus, in example $\ref{Ex Construction 1}$ if we take $\phi_{k}\left(
s\right) =\phi_{1L}\left( s\right) =\sqrt{As^{2}+B}$, then the above
inequality gives%
\[%
\begin{array}
[c]{cc}
& \sqrt{As^{2}+B}\cdot\dfrac{2As}{2\sqrt{As^{2}+B}}\geq s\\
\text{i.e., } & As\geq s\\
\text{i.e., } & A\geq1\text{.}%
\end{array}
\]
By the definition of $f_{2L}^{\ast}$ and $f_{2}$ it follows that the remaining
part of the condition $\left( C2\right) $ is satisfied if%
\[
\left\vert F\left( \phi_{1L}\left( s\right) \right) \right\vert
\geq\left\vert F\left( s\right) \right\vert ,\quad s\in\left[ a_{0}%
,L_{0}\right]
\]
and in particular%
\[
\left\vert F\left( \phi_{1L}\left( 0\right) \right) \right\vert
=\left\vert F\left( 0\right) \right\vert \text{.}%
\]
Since, $\phi_{1L}\left( s\right) \in\left[ a_{1},a_{2}\right] $ and
$s\in\left[ a_{0},L_{0}\right] $ so it gives%
\begin{align}
\left\vert f_{2}\left( \phi_{1L}\left( s\right) \right) \right\vert &
\geq\left\vert f_{1}\left( s\right) \right\vert ,\quad s\in\left[
a_{0},L_{0}\right] \nonumber\\
\text{i.e., }\left\vert H_{1L}\left( f_{1L}\left( s\right) \right)
\right\vert & \geq\left\vert f_{1L}\left( s\right) \right\vert ,\quad
s\in\left[ a_{0},L_{0}\right] \text{.} \label{Odani Condition}%
\end{align}
Next, in particular the equality occurs at $s=a_{0}=0$ and so we have%
\begin{align}
& \left. \hspace{0.32in}\left\vert F\left( \phi_{1L}\left( 0\right)
\right) \right\vert =\left\vert F\left( 0\right) \right\vert \right.
\nonumber\\
& \implies\left\vert H_{1L}\left( f_{1L}\left( 0\right) \right)
\right\vert =0\nonumber\\
& \implies\left\vert H_{1L}\left( 0\right) \right\vert =0\nonumber\\
& \implies H_{1L}\left( 0\right) =0 \label{Odani Condition Equality}%
\end{align}
since $F\left( 0\right) =0$ and $f_{1L}\left( 0\right) =0$. By our
construction we also see%
\[
s\cdot H_{1}\left( s\right) <0\quad\forall~s\text{.}%
\]
Thus, $\phi_{1L}$ behaves like the choice function described by Odani. Note,
however, that the condition $\left( \ref{Odani Condition}\right) $ does not
hold for the system discussed in example $\ref{Ex Construction 1}$. In fact, here%
\[
\left\vert H_{1L}\left( f_{1L}\left( s\right) \right) \right\vert
\leq\left\vert f_{1L}\left( s\right) \right\vert ,\quad s\in\left[
a_{0},L_{0}\right] \text{.}%
\]
However, the conditions $($viz. $\bar{\alpha}_{i}<L_{i}$, etc.$)$ of theorem
$\ref{Th n Limit Cycle}$ are satisfied, ensuring the existence of exactly two
limit cycles. This shows that theorem $\ref{Th n Limit Cycle}$ and the
construction presented above cover a larger class of functions $F$ than that
covered in $\cite{Odani N}$. The equality in $\left( \ref{Odani Condition}%
\right) $ occurs in example $\ref{Ex Construction 1}$ only at the point
$s=a_{0}=0$. However, the equality can occur at points where $s\neq a_{0}$. We
present example $\ref{Ex Construction 2}$ below to show this kind of behaviour.
\end{remark}
\begin{remark}
The function $f_{2}$ in example $\ref{Ex Construction 1}$ is obtained from
$f_{1}$ by reflection and translation along the $x$-axis. However, it is clear
from the construction of section $\ref{Construction}$ that there is plenty of
freedom in the possible extensions of $f_{1}$ having a fixed number of limit
cycles, as illustrated in examples $\ref{Ex Construction 2}$ and
$\ref{Ex Construction 3}$. In these examples we consider more general
transformations so that the limit cycles are obtained having amplitudes close
to those expected from the given physical $($dynamical$)$ problem.
\end{remark}
\begin{example}
\label{Ex Construction 2}In this example our target is to construct a system
in which%
\[
F_{+}\left( x\right) =\left\{
\begin{array}
[c]{l}%
0.055518-0.08\sqrt{1-\dfrac{\left( x-0.144\right) ^{2}}{0.04}},\quad0\leq
x\leq0.144\\
0.148506-0.172988\sqrt{1-\dfrac{\left( x-0.144\right) ^{2}}{\left(
0.206686\right) ^{2}}},\quad0.144<x\leq0.34\\
0.0910146+0.0209854\sqrt{1-\dfrac{\left( x-0.407\right) ^{2}}{\left(
0.06751554\right) ^{2}}},\quad0.34<x\leq0.407\\
-0.2280727+0.340073\sqrt{1-\dfrac{\left( x-0.407\right) ^{2}}{\left(
0.125376\right) ^{2}}},\quad0.407<x\leq0.5\\
-3.0000372\left( x-0.5\right) ,\quad x>0.5\text{.}%
\end{array}
\right.
\]
and%
\[
F\left( x\right) =\left\{
\begin{array}
[c]{c}%
F_{+}\left( x\right) ,\quad x\geq0\\
F_{-}\left( x\right) ,\quad x<0
\end{array}
\right.
\]
Here, $a_{1}=0.25$, $a_{2}=0.5$, $L_{0}=0.144$ and $L_{1}=0.407$. It is easy
to show that $\phi_{1L}\left( s\right) =\sqrt{4.974392361\cdot s^{2}+0.0625}$
and $\phi_{1R}\left( s\right) =\sqrt{2.019706\cdot s^{2}+0.12376838}$ $($in
particular, $\phi_{1L}\left( a_{0}\right) =\sqrt{0.0625}=0.25=a_{1}$ and
$\phi_{1R}\left( a_{1}\right) =0.5=a_{2})$. Here,%
\begin{align*}
f_{1}\left( x\right) & =\left\{
\begin{array}
[c]{l}%
0.055518-0.08\sqrt{1-\dfrac{\left( x-0.144\right) ^{2}}{0.04}},\quad0\leq
x\leq0.144\\
0.148506-0.172988\sqrt{1-\dfrac{\left( x-0.144\right) ^{2}}{\left(
0.206686\right) ^{2}}},\quad0.144<x\leq0.25
\end{array}
\right. \\
f_{2}\left( x\right) & =\left\{
\begin{array}
[c]{l}%
0.148506-0.172988\sqrt{1-\dfrac{\left( x-0.144\right) ^{2}}{\left(
0.206686\right) ^{2}}},\quad0.25<x\leq0.34\\
0.0910146+0.0209854\sqrt{1-\dfrac{\left( x-0.407\right) ^{2}}{\left(
0.06751554\right) ^{2}}},\quad0.34<x\leq0.407\\
-0.2280727+0.340073\sqrt{1-\dfrac{\left( x-0.407\right) ^{2}}{\left(
0.125376\right) ^{2}}},\quad0.407<x\leq0.5
\end{array}
\right.
\end{align*}
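The quoted coefficients of $\phi_{1L}$ and $\phi_{1R}$ (and, incidentally, the
value $a_{1}=0.25$) follow from a two-line linear solve; the sketch below
(ours, in Python) imposes the endpoint conditions $\phi\left( s_{0}\right)
=x_{0}$ and $\phi\left( s_{1}\right) =x_{1}$ on the ansatz $\phi\left(
s\right) =\sqrt{As^{2}+B}$.
\begin{verbatim}
import numpy as np

def phi_coeffs(s0, s1, x0, x1):
    # A, B such that sqrt(A s^2 + B) maps s0 -> x0 and s1 -> x1
    M = np.array([[s0**2, 1.0], [s1**2, 1.0]])
    return np.linalg.solve(M, np.array([x0**2, x1**2]))

print(phi_coeffs(0.0, 0.144, 0.25, 0.407))   # phi_1L: [4.974392..., 0.0625]
print(phi_coeffs(0.144, 0.25, 0.407, 0.5))   # phi_1R: [2.019706..., 0.123768...]
\end{verbatim}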
The second part of the condition $\left( C2\right) $ in $\cite{Odani N}$,
i.e., the condition $\left( \ref{Odani Condition}\right) $, does not hold. In
fact,
\begin{align*}
\left\vert H_{1L}\left( f_{1L}\left( s\right) \right) \right\vert &
<\left\vert f_{1L}\left( s\right) \right\vert \text{ in }\left(
0,0.05290111\right) \\
\text{and }\left\vert H_{1L}\left( f_{1L}\left( s\right) \right)
\right\vert & >\left\vert f_{1L}\left( s\right) \right\vert \text{ in
}\left( 0.05290111,0.144\right) \text{.}%
\end{align*}
The equality occurs at $s=0$ and $s=0.05290111$. Here, we get two limit cycles
crossing the $+ve$ $y$-axis at the points $\left( 0,0.29039755\right) $ and
$\left( 0,0.567249\right) $, respectively, so that $y_{+}\left( 0\right)
=y_{-}\left( 0\right) =0.29039755$ and $\bar{\alpha}_{1}=0.2892792083$.
Consequently, $\bar{\alpha}_{1}\leq L_{1}$ and the other conditions of theorem
$\ref{Th n Limit Cycle}$ with $g\left( x\right) =x$ are satisfied, justifying
the existence of exactly two limit cycles. These two limit cycles, along with
the curve of $F\left( x\right) $, are shown in figure
$\ref{Ex Construction 2 Fig}$.\begin{figure}[h]
\begin{center}
\includegraphics[height=6cm]{Figure0304}
\end{center}
\caption{Limit cycles for the system in Example \ref{Ex Construction 2} with
the curve of $F\left( x\right) .$}%
\label{Ex Construction 2 Fig}%
\end{figure}
\end{example}
\begin{example}
\label{Ex Construction 3}We now consider an example involving three limit
cycles by taking $a_{1}=0.1$, $a_{2}=0.2$, $a_{3}=0.4$ and%
\[
f_{1}\left( x\right) =0.04422166-0.08\sqrt{1-\frac{\left( x-0.05\right)
^{2}}{\left( 0.06\right) ^{2}}},\quad0\leq x\leq0.1\text{,}%
\]
with $f_{1}$ continued to $\left[ -0.1,0\right] $ as an odd function. Here
$L_{0}=0.05$ and let $L_{1}=0.15$, $L_{2}=0.3$. We take%
\begin{align*}
f_{2L}^{\ast}\left( x\right) & =-0.04422166+0.08\sqrt{1-\frac{\left(
x-0.15\right) ^{2}}{\left( 0.06\right) ^{2}}}\\
f_{2R}^{\ast}\left( x\right) & =-0.04422166+0.08\sqrt{1-\frac{\left(
x-0.15\right) ^{2}}{\left( 0.06\right) ^{2}}}\text{.}%
\end{align*}
It is easy to construct%
\begin{align*}
\phi_{1L}\left( s\right) & =\sqrt{5s^{2}+0.01},\quad s\in\left[
a_{0},L_{0}\right] \\
\phi_{1R}\left( s\right) & =\sqrt{\frac{7}{3}s^{2}+\frac{5}{300}},\quad
s\in\left[ L_{0},a_{1}\right] \text{.}%
\end{align*}
Next, we take%
\begin{align*}
f_{3L}^{\ast}\left( x\right) & =0.0043819183-0.03\sqrt{1-\frac{\left(
x-0.3\right) ^{2}}{\left( 0.101084111\right) ^{2}}}\\
f_{3R}^{\ast}\left( x\right) & =0.0043819183-0.03\sqrt{1-\frac{\left(
x-0.3\right) ^{2}}{\left( 0.101084111\right) ^{2}}}\text{.}%
\end{align*}
We can similarly construct%
\begin{align*}
\phi_{2L}\left( s\right) & =\sqrt{4s^{2}}=2s,\quad s\in\left[ a_{1}%
,L_{1}\right] \\
\phi_{2R}\left( s\right) & =2s,\quad s\in\left[ L_{1},a_{2}\right]
\end{align*}
so that $\phi_{2L}\left( a_{1}\right) =a_{2}$, $\phi_{2L}\left(
L_{1}\right) =L_{2}$, $\phi_{2R}\left( L_{1}\right) =L_{2}$ and $\phi
_{2R}\left( a_{2}\right) =a_{3}$. We define%
\[
F_{+}\left( x\right) =\left\{
\begin{array}
[c]{l}%
0.04422166-0.08\sqrt{1-\dfrac{\left( x-0.05\right) ^{2}}{\left(
0.06\right) ^{2}}},\quad0\leq x<0.1\\
-0.04422166+0.08\sqrt{1-\dfrac{\left( x-0.15\right) ^{2}}{\left(
0.06\right) ^{2}}},\quad0.1\leq x<0.2\\
0.0043819183-0.03\sqrt{1-\dfrac{\left( x-0.3\right) ^{2}}{\left(
0.101084111\right) ^{2}}},\quad0.2\leq x<0.4\\
2.0100758\left( x-0.4\right) ,\quad0.4\leq x\text{.}%
\end{array}
\right.
\]%
\[
F\left( x\right) =\left\{
\begin{array}
[c]{c}%
F_{+}\left( x\right) ,\quad x\geq0\\
F_{-}\left( x\right) ,\quad x<0
\end{array}
\right.
\]
to make $F$ continuously differentiable. We can easily calculate that
$\bar{\alpha}_{1}=0.12418214965$ and $\bar{\alpha}_{2}=0.2354818163$, and
consequently $\bar{\alpha}_{i}<L_{i}$ for $i=1,2$. All the other conditions of
theorem $\ref{Th n Limit Cycle}$ with $g\left( x\right) =x$ are satisfied,
and hence we get three distinct limit cycles, shown in figure
$\ref{Ex Construction 3 Fig}$ along with the curve of $F\left( x\right) $
defined above.\begin{figure}[h]
\begin{center}
\includegraphics[height=6cm]{Figure0305}
\end{center}
\caption{Three limit cycles for the system in Example \ref{Ex Construction 3}
with the curve of $F\left( x\right) .$}%
\label{Ex Construction 3 Fig}%
\end{figure}
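The continuous differentiability of $F$ claimed above can also be checked
numerically at the matching points $x=0.1,0.2,0.4$. The sketch below (ours, in
Python) estimates the jump of $F_{+}$ and the mismatch of its one-sided slopes
by finite differences; both come out at the level of the discretisation error.
\begin{verbatim}
import numpy as np

def Fp(x):   # F_+ of this example
    if x < 0.1:
        return 0.04422166 - 0.08*np.sqrt(1 - (x - 0.05)**2/0.06**2)
    if x < 0.2:
        return -0.04422166 + 0.08*np.sqrt(1 - (x - 0.15)**2/0.06**2)
    if x < 0.4:
        return 0.0043819183 - 0.03*np.sqrt(1 - (x - 0.3)**2/0.101084111**2)
    return 2.0100758*(x - 0.4)

eps = 1e-7
for b in (0.1, 0.2, 0.4):
    jump = Fp(b + eps) - Fp(b - eps)              # continuity of F_+
    slope_r = (Fp(b + 2*eps) - Fp(b + eps))/eps   # right-hand slope
    slope_l = (Fp(b - eps) - Fp(b - 2*eps))/eps   # left-hand slope
    print(b, jump, slope_r - slope_l)             # both ~ 0
\end{verbatim}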
\end{example}
\begin{remark}
Here the function $F$ is defined in such a manner that $\left\vert F\left(
L_{0}\right) \right\vert >\left\vert F\left( L_{2}\right) \right\vert $,
implying that the $\beta_{2}$ mentioned in Theorem 3 of
$\cite{Chen Llibre Zhang}$, or in theorem 7.12, chapter 4 of the book
$\cite{Zhing Tongren Wenzao}$, does not exist; hence these theorems are not
applicable to the corresponding Lienard system.
\end{remark}
\section{Introduction} \label{Sec1}
\subsection{Motivation and definitions}
Correlation functions of characteristic polynomials (CFCP) appear in various fields of mathematical and theoretical physics. (i) In quantum chaology, CFCP (i.a) provide a convenient way to describe the universal features of spectral statistics of a particle confined in a finite system exhibiting chaotic classical dynamics (Bohigas, Giannoni and Schmit 1984; Andreev, Agam, Simons and Altshuler 1996; M\"uller, Heusler, Braun, Haake and Altland 2004) and (i.b) facilitate calculations of a variety of important distribution functions whose generating functions may often be expressed in terms of CFCP (see, e.g., Andreev and Simons 1995). (ii) In the random matrix theory approach to quantum chromodynamics, CFCP allow to probe various QCD partition functions (see, e.g., Verbaarschot 2010). (iii) In the number theory, CFCP have been successfully used to model behaviour of the Riemann zeta function along the critical line (Keating and Snaith 2000a, 2000b; Hughes, Keating and O'Connell 2000). (iv) Recently, CFCP surfaced in the studies of random energy landscapes (Fyodorov 2004). (v) For the r\^ole played by CFCP in the algebraic geometry, the reader is referred to the paper by Br\'ezin and Hikami (2008) and references therein.
In what follows, we adopt a formal setup which turns an $n\times n$ Hermitian matrix ${\boldsymbol{\cal H}}={\boldsymbol{\cal H}^\dagger}$ into a central object of our study. For a {\it fixed} matrix ${\boldsymbol{\cal H}}$, the characteristic polynomial ${\rm det}_n(\varsigma-{\boldsymbol{\cal H}})$ contains complete information about the matrix spectrum. To study the {\it statistics} of spectral fluctuations in an {\it ensemble} of random matrices, it is convenient to introduce the correlation function $\Pi_{n|p} ({\boldsymbol \varsigma}; {\boldsymbol \kappa})$ of characteristic polynomials
\begin{eqnarray}
\label{def-cf}
\Pi_{n|p} ({\boldsymbol \varsigma}; {\boldsymbol \kappa}) =
\left<
\prod_{\alpha=1}^{p} {\rm det}_n^{\kappa_\alpha}(\varsigma_\alpha-{\boldsymbol{\cal H}})
\right>_{\boldsymbol {\cal H}}.
\end{eqnarray}
Here, the vectors ${\boldsymbol \varsigma} = (\varsigma_1,\cdots,\varsigma_p)$ and ${\boldsymbol \kappa}=(\kappa_1,\cdots,\kappa_p)$ accommodate the energy and the ``replica'' parameters, respectively. The angular brackets $\left< f({\boldsymbol {\cal H}}) \right>_{\boldsymbol {\cal H}}$ stand for the ensemble average
\begin{eqnarray}
\label{def}
\left< f({\boldsymbol {\cal H}}) \right>_{\boldsymbol {\cal H}}
= \int d\mu_n({\boldsymbol {\cal H}}) \, f(\boldsymbol {\cal H})
\end{eqnarray}
with respect to a proper probability measure
\begin{eqnarray}
d\mu_n({\boldsymbol {\cal H}})&= P_n({\boldsymbol {\cal H}})\,({\cal D}_n {\boldsymbol {\cal H}}), \\
({\cal D}_n {\boldsymbol {\cal H}}) &=
\prod_{j=1}^n d{\cal H}_{jj} \,\prod_{j<k}^n d{\rm Re}{\cal H}_{jk}
\, d{\rm Im}{\cal H}_{jk}
\end{eqnarray}
normalised to unity. Throughout the paper, the probability density
function $P_n({\boldsymbol {\cal H}})$ is assumed to follow the
trace-like law
\begin{eqnarray}
P_n({\boldsymbol {\cal H}}) = {\cal C}_n^{-1}
\exp\left[-{\rm tr}_n\, V({\boldsymbol {\cal H}})
\right]
\end{eqnarray}
with $V({\boldsymbol {\cal H}})$ to be referred to as the
confinement potential.
There exist two canonical ways to relate the spectral statistics of
${\boldsymbol {\cal H}}$ encoded into the average $p$-point Green
function
\begin{eqnarray}
G_{n|p}({\boldsymbol \varsigma}) = \left<
\prod_{\alpha=1}^p {\rm tr}_n\left( \varsigma_\alpha - {\boldsymbol{\cal H}}
\right)^{-1}
\right>_{\boldsymbol{\cal H}}
\end{eqnarray}
to the correlation function $\Pi_{n|p}({\boldsymbol \varsigma};
{\boldsymbol \kappa})$ of characteristic polynomials.
\newline
\begin{itemize}
\item The supersymmetry-like prescription (Efetov 1983, Verbaarschot, Weidenm\"uller and Zirnbauer 1985, Guhr 1991),
\begin{eqnarray} \label{susy-r}
G_{n|p}({\boldsymbol \varsigma}) =
\left( \prod_{\alpha=1}^p
\lim_{\varsigma_\alpha^\prime\rightarrow \varsigma_\alpha} \frac{\partial}{\partial \varsigma_\alpha}
\right) \, \Pi^{\rm{(susy)}}_{n|p+p}({\boldsymbol \varsigma},{\boldsymbol \varsigma}^\prime),
\end{eqnarray}
makes use of the correlation function
\begin{eqnarray}
\Pi^{\rm{(susy)}}_{n|q+q^\prime}({\boldsymbol \varsigma},{\boldsymbol \varsigma}^\prime)
=\left<
\prod_{\alpha=1}^{q} {\rm det}_n(\varsigma_\alpha-{\boldsymbol{\cal H}})
\prod_{\beta=1}^{q^\prime} {\rm det}_n^{-1}(\varsigma_\beta^\prime-{\boldsymbol{\cal H}})
\right>_{\boldsymbol {\cal H}}
\end{eqnarray}
obtainable from $\Pi_{n|q+q^\prime}({\boldsymbol
\varsigma},{\boldsymbol \varsigma}^\prime; {\boldsymbol
\kappa},{\boldsymbol \kappa}^\prime)$ by setting the replica
parameters ${\boldsymbol \kappa}$ and ${\boldsymbol
\kappa}^\prime$ to the {\it integers} $\pm 1$.
\newline
\item On the contrary, the replica-like prescription (Hardy, Littlewood and P\'olya 1934, Edwards and Anderson 1975),
\begin{eqnarray}
\label{RL}
G_{n|p}({\boldsymbol \varsigma}) = \left( \prod_{\alpha=1}^p
\lim_{\kappa_\alpha\rightarrow 0} \kappa_\alpha^{-1} \frac{\partial}{\partial \varsigma_\alpha}
\right) \, \Pi_{n|p}({\boldsymbol \varsigma}; {\boldsymbol \kappa}),
\end{eqnarray}
entirely relies on the behaviour of the correlation function
$\Pi_{n|p}({\boldsymbol \varsigma}; {\boldsymbol \kappa})$ for
{\it real-valued} replica parameters, ${\boldsymbol \kappa} \in
{\mathbb R}^p$, as suggested by the limiting procedure in Eq.
(\ref{RL}). In this case, the notation $\Pi_{n|p} ({\boldsymbol \varsigma}; {\boldsymbol \kappa})$ should be understood as the principal value of the r.h.s. in Eq.~(\ref{def-cf}). Existence of the CFCP is guaranteed by a proper choice of imaginary parts of $\boldsymbol \varsigma$; a minimal numerical illustration of this limiting procedure is sketched right after this list.
\newline
\end{itemize}
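To illustrate the limiting procedure of Eq.~(\ref{RL}) in the simplest
possible setting (our sketch, not part of the formalism: $n=1$, $p=1$, a
Gaussian weight, Monte Carlo averaging), one may check numerically that
$\kappa^{-1}\partial_{\varsigma}\,\Pi_{1|1}(\varsigma;\kappa)$ approaches
$G_{1|1}(\varsigma)$ as the {\it real-valued} replica parameter $\kappa$ is
sent to zero:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)
H = rng.standard_normal(400000)         # n = 1 "matrix": a Gaussian scalar
s, kappa, h = 0.7 + 0.3j, 1e-4, 1e-5    # Im(s) > 0 keeps the averages finite

Pi = lambda z: np.mean((z - H)**kappa)  # Pi_{1|1}(s; kappa), Monte Carlo
replica = (Pi(s + h) - Pi(s - h))/(2*h*kappa)   # kappa^{-1} d/ds, Eq. (RL)
direct = np.mean(1.0/(s - H))           # G_{1|1}(s) averaged directly
print(replica, direct)                  # the two agree as kappa -> 0
\end{verbatim}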
A nonperturbative calculation of the correlation function
$\Pi_{n|p}({\boldsymbol \varsigma}; {\boldsymbol \kappa})$ of
characteristic polynomials is a nontrivial problem. So far, the solutions reported by several groups have always reduced $\Pi_{n|p}({\boldsymbol \varsigma}; {\boldsymbol \kappa})$ to a {\it determinant form}. Its simplest -- Hankel determinant -- version follows from the eigenvalue representation \footnote{See Eq.~(\ref{rpf-1}) and a brief discussion around it.} of Eq.~(\ref{def-cf}) by virtue of the Andr\'eief--de Bruijn formula [Eq.~(\ref{adB}) below]
\begin{eqnarray}
\label{han-det}\fl \qquad
\Pi_{n|p}({\boldsymbol \varsigma}; {\boldsymbol \kappa}) = n!\,\frac{{\cal V}_n}{{\cal C}_n}\,
{\rm det}_n \left[
\int_{\mathbb R} d\lambda \, \lambda^{j+k} e^{-V(\lambda)} \prod_{\alpha=1}^p (\varsigma_\alpha -\lambda)^{\kappa_\alpha}
\right]_{0 \le j,k \le n-1}.
\end{eqnarray}
Here, ${\mathcal V}_n$ denotes a volume of the unitary group ${\boldsymbol {\cal U}}(n)$ as defined by Eq.~(\ref{un-vol}). Unfortunately, the Hankel determinant Eq.~(\ref{han-det}) is
difficult to handle in the physically interesting thermodynamic limit: finding its asymptotics in the domain $n\gg 1$ remains to a large extent an open problem (Basor, Chen and Widom 2001, Garoni 2005, Krasovsky 2007, Its and Krasovsky 2008) especially as the integral in Eq.~(\ref{han-det}) has unbounded support.
For ${\boldsymbol \kappa}$ {\it integers}, ${\boldsymbol \kappa} \in {\mathbb Z}^p$, so-called duality relations (see, e.g., Br\'ezin and Hikami 2000, Mehta and Normand 2001, Desrosiers 2009 and references therein) make it possible to identify a more convenient determinant representation of $\Pi_{n|p}({\boldsymbol \varsigma}; {\boldsymbol \kappa})$: Apart from being expressed through a determinant of a reduced size (see below), such an alternative representation of CFCP displays an explicit $n$-dependence hereby making an asymptotic large-$n$ analysis more viable. For instance, the
correlation function
\begin{eqnarray} \label{cfg} \fl
\Pi_{n|q+q^\prime}({\boldsymbol \varsigma},{\boldsymbol \varsigma}^\prime;{\boldsymbol m}, {\boldsymbol m}^\prime)
=\left<
\prod_{\alpha=1}^{q} {\rm det}_n^{m_\alpha}(\varsigma_\alpha-{\boldsymbol{\cal H}})
\prod_{\beta=1}^{q^\prime} {\rm det}_n^{-m^\prime_\beta}(\varsigma_\beta^\prime-{\boldsymbol{\cal H}})
\right>_{\boldsymbol {\cal H}}
\end{eqnarray}
with ${\boldsymbol m} \in {\mathbb Z}_+^q$ and ${\boldsymbol
m^\prime} \in {\mathbb Z}_+^{q^\prime}$ can be deduced
from the result \footnote[1]{See also much earlier works by Uvarov (1959, 1969). Alternative representations for
$\Pi^{\rm{(susy)}}_{n|q+q^\prime}({\boldsymbol
\varsigma},{\boldsymbol \varsigma}^\prime)$ have been obtained by
Strahov and Fyodorov (2003), Baik, Deift and Strahov (2003),
Borodin and Strahov (2005), Borodin, Olshanski and Strahov (2006),
and Guhr (2006).} by Fyodorov and Strahov (2003)
\begin{eqnarray}\label{pink-det}\fl
\Pi^{\rm{(susy)}}_{n|q+q^\prime}({\boldsymbol \varsigma},{\boldsymbol \varsigma}^\prime)
=
\frac{ c_{n,q^\prime}}{\Delta_{q}({\boldsymbol \varsigma})\Delta_{q^\prime}({\boldsymbol \varsigma}^\prime)}
\, {\rm det}_{q+q^\prime}
\left[
\begin{array}{c}
\left[ h_{n-q^\prime+k}(\varsigma^\prime_j)\right]_{j=1,\cdots,q;\; k=0,\cdots,q+q^\prime-1} \\
\left[ \pi_{n-q^\prime+k}(\varsigma_j)\right]_{j=1,\cdots,q^\prime;\; k=0,\cdots,q+q^\prime-1} \\
\end{array}
\right] \label{fs}
\end{eqnarray}
by inducing a proper degeneracy of energy variables. The validity of this alternative representation (which still possesses a {\it determinant form}) is restricted to $q^\prime \le n$ (Baik, Deift and
Strahov 2003). Here,
\begin{eqnarray}
\Delta_q({\boldsymbol \varsigma}) = {\rm det}_q \left[\varsigma_\alpha^{\beta-1}
\right] = \prod_{\alpha<\beta}^q (\varsigma_\beta-\varsigma_\alpha)
\end{eqnarray}
is the Vandermonde determinant; the two sets of functions,
$\pi_k(\varsigma)$ and $h_k(\varsigma)$, are the average
characteristic polynomial
\begin{eqnarray}
\pi_k(\varsigma) = \left<
{\rm det}_k(\varsigma-{\boldsymbol{\cal H}})
\right>_{\boldsymbol {\cal H}}
\end{eqnarray}
and, up to a prefactor, the average {\it inverse} characteristic
polynomial \footnote[2]{Making use of the Heine formula (Heine 1878,
Szeg\"o 1939), it can be shown that $\pi_k(\varsigma)$ is a monic
polynomial orthogonal on ${\mathbb R}$ with respect to the measure
$d\tilde{\mu}(\varsigma)=\exp[-V(\varsigma)]\,d\varsigma$. The
function $h_k(\varsigma)$ is its Cauchy-Hilbert transform (see,
e.g., Fyodorov and Strahov 2003):
\begin{eqnarray}
h_k(\varsigma) =\frac{1}{2\pi \dot{\iota}} \int_{\mathbb R} \frac{d{\tilde \mu}(\varsigma^\prime)}{\varsigma^\prime-\varsigma}
\, \pi_k(\varsigma^\prime), \;\;\; {\rm Im\,} \varsigma \neq 0. \nonumber
\end{eqnarray}}
\begin{eqnarray}
h_{k-1}(\varsigma) = c_{k,1}\,
\left<
{\rm det}_k^{-1}(\varsigma-{\boldsymbol{\cal H}})
\right>_{\boldsymbol {\cal H}}.
\end{eqnarray}
Finally, the constant $c_{n,q^\prime}$ is
\begin{eqnarray}
c_{n,q^\prime} =
\frac{(2\pi)^{q^\prime}}{\dot{\iota}^{\lceil q^\prime/2\rceil - \lfloor q^\prime/2 \rfloor}}
\frac{n!}{(n-q^\prime)!}\frac{{\cal V}_n}{{\cal V}_{n-q^\prime}}\frac{{\cal N}_{n-q^\prime}}{{\cal N}_{n}},
\end{eqnarray}
where ${\cal V}_n$ is a volume of the unitary group ${\boldsymbol {\cal U}}(n)$,
\begin{eqnarray}
\label{un-vol}
{\cal V}_n = \frac{\pi^{n(n-1)/2}}{\prod_{j=1}^n j!}.
\end{eqnarray}
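As a quick numerical illustration of the one-point building block
$\pi_{n}(\varsigma)$ (our sketch, assuming the GUE weight
$V(\lambda)=\lambda^{2}/2$, for which the Heine formula identifies $\pi_{n}$
with the monic Hermite polynomial ${\rm He}_{n}$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)
n, m, x = 4, 200000, 1.3
acc = 0.0
for _ in range(m):
    A = rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))
    H = (A + A.conj().T)/2              # GUE with density ~ exp(-tr H^2/2)
    acc += np.linalg.det(x*np.eye(n) - H).real
print(acc/m)                            # Monte-Carlo <det(x - H)>
print(x**4 - 6*x**2 + 3)                # He_4(x) = -4.2839...
\end{verbatim}
The two printed numbers agree within the Monte-Carlo error.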
The result Eq.~(\ref{fs}) is quite surprising since it expresses the
higher-order spectral correlation functions $G_{n|p}({\boldsymbol
\varsigma})$ in terms of one-point averages (Gr\"onqvist, Guhr and Kohler 2004).
For ${\boldsymbol \kappa}$ {\it reals}, ${\boldsymbol \kappa} \in {\mathbb R}^p$, the duality relations are sadly unavailable; consequently, the determinant representation Eq.~(\ref{pink-det}) and determinant representations of the same ilk
(see, e.g., Strahov and Fyodorov 2003, Baik, Deift and Strahov 2003,
Borodin and Strahov 2005, Borodin, Olshanski and Strahov 2006,
and Guhr 2006) no longer exist.
{\it The natural question to ask is what structures come instead of determinants?} This question is the core issue of the present paper in which we develop a completely different way of treating of CFCP. Heavily influenced by a series of remarkable works by Adler, van Moerbeke and collaborators
(Adler, Shiota and van Moerbeke 1995, Adler and van Moerbeke 2001, and references therein), we make use of the ideas of integrability \footnote{For a review on integrability and matrix models, the reader is referred to Morozov (1994).} to develop an {\it integrable theory of CFCP} whose main outcome is an {\it implicit} characterisation of CFCP in terms of solutions to certain nonlinear differential equations.
As will be argued later, such a theory is of utmost importance for
advancing the idea of exact integrability of zero-dimensional
replica field theories (Kanzieper 2002, Splittorff and Verbaarschot
2003, Kanzieper 2005, Osipov and Kanzieper 2007, Kanzieper 2010). In fact, it is
this particular application of our integrable theory of CFCP that motivated the present study.
\subsection{Main results at a glance}
\label{Sec-1-2}
This work, consisting of two parts, puts both the ideology and the technology first. Consequently, its {\it main outcome} is not a single explicit formula (or a set of them) for the correlation function $\Pi_{n|p} ({\boldsymbol \varsigma}; {\boldsymbol \kappa})$ of characteristic polynomials but
\begin{itemize}
\item a {\it regular formalism} tailor-made for a nonperturbative description of $\Pi_{n|p} ({\boldsymbol \varsigma}; {\boldsymbol \kappa})$ considered at {\it real valued} replica parameters ${\boldsymbol \kappa} \in {\mathbb R}^p$, and
\item a {\it comparative analysis} of three alternative versions of the replica method (fermionic, bosonic, and supersymmetric) which sheds new light on the phenomenon of fermionic-bosonic factorisation \footnote{A quantum correlation function is said to possess the factorisation property if it can be expressed in terms of a single fermionic and a single bosonic partition function (Splittorff and Verbaarschot 2003, Splittorff and Verbaarschot 2004).} of quantum correlation functions.
\end{itemize}
\noindent
More specifically, in the first part of the paper (comprised of Sections \ref{Sec-2}, \ref{Sec-3} and \ref{Sec-4} written in a tutorial manner) we show that
the correlation function $\Pi_{n|p} ({\boldsymbol \varsigma}; {\boldsymbol \kappa})$ of characteristic polynomials satisfies an infinite set of {\it hierarchically structured nonlinear differential} relations. Although these hierarchical relations do not supply {\it explicit} (determinant) expressions for $\Pi_{n|p} ({\boldsymbol \varsigma}; {\boldsymbol \kappa})$ as predicted by classical theories (which routinely assume ${\boldsymbol \kappa} \in {\mathbb Z}^p$), they do provide an {\it implicit} nonperturbative characterisation of $\Pi_{n|p} ({\boldsymbol \varsigma}; {\boldsymbol \kappa})$ which turns out to be highly beneficial for an in-depth analysis of the mathematical foundations of zero-dimensional replica field theories arising in the random-matrix-theory context (Verbaarschot and Zirnbauer 1985).
Such an analysis is performed in the second part of the paper (Section \ref{Sec-5}) which turns the fermionic-bosonic factorisation of spectral correlation functions into its central motif. In brief, focussing on the finite-$N$ average density of eigenlevels in the paradigmatic Gaussian Unitary Ensemble (GUE), we have used the integrable theory of CFCP (developed in the first part of the paper) in conjunction with the Hamiltonian theory of Painlev\'e transcendents (Noumi 2004) to associate fictitious Hamiltonian systems ${H}_{\rm f}\left\{P(t),Q(t),t\right\}$ and $H_{\rm b}\left\{ P(t),Q(t),t\right\}$ with fermionic and bosonic replica field theories, respectively. Using this language, we demonstrate that a proper replica limit yields the average density of eigenlevels in an anticipated factorised form. Depending on the nature (fermionic or bosonic) of the replica limit, the compact and noncompact contributions can be assigned to a derivative of the canonical ``coordinate'' and canonical ``momentum'' of the corresponding Hamiltonian system. Hence, the appearance of a noncompact (bosonic) contribution in the fermionic replica limit is no longer a ``mystery'' (Splittorff and Verbaarschot 2003).
\subsection{Outline of the paper}
To help a physics oriented reader navigate through an involved integrable theory of CFCP, in Section \ref{Sec-2} we outline a general structure of the theory. Along with introducing the notation to be used throughout the paper, we list three major ingredients of the theory -- the $\tau$ function $\tau_n^{(s)} ({\boldsymbol \varsigma}, {\boldsymbol \kappa}; {\boldsymbol t})$ assigned to the correlation function $\Pi_{n|p} ({\boldsymbol \varsigma}; {\boldsymbol \kappa})$, the bilinear identity in an integral form, and the Virasoro constraints -- and further discuss an interrelation between them and the original correlation function $\Pi_{n|p} ({\boldsymbol \varsigma}; {\boldsymbol \kappa})$. Two integrable hierarchies playing a central r\^ole in our theory -- the Kadomtsev-Petviashvili and the Toda Lattice hierarchies for the $\tau$ function -- are presented in the so-called Hirota form. Both the Hirota derivative \footnote{The properties of Hirota differential operators are reviewed in Appendix \ref{App-hi}.} and Schur functions appearing in the above integrable hierarchies are defined.
Having explained a general structure of the theory, we start its detailed exposition in Section \ref{Sec-3}. In Section \ref{Sec-3-1}, a determinant structure of the $\tau$ function is established, and an associated matrix of moments and a symmetry-dictated scalar product are introduced. The bilinear identity in an integral form which governs the behaviour of the $\tau$ function is derived in Section \ref{Sec-3-2}. The bilinear identity in Hirota form is derived \footnote{An alternative derivation can be found in Appendix \ref{App-bi}.} in Section \ref{Sec-3-3}. In Section \ref{Sec-3-4}, the bilinear identity is ``deciphered'' to produce a zoo of bilinear integrable hierarchies satisfied by the $\tau$ function; their complete classification is given by Eqs.~(\ref{TL}) -- (\ref{346}). The two most important integrable hierarchies -- the Kadomtsev-Petviashvili (KP) and the Toda Lattice (TL) hierarchies -- are discussed in Section \ref{Sec-3-5}, where explicit formulae are given for the first two nontrivial members of the KP and TL hierarchies. Section \ref{Sec-3-6} contains a detailed derivation of the Virasoro constraints, the last ingredient of the integrable theory of characteristic polynomials.
Section \ref{Sec-4} shows how the properties of the $\tau$ function studied in Section \ref{Sec-3} can be translated to those of the correlation function $\Pi_{n|p} ({\boldsymbol \varsigma}; {\boldsymbol \kappa})$ of characteristic polynomials. This is done for Gaussian Unitary Ensemble (GUE) and Laguerre Unitary Ensemble (LUE) whose treatment is very detailed. Correlation functions for two more matrix models -- Jacobi Unitary Ensemble (JUE) and Cauchy Unitary Ensemble (CyUE) -- are addressed in the Appendices \ref{App-JUE} and \ref{App-CyUE}.
Finally, in Section \ref{Sec-5}, we apply the integrable theory of CFCP to a comparative analysis of three alternative formulations of the replica method, with a special emphasis placed on the phenomenon of fermionic-bosonic factorisation of spectral correlation functions; some technical calculations involving functions of parabolic cylinder are collected in Appendix \ref{App-D-int}. To make the paper self-sufficient, we have included Appendix \ref{App-chazy} containing an overview of very basic facts on the six Painlev\'e transcendents and a closely related differential equation belonging to the Chazy I class.
The conclusions are presented in Section \ref{Sec-6}.
\section{Structure of the Theory}
\label{Sec-2}
The correlation function
\begin{eqnarray}\fl
\label{rpf-1}
\Pi_{n|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa}) =\frac{1}{{\cal N}_n} \int_{{\cal D}^{n}} \prod_{j=1}^{n}
\left( d\lambda_j \, e^{-V_n(\lambda_j)}\prod_{\alpha=1}^p (\varsigma_\alpha - \lambda_j)^{\kappa_\alpha} \right)
\cdot
\Delta_{n}^2({\boldsymbol \lambda})
\end{eqnarray}
to be considered in this section can be viewed as a natural
extension of its primary definition Eq.~(\ref{def-cf}). Written in
the eigenvalue representation (induced by the unitary rotation ${\boldsymbol
{\cal H}}={\boldsymbol {\cal U}}^\dagger {\boldsymbol \Lambda} {\boldsymbol {\cal U}}$ such
that ${\boldsymbol {\cal U}}\in{\boldsymbol {\cal U}}(n)$ and ${\boldsymbol \Lambda}={\rm
diag}(\lambda_1,\cdots,\lambda_n)$) it accommodates an
{\it $n$-dependent confinement potential} \footnote{Matrix integrals with $n$-dependent weights are known to appear in the bosonic formulations of replica field theories, see Osipov and Kanzieper (2007). This was the motivation behind our choice of the definition Eq.~(\ref{rpf-1}).} $V_n(\lambda)$ and also allows
for a generic eigenvalue integration domain \footnote[1]{In
applications to be considered in Section \ref{Sec-5}, the integration domain
${\mathcal D}$ will be set to ${\mathcal D}=[-1,+1]$ for (compact)
fermionic replicas, and to ${\mathcal D}=[0,+\infty)$ for
(noncompact) bosonic replicas. A more general setting Eq.
(\ref{D-dom}) does not complicate
the theory we present.
}
\begin{eqnarray}
\label{D-dom}
{\cal D} = \bigcup_{j=1}^r \left[ c_{2j-1}, c_{2j} \right],\;\;\; c_1 < \cdots < c_{2r}.
\end{eqnarray}
The normalisation constant ${\mathcal N}_n$ is
\begin{eqnarray}\label{norm}
{\mathcal N}_{n} = \int_{{\cal D}^{n}} \prod_{j=1}^{n}
d\lambda_j \, e^{-V_n(\lambda_j)} \cdot
\Delta_{n}^2({\boldsymbol \lambda}).
\end{eqnarray}
While for ${\boldsymbol \kappa}\in {\mathbb Z}_\pm^p$, the correlation
function $\Pi_{n|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa})$ can readily be
calculated by utilising the formalism due to Fyodorov and Strahov
(2003), there seems to be no simple extension of their method to
${\boldsymbol \kappa}\in {\mathbb R}^p$. It is this latter domain that will
be covered by our theory.
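Although no closed determinant formula survives for real ${\boldsymbol
\kappa}$, Eq.~(\ref{rpf-1}) remains perfectly computable for small $n$. The
sketch below (ours, in Python; $n=2$, $p=1$, a Gaussian confinement
$V(\lambda)=\lambda^{2}/2$ and ${\rm Im\,}\varsigma>0$ are assumed) evaluates
$\Pi_{2|1}(\varsigma;\kappa)$ at $\kappa=1/2$ both by direct two-fold
quadrature and through the $2\times 2$ Hankel determinant of moments of
Eq.~(\ref{han-det}); the overall prefactors cancel in the ratio of the
deformed and undeformed determinants.
\begin{verbatim}
import numpy as np

lam = np.linspace(-8.0, 8.0, 4001)
w = np.exp(-lam**2/2)                    # e^{-V}, Gaussian V assumed
kappa, s = 0.5, 1.0 + 0.5j               # real replica parameter, Im(s) > 0
f = w*(s - lam)**kappa                   # weight deformed by the char. poly.

# direct two-fold quadrature of Eq. (rpf-1) for n = 2, p = 1
V2 = (lam[None, :] - lam[:, None])**2    # squared Vandermonde determinant
num = np.trapz(np.trapz(np.outer(f, f)*V2, lam), lam)
den = np.trapz(np.trapz(np.outer(w, w)*V2, lam), lam)
print(num/den)

# the same number from 2x2 Hankel determinants (prefactors cancel in ratio)
mom = lambda g, k: np.trapz(lam**k*g, lam)
hank = lambda g: np.array([[mom(g, j + k) for k in (0, 1)] for j in (0, 1)])
print(np.linalg.det(hank(f))/np.linalg.det(hank(w)))
\end{verbatim}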
Contrary to the existing approaches which represent the CFCP {\it explicitly} in a determinant form
(akin to Eq.~(\ref{fs})), our formalism does not yield any closed
expression for the correlation function $\Pi_{n|p}({\boldsymbol
{\varsigma}};{\boldsymbol \kappa})$. Instead, it describes $\Pi_{n|p}({\boldsymbol
{\varsigma}};{\boldsymbol \kappa})$ {\it implicitly} in terms of a solution
to a nonlinear (partial) differential equation which -- along with an infinite
set of nonlinear (partial) differential hierarchies satisfied by
$\Pi_{n|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa})$ -- can be generated in a
regular way starting with Eq.~(\ref{rpf-1}). Let us stress that a
lack of explicitness is by no means a weak point of our theory: the
representations emerging from it save the day when a replica limit
is implemented in Eq.~(\ref{RL}).
Before plunging into the technicalities of the integrable theory of
CFCP, we wish to outline its general
structure.
\newline\newline\noindent
{\it Deformation.}---To determine the correlation function
$\Pi_{n|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa})$ nonperturbatively, we
adopt the ``deform-and-study'' approach, a standard string theory
method of revealing hidden structures. Its main idea consists in
``embedding'' $\Pi_{n|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa})$ into a
more general theory of the $\tau$ function
\begin{eqnarray}
\label{tau-f}
\tau_{n}^{(s)}({\boldsymbol \varsigma},{\boldsymbol \kappa}; {\boldsymbol t}) = \frac{1}{n!}
\int_{{\mathcal D}^{n}} \prod_{j=1}^{n}
\left(
d\lambda_j\, \Gamma_{n-s}({\boldsymbol \varsigma},{\boldsymbol \kappa};\lambda_j)\,
e^{v({\boldsymbol t};\lambda_j)}\right) \cdot
\Delta_{n}^2({\boldsymbol \lambda})
\end{eqnarray}
which possesses an infinite-dimensional parameter space $(s;{\boldsymbol
t})=(s;t_1, t_2,\cdots)$ arising as the result of the
$(s;{\boldsymbol t})$-deformation of the weight function
\begin{eqnarray}
\label{Gamma-n}
\Gamma_n({\boldsymbol \varsigma},{\boldsymbol \kappa};\lambda) =
e^{-V_{n}(\lambda)}\prod_{\alpha=1}^p (\varsigma_\alpha - \lambda)^{\kappa_\alpha}
\end{eqnarray}
appearing in the original definition Eq.~(\ref{rpf-1}). The
parameter $s$ is assumed to be an integer, $s\in {\mathbb Z}$, and
$v({\boldsymbol t};\lambda)$ is defined as an infinite series
\begin{eqnarray}
\label{vt-def}
v({\boldsymbol t};\lambda) = \sum_{k=1}^\infty t_k\, \lambda^k,\;\;\;
{\boldsymbol t} = (t_1,t_2,\cdots).
\end{eqnarray}
Notice that a somewhat unusual $(s\in {\mathbb Z})$-deformation of
$\Gamma_n({\boldsymbol \varsigma}, {\boldsymbol \kappa};\lambda)$ is needed to
account for the $n$-dependent confinement potential $V_n(\lambda)$
in Eq.~(\ref{rpf-1}).
\newline\newline\noindent
{\it Bilinear identity and integrable hierarchies.}---Having
embedded the correlation function $\Pi_{n|p}({\boldsymbol \varsigma};{\boldsymbol
\kappa})$ into a set of $\tau$ functions $\tau_n^{(s)}({\boldsymbol \varsigma},{\boldsymbol
\kappa}; {\boldsymbol t})$, one studies the evolution of $\tau$ functions in
the extended parameter space $(n,s,{\boldsymbol t})$ in order to identify
nontrivial nonlinear differential hierarchical relations between
them. It turns out that an infinite set of hierarchically structured
nonlinear differential equations in the variables ${\boldsymbol
t}=(t_1,t_2,\cdots)$ can be encoded into a single {\it bilinear
identity}
\begin{eqnarray} \label{bi-id} \fl
\oint_{{\cal C}_\infty} dz\,
e^{(a-1)v({\boldsymbol t}-{\boldsymbol t}^\prime;z)} \, \tau_m^{(s)}({\boldsymbol t^\prime}-[\boldsymbol{z}^{-1}])
\frac{\tau_{\ell+1}^{(\ell+1+s-m)}({\boldsymbol t}+[\boldsymbol{z}^{-1}])}{z^{\ell+1-m}} \nonumber \\
=
\nonumber \\
\oint_{{\cal C}_\infty} dz\,
e^{a\,v({\boldsymbol t}-{\boldsymbol t}^\prime;z)}\, \tau_\ell^{(\ell+s-m)} ({\boldsymbol t} - [\boldsymbol{z}^{-1}])
\frac{\tau_{m+1}^{(s+1)}({\boldsymbol t}^\prime + [\boldsymbol{z}^{-1}])}{z^{m+1-\ell}},
\end{eqnarray}
where the integration contour ${\cal C}_\infty$ encompasses the
point $z=\infty$. Here, $a\in {\mathbb R}$ is a free parameter; the
notation ${\boldsymbol t} \pm [{\boldsymbol z}^{-1}]$ stands for the infinite set of
parameters $\{t_k\pm z^{-k}/k\}_{k\in{\mathbb Z}_+}$; for brevity,
the physical parameters ${\boldsymbol \varsigma}$ and ${\boldsymbol \kappa}$ were
dropped from the arguments of $\tau$ functions.
To the best of our knowledge, this is the most general form of the bilinear identity
that has ever appeared in the literature for Hermitean matrix models: not only does it account for the $n$-dependent probability measure (``confinement potential'') but it also
generates, in a unified way, a whole zoo of integrable hierarchies satisfied by the $\tau$ function Eq.~(\ref{tau-f}). The latter was made possible by the daedal introduction of the free parameter $a$ in Eq.~(\ref{bi-id}), prompted by the study by Tu, Shaw and Yen (1996).
The bilinear
identity generates various integrable hierarchies in the $(n,s,{\boldsymbol
t})$ space. The {\it Kadomtsev-Petviashvili (KP) hierarchy},
\begin{equation}
\label{kph} \fl \qquad
\frac{1}{2}\,D_1 D_k\, \tau_n^{(s)}(\boldsymbol{t})
\circ \tau_n^{(s)}(\boldsymbol{t}) = s_{k+1}([\boldsymbol{D}]) \, \tau_n^{(s)}(\boldsymbol{t})
\circ \tau_n^{(s)}(\boldsymbol{t}),
\end{equation}
and the {\it Toda Lattice (TL) hierarchy},
\begin{equation} \fl \qquad
\label{tlh}
\frac{1}{2}\, D_1 D_k \,\tau_n^{(s)}(\boldsymbol{t})\circ
\tau_{n}^{(s)}(\boldsymbol{t})= s_{k-1}([\boldsymbol{D}])\,
\tau_{n+1}^{(s+1)}({\boldsymbol t}) \circ
\tau_{n-1}^{(s-1)}({\boldsymbol t}),
\end{equation}
are central to our approach. In the above formulae, the vector ${\boldsymbol D}$ stands for ${\boldsymbol D} = (D_1, D_2,\cdots, D_k,\cdots)$ whilst
the $k$-th component of the vector $[{\boldsymbol D}]$ equals $k^{-1} D_k$. The operator symbol $D_k \, f({\boldsymbol t})\circ
g({\boldsymbol t})$ denotes the Hirota derivative \footnote[7]{The properties of Hirota differential
operators are briefly reviewed in Appendix \ref{App-hi}; see also the book by Hirota (2004).}
\begin{eqnarray}
D_k\,f({\boldsymbol t})\circ g({\boldsymbol t}) = \frac{\partial}{\partial x_k}
f({\boldsymbol t}+{\boldsymbol x})\, g({\boldsymbol t}-{\boldsymbol x})\Big|_{{\boldsymbol x}={\boldsymbol 0}}.
\end{eqnarray}
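The defining $x$-shift can be implemented verbatim in a computer algebra system. The following minimal sympy sketch -- ours, purely illustrative and assuming only a standard Python/sympy installation -- evaluates $D_1 f\circ g$ and $D_1^2 f\circ f$, reproducing the bilinear combinations that will reappear in the explicit Toda Lattice equations of Section \ref{Sec-3-5}.
\begin{verbatim}
import sympy as sp

t1, x1 = sp.symbols('t1 x1')
f, g = sp.Function('f'), sp.Function('g')

def D1(n, F, G):
    # (D_1)^n F o G = d^n/dx1^n [ F(t1+x1) G(t1-x1) ] at x1 = 0
    expr = sp.diff(F(t1 + x1) * G(t1 - x1), x1, n)
    return expr.subs(x1, 0).doit()

print(D1(1, f, g))              # f'(t1) g(t1) - f(t1) g'(t1)
print(sp.expand(D1(2, f, f)))   # 2 f f'' - 2 (f')^2
\end{verbatim}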
The functions $s_k({\boldsymbol t})$ are the Schur polynomials (Macdonald
1998) defined by the expansion
\begin{eqnarray} \label{SCHUR}
\exp\left( \sum_{j=1}^\infty t_j x^j \right) = \sum_{\ell=0}^\infty
x^\ell s_\ell({\boldsymbol t}),
\end{eqnarray}
see also Table~\ref{schur-table}. A complete list of emerging
hierarchies will be presented in Section \ref{Sec-3-4}.
\begin{table}
\caption{\label{schur-table} Explicit formulae for the lowest-order
Schur polynomials $s_\ell({\boldsymbol t})$ defined by the relation
$\exp\left( \sum_{j=1}^\infty t_j x^j \right) = \sum_{\ell=0}^\infty
x^\ell s_\ell({\boldsymbol t})$.}
\begin{indented}
\lineup
\item[]
\begin{tabular}{@{}*{2}{l}} \br
$\0\0 \ell$&$\ell!\,s_\ell({\boldsymbol t})$ \cr \mr
$\0\0 0$&$1$ \vspace{0.1cm} \cr
$\0\0 1$&$t_1$ \vspace{0.1cm} \cr
$\0\0 2$&$t_1^2+2t_2$ \vspace{0.1cm}\cr
$\0\0 3$&$t_1^3+6 t_1 t_2+6 t_3$\vspace{0.1cm} \cr
$\0\0 4$&$t_1^4+24 t_1 t_3+ 12 t_1^2 t_2+ 12 t_2^2+ 24 t_4$ \vspace{0.1cm}\cr
$\0\0 5$&$t_1^5+20 t_1^3 t_2 + 60 t_1^2 t_3 + 60 t_1 t_2^2+120 t_1 t_4+120 t_2 t_3+120 t_5$\vspace{0.1cm}\cr
\br
\end{tabular}\newline
\footnotesize{The Schur polynomials admit the representation (Macdonald 1998)
\begin{eqnarray}
s_{\ell}({\boldsymbol t}) =
\sum_{|\boldsymbol{\lambda}|=\ell}\prod_{j=1}^g\frac{t_{\ell_j}^{\sigma_j}}{\sigma_j!},
\nonumber
\end{eqnarray}
where the summation runs over all partitions
$\blambda = (\ell_1^{\sigma_1},\cdots, \ell_g^{\sigma_g})$ of the
size $|\blambda|=\ell$. The notation
$\blambda = (\ell_1^{\sigma_1},\cdots,
\ell_g^{\sigma_g})$, known as the frequency representation of
the partition $\blambda$ of the size $|\blambda|=\ell$, implies
that the part $\ell_j$ appears
$\sigma_j$ times so that $\ell = \sum_{j=1}^g \ell_j\, \sigma_j$,
where $g$ is the number of inequivalent parts of
the partition. Another way to compute $s_\ell({\boldsymbol t})$ is based on
the recursion equation
\begin{eqnarray}
s_\ell({\boldsymbol t})=\frac{1}{\ell}\sum_{j=1}^{\ell}
j\, t_j s_{\ell-j}({\boldsymbol t}),\;\;\;\ell \ge 1, \nonumber
\end{eqnarray}
supplemented by the condition $s_0({\boldsymbol t})=1$.}
\end{indented}
\end{table}
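Both the generating-function definition Eq.~(\ref{SCHUR}) and the recursion quoted in the footnote to Table \ref{schur-table} are easily checked by machine. A short sympy sketch (ours, illustrative only; it assumes nothing beyond a standard sympy installation) compares the two and reproduces the $\ell=5$ row of the table:
\begin{verbatim}
import sympy as sp

x = sp.Symbol('x')
N = 5
t = sp.symbols('t1:6')   # t1, ..., t5

# s_l from the generating function exp(sum_j t_j x^j) = sum_l x^l s_l(t)
gen = sp.exp(sum(tj * x**(j + 1) for j, tj in enumerate(t)))
gen = sp.series(gen, x, 0, N + 1).removeO()
s_gen = [sp.expand(gen.coeff(x, l)) for l in range(N + 1)]

# s_l from the recursion s_l = (1/l) sum_{j=1}^{l} j t_j s_{l-j}, s_0 = 1
s_rec = [sp.Integer(1)]
for l in range(1, N + 1):
    s_rec.append(sp.expand(sum(j * t[j - 1] * s_rec[l - j]
                               for j in range(1, l + 1)) / l))

assert all(sp.expand(a - b) == 0 for a, b in zip(s_gen, s_rec))
print(sp.expand(120 * s_rec[5]))   # reproduces the l = 5 row of the table
\end{verbatim}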
\noindent\newline{\it Projection.}---The projection formula
\begin{eqnarray}
\label{pf}
\Pi_{n|p}({\boldsymbol \varsigma};{\boldsymbol \kappa}) = \frac{n!}{{\cal N}_n}
\, \tau_n^{(s)}({\boldsymbol \varsigma},{\boldsymbol \kappa};{\boldsymbol t})\Big|_{s=0,\,{\boldsymbol t}={\boldsymbol 0}}
\end{eqnarray}
makes it tempting to assume that nonlinear integrable hierarchies
satisfied by $\tau$ functions in the $(n, s, {\boldsymbol t})$-space should
induce similar, hierarchically structured, nonlinear differential
equations for the correlation function $\Pi_{n|p}({\boldsymbol
\varsigma};{\boldsymbol \kappa})$. To identify them, one has to seek an
additional block of the theory that would provide a link between the partial
$\{t_k\}_{k\in{\mathbb Z}_+}$ derivatives of $\tau$ functions taken
at ${\boldsymbol t}={\boldsymbol 0}$ and the partial derivatives of $\Pi_{n|p}({\boldsymbol
\varsigma};{\boldsymbol \kappa})$ over the {\it physical parameters}
$\{\varsigma_\alpha\}_{\alpha\in{\mathbb Z}_+}$. The study by Adler,
Shiota and van Moerbeke (1995) suggests that the missing block is
the {\it Virasoro constraints} for $\tau$
functions.
\newline\newline\noindent {\it Virasoro
constraints.}---The Virasoro constraints reflect the invariance of
$\tau$ functions [Eq.~(\ref{tau-f})] under a change of
integration variables. In the context of CFCP, it is useful to demand the invariance under an infinite
set of transformations \footnote[7]{The specific choice
Eq.~(\ref{vt}) will be advocated in Section \ref{Sec-3-6}.}
\begin{eqnarray}\label{vt}
\lambda_j \rightarrow \mu_j + \epsilon \mu_j^{q+1} f(\mu_j)
\prod_{k=1}^{\varrho} (\mu_j - c_k^\prime),\;\;\;
q \ge -1,
\end{eqnarray}
labeled by integers $q$. Here, $\epsilon>0$ is an infinitesimal parameter, the vector $\boldsymbol{c^\prime}$ is ${\boldsymbol
c}^\prime=\{c_1,\cdots,c_{2r}\}\setminus\{\pm \infty, {\cal Z}_0\}$ with ${\cal Z}_0$ denoting the set of zeros of $f(\lambda)$, and $\varrho = {\rm dim\,}(\boldsymbol{c^\prime})$. The function $f(\lambda)$ is, in turn, related to the confinement potential
$V_{n-s}(\lambda)$ through the parameterisation
\begin{eqnarray}
\label{vns} \frac{dV_{n-s}}{d\lambda} =
\frac{g(\lambda)}{f(\lambda)},\;\;\;g(\lambda)=\sum_{k=0}^\infty b_k
\lambda^k,\;\;\; f(\lambda)=\sum_{k=0}^\infty a_k \lambda^k
\end{eqnarray}
in which both $g(\lambda)$ and $f(\lambda)$ depend on $n-s$ as do
the coefficients $b_k$ and $a_k$ in the above expansions. The
transformation Eq.~(\ref{vt}) induces the Virasoro-like constraints~\footnote[8]{The very notation $\hat{\cal L}_q^V$ suggests that this
operator originates from the confinement-potential-part $e^{-V_n}$
in Eqs.~(\ref{Gamma-n}) and (\ref{tau-f}). On the contrary, the
operator ${\hat {\cal L}}_q^{\rm det}$ is due to the
determinant-like product $\prod_{\alpha}(\varsigma_\alpha -
\lambda)^{\kappa_\alpha}$ in Eq.~(\ref{Gamma-n}). Indeed,
setting $\kappa_\alpha=0$ nullifies the operator ${\hat {\cal L}}_q^{\rm
det}$. See Section \ref{Sec-3-6} for a detailed derivation.}
\begin{equation}
\label{2-Vir}
\left[ \hat{{\cal L}}_{q}^V({\boldsymbol t}) + \hat{{\cal L}}_q^{\rm det}({\boldsymbol \varsigma};{\boldsymbol t})
\right] \tau_n^{(s)}({\boldsymbol \varsigma};{\boldsymbol t})
={\hat {\cal B}}_q^V ({\boldsymbol \varsigma})\,\tau_n^{(s)}({\boldsymbol \varsigma};{\boldsymbol t}),
\end{equation}
where the differential operators
\begin{eqnarray} \fl
\label{vLv}
\hat{{\cal L}}_{q}^V({\boldsymbol t}) = \sum_{\ell = 0}^\infty
\sum_{k=0}^{\varrho} s_{\varrho-k}(-{\boldsymbol p}_{\varrho} (\boldsymbol{c^\prime}))
\left(
a_\ell \hat{\cal L}_{q+k+\ell}({\boldsymbol t}) - b_\ell \frac{\partial}{\partial t_{q+k+\ell+1}}
\right)
\end{eqnarray}
and
\begin{eqnarray} \fl
\label{vLG}
\hat{{\cal L}}_{q}^{\rm det}({\boldsymbol t}) = \sum_{\ell = 0}^\infty
a_\ell
\sum_{k=0}^{\varrho} s_{\varrho-k}(-{\boldsymbol p}_{\varrho} (\boldsymbol{c^\prime}))
\sum_{m=0}^{q+k+\ell} \left(\sum_{\alpha=1}^p \kappa_\alpha\,\varsigma_\alpha^m\right)
\frac{\partial}{\partial t_{q+k+\ell-m}}
\end{eqnarray}
act in the ${\boldsymbol t}$-space whilst the differential operator
\begin{eqnarray}
\label{bq}
{\hat {\cal B}}_q^V ({\boldsymbol \varsigma}) = \sum_{\alpha=1}^p
\left( \prod_{k=1}^{\varrho} (\varsigma_\alpha - c_k^\prime) \right)
f(\varsigma_\alpha) \,
\varsigma_{\alpha}^{q+1} \frac{\partial}{\partial \varsigma_\alpha}
\end{eqnarray}
acts in the space of {\it physical parameters}
$\{\varsigma_\alpha\}_{\alpha\in{\mathbb Z}_+}$. The notation $s_k(-{\boldsymbol p}_{\varrho} (\boldsymbol{c^\prime}))$ stands for the Schur
polynomial and ${\boldsymbol p}_{\varrho}(\boldsymbol{c^\prime})$ is the infinite-dimensional vector
\begin{eqnarray} \label{b9090}
{\boldsymbol p}_\varrho(\boldsymbol{c^\prime})=\left(
{\rm tr}_\varrho(\boldsymbol{c^\prime}), \frac{1}{2} {\rm tr}_\varrho(\boldsymbol{c^\prime})^2,\cdots,
\frac{1}{k} {\rm tr}_\varrho(\boldsymbol{c^\prime})^k,\cdots
\right)
\end{eqnarray}
with
\begin{eqnarray}
{\rm tr}_\varrho(\boldsymbol{c^\prime})^k =
\sum_{j=1}^{\varrho} (c_j^\prime)^k.
\end{eqnarray}
Notice that the operator $\hat{{\cal L}}_{q}^V({\boldsymbol t})$ is
expressed in terms of the Virasoro operators \footnote{~For $q=-1$, the second sum in Eq.~(\ref{vo}) is interpreted as zero.}
\begin{eqnarray}
\label{vo}
\hat{{\cal L}}_q({\boldsymbol t}) = \sum_{j=1}^\infty jt_j \,\frac{\partial}{\partial t_{q+j}}
+
\sum_{j=0}^q \frac{\partial^2}{\partial {t_j}\partial {t_{q-j}}},
\end{eqnarray}
obeying the Virasoro algebra
\begin{eqnarray}
\label{va}
[\hat{{\cal L}}_p,\hat{{\cal L}}_q] = (p-q)\hat{{\cal L}}_{p+q}, \;\;\;
p,q\ge -1.
\end{eqnarray}
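The commutation relation Eq.~(\ref{va}) can be verified symbolically on polynomial test functions, with the time variables truncated at a finite order $N$ and with the convention $\partial/\partial t_0 \equiv n$ spelled out in Section \ref{Sec-3-6}. A minimal sympy sketch (ours, purely illustrative; the truncation is exact here because the test polynomial involves only low time variables):
\begin{verbatim}
import sympy as sp

N = 10                        # truncation order of the time variables
n = sp.Symbol('n')            # formal value of d/dt_0, cf. Section 3.6
t = [None] + list(sp.symbols('t1:11'))   # t[1], ..., t[10]

def d(expr, k):
    # d/dt_k, with the convention d/dt_0 = multiplication by n
    if k == 0:
        return n * expr
    if k > N:
        return sp.Integer(0)
    return sp.diff(expr, t[k])

def L(q, f):
    # Virasoro operator of Eq. (vo) acting on a polynomial f(t1,...,tN)
    first = sum(j * t[j] * d(f, q + j) for j in range(1, N + 1))
    second = sum(d(d(f, j), q - j) for j in range(q + 1)) if q >= 0 else 0
    return sp.expand(first + second)

f = t[1]**2 * t[2] + t[3] * t[4] + t[5]       # a sample polynomial
comm = L(1, L(2, f)) - L(2, L(1, f))
print(sp.expand(comm - (1 - 2) * L(3, f)))    # prints 0
\end{verbatim}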
\newline\noindent
{\it Projection (continued).}---Equations (\ref{pf}) and
(\ref{2-Vir}) suggest that there exists an infinite set of equations
which express various combinations of the derivatives
\begin{eqnarray} \nonumber
\frac{\partial}{\partial t_j} \,\tau_n^{(s)}({\boldsymbol \varsigma};{\boldsymbol t})\Big|_{s=0,{\boldsymbol t}={\boldsymbol 0}}\;\;\; {\rm and}\;\;\;
\frac{\partial^2}{\partial t_j \partial t_{k}} \,\tau_n^{(s)}({\boldsymbol \varsigma};{\boldsymbol t})\Big|_{
s=0,{\boldsymbol t}={\boldsymbol 0}}
\end{eqnarray}
in terms of $
{\hat {\cal B}}_q^V ({\boldsymbol \varsigma})\, \Pi_{n|p}({\boldsymbol \varsigma};{\boldsymbol \kappa})
$. This observation makes it tempting to project the hierarchical
relations Eqs. (\ref{kph}) and (\ref{tlh}) onto the hyperplane
$(s=0,{\boldsymbol t}={\boldsymbol 0})$ in an attempt to generate their analogues in
the space of {\it physical parameters}. In particular, such a
projection of the first equation of the KP hierarchy,
\begin{eqnarray} \fl
\left(
\frac{\partial^4}{\partial t_1^4} + 3 \frac{\partial^2}{\partial t_2^2}
- 4 \frac{\partial^2}{\partial t_1 \partial t_3}
\right)\log\, \tau_n^{(s)}({\boldsymbol t}) + 6 \left(
\frac{\partial^2}{\partial t_1^2} \log\, \tau_n^{(s)}({\boldsymbol t})
\right)^2 = 0,
\end{eqnarray}
is expected \footnote[3]{Whether or not the projected Virasoro
constraints and the hierarchical equations always form a closed
system is a separate question that lies beyond the scope of the
present paper.} to bring a closed nonlinear differential equation
for the correlation function $\Pi_{n|p}({\boldsymbol
{\varsigma}};{\boldsymbol \kappa})$ of characteristic polynomials. It is
this equation which, being supplemented by appropriate boundary
conditions, provides an exact, nonperturbative description of the
averages of characteristic polynomials. Similarly, projections of
other equations from the hierarchies Eqs. (\ref{kph}) and
(\ref{tlh}) will reveal additional nontrivial nonlinear differential
relations that would involve not only $\Pi_{n|p}({\boldsymbol
{\varsigma}};{\boldsymbol \kappa})$ but its ``neighbours'' $\Pi_{n\pm
q|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa})$, as explained in Section \ref{Sec-4}.
Having exhibited the general structure of the theory, let us turn to
the detailed exposition of its main ingredients.
\section{From Characteristic Polynomials to $\tau$ Functions}
\label{Sec-3}
\subsection{The $\tau$ function, symmetry and associated scalar
product}
\label{Sec-3-1}
Integrability derives from symmetry. In the context of $\tau$
functions Eq.~(\ref{tau-f}), the symmetry is encoded into
$\Delta_n^2({\boldsymbol \lambda})$, the squared Vandermonde determinant, as
it appears in the integrand below \footnote[4]{For the sake of
brevity, the physical parameters ${\boldsymbol \varsigma}$ and ${\boldsymbol
\kappa}$ were dropped from the arguments of $\tau_n^{(s)}$ and
$\Gamma_{n-s}$.}:
\begin{eqnarray}
\label{tau-f-c1}
\tau_{n}^{(s)}({\boldsymbol t}) = \frac{1}{n!}
\int_{{\mathcal D}^{n}} \prod_{j=1}^{n}
\left(
d\lambda_j\, \Gamma_{n-s}(\lambda_j)\,
e^{v({\boldsymbol t};\lambda_j)}\right) \cdot
\Delta_{n}^2({\boldsymbol \lambda}).
\end{eqnarray}
In the random matrix theory language, the $\tau$ function Eq.~(\ref{tau-f-c1}) is said to possess the $\beta=2$ symmetry. Using the identity
\begin{eqnarray}
\Delta_n({\boldsymbol \lambda}) = {\rm det}[\lambda_j^{k-1}]_{1\le j,k \le
n}
= {\rm det}[P_{k-1}(\lambda_j)]_{1\le j,k \le n}
\end{eqnarray}
with $P_k(\lambda)$ being an {\it arbitrary} set of monic
polynomials and the integration formula
(Andr\'eief 1883, de Bruijn 1955)
\begin{eqnarray} \fl
\label{adB}
\int_{{\mathcal D}^n} \prod_{j=1}^n d\lambda_j \, {\rm det}_n[\varphi_j(\lambda_k)]\,
{\rm det}_n[\psi_j(\lambda_k)] = n!\, {\rm det}_n\left[
\int_{\mathcal D} d\lambda\, \varphi_j(\lambda)\,\psi_k(\lambda)
\right],
\end{eqnarray}
the $\tau$ function Eq.~(\ref{tau-f-c1}) can be written as the
determinant
\begin{eqnarray}
\label{mm-det}
\tau_{n}^{(s)}({\boldsymbol t})
={\rm det}\, \left[ \mu_{jk}^{(n-s)}({\boldsymbol t})
\right]_{0\le j,k \le n-1}
\end{eqnarray}
of the matrix of moments
\begin{eqnarray}
\mu_{jk}^{(m)}({\boldsymbol t}) =
\left<
P_j | P_k
\right>_{\Gamma_{m}\, e^v} = \int_{\mathcal D} d\lambda\, \Gamma_{m}(\lambda)\, e^{v({\boldsymbol t};\lambda)}
P_j(\lambda)\, P_k(\lambda)
\end{eqnarray}
Both the determinant representation and the scalar product
\begin{eqnarray}
\label{sc-prod}
\left< f | g \right>_w = \int_{\mathcal D} d\lambda\, w(\lambda)\, f(\lambda)\, g(\lambda)
\end{eqnarray}
are dictated by the $\beta=2$ symmetry \footnote[3]{
The $\tau$ function Eq.~(\ref{tau-f-c1}) is a particular case of a more general $\tau$ function
\begin{eqnarray}
\label{tau-f-beta}
\tau_{n}^{(s)}({\boldsymbol t};\beta) = \frac{1}{n!}
\int_{{\mathcal D}^{n}} \prod_{j=1}^{n}
\left(
d\lambda_j\, \Gamma_{n-s}(\lambda_j)\,
e^{v({\boldsymbol t};\lambda_j)}\right) \cdot
|\Delta_{n}({\boldsymbol \lambda})|^\beta. \nonumber
\end{eqnarray}
In accordance with the Dyson ``three-fold'' way (Dyson 1962), the symmetry parameter $\beta$ may also take the values $\beta=1$ and $\beta=4$. For these cases, the $\tau$ function Eq. (\ref{tau-f-beta}) admits the {\it Pfaffian} rather than determinant representation (Adler and van Moerbeke 2001):
\begin{eqnarray}
\tau_n^{(s)} ({\boldsymbol t};\beta) = {\rm pf\,} \left[
\mu_{jk}^{(n-s)}({\boldsymbol t};\beta)
\right]_{0\le j,k \le n-1}, \nonumber
\end{eqnarray}
where the matrix of moments $\mu_{jk}^{(m)}({\boldsymbol t};\beta) =
\left<
P_j | P_k
\right>_{\Gamma_{m}\, e^v}^{(\beta)}$ is defined through the scalar product
\begin{eqnarray}
\left< f | g \right>_w^{(\beta)} = \cases{\int_{{\cal D}^2} d\lambda d\lambda^\prime
w(\lambda) \, f(\lambda) \,{\rm sgn\,} (\lambda^\prime-\lambda) w(\lambda^\prime)g(\lambda^\prime), & $\beta=1$;\\
\int_{{\cal D}} d\lambda
w(\lambda) \left[ f(\lambda) g^\prime(\lambda)-g(\lambda) f^\prime(\lambda)\right], & $\beta=4$.}\nonumber
\end{eqnarray}
}
of the $\tau$ function.
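As a quick consistency check of the determinant representation Eq.~(\ref{mm-det}) -- and of the Andr\'eief formula behind it -- one may take $n=2$, $s=0$, ${\boldsymbol t}={\boldsymbol 0}$, $\Gamma_n(\lambda)=e^{-\lambda^2}$ and ${\cal D}={\mathbb R}$. The following sympy sketch, ours and purely illustrative, confirms that the twofold integral and the $2\times 2$ determinant of moments (with the monic choice $P_j(\lambda)=\lambda^j$) agree:
\begin{verbatim}
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)

# tau_2 by direct integration: (1/2!) int int e^{-x^2-y^2} (x-y)^2 dx dy
tau2_direct = sp.Rational(1, 2) * sp.integrate(
    sp.expand(sp.exp(-x**2 - y**2) * (x - y)**2),
    (x, -sp.oo, sp.oo), (y, -sp.oo, sp.oo))

# tau_2 as the determinant of the matrix of moments, P_j(lam) = lam^j
mu = lambda j, k: sp.integrate(lam**(j + k) * sp.exp(-lam**2),
                               (lam, -sp.oo, sp.oo))
tau2_det = sp.Matrix(2, 2, lambda j, k: mu(j, k)).det()

print(sp.simplify(tau2_direct - tau2_det))   # prints 0 (both equal pi/2)
\end{verbatim}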
\subsection{Bilinear identity in integral form}\label{Sec-3-2}\noindent
In this subsection, the bilinear identity Eq.~(\ref{bi-id}) will be proven.\newline\newline\noindent
{\it The $\tau$ function and orthogonal polynomials}.---The
representation Eq.~(\ref{mm-det}) reveals a special r\^ole played by
the monic polynomials $P_{k}^{(m)}({\boldsymbol t}; \lambda)$ {\it
orthogonal} on ${\mathcal D}$ with respect to the measure $\Gamma_m(\lambda)\, e^{v({\boldsymbol t};\lambda)}d\lambda$. Indeed, the
orthogonality relation
\begin{eqnarray} \fl \label{or}
\left<
P_k | P_j
\right>_{\Gamma_m\,e^v}= \int_{\mathcal D} d\lambda\, \Gamma_{m}(\lambda)\, e^{v({\boldsymbol t};\lambda)}
P_{k}^{(m)}({\boldsymbol t}; \lambda)\, P_{j}^{(m)}({\boldsymbol t}; \lambda) = h_{k}^{(m)}({\boldsymbol t})\, \delta_{jk},
\end{eqnarray}
shows that the choice $P_j(\lambda)\mapsto P_{j}^{(n-s)}({\boldsymbol t};
\lambda)$ diagonalises the matrix of moments in Eq.~(\ref{mm-det}),
resulting in the fairly compact representation
\begin{eqnarray}
\tau_{n}^{(s)}({\boldsymbol t}) = \prod_{j=0}^{n-1} h_{j}^{(n-s)}({\boldsymbol t}).
\end{eqnarray}
Remarkably, the monic orthogonal polynomials $P_k^{(m)}({\boldsymbol
t};\lambda)$, which were introduced as the most economical tool for the
calculation of $\tau_n^{(s)}$, can themselves be expressed in terms
of $\tau$ functions:
\begin{eqnarray}\label{p-tau}
P_k^{(m)}({\boldsymbol t};\lambda) = \lambda^k \,\frac{\tau_k^{(k-m)} ({\boldsymbol t} - [{\boldsymbol \lambda}^{-1}])}{\tau_k^{(k-m)}({\boldsymbol t})}.
\end{eqnarray}
Here, the notation ${\boldsymbol t} - [{\boldsymbol \lambda}^{-1}]$ stands for an
infinite-dimensional vector with the components
\begin{eqnarray}
\label{t-shift}
{\boldsymbol t} \pm [{\boldsymbol \lambda}^{-1}] = \left(
t_1 \pm \frac{1}{\lambda},\, t_2 \pm \frac{1}{2\lambda^2},\,\cdots, t_k \pm \frac{1}{k\,\lambda^k},
\cdots
\right).
\end{eqnarray}
The statement Eq.~(\ref{p-tau}) readily follows from the definitions
Eqs.~(\ref{tau-f-c1}) and (\ref{vt-def}), the formal relation
\begin{eqnarray}
\label{v-shifted}
e^{v({\boldsymbol t} \pm [{\boldsymbol \lambda}^{-1}];\,\lambda_j)} = e^{v({\boldsymbol t};\,\lambda_j)}
\left(
1 - \frac{\lambda_j}{\lambda}
\right)^{\mp 1},
\end{eqnarray}
and the Heine formula (Heine 1878, Szeg\"o 1939)
\begin{eqnarray} \fl
\label{szego}
P_k^{(m)}({\boldsymbol t};\lambda) = \frac{1}{k!\,\tau_k^{(k-m)}({\boldsymbol t})}
\int_{{\mathcal D}^{k}} \prod_{j=1}^{k}
\left(
d\lambda_j\, (\lambda-\lambda_j)\, \Gamma_{m}(\lambda_j)\,
e^{v({\boldsymbol t};\lambda_j)}\right) \cdot
\Delta_{k}^2({\boldsymbol \lambda}).
\end{eqnarray}
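The Heine formula is convenient for spot checks: for the Gaussian weight $\Gamma_m(\lambda)=e^{-\lambda^2}$, ${\boldsymbol t}={\boldsymbol 0}$ and ${\cal D}={\mathbb R}$ it must return the monic Hermite polynomials. A minimal sympy sketch (ours, illustrative only) for $k=2$:
\begin{verbatim}
import sympy as sp

lam, x, y = sp.symbols('lambda x y', real=True)
w = sp.exp(-x**2 - y**2)

# tau_2 = (1/2!) int int w Delta_2(x, y)^2 dx dy  (here equal to pi/2)
tau2 = sp.Rational(1, 2) * sp.integrate(
    sp.expand(w * (x - y)**2), (x, -sp.oo, sp.oo), (y, -sp.oo, sp.oo))

# Heine formula at k = 2: prefactor 1/(2! tau_2) times the twofold integral
num = sp.integrate(
    sp.expand((lam - x) * (lam - y) * w * (x - y)**2),
    (x, -sp.oo, sp.oo), (y, -sp.oo, sp.oo))
P2 = sp.expand(num / (2 * tau2))

print(P2)   # lambda**2 - 1/2, the monic Hermite polynomial of degree 2
\end{verbatim}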
\newline\newline\noindent
{\it The $\tau$ function and Cauchy transform of orthogonal
polynomials}.---As will be seen later, the Cauchy transform of
orthogonal polynomials is an important ingredient of our proof of
the bilinear identity. Viewed as the scalar product,
\begin{eqnarray} \fl
\label{q-cauchy}
Q_k^{(m)}({\boldsymbol t};z) = \left<
P_k^{(m)}({\boldsymbol t};\lambda)\Big| \frac{1}{z-\lambda}
\right>_{\Gamma_m \,e^v} = \int_{{\mathcal D}} d\lambda
\, \Gamma_{m}(\lambda)\,
e^{v({\boldsymbol t};\lambda)} \, \frac{P_k^{(m)}({\boldsymbol t};\lambda)}{z-\lambda},
\end{eqnarray}
it can also be expressed in terms of $\tau$ functions:
\begin{eqnarray}
\label{q-tau}
Q_{k}^{(m)}({\boldsymbol t};z) = z^{-k-1} \frac{\tau_{k+1}^{(k+1-m)} ({\boldsymbol t}+[{\boldsymbol z}^{-1}])}{\tau_k^{(k-m)}({\boldsymbol t})}.
\end{eqnarray}
To prove Eq.~(\ref{q-tau}), we substitute Eq.~(\ref{szego}) into
Eq.~(\ref{q-cauchy}) to derive:
\begin{eqnarray} \fl
\label{qc-st-1}
Q_{k}^{(m)}({\boldsymbol t};z) &= \frac{1}{k!\,\tau_k^{(k-m)}({\boldsymbol t})}
\int_{{\mathcal D}^{k+1}} \prod_{j=1}^{k+1}
\left(
d\lambda_j\, \Gamma_{m}(\lambda_j)\,
e^{v({\boldsymbol t};\lambda_j)}\right) \cdot \Delta_{k+1}^2({\boldsymbol \lambda})
\nonumber \\ \fl
&\times
\frac{1}{(z-\lambda_{k+1})}\prod_{j=1}^{k} \frac{1}{\lambda_{k+1}-\lambda_j}.
\end{eqnarray}
Owing to the identity
\begin{eqnarray}
\label{id-2}
\prod_{j=1}^n \frac{1}{z-\lambda_j} = \sum_{\alpha=1}^n\left(
\frac{1}{z-\lambda_\alpha} \prod_{j=1,
\;j\neq\alpha}^n\frac{1}{\lambda_\alpha-\lambda_j}
\right)
\end{eqnarray}
taken at $n=k+1$, the factor
\begin{eqnarray}
\frac{1}{(z-\lambda_{k+1})}\prod_{j=1}^{k} \frac{1}{\lambda_{k+1}-\lambda_j} \nonumber
\end{eqnarray}
in the integrand of Eq.~(\ref{qc-st-1}) can be symmetrised
\begin{eqnarray} \fl
\frac{1}{(z-\lambda_{k+1})}\prod_{j=1}^{k} \frac{1}{\lambda_{k+1}-\lambda_j} \mapsto
\frac{1}{k+1} \sum_{\alpha=1}^{k+1}\left(
\frac{1}{z-\lambda_\alpha} \prod_{j=1,
\;j\neq\alpha}^{k+1}\frac{1}{\lambda_\alpha-\lambda_j}
\right) \nonumber\\
\qquad \qquad= \frac{1}{k+1} \prod_{j=1}^{k+1} \frac{1}{z-\lambda_j} \nonumber
\end{eqnarray}
to yield the representation
\begin{eqnarray}
\label{qc-st-2} \fl
Q_{k}^{(m)}({\boldsymbol t};z) &= \frac{1}{(k+1)!\,\tau_k^{(k-m)}({\boldsymbol t})}
\int_{{\mathcal D}^{k+1}} \prod_{j=1}^{k+1}
\left(
\frac{d\lambda_j}{z-\lambda_j}\, \Gamma_{m}(\lambda_j)\,
e^{v({\boldsymbol t};\lambda_j)}\right) \cdot \Delta_{k+1}^2({\boldsymbol \lambda}).
\end{eqnarray}
In view of Eq.~(\ref{v-shifted}), this is seen to coincide with
\begin{eqnarray}
\frac{z^{-k-1} }{(k+1)!\,\tau_k^{(k-m)}({\boldsymbol t})}
\int_{{\mathcal D}^{k+1}} \prod_{j=1}^{k+1} \left(
d\lambda_j\, \Gamma_{m}(\lambda_j)\,
e^{v({\boldsymbol t}+\boldsymbol{[z^{-1}]};\lambda_j)}\right) \cdot \Delta_{k+1}^2({\boldsymbol \lambda}). \nonumber
\end{eqnarray}
Comparison with the definition Eq.~(\ref{tau-f-c1}) completes the
proof of Eq.~(\ref{q-tau}).
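The partial-fraction identity Eq.~(\ref{id-2}) used above is classical; for completeness, a three-variable sympy check (ours, illustrative only):
\begin{verbatim}
import sympy as sp

z = sp.Symbol('z')
lam = sp.symbols('lambda1:4')   # lambda1, lambda2, lambda3; n = 3

lhs = sp.prod([1 / (z - lj) for lj in lam])
rhs = sum(1 / (z - la) * sp.prod([1 / (la - lj) for lj in lam if lj != la])
          for la in lam)

print(sp.simplify(lhs - rhs))   # prints 0
\end{verbatim}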
\newline\newline\noindent
{\it Proof of the bilinear identity.}---Now we are ready to prove
the bilinear identity
\begin{eqnarray} \label{bi-id-rep} \fl
\oint_{{\cal C}_\infty} dz\,
e^{(a-1)v({\boldsymbol t}-{\boldsymbol t}^\prime;z)} \, \tau_m^{(s)}({\boldsymbol t^\prime}-[\boldsymbol{z}^{-1}])
\frac{\tau_{\ell+1}^{(\ell+1+s-m)}({\boldsymbol t}+[\boldsymbol{z}^{-1}])}{z^{\ell+1-m}} \nonumber \\
=
\nonumber \\
\oint_{{\cal C}_\infty} dz\,
e^{a\,v({\boldsymbol t}-{\boldsymbol t}^\prime;z)}\, \tau_\ell^{(\ell+s-m)} ({\boldsymbol t} - [\boldsymbol{z}^{-1}])
\frac{\tau_{m+1}^{(s+1)}({\boldsymbol t}^\prime + [\boldsymbol{z}^{-1}])}{z^{m+1-\ell}},
\end{eqnarray}
where the integration contour ${\cal C}_\infty$ encompasses the
point $z=\infty$, and $a\in {\mathbb R}$ is a free parameter.
We start with the needlessly fancy identity
\begin{eqnarray} \fl \label{fancy}
\int_{\cal D} d\lambda \, \Gamma_n(\lambda)\, e^{v({\boldsymbol t};\lambda)}
e^{(a-1)v({\boldsymbol t}-{\boldsymbol t}^\prime;\lambda)} P_\ell^{(n)}({\boldsymbol t};\lambda)
P_m^{(n)}({\boldsymbol t}^\prime;\lambda) \nonumber \\=
\int_{\cal D} d\lambda \, \Gamma_n(\lambda)\, e^{v({\boldsymbol t}^\prime;\lambda)}
e^{a\,v({\boldsymbol t}-{\boldsymbol t}^\prime;\lambda)} P_\ell^{(n)}({\boldsymbol t};\lambda)
P_m^{(n)}({\boldsymbol t}^\prime;\lambda)
\end{eqnarray}
whose structure is prompted by the scalar product Eq.~(\ref{or}) and
which trivially holds due to the linearity of the ${\boldsymbol
t}$-deformation
\begin{eqnarray}
v({\boldsymbol t};\lambda) + (a-1)\,v({\boldsymbol t}-{\boldsymbol t}^\prime;\lambda)
=
v({\boldsymbol t}^\prime;\lambda) + a\,v({\boldsymbol t}-{\boldsymbol t}^\prime;\lambda),
\end{eqnarray}
see Eq.~(\ref{vt-def}).
The formulae relating the orthogonal
polynomials and their Cauchy transforms to $\tau$ functions
[Eqs.~(\ref{p-tau}) and (\ref{q-tau})] make it possible to express
both sides of Eq.~(\ref{fancy}) in terms of $\tau$ functions with
shifted arguments. \newline\newline\noindent
(i) Due to the Cauchy integral representation
\begin{eqnarray} \label{lhs-cauchy}\fl
e^{(a-1)v({\boldsymbol t}-{\boldsymbol t}^\prime;\lambda)} P_m^{(n)}({\boldsymbol t}^\prime;\lambda) =
\frac{1}{2\pi i} \oint_{{\cal C}_\infty} dz\,
e^{(a-1)v({\boldsymbol t}-{\boldsymbol t}^\prime;z)} \frac{P_m^{(n)}({\boldsymbol t}^\prime;z)}{z-\lambda},
\end{eqnarray}
the l.h.s. of Eq.~(\ref{fancy}) can be transformed as follows:
\begin{eqnarray} \fl
{\rm l.h.s.} = \frac{1}{2\pi i}
\oint_{{\cal C}_\infty} dz\,
e^{(a-1)v({\boldsymbol t}-{\boldsymbol t}^\prime;z)} \, P_m^{(n)}({\boldsymbol t}^\prime;z)
\underbrace{\int_{\cal D} d\lambda \, \Gamma_n(\lambda)\, e^{v({\boldsymbol t};\lambda)}
\frac{P_\ell^{(n)}({\boldsymbol t};\lambda)}{z-\lambda}}_{Q_\ell^{(n)}({\boldsymbol t};z)\;\;\; [{\rm Eq.~(\ref{q-cauchy})}]} \nonumber\\
=
\frac{1}{2\pi i}
\oint_{{\cal C}_\infty} dz\,
e^{(a-1)v({\boldsymbol t}-{\boldsymbol t}^\prime;z)} \, P_m^{(n)}({\boldsymbol t}^\prime;z)
Q_\ell^{(n)}({\boldsymbol t};z).
\end{eqnarray}
Taking into account Eqs.~(\ref{p-tau}) and (\ref{q-tau}), this is further
reduced to
\begin{eqnarray} \fl
\label{lhs-1}
{\rm l.h.s.} = \frac{1}{2\pi i} \frac{1}{\tau_\ell^{(\ell-n)}({\boldsymbol t})\, \tau_{m}^{(m-n)}({\boldsymbol t^\prime})} \nonumber \\
\times
\oint_{{\cal C}_\infty} dz\,
e^{(a-1)v({\boldsymbol t}-{\boldsymbol t}^\prime;z)} \, \tau_m^{(m-n)}({\boldsymbol t^\prime}-[\boldsymbol{z}^{-1}])
\frac{\tau_{\ell+1}^{(\ell+1-n)}({\boldsymbol t}+[\boldsymbol{z}^{-1}])}{z^{\ell+1-m}}.
\end{eqnarray}
\newline\newline\noindent (ii) To transform the r.h.s. of
Eq.~(\ref{fancy}), we make use of the Cauchy theorem in the form
\begin{eqnarray} \label{rhs-cauchy}
e^{a\,v({\boldsymbol t}-{\boldsymbol t}^\prime;\lambda)} P_\ell^{(n)}({\boldsymbol t};\lambda)=
\frac{1}{2\pi i}
\oint_{{\cal C}_\infty} dz\,
e^{a\,v({\boldsymbol t}-{\boldsymbol t}^\prime;z)} \frac{P_\ell^{(n)}({\boldsymbol t};z)}{z-\lambda},
\end{eqnarray}
to get:
\begin{eqnarray} \fl
{\rm r.h.s.} = \frac{1}{2\pi i}
\oint_{{\cal C}_\infty} dz\,
e^{a\,v({\boldsymbol t}-{\boldsymbol t}^\prime;z)} P_\ell^{(n)}({\boldsymbol t};z)
\underbrace{\int_{\cal D} d\lambda \, \Gamma_n(\lambda)\, e^{v({\boldsymbol t}^\prime;\lambda)}
\frac{
P_m^{(n)}({\boldsymbol t};\lambda)}{z-\lambda}}_{Q_m^{(n)}({\boldsymbol t^\prime};z)\;\;\; [{\rm Eq.~(\ref{q-cauchy})}]}
\nonumber \\
=
\frac{1}{2\pi i}
\oint_{{\cal C}_\infty} dz\,
e^{a\,v({\boldsymbol t}-{\boldsymbol t}^\prime;z)} P_\ell^{(n)}({\boldsymbol t};z)\, Q_m^{(n)}({\boldsymbol t^\prime};z).
\end{eqnarray}
Taking into account Eqs.~(\ref{p-tau}) and (\ref{q-tau}), this is further
reduced to
\begin{eqnarray} \fl \label{rhs-1}
{\rm r.h.s.} = \frac{1}{2\pi i} \frac{1}{\tau_\ell^{(\ell-n)}({\boldsymbol t})\, \tau_{m}^{(m-n)}({\boldsymbol t^\prime})}
\nonumber \\
\times \oint_{{\cal C}_\infty} dz\,
e^{a\,v({\boldsymbol t}-{\boldsymbol t}^\prime;z)}\, \tau_\ell^{(\ell-n)} ({\boldsymbol t} - [\boldsymbol{z}^{-1}])
\frac{\tau_{m+1}^{(m+1-n)}({\boldsymbol t}^\prime + [\boldsymbol{z}^{-1}])}{z^{m+1-\ell}}.
\end{eqnarray}
The bilinear identity Eq.~(\ref{bi-id-rep}) follows from
Eqs.~(\ref{lhs-1}) and (\ref{rhs-1}) after setting $n=m-s$. End of proof.
\subsection{Bilinear identity in Hirota form}\label{Sec-3-3}
\label{bi-hirota}
The bilinear identity Eq.~(\ref{bi-id-rep}) can alternatively be written in the Hirota form:
\begin{eqnarray}
\label{bi-hf} \fl
e^{\beta
\boldsymbol{(x}\cdot\boldsymbol{D)}}
\sum_{k=0}^\infty s_k\left( (2a-1-\beta){\boldsymbol x}\right)
s_{k+q+1}\left(
[\boldsymbol{D}]
\right)\, \tau_{p}^{(s)}({\boldsymbol t})\,\circ
\tau_{p+q}^{(s+q)}({\boldsymbol t}) \nonumber \\ \fl \qquad
= e^{-\beta
\boldsymbol{(x}\cdot\boldsymbol{D)}}
\sum_{k=q+1}^\infty s_k\left((2a-1+\beta){\boldsymbol x}\right)
s_{k-q-1}\left(
[\boldsymbol{D}]
\right)\,\tau_{p+q+1}^{(s+q+1)}({\boldsymbol t})\circ
\tau_{p-1}^{(s-1)} ({\boldsymbol t})
\end{eqnarray}
where $\beta=\pm 1$~(not to be confused with the Dyson symmetry index!), $p \ge 1$ and $q \ge -1$. Let us recall that the vector ${\boldsymbol D}$ appearing in the scalar product $({\boldsymbol x}\cdot {\boldsymbol D})=\sum_{k} x_k D_k$ stands for ${\boldsymbol D} = (D_1, D_2,\cdots, D_k,\cdots)$; the $k$-th component of the vector $[{\boldsymbol D}]$ equals $k^{-1} D_k$ (compare with Eq.~(\ref{t-shift})). The generic Hirota differential
operator ${\cal P}({\boldsymbol D})\, f({\boldsymbol t})
\circ g({\boldsymbol t})$ is defined in Appendix \ref{App-hi}. Also, $s_k({\boldsymbol x})$ are the Schur polynomials defined in Eq.~(\ref{SCHUR}).
\noindent\newline\newline
To prove Eq.~(\ref{bi-hf}), we proceed in two steps.
(i) First, we set the vectors ${\boldsymbol t}$ and ${\boldsymbol
t}^\prime$ in Eq.~(\ref{bi-id-rep}) to be
\begin{eqnarray} \label{ttprime}
({\boldsymbol t},{\boldsymbol t}^\prime) \mapsto ({\boldsymbol t}+{\boldsymbol x},{\boldsymbol t}-{\boldsymbol x}).
\end{eqnarray}
The parameterisation Eq.~(\ref{ttprime}) allows us to rewrite the $({\boldsymbol t},{\boldsymbol t}^\prime)$-dependent part of the
integrand in the l.h.s. of Eq.~(\ref{bi-id-rep})
\begin{eqnarray}
e^{(a-1)v({\boldsymbol t}-{\boldsymbol t}^\prime;z)} \, \tau_m^{(s)}({\boldsymbol t^\prime}-[\boldsymbol{z}^{-1}])
\tau_{\ell+1}^{(\ell+1+s-m)}({\boldsymbol t}+[\boldsymbol{z}^{-1}]) \nonumber
\end{eqnarray}
as follows:
\begin{eqnarray} \label{is-01} \fl
\exp\left[ v\big(2(a-1){\boldsymbol x};z\big)\right] \, \tau_m^{(s)}({\boldsymbol t}-{\boldsymbol x}-[\boldsymbol{z}^{-1}])
\tau_{\ell+1}^{(\ell+1+s-m)}({\boldsymbol t}+{\boldsymbol
x}+[\boldsymbol{z}^{-1}])\nonumber\\ \fl
=
\exp\left[ v\big(2(a-1){\boldsymbol x};z\big)\right] \nonumber\\
\fl \qquad \times \exp\left[ \boldsymbol{(x}\cdot\boldsymbol{\partial_\xi)} + \boldsymbol{(}[\boldsymbol{z}^{-1}]\cdot\boldsymbol{\partial_\xi)}\right]
\tau_{\ell+1}^{(\ell+1+s-m)}({\boldsymbol t}+{\boldsymbol \xi})\,
\tau_m^{(s)}({\boldsymbol t}-{\boldsymbol \xi})\Big|_{{\boldsymbol \xi}=0}.
\end{eqnarray}
Here, we have used the linearity of the $t$-deformation, $\alpha \,
v({\boldsymbol t};z) = v(\alpha{\boldsymbol t};z)$. Further, we spot the identity
$\boldsymbol{(}[\boldsymbol{z}^{-1}]\cdot\boldsymbol{\partial_\xi)}=v \left(
[\boldsymbol{\partial_\xi}]; z^{-1}\right)$ to reduce Eq.~(\ref{is-01}) to \footnote[5]{Here,
\begin{eqnarray}
[\boldsymbol{\partial_\xi}] = \left(
\frac{\partial}{\partial \xi_1}, \frac{1}{2}\frac{\partial}{\partial \xi_2},\cdots,
\frac{1}{k}\frac{\partial}{\partial \xi_k},\cdots
\right). \nonumber
\end{eqnarray}}
\begin{eqnarray} \fl
\exp\left[ v\big(2(a-1){\boldsymbol x};z\big)\right]\nonumber \\ \fl
\qquad \times \exp\left[
\boldsymbol{(x}\cdot\boldsymbol{\partial_\xi)}\right]\,
\exp\left[
v \left( [\boldsymbol{\partial_\xi}]; z^{-1}\right)
\right]
\tau_{\ell+1}^{(\ell+1+s-m)}({\boldsymbol t}+{\boldsymbol \xi})\,
\tau_m^{(s)}({\boldsymbol t}-{\boldsymbol \xi})\Big|_{{\boldsymbol \xi}=0}. \nonumber
\end{eqnarray}
The latter can be rewritten in
terms of Hirota differential operators [see Eq.~(\ref{b2})] with the
result being
\begin{eqnarray} \fl
\exp\left[ v\big(2(a-1){\boldsymbol x};z\big)\right]
\exp\left[
\boldsymbol{(x}\cdot\boldsymbol{D)}\right]\,
\exp\left[
v \left( [\boldsymbol{D}]; z^{-1}\right)
\right]
\tau_{\ell+1}^{(\ell+1+s-m)}({\boldsymbol t})\,\circ
\tau_m^{(s)}({\boldsymbol t}).
\end{eqnarray}
By the same token, the $({\boldsymbol t},{\boldsymbol t}^\prime)$-dependent part of
the integrand in the r.h.s. of Eq.~(\ref{bi-id-rep}),
\begin{eqnarray}
e^{a\,v({\boldsymbol t}-{\boldsymbol t}^\prime;z)}\, \tau_\ell^{(\ell+s-m)} ({\boldsymbol t}
- [\boldsymbol{z}^{-1}])\,\tau_{m+1}^{(s+1)}({\boldsymbol t}^\prime + [\boldsymbol{z}^{-1}])
\nonumber
\end{eqnarray}
can be reduced to
\begin{eqnarray} \fl
\exp\left[ v\big(2a{\boldsymbol x};z\big)\right]
\exp\left[-
\boldsymbol{(x}\cdot\boldsymbol{D)}\right]\,
\exp\left[
v \left( [\boldsymbol{D}]; z^{-1}\right)
\right]
\,\tau_{m+1}^{(s+1)}({\boldsymbol t})\circ
\tau_\ell^{(\ell+s-m)} ({\boldsymbol t}).
\end{eqnarray}
Thus, we end up with the alternative representation for the bilinear
identity Eq.~(\ref{bi-id-rep}):
\begin{eqnarray} \label{316-rew}\fl
e^{
\boldsymbol{(x}\cdot\boldsymbol{D)}}
\oint_{{\cal C}_\infty} \frac{dz}{z^{\ell-m+1}} \,
\exp\left[ v\big(2(a-1){\boldsymbol x};z\big)\right]
\,
\exp\left[
v \left( [\boldsymbol{D}]; z^{-1}\right)
\right]
\tau_{\ell+1}^{(\ell+1+s-m)}({\boldsymbol t})\,\circ
\tau_m^{(s)}({\boldsymbol t}) \nonumber \\ \fl \quad
= e^{-
\boldsymbol{(x}\cdot\boldsymbol{D)}} \oint_{{\cal C}_\infty} \frac{dz}{z^{m-\ell+1}} \,
\exp\left[ v\big(2a{\boldsymbol x};z\big)\right]
\,
\exp\left[
v \left( [\boldsymbol{D}]; z^{-1}\right)
\right]
\,\tau_{m+1}^{(s+1)}({\boldsymbol t})\circ
\tau_\ell^{(\ell+s-m)} ({\boldsymbol t}). \nonumber\\{}
\end{eqnarray}
(ii) Second, to facilitate the integration in Eq.~(\ref{316-rew}), we rewrite the integrands therein in the form of Laurent series in $z$ by employing the identity
\begin{eqnarray}
e^{v({\boldsymbol t};z)} = \exp\left(
\sum_{j=1}^\infty t_j z^j
\right) = \sum_{k=0}^\infty s_k({\boldsymbol t})\, z^k.
\end{eqnarray}
Now, the integrals in Eq.~(\ref{316-rew}) are easily performed to yield
\begin{eqnarray} \label{bid-2}\fl
e^{
\boldsymbol{(x}\cdot\boldsymbol{D)}}
\sum_{k=\max(0,\ell-m)}^\infty s_k\left(2(a-1){\boldsymbol x}\right)
s_{k+m-\ell}\left(
[\boldsymbol{D}]
\right)\, \tau_{\ell+1}^{(\ell+1+s-m)}({\boldsymbol t})\,\circ
\tau_m^{(s)}({\boldsymbol t}) \nonumber \\ \fl \quad
= e^{-
\boldsymbol{(x}\cdot\boldsymbol{D)}}
\sum_{k=\max(0,m-\ell)}^\infty s_k\left(2a{\boldsymbol x}\right)
s_{k+\ell-m}\left(
[\boldsymbol{D}]
\right)\,\tau_{m+1}^{(s+1)}({\boldsymbol t})\circ
\tau_\ell^{(\ell+s-m)} ({\boldsymbol t}).
\end{eqnarray}
It remains to verify that Eq.~(\ref{bid-2}) is equivalent to the announced result Eq.~(\ref{bi-hf}). To this end we
distinguish between two cases. (i) If $\ell \le m$, we set $\ell=p-1$, $m=p+q$ and $s \mapsto s+q$ in Eq.~(\ref{bid-2}) to find that it reduces to Eq.~(\ref{bi-hf}) taken at $\beta=+1$; (ii) if $\ell > m$, we set $\ell=p+q$, $m=p-1$ and $s \mapsto s-1$ in Eq.~(\ref{bid-2}) to find that it reduces to Eq.~(\ref{bi-hf}) taken at $\beta=-1$. This ends the proof.
For an
alternative derivation of Eq.~(\ref{bi-hf}) the reader is referred to Appendix \ref{App-bi}.
\subsection{Zoo of integrable
hierarchies}\label{Sec-3-4}
The bilinear identity, in either form, encodes an infinite set of
hierarchically structured nonlinear differential equations in the
variables ${\boldsymbol t}$. Two of these hierarchies -- the KP and the TL hierarchies --
were mentioned in Section \ref{Sec-2}. Below, we provide a complete list of integrable hierarchies
associated with the $\tau$ function Eq.~(\ref{tau-f-c1}).
To identify them, we expand the bilinear identity in Hirota form [Eq.~(\ref{bi-hf})] around
${\boldsymbol x}={\boldsymbol 0}$ and $a=0$, keeping only the terms linear in ${\boldsymbol x}$. Since $s_0({\boldsymbol t})=1$ and
\begin{eqnarray}
s_k({\boldsymbol t})\Big|_{{\boldsymbol t}\rightarrow {\boldsymbol 0}} = t_k + {\cal O}({\boldsymbol
t}^2),\quad k = 1,\, 2,\,\dots
\end{eqnarray}
we obtain:
\begin{eqnarray} \label{bi-hf-exp} \fl
(1-\delta_{q,-1})\, s_{q+1}([{\boldsymbol D}]) \, \tau_{p}^{(s)}({\boldsymbol t})\,\circ
\tau_{p+q}^{(s+q)}({\boldsymbol t}) \nonumber\\ \fl
+ \sum_{k=1}^\infty x_k\left[
(2a-1-\beta)s_{k+q+1}\left([\boldsymbol{D}]\right) + \beta D_k \Big(
s_{q+1}\left([\boldsymbol{D}]\right) + \delta_{q,-1}\Big)
\right]
\, \tau_{p}^{(s)}({\boldsymbol t})\,\circ\tau_{p+q}^{(s+q)}({\boldsymbol t})
\nonumber \\ \fl
-
(2a-1+\beta)\sum_{k=\max(1,q+1)}^\infty x_k
s_{k-q-1}\left(
[\boldsymbol{D}]
\right)\,\tau_{p+q+1}^{(s+q+1)}({\boldsymbol t})\circ
\tau_{p-1}^{(s-1)} ({\boldsymbol t})+ {\cal O}({\boldsymbol x}^2)=0.
\end{eqnarray}
Since Eq.~(\ref{bi-hf}) holds for arbitrary $a$ and ${\boldsymbol
x}$, Eq.~(\ref{bi-hf-exp}) generates four
identities.\newline\newline\noindent
(i) The first identity
\begin{eqnarray} \label{i-3}
\quad
s_{k+q+1}\left([\boldsymbol{D}]\right) \, \tau_{p}^{(s)}({\boldsymbol t})\,\circ\tau_{p+q}^{(s+q)}({\boldsymbol
t}) = 0
\end{eqnarray}
holds for $q\ge 1$ and $k=0,\,1,\,\dots\,,\,q$.
\newline\newline\noindent
(ii) The second identity
\begin{eqnarray} \label{i-2} \fl
\qquad \big[
(1+\beta)\,s_{k+q+1}\left([\boldsymbol{D}]\right) - \beta D_k
s_{q+1}\left([\boldsymbol{D}]\right)
\big]
\, \tau_{p}^{(s)}({\boldsymbol t})\,\circ\tau_{p+q}^{(s+q)}({\boldsymbol t})=0
\end{eqnarray}
holds for $q\ge 1$ and $k=1,\,2,\,\dots\,,\,q$.
\newline\newline\noindent
(iii) The third identity
\begin{eqnarray} \label{i-4} \fl
\big[
(1+\beta)\,s_{k+q+1}\left([\boldsymbol{D}]\right) - \beta D_k
\big( s_{q+1}\left([\boldsymbol{D}]\right) + \delta_{q,-1} \big)
\big]
\, \tau_{p}^{(s)}({\boldsymbol t})\,\circ\tau_{p+q}^{(s+q)}({\boldsymbol t}) \nonumber \\
= (1-\beta)\, s_{k-q-1}\left(
[\boldsymbol{D}]
\right)\,\tau_{p+q+1}^{(s+q+1)}({\boldsymbol t})\circ
\tau_{p-1}^{(s-1)} ({\boldsymbol t})
\end{eqnarray}
holds for $q\ge -1$ and $k\ge\max(1,q+1)$.
\newline\newline\noindent
(iv) The last, fourth identity
\begin{eqnarray} \label{i-5} \fl
s_{k+q+1}\left([\boldsymbol{D}]\right)
\, \tau_{p}^{(s)}({\boldsymbol t})\,\circ\tau_{p+q}^{(s+q)}({\boldsymbol t})
= s_{k-q-1}\left(
[\boldsymbol{D}]
\right)\,\tau_{p+q+1}^{(s+q+1)}({\boldsymbol t})\circ
\tau_{p-1}^{(s-1)} ({\boldsymbol t})
\end{eqnarray}
holds for $q\ge 0$ and $k\ge q+1$.
\newline\newline
Equations (\ref{i-3}) -- (\ref{i-5}) can further be classified to yield the following bilinear hierarchies:
\newline
\begin{itemize}
\item {\it Toda Lattice (TL) hierarchy:}
\begin{eqnarray} \fl\label{TL}
\frac{1}{2} D_1 D_k
\, \tau_{p}^{(s)}({\boldsymbol t})\,\circ\tau_{p}^{(s)}({\boldsymbol t}) =
s_{k-1}\left([\boldsymbol{D}]\right)
\,\tau_{p+1}^{(s+1)}({\boldsymbol t})\circ
\tau_{p-1}^{(s-1)} ({\boldsymbol t})
\end{eqnarray}
with $k\ge 1$.
\newline
\item {\it q-modified Toda Lattice hierarchy:}
\begin{eqnarray} \fl \label{qTL}
\frac{1}{2} D_k s_{q+1}\left([\boldsymbol{D}]\right)
\, \tau_{p}^{(s)}({\boldsymbol t})\,\circ\tau_{p+q}^{(s+q)}({\boldsymbol t}) =
s_{k-q-1}\left([\boldsymbol{D}]\right)
\,\tau_{p+q+1}^{(s+q+1)}({\boldsymbol t})\circ
\tau_{p-1}^{(s-1)} ({\boldsymbol t})
\end{eqnarray}
with $q\ge 0$ and $k\ge q+1$. (For $q=0$, it reduces to the above Toda
Lattice hierarchy.)
\newline
\item {\it Kadomtsev-Petviashvili (KP) hierarchy:}
\begin{eqnarray} \fl \label{KP}
\left[
\frac{1}{2} D_1 D_k - s_{k+1}\left([\boldsymbol{D}]\right) \right]
\, \tau_{p}^{(s)}({\boldsymbol t})\,\circ\tau_{p}^{(s)}({\boldsymbol t}) = 0
\end{eqnarray}
with \footnote[8]{Both $k=1$ and $k=2$ yield trivial statements, see Appendix
\ref{App-hi}.} $k\ge 3$.
\newline
\item {\it q-modified Kadomtsev-Petviashvili hierarchy:}
\begin{eqnarray} \fl \label{qKP}
\frac{1}{2} D_k s_{q+1}\left([\boldsymbol{D}]\right)
\, \tau_{p}^{(s)}({\boldsymbol t})\,\circ\tau_{p+q}^{(s+q)}({\boldsymbol t}) =
s_{k+q+1}\left([\boldsymbol{D}]\right)
\, \tau_{p}^{(s)}({\boldsymbol t})\,\circ\tau_{p+q}^{(s+q)}({\boldsymbol
t})
\end{eqnarray}
with $q\ge 0$ and $k\ge q+1$. (For $q=0$, it reduces to the above KP
hierarchy.)\newline
\item {\it Left q-modified Kadomtsev-Petviashvili hierarchy:}
\begin{eqnarray} \fl
D_k s_{q+1}\left([\boldsymbol{D}]\right)
\, \tau_{p}^{(s)}({\boldsymbol t})\,\circ\tau_{p+q}^{(s+q)}({\boldsymbol t}) =
0
\end{eqnarray}
with $q\ge 1$ and $1 \le k \le q$. \newline
\item {\it Right q-modified Kadomtsev-Petviashvili hierarchy:}
\begin{eqnarray} \fl
s_{k+q+1}\left([\boldsymbol{D}]\right)
\, \tau_{p}^{(s)}({\boldsymbol t})\,\circ\tau_{p+q}^{(s+q)}({\boldsymbol t}) =
0
\end{eqnarray}
with $q\ge 1$ and $0 \le k \le q$.
\newline
\item {\it $(-1)$-modified Kadomtsev-Petviashvili hierarchy:}
\begin{eqnarray} \label{346}\fl
\big[
s_{k}\left([\boldsymbol{D}]\right) - D_k
\big]\, \tau_{p}^{(s)}({\boldsymbol t})\,\circ\tau_{p-1}^{(s-1)}({\boldsymbol
t})=0
\end{eqnarray}
with $k\ge 2$.
\newline
\end{itemize}
Notice that the modified hierarchies will play no r\^ole in what
follows.
\subsection{KP and Toda Lattice hierarchies}\label{Sec-3-5}
As was pointed out in Section \ref{Sec-2}, the KP and Toda Lattice
hierarchies are of primary importance for our formalism. In this
subsection, we explicitly present a few first members of these
hierarchies.
\newline\newline\noindent
{\it KP hierarchy.}---Due to the properties of the Hirota symbol
reviewed in Appendix \ref{App-hi}, the first nontrivial equation of the KP
hierarchy corresponds to $k=3$ in Eq.~(\ref{KP}). Consulting Table
\ref{schur-table} and keeping in mind that $[{\boldsymbol D}]_k = k^{-1}
D_k$, we derive the first two members, ${\rm KP}_1$ and ${\rm
KP}_2$, of the KP hierarchy in Hirota form
\begin{eqnarray}
\label{kp-1}
{\rm KP}_1:\quad (D_1^4 - 4 D_1 D_3 + 3 D_2^2) \, \tau_{p}^{(s)}({\boldsymbol t})\,\circ\tau_{p}^{(s)}({\boldsymbol
t}) = 0, \\
\label{kp-2}
{\rm KP}_2:\quad (D_1^3D_2 + 2 D_2 D_3 - 3 D_1 D_4) \, \tau_{p}^{(s)}({\boldsymbol t})\,\circ\tau_{p}^{(s)}({\boldsymbol
t}) = 0.
\end{eqnarray}
In deriving Eqs.~(\ref{kp-1}) and (\ref{kp-2}), we have used Property 2a from Appendix \ref{App-hi}.
Making use of Property 2b from Appendix \ref{App-hi}, the two
equations can be written explicitly:
\begin{eqnarray} \fl \label{kp1-exp}
{\rm KP}_1:\quad \left(
\frac{\partial^4}{\partial t_1^4} + 3 \frac{\partial^2}{\partial t_2^2}
- 4 \frac{\partial^2}{\partial t_1 \partial t_3}
\right)\log\, \tau_p^{(s)}({\boldsymbol t}) + 6 \left(
\frac{\partial^2}{\partial t_1^2} \log\, \tau_p^{(s)}({\boldsymbol t})
\right)^2 = 0,\\ \fl \label{kp2-exp}
{\rm KP}_2:\quad \left(
\frac{\partial^4}{\partial t_1^3 \partial t_2} - 3 \frac{\partial^2}{\partial t_1 \partial t_4}
+ 2 \frac{\partial^2}{\partial t_2 \partial t_3}
\right)\log\, \tau_p^{(s)}({\boldsymbol t}) \nonumber \\
\quad \quad \quad \quad + 6 \left(
\frac{\partial^2}{\partial t_1^2} \log\, \tau_p^{(s)}({\boldsymbol t})
\right) \left(
\frac{\partial^2}{\partial t_1 \partial t_2} \log\, \tau_p^{(s)}({\boldsymbol t})
\right)= 0.
\end{eqnarray}
Only ${\rm KP}_1$ will be used in what follows.
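The equivalence between the Hirota form Eq.~(\ref{kp-1}) and the explicit form Eq.~(\ref{kp1-exp}) can be confirmed mechanically for a generic function of $(t_1,t_2,t_3)$. A sympy sketch (ours, purely illustrative; it implements the defining $x$-shift of the Hirota derivative directly):
\begin{verbatim}
import sympy as sp

t1, t2, t3, x1, x2, x3 = sp.symbols('t1 t2 t3 x1 x2 x3')
tau = sp.Function('tau')

def hirota(n1, n2, n3):
    # D_1^{n1} D_2^{n2} D_3^{n3} tau o tau via the defining x-shift
    expr = tau(t1 + x1, t2 + x2, t3 + x3) * tau(t1 - x1, t2 - x2, t3 - x3)
    for v, m in ((x1, n1), (x2, n2), (x3, n3)):
        if m:
            expr = sp.diff(expr, v, m)
    return expr.subs({x1: 0, x2: 0, x3: 0}).doit()

lhs = hirota(4, 0, 0) - 4 * hirota(1, 0, 1) + 3 * hirota(0, 2, 0)

u = sp.log(tau(t1, t2, t3))
rhs = 2 * tau(t1, t2, t3)**2 * (
    sp.diff(u, t1, 4) + 3 * sp.diff(u, t2, 2) - 4 * sp.diff(u, t1, t3)
    + 6 * sp.diff(u, t1, 2)**2)

print(sp.simplify(sp.expand(lhs - rhs)))   # prints 0
\end{verbatim}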
\newline\newline\noindent
{\it Toda Lattice hierarchy.}---The first nontrivial equations of
the Toda Lattice hierarchy can be derived along the same lines from
Eq.~(\ref{TL}):
\begin{eqnarray}
\label{tl-1}
{\rm TL}_1:\quad \frac{1}{2} D_1^2 \, \tau_{p}^{(s)}({\boldsymbol t})\,\circ\tau_{p}^{(s)}({\boldsymbol
t}) = \tau_{p+1}^{(s+1)}({\boldsymbol t})\circ
\tau_{p-1}^{(s-1)} ({\boldsymbol t}), \\
\label{tl-2}
{\rm TL}_2: \quad \frac{1}{2} D_1 D_2
\, \tau_{p}^{(s)}({\boldsymbol t})\,\circ\tau_{p}^{(s)}({\boldsymbol t}) =
D_1
\,\tau_{p+1}^{(s+1)}({\boldsymbol t})\circ
\tau_{p-1}^{(s-1)} ({\boldsymbol t}).
\end{eqnarray}
Explicitly, one has:
\begin{eqnarray} \fl\label{tl-1-expl}
{\rm TL}_1:\quad \tau_p^{(s)}({\boldsymbol t}) \frac{\partial^2
\tau_p^{(s)}({\boldsymbol t})}{\partial t_1^2} - \left(
\frac{\partial
\tau_p^{(s)}({\boldsymbol t})}{\partial t_1}
\right)^2 = \tau_{p+1}^{(s+1)}({\boldsymbol t})
\tau_{p-1}^{(s-1)} ({\boldsymbol t}),
\\ \fl\label{tl-2-expl} {\rm TL}_2:\quad \tau_p^{(s)}({\boldsymbol t}) \frac{\partial^2
\tau_p^{(s)}({\boldsymbol t})}{\partial t_1 \partial t_2} -
\frac{\partial
\tau_p^{(s)}({\boldsymbol t})}{\partial t_1}
\frac{\partial
\tau_p^{(s)}({\boldsymbol t})}{\partial t_2} \nonumber \\ =
\tau_{p-1}^{(s-1)}({\boldsymbol t}) \frac{\partial
\tau_{p+1}^{(s+1)}({\boldsymbol t})}{\partial t_1 } -
\tau_{p+1}^{(s+1)}({\boldsymbol t}) \frac{\partial
\tau_{p-1}^{(s-1)}({\boldsymbol t})}{\partial t_1 }.
\end{eqnarray}
Higher order members of the KP and Toda Lattice hierarchies can
readily be generated from Eqs.~(\ref{KP}) and~(\ref{TL}),
respectively.
\subsection{Virasoro constraints}\label{Sec-3-6}
Virasoro constraints satisfied by the $\tau$ function
Eq.~(\ref{tau-fff}) below are yet another important ingredient of the
``deform-and-study'' approach to the correlation functions of characteristic
polynomials $\Pi_{n|p} ({\boldsymbol \varsigma}; {\boldsymbol
\kappa})$. In accordance with the discussion in Section \ref{Sec-2},
Virasoro constraints are needed to translate nonlinear integrable
hierarchies Eqs.~(\ref{TL}) -- (\ref{346}), satisfied by the $\tau$ function
\begin{eqnarray} \fl
\label{tau-fff}
\tau_{n}^{(s)}({\boldsymbol \varsigma},{\boldsymbol \kappa}; {\boldsymbol t}) = \frac{1}{n!}
\int_{{\mathcal D}^{n}} \prod_{j=1}^{n}
\left(
d\lambda_j\, e^{-V_{n-s}(\lambda_j)} \, \prod_{\alpha=1}^p
(\varsigma_\alpha-\lambda_j)^{\kappa_\alpha}\, e^{v({\boldsymbol t};\lambda_j)}\right) \cdot
\Delta_{n}^2({\boldsymbol \lambda}),
\end{eqnarray}
into nonlinear, hierarchically structured differential equations for
the correlation function
\begin{eqnarray}\fl
\label{rpf-111}
\Pi_{n|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa}) =\frac{1}{{\cal N}_n} \int_{{\cal D}^{n}} \prod_{j=1}^{n}
\left( d\lambda_j \, e^{-V_n(\lambda_j)}\prod_{\alpha=1}^p (\varsigma_\alpha - \lambda_j)^{\kappa_\alpha} \right)
\cdot
\Delta_{n}^2({\boldsymbol \lambda})
\end{eqnarray}
obtained from Eq.~(\ref{tau-fff}) by setting ${\boldsymbol t}=0$ and $s=0$.
The normalisation constant ${\cal N}_n$ is defined in
Eq.~(\ref{norm}).
The Virasoro constraints reflect the invariance of the $\tau$
function Eq.~(\ref{tau-fff}) under the change of integration
variables
\begin{eqnarray}
\label{vc-var}
\lambda_j \rightarrow \mu_j + \epsilon \mu_j^{q+1}
R(\mu_j),\;\;\; q \ge -1,
\end{eqnarray}
labeled by the integer $q$; here $\epsilon>0$ is an infinitesimally small parameter, and
$R(\mu)$ is a suitable benign function (e.g., a polynomial). The
function $f(\lambda)$ is related to the confinement potential
$V_{n-s}(\lambda)$ through the parameterisation (Adler, Shiota and van Moerbeke 1995)
\begin{eqnarray}
\label{fg} \frac{dV_{n-s}}{d\lambda} =
\frac{g(\lambda)}{f(\lambda)},\;\;\;g(\lambda)=\sum_{k=0}^\infty b_k
\lambda^k,\;\;\; f(\lambda)=\sum_{k=0}^\infty a_k \lambda^k
\end{eqnarray}
in which both $g(\lambda)$ and $f(\lambda)$ depend on $n-s$ as do
the coefficients $b_k$ and $a_k$ in the above expansions. We also
assume that
\begin{eqnarray}
\lim_{\lambda\rightarrow \pm \infty} f(\lambda)\,\lambda^k \,
e^{-V_{n-s}(\lambda)} = 0, \quad k\ge 0.
\end{eqnarray}
To derive the Virasoro constraints announced in
Eqs.~(\ref{2-Vir})--(\ref{bq}), we transform the integration
variables in Eq.~(\ref{tau-fff}) as specified in Eq.~(\ref{vc-var})
and further expand Eq.~(\ref{tau-fff}) in $\epsilon$. Invariance of
the integral under this transformation implies that the linear in
$\epsilon$ terms must vanish:
\begin{eqnarray}\fl
\label{vc-1}
\int_{{\cal D}^n} (d{\boldsymbol \mu})\,
\Bigg(
\beta \,\sum_{i>j} \frac{\mu_i^{q+1} R(\mu_i) - \mu_j^{q+1}
R(\mu_j)}{\mu_i-\mu_j} + \sum_{\ell=1}^n
\mu_\ell^{q+1} R^\prime (\mu_\ell) \nonumber \\ \fl
\qquad +
\sum_{\ell=1}^n R(\mu_\ell) \left[ (q+1)\mu_\ell^q + v^\prime({\boldsymbol t};\mu_\ell)\, \mu_\ell^{q+1}
- \frac{g(\mu_\ell)}{f(\mu_\ell)} \,\mu_\ell^{q+1} \right]
\Bigg)
I_n^{(s)}({\boldsymbol t}; {\boldsymbol \mu}, {\boldsymbol \varsigma})
\nonumber \\ \fl
\qquad
-\int_{{\cal D}^n} (d{\boldsymbol \mu})\, \left( \sum_{\ell=1}^n \mu_\ell^{q+1} R(\mu_\ell) \sum_{\alpha=1}^p \frac{\kappa_\alpha}{\varsigma_\alpha - \mu_\ell}
\right) \, I_n^{(s)}({\boldsymbol t}; {\boldsymbol \mu}, {\boldsymbol \varsigma}) \nonumber \\ \fl
\qquad -
\left( \sum_{i=1}^{{\rm dim}({\boldsymbol c^\prime})}
R(c_i^\prime)\, c_i^{\prime\,{q+1}} \frac{\partial}{\partial c_i^\prime}
\right)\, \int_{{\cal D}^n} (d{\boldsymbol \mu})\, I_n^{(s)}({\boldsymbol t}; {\boldsymbol \mu}, {\boldsymbol \varsigma})=0.
\end{eqnarray}
Here, ${\boldsymbol c}^\prime=\{c_1,\cdots,c_{2r}\}\setminus\{\pm \infty\}$,
\begin{eqnarray} \fl
I_n^{(s)}({\boldsymbol t}; {\boldsymbol \mu}, {\boldsymbol \varsigma}) = | \Delta_{n}({\boldsymbol \mu})|^\beta\,\prod_{j=1}^n \left[
e^{-V_{n-s}(\mu_j)} \, \prod_{\alpha=1}^p
(\varsigma_\alpha-\mu_j)^{\kappa_\alpha}\, e^{v({\boldsymbol t};\mu_j)}
\right],
\end{eqnarray}
and
\begin{eqnarray}
(d{\boldsymbol \mu})\, = \prod_{k=1}^n d\mu_k.
\end{eqnarray}
In the above formulae, we reinstated $\beta>0$; it will be set to
$\beta=2$ when needed.
The choice of $R(\mu)$ is dictated by the problem in question and,
hence, is not unique. If one is interested in studying matrix
integrals as functions of the parameters $\{c_1,\cdots,c_{2r}\}$
defining the integration domain ${\cal D}$, a suitable choice of
$R(\mu)$ is
\begin{eqnarray}
R(\mu) = f(\mu).
\end{eqnarray}
In this case, the differential operator (Adler, Shiota and van Moerbeke 1995)
\begin{eqnarray} \label{c-op}
\sum_{i=1}^{2r}
R(c_i^\prime)\, c_i^{\prime\, {q+1}} \frac{\partial}{\partial c_i^\prime}
\end{eqnarray}
becomes an essential part of the Virasoro constraints. In the
context of characteristic polynomials, the integration domain ${\cal
D}$ is normally fixed whilst the {\it physical parameters}
$\{\varsigma_\alpha\}$ are allowed to vary. This prompts the choice
\begin{eqnarray}\label{R}
R(\mu) = f(\mu)
\prod_{k=1}^{\varrho} (\mu - c_k^\prime),\;\;\;
\varrho ={\rm dim}({\boldsymbol c}^\prime)
\end{eqnarray}
that nullifies the differential operator Eq.~(\ref{c-op}).
Equivalently,
\begin{eqnarray}
\label{rmu}
R(\mu) = f(\mu)\,\sum_{k=0}^{\varrho} \mu^k
s_{\varrho - k}(-{\boldsymbol p}_{\varrho}
(\boldsymbol{c^\prime})
).
\end{eqnarray}
Here, the notation $s_k(-{\boldsymbol p}_{\varrho} (\boldsymbol{c^\prime}))$ stands for the Schur
polynomial and ${\boldsymbol p}_{\varrho}(\boldsymbol{c^\prime})$ is the infinite-dimensional vector
\begin{eqnarray} \label{b909}
{\boldsymbol p}_\varrho(\boldsymbol{c^\prime})=\left(
{\rm tr}_\varrho(\boldsymbol{c^\prime}), \frac{1}{2} {\rm tr}_\varrho(\boldsymbol{c^\prime})^2,\cdots,
\frac{1}{k} {\rm tr}_\varrho(\boldsymbol{c^\prime})^k,\cdots
\right)
\end{eqnarray}
with
\begin{eqnarray}
{\rm tr}_\varrho(\boldsymbol{c^\prime})^k =
\sum_{j=1}^{\varrho} (c_j^\prime)^k.
\end{eqnarray}
{\it Remark.} Equation (\ref{R}) assumes that none of the $c_k^\prime$ are zeros of
$f(\mu)$. If this is not the case, the set ${\boldsymbol c^\prime}$ must be redefined:
\begin{eqnarray} \label{c-redef}
{\boldsymbol c^\prime} \rightarrow {\boldsymbol c^\prime} \setminus \{{\cal Z}_0\},
\end{eqnarray}
where ${\cal Z}_0$ comprises the zeros
of $f(\mu)$.
\newline\newline
Substituting Eqs.~(\ref{rmu}), (\ref{fg}) and (\ref{vt-def}) into
Eq.~(\ref{vc-1}), we derive:
\begin{eqnarray} \fl
\label{vc-2}
\int_{{\cal D}^n} (d{\boldsymbol \mu})\,
\sum_{k=0}^{\varrho}
s_{\varrho-k}(-{\boldsymbol p}_{\varrho} (\boldsymbol{c^\prime})) \Bigg[
\sum_{i=0}^\infty a_i \, \Bigg( \frac{\beta}{2}
\sum_{j=0}^{q+k+i} {\rm tr}_n ({\boldsymbol \mu}^j) \, {\rm tr}_n ({\boldsymbol
\mu}^{q+k+i-j}) \nonumber \\
\fl
\qquad +\left( 1- \frac{\beta}{2} \right) \, (i+k+q+1) \, {\rm tr}_n ({\boldsymbol \mu}^{q+k+i})
+ \sum_{j=0}^{\infty} jt_j {\rm tr}_n ({\boldsymbol
\mu}^{q+k+i+j}) \nonumber\\ \fl
\qquad + \sum_{\alpha=1}^p \kappa_\alpha \sum_{m=0}^{q+k+i}
\varsigma_{\alpha}^m \,{\rm tr}_n ({\boldsymbol \mu}^{q+k+i-m})
- \sum_{\alpha=1}^p \varsigma_\alpha^{q+k+i+1} \frac{\partial}{\partial \varsigma_\alpha}
\Bigg) \Bigg]\, I_n^{(s)}({\boldsymbol t}; {\boldsymbol \mu}, {\boldsymbol \varsigma}) \nonumber \\
\fl \qquad = \int_{{\cal D}^n} (d{\boldsymbol \mu})\,
\sum_{k=0}^{\varrho}
s_{\varrho-k}(-{\boldsymbol p}_{\varrho} (\boldsymbol{c^\prime}))
\sum_{i=0}^\infty b_i \, {\rm tr}_n ({\boldsymbol
\mu}^{q+k+i+1})\, I_n^{(s)}({\boldsymbol t}; {\boldsymbol \mu}, {\boldsymbol \varsigma}).
\end{eqnarray}
The ${\boldsymbol \varsigma}$-dependent part in Eq.~(\ref{vc-2}),
\begin{eqnarray} \fl \label{vc-3}
\int_{{\cal D}^n} (d{\boldsymbol \mu})\,
\sum_{k=0}^{\varrho}
s_{\varrho-k}(-{\boldsymbol p}_{\varrho} (\boldsymbol{c^\prime}))
\sum_{i=0}^\infty a_i \, \nonumber \\
\fl \qquad \times
\left( \sum_{\alpha=1}^p \kappa_\alpha \sum_{m=0}^{q+k+i}
\varsigma_{\alpha}^m \,{\rm tr}_n ({\boldsymbol \mu}^{q+k+i-m})
- \sum_{\alpha=1}^p \varsigma_\alpha^{q+k+i+1} \frac{\partial}{\partial
\varsigma_\alpha} \right) \, I_n^{(s)}({\boldsymbol t}; {\boldsymbol \mu}, {\boldsymbol \varsigma}),
\end{eqnarray}
originates from the term
\begin{eqnarray}
\label{vc-piece}
- \int_{{\cal D}^n} (d{\boldsymbol \mu})\, \left( \sum_{\ell=1}^n \mu_\ell^{q+1} R(\mu_\ell) \sum_{\alpha=1}^p \frac{\kappa_\alpha}{\varsigma_\alpha - \mu_\ell}
\right) \, I_n^{(s)}({\boldsymbol t}; {\boldsymbol \mu}, {\boldsymbol \varsigma})
\end{eqnarray}
in Eq.~(\ref{vc-1}). Indeed, substituting Eqs.~(\ref{rmu}) and
(\ref{fg}) into Eq.~(\ref{vc-piece}), the latter reduces to
\begin{eqnarray} \fl
\int_{{\cal D}^n} (d{\boldsymbol \mu})\, \sum_{k=0}^{\varrho}
s_{\varrho-k}(-{\boldsymbol p}_{\varrho} (\boldsymbol{c^\prime}))
\sum_{i=0}^\infty a_i \, \left( \sum_{\alpha=1}^p \kappa_\alpha
\sum_{\ell=1}^n \frac{ \mu_\ell^{q+k+i+1}}{ \mu_\ell - \varsigma_\alpha }
\right) \, I_n^{(s)}({\boldsymbol t}; {\boldsymbol \mu}, {\boldsymbol \varsigma}).
\end{eqnarray}
The double sum in parentheses can conveniently be divided into two
pieces,
\begin{eqnarray}
\label{c-1}
\sum_{\alpha=1}^p \kappa_\alpha
\sum_{\ell=1}^n \frac{ \mu_\ell^{q+k+i+1}- \varsigma_\alpha^{q+k+i+1}}{ \mu_\ell - \varsigma_\alpha }
\end{eqnarray}
and
\begin{eqnarray}
\label{c-2}
\sum_{\alpha=1}^p \kappa_\alpha
\sum_{\ell=1}^n \frac{ \varsigma_\alpha^{q+k+i+1}}{ \mu_\ell - \varsigma_\alpha
}.
\end{eqnarray}
Due to the identities
\begin{eqnarray}
\sum_{\ell=1}^n \frac{ \mu_\ell^{q+k+i+1}- \varsigma_\alpha^{q+k+i+1}}{ \mu_\ell - \varsigma_\alpha
} = \sum_{m=0}^{q+k+i}
\varsigma_{\alpha}^m \,{\rm tr}_n ({\boldsymbol \mu}^{q+k+i-m})
\end{eqnarray}
and
\begin{eqnarray}
\kappa_\alpha
\sum_{\ell=1}^n \frac{ 1}{ \mu_\ell - \varsigma_\alpha
}\, I_n^{(s)}({\boldsymbol t}; {\boldsymbol \mu}, {\boldsymbol \varsigma}) = - \frac{\partial}{\partial \varsigma_\alpha} I_n^{(s)}({\boldsymbol t}; {\boldsymbol \mu}, {\boldsymbol
\varsigma}),
\end{eqnarray}
we conclude that Eq.~(\ref{vc-piece}) reduces to the sought-after
Eq.~(\ref{vc-3}). It is convenient to rewrite the
$\partial/\partial \varsigma_\alpha$-term in Eq.~(\ref{vc-3}) in a
more compact way,
\begin{eqnarray} \fl \label{vc-4}
\int_{{\cal D}^n} (d{\boldsymbol \mu})\,
\sum_{k=0}^{\varrho}
s_{\varrho-k}(-{\boldsymbol p}_{\varrho} (\boldsymbol{c^\prime}))
\sum_{i=0}^\infty a_i \left( \sum_{\alpha=1}^p \varsigma_\alpha^{q+k+i+1} \frac{\partial}{\partial
\varsigma_\alpha}\right) \, I_n^{(s)}({\boldsymbol t}; {\boldsymbol \mu}, {\boldsymbol \varsigma})\nonumber\\
\qquad \qquad \qquad \qquad \qquad= {\hat {\cal B}}_q^V({\boldsymbol \varsigma}) \int_{{\cal D}^n} (d{\boldsymbol
\mu})\, I_n^{(s)}({\boldsymbol t}; {\boldsymbol \mu}, {\boldsymbol \varsigma})
\end{eqnarray}
with the differential operator ${\hat {\cal B}}_q^V({\boldsymbol \varsigma})$
being
\begin{eqnarray}
\label{bq-rep}
{\hat {\cal B}}_q^V ({\boldsymbol \varsigma}) = \sum_{\alpha=1}^p
\left( \prod_{k=1}^{\varrho} (\varsigma_\alpha - c_k^\prime) \right)
f(\varsigma_\alpha) \,
\varsigma_{\alpha}^{q+1} \frac{\partial}{\partial
\varsigma_\alpha}.
\end{eqnarray}
Equation (\ref{vc-4}) follows from the expansions Eqs.~(\ref{fg}),
(\ref{R}) and (\ref{rmu}).
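\newline\newline
{\it Remark.}---The identity used above to split Eq.~(\ref{vc-piece}) is just a telescoping (geometric-sum) identity. A minimal symbolic check for $n=2$ eigenvalues and exponent $3$ (Python with {\tt sympy}; the snippet and its variable names are ours, for illustration only) reads:
\begin{verbatim}
# Check  sum_l (mu_l^{m+1} - s^{m+1})/(mu_l - s)
#      = sum_{j=0}^{m} s^j tr_n(mu^{m-j})   for n = 2, m = 3.
import sympy as sp

mu1, mu2, s = sp.symbols('mu1 mu2 s')
mus, m = [mu1, mu2], 3

lhs = sum((mu**(m + 1) - s**(m + 1)) / (mu - s) for mu in mus)
rhs = sum(s**j * sum(mu**(m - j) for mu in mus) for j in range(m + 1))
assert sp.simplify(lhs - rhs) == 0
\end{verbatim}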
To complete the derivation of the Virasoro constraints, we further
notice that the terms ${\rm tr}_n ({\boldsymbol \mu}^j)$ in Eq.~(\ref{vc-2})
can be generated by differentiating $I_n^{(s)}$ over $t_j$. Since
${\rm tr}_n ({\boldsymbol \mu}^0)=n$, the derivative $\partial/\partial
t_0$ should formally be understood as multiplication by $n$,
$\partial/\partial t_0 \equiv n$. This observation yields the Virasoro
constraints for the $\tau$ function
\begin{eqnarray}
\tau_{n}^{(s)}({\boldsymbol \varsigma},{\boldsymbol \kappa}; {\boldsymbol t}) = \frac{1}{n!}
\int_{{\mathcal D}^{n}} (d{\boldsymbol \mu})\, I_n^{(s)}({\boldsymbol t}; {\boldsymbol \mu}, {\boldsymbol \varsigma})
\end{eqnarray}
in the form ($q\ge -1$)
\begin{equation}
\label{2-Vir-repp}
\left[ \hat{{\cal L}}_{q}^V({\boldsymbol t}) + \hat{{\cal L}}_q^{\rm det}({\boldsymbol \varsigma};{\boldsymbol t})
\right] \tau_n^{(s)}({\boldsymbol \varsigma},{\boldsymbol \kappa};{\boldsymbol t})
={\hat {\cal B}}_q^V ({\boldsymbol \varsigma})\,\tau_n^{(s)}({\boldsymbol \varsigma},{\boldsymbol \kappa};{\boldsymbol
t}).
\end{equation}
Here, the differential operators
\begin{eqnarray} \fl
\label{vLv-repp}
\hat{{\cal L}}_{q}^V({\boldsymbol t}) = \sum_{\ell = 0}^\infty
\sum_{k=0}^{\varrho} s_{\varrho-k}(-{\boldsymbol p}_{\varrho} (\boldsymbol{c^\prime}))
\left(
a_\ell \hat{\cal L}_{q+k+\ell}^{(\beta)}({\boldsymbol t}) - b_\ell \frac{\partial}{\partial t_{q+k+\ell+1}}
\right)
\end{eqnarray}
and
\begin{eqnarray} \fl
\label{vLG-repp}
\hat{{\cal L}}_{q}^{\rm det}({\boldsymbol \varsigma};{\boldsymbol t}) = \sum_{\ell = 0}^\infty
a_\ell
\sum_{k=0}^{\varrho} s_{\varrho-k}(-{\boldsymbol p}_{\varrho} (\boldsymbol{c^\prime}))
\sum_{m=0}^{q+k+\ell} \left(\sum_{\alpha=1}^p \kappa_\alpha\,\varsigma_\alpha^m\right)
\frac{\partial}{\partial t_{q+k+\ell-m}}
\end{eqnarray}
act in the ${\boldsymbol t}$-space whilst the differential operator ${\hat
{\cal B}}_q^V ({\boldsymbol \varsigma})$ acts in the space of {\it physical
parameters} $\{\varsigma_\alpha\}_{\alpha=1}^{p}$. Notice
that the operator $\hat{{\cal L}}_{q}^V({\boldsymbol t})$ is expressed in
terms of the Virasoro operators
\begin{eqnarray} \fl
\label{vo-repp}
\hat{{\cal L}}_q^{(\beta)}({\boldsymbol t}) = \sum_{j=1}^\infty jt_j \,\frac{\partial}{\partial t_{q+j}}
+ \frac{\beta}{2}
\sum_{j=0}^q \frac{\partial^2}{\partial {t_j}\partial {t_{q-j}}} + \left(
1 -\frac{\beta}{2}
\right)(q+1)\frac{\partial}{\partial t_q},
\end{eqnarray}
obeying the Virasoro algebra
\begin{eqnarray}
\label{va-repp} [\hat{{\cal L}}_p^{(\beta)},\hat{{\cal
L}}_q^{(\beta)}] = (p-q)\hat{{\cal L}}_{p+q}^{(\beta)}, \;\;\;
p,q\ge -1.
\end{eqnarray}
The Virasoro constraints derived in this section remain valid for
arbitrary $\beta>0$; for $\beta=2$, they reduce to
Eqs.~(\ref{2-Vir}) -- (\ref{va}) announced in Sec.~\ref{Sec-2}.
\newline\newline\noindent
This concludes the derivation of the three main ingredients of the
integrable theory of average characteristic polynomials -- the
bilinear identity, the integrable hierarchies emanating from it, and the Virasoro constraints.
\section{From $\tau$ Functions to Characteristic Polynomials}
\label{Sec-4}
The general calculational scheme formulated in Section \ref{Sec-2} and detailed in Section \ref{Sec-3} applies to a variety of random matrix ensembles. In this Section, we deal with CFCP for the Gaussian Unitary Ensemble (GUE) and the Laguerre Unitary Ensemble (LUE). A detailed treatment of the GUE case is needed to lay the groundwork for the comparative analysis of the three variations of the replica approach that will be presented in Section \ref{Sec-5}. The study of the LUE, relevant to QCD physics (Verbaarschot 2010), is included for didactic purposes. A brief exposition of the theory for the Jacobi Unitary Ensemble (JUE) and the Cauchy Unitary Ensemble (CyUE), appearing in the context of universal aspects of quantum transport through chaotic cavities (Beenakker 1997), can be found in Appendices~\ref{App-JUE}~and~\ref{App-CyUE}.
\subsection{Gaussian Unitary Ensemble (GUE)}
The correlation function of characteristic polynomials in GUE is defined by the $n$-fold integral
\begin{eqnarray}\fl
\label{rpf-gue}
\Pi_{n|p}^{\rm G}({\boldsymbol {\varsigma}};{\boldsymbol \kappa}) =\frac{1}{{\cal N}_n^{\rm G}} \int_{{\mathbb R}^{n}} \prod_{j=1}^{n}
\left( d\lambda_j \, e^{-\lambda_j^2}\prod_{\alpha=1}^p (\varsigma_\alpha - \lambda_j)^{\kappa_\alpha} \right)
\cdot
\Delta_{n}^2({\boldsymbol \lambda})
\end{eqnarray}
where
\begin{eqnarray}
\label{N-gue} \fl
{\cal N}_n^{\rm G} = \int_{{\mathbb R}^{n}} \prod_{j=1}^{n}
\left( d\lambda_j \, e^{-\lambda_j^2} \right)
\cdot
\Delta_{n}^2({\boldsymbol \lambda}) = \pi^{n/2} 2^{-n(n-1)/2}\prod_{j=1}^n \Gamma(j+1)
\end{eqnarray}
is the normalisation constant. The associated $\tau$ function equals
\begin{eqnarray}\fl
\label{tau-gue}
\tau_{n}^{\rm G}({\boldsymbol {\varsigma}},{\boldsymbol \kappa};{\boldsymbol t}) =\frac{1}{n!} \int_{{\mathbb R}^{n}} \prod_{j=1}^{n}
\left( d\lambda_j \, e^{-\lambda_j^2 + v({\boldsymbol t};\lambda_j)}
\prod_{\alpha=1}^p (\varsigma_\alpha - \lambda_j)^{\kappa_\alpha} \right)
\cdot
\Delta_{n}^2({\boldsymbol \lambda}),
\end{eqnarray}
see Section \ref{Sec-2}. (In the above definitions, the
superscript ${\rm G}$ stands for GUE; it will be omitted in what follows
whenever notational confusion is unlikely to arise.)
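\newline\newline
{\it Remark.}---The normalisation constant Eq.~(\ref{N-gue}) is easily verified for small $n$; e.g., at $n=2$ it equals $\pi$. A symbolic spot-check (Python with {\tt sympy}; an illustrative sketch in our notation):
\begin{verbatim}
# n = 2: integrate e^{-l1^2-l2^2} (l1-l2)^2 over R^2 and compare
# with pi^{n/2} 2^{-n(n-1)/2} Gamma(2) Gamma(3) = pi.
import sympy as sp

l1, l2 = sp.symbols('l1 l2', real=True)
integrand = sp.expand(sp.exp(-l1**2 - l2**2) * (l1 - l2)**2)
val = sp.integrate(integrand, (l1, -sp.oo, sp.oo), (l2, -sp.oo, sp.oo))

n = 2
formula = sp.pi**sp.Rational(n, 2) * 2**sp.Rational(-n*(n - 1), 2) \
          * sp.gamma(2) * sp.gamma(3)
assert sp.simplify(val - formula) == 0   # both equal pi
\end{verbatim}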
\subsubsection{Virasoro constraints}
\noindent\newline\newline In the notation of Section \ref{Sec-3},
the definition Eq.~(\ref{rpf-gue}) implies that
\begin{eqnarray}
f(\lambda)=1 \;\; &\mapsto& \;\;\; a_k=\delta_{k,0},\\
g(\lambda) = 2\lambda\;\;&\mapsto& \;\;\;b_k = 2 \delta_{k,1},\\
{\cal D} = {\mathbb R} \;\; &\mapsto&\;\;\; {\rm dim}(\boldsymbol{c^\prime}) =0.
\end{eqnarray}
This brings the Virasoro constraints Eqs.~(\ref{2-Vir}) --
(\ref{vo}) for the $\tau$ function Eq.~(\ref{tau-gue}):
\begin{equation}
\label{2-Vir-G} \fl
\left[ \hat{\cal L}_{q}({\boldsymbol t}) - 2
\frac{\partial}{\partial t_{q+2}}
+ \sum_{m=0}^{q}
\left(\sum_{\alpha=1}^p \kappa_\alpha\,\varsigma_\alpha^m\right)
\frac{\partial}{\partial t_{q-m}}
\right] \tau_n({\boldsymbol \varsigma},{\boldsymbol \kappa};{\boldsymbol t})
={\hat {\mathcal B}}_{q} \,\tau_n({\boldsymbol \varsigma},{\boldsymbol \kappa};{\boldsymbol t}),
\end{equation}
where
\begin{eqnarray} \label{Bq}
{\hat {\mathcal B}}_q = \sum_{\alpha=1}^p
\varsigma_{\alpha}^{q+1} \frac{\partial}{\partial \varsigma_\alpha},
\end{eqnarray}
and the short-hand notation $\vartheta_m({\boldsymbol \varsigma},{\boldsymbol \kappa})$ is defined as
\begin{eqnarray}
\label{theta-m}
\vartheta_m({\boldsymbol \varsigma},{\boldsymbol \kappa}) = \sum_{\alpha=1}^p \kappa_\alpha \varsigma_\alpha^m,
\end{eqnarray}
so that
\begin{eqnarray}
\label{theta-0}
\vartheta_0({\boldsymbol \varsigma},{\boldsymbol \kappa})
={\rm tr}_p\,\boldsymbol\kappa=\sum_{\alpha=1}^p \kappa_\alpha \equiv \kappa.
\end{eqnarray}
Also, $\hat{\cal L}_{q}({\boldsymbol t})$ is the Virasoro operator given by
Eq.~(\ref{vo}). Notice that the $\tau$ function in
Eq.~(\ref{2-Vir-G}) does not bear the superscript $(s)$ since the
GUE confinement potential $V(\lambda)= \lambda^2$ does not depend on
$n$.
In what follows, we need the three lowest Virasoro constraints,
labeled by $q=-1$, $q=0$ and~$q=+1$. Written for the logarithm of the $\tau$
function, they read:
\begin{equation}
\label{2-Vir-q=-1} \fl
\left(
\sum_{j=2}^\infty jt_j\frac{\partial}{\partial t_{j-1}}
- 2
\frac{\partial}{\partial t_{1}}
\right) \log \tau_n ({\boldsymbol \varsigma},{\boldsymbol \kappa};{\boldsymbol t}) + nt_1
={\hat {\mathcal B}}_{-1}\,\log \tau_n ({\boldsymbol \varsigma},{\boldsymbol \kappa};{\boldsymbol t}),
\end{equation}
\begin{equation}
\label{2-Vir-q=0} \fl
\left(
\sum_{j=1}^\infty jt_j\frac{\partial}{\partial t_{j}}
- 2
\frac{\partial}{\partial t_{2}} \right)\,\log \tau_n ({\boldsymbol \varsigma},{\boldsymbol \kappa};{\boldsymbol t})
+ n \left( n+ \kappa\right)
={\hat {\mathcal B}}_{0}\,\log\tau_n ({\boldsymbol \varsigma},{\boldsymbol \kappa};{\boldsymbol t}),
\end{equation}
\begin{eqnarray}
\label{2-Vir-q=+1} \fl
\left( \sum_{j=1}^\infty jt_j\frac{\partial}{\partial t_{j+1}} - 2
\frac{\partial}{\partial t_{3}}
+
\left(2n+\kappa\right)
\frac{\partial}{\partial t_{1}}
\right) \log\tau_n ({\boldsymbol \varsigma},{\boldsymbol \kappa};{\boldsymbol t}) \nonumber \\
\qquad \qquad\qquad\quad+
n \,\vartheta_1({\boldsymbol \varsigma},{\boldsymbol \kappa})
={\hat {\mathcal B}}_{1} \,\log\tau_n({\boldsymbol \varsigma},{\boldsymbol \kappa};{\boldsymbol t}).
\end{eqnarray}
\subsubsection{Toda Lattice hierarchy}\noindent
\newline\newline
Projection of the Toda Lattice hierarchy Eq.~(\ref{tlh}) for the
${\boldsymbol t}$-dependent $\tau$ function Eq.~(\ref{tau-gue}) onto the
hyperplane ${\boldsymbol t} = {\boldsymbol 0}$ generates the Toda Lattice hierarchy
for the correlation function $\Pi^{\rm G}_{n|p}({\boldsymbol
{\varsigma}};{\boldsymbol \kappa})$ [Eq.~(\ref{rpf-gue})] of the GUE characteristic
polynomials,
\begin{eqnarray}
\label{tau-pi}
\Pi^{\rm G}_{n|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa}) = \frac{n!}{{\cal N}_n^{\rm G}}\,
\tau^{\rm G}_n({\boldsymbol \varsigma},{\boldsymbol \kappa};{\boldsymbol t})\Big|_{{\boldsymbol t}={\rm 0}}.
\end{eqnarray}
Below, the first [Eq.~(\ref{tl-1-expl})] and second
[Eq.~(\ref{tl-2-expl})] equation of the TL hierarchy will be
considered:
\begin{eqnarray} \fl\label{tl-1-equiv}
{\rm TL}_1:\quad \frac{\partial^2}{\partial t_1^2}\log \tau_n({\boldsymbol t})= \frac{\tau_{n+1}({\boldsymbol t}) \,
\tau_{n-1} ({\boldsymbol t})}{\tau_{n}^2({\boldsymbol t})},
\\ \fl
\label{tl-2-equiv} {\rm TL}_2:\quad \frac{\partial^2}{\partial t_1 \partial t_2}
\log \tau_n({\boldsymbol t})
=
\frac{\tau_{n+1}({\boldsymbol t}) \,
\tau_{n-1} ({\boldsymbol t})}{\tau_{n}^2({\boldsymbol t})}
\,
\frac{\partial}{\partial t_1} \log \left(\frac{\tau_{n+1}({\boldsymbol t})}{\tau_{n-1}({\boldsymbol t})}\right).
\end{eqnarray}
The equivalence of Eqs.~(\ref{tl-1-equiv}) and (\ref{tl-2-equiv}) to
Eqs.~(\ref{tl-1-expl}) and (\ref{tl-2-expl}) is easily established.
\noindent\newline\newline (i) To derive the first equation of the TL
hierarchy for $\Pi^{\rm G}_{n|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa})$
from Eqs.~(\ref{tau-pi}) and (\ref{tl-1-equiv}), we have to
determine
$$
\frac{\partial^2}{\partial t_1^2}\log\tau_n({\boldsymbol t})\Big|_{\boldsymbol{t}=0}
$$
with the help of Virasoro constraints. This is achieved in two
steps. First, we differentiate Eq.~(\ref{2-Vir-q=-1}) over $t_1$ and
set ${\boldsymbol t}={\boldsymbol 0}$ afterwards, to derive:
\begin{eqnarray}
\label{d-11}
2 \frac{\partial^2}{\partial t_1^2}\log\tau_n({\boldsymbol t})\Big|_{\boldsymbol{t}=0}
= n -
{\hat {\mathcal B}}_{-1} \,\frac{\partial}{\partial t_1}
\log\tau_n({\boldsymbol t}) \Big|_{\boldsymbol{t}=0}.
\end{eqnarray}
Second, we set ${\boldsymbol t}={\boldsymbol 0}$ in Eq.~(\ref{2-Vir-q=-1}) to
identify the relation
\begin{eqnarray}
\label{d-1}
2 \frac{\partial}{\partial t_1}\log\tau_n({\boldsymbol t})\Big|_{\boldsymbol{t}=0}
= -
{\hat {\mathcal B}}_{-1} \,
\log\tau_n({\boldsymbol 0}).
\end{eqnarray}
Combining Eqs.~(\ref{d-11}) and (\ref{d-1}), we conclude that
\begin{eqnarray}
4 \frac{\partial^2}{\partial t_1^2}\log\tau_n({\boldsymbol t})\Big|_{\boldsymbol{t}=0}
=2 n + {\hat {\mathcal B}}_{-1}^2 \,
\log\tau_n({\boldsymbol 0}).
\end{eqnarray}
Finally, substituting this result back to Eq.~(\ref{tl-1-equiv}),
and taking into account Eqs.~(\ref{tau-pi}) and (\ref{N-gue}), we
end up with the first TL equation
\begin{eqnarray}
\label{gue-TL-1}\fl
\widetilde{{\rm TL}}_1^{\rm G}: \qquad
{\hat {\mathcal B}}_{-1}^2 \,
\log \Pi_{n|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa}) = 2 n\, \left(
\frac{\Pi_{n+1|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa}) \,
\Pi_{n-1|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa})}{\Pi_{n|p}^2({\boldsymbol {\varsigma}};{\boldsymbol \kappa})} -1
\right)
\end{eqnarray}
written in the space of physical parameters ${\boldsymbol \varsigma}$.
\noindent\newline\newline (ii) The second equation of the TL
hierarchy for $\Pi_{n|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa})$ can be
derived along the same lines. Equation (\ref{tl-2-equiv}) suggests
that, in addition to the derivative $\partial/\partial t_1
\log\tau_n$ at ${\boldsymbol t}={\boldsymbol 0}$ given by Eq.~(\ref{d-1}), one needs
to know the mixed derivative
$$
\frac{\partial^2}{\partial t_1\partial t_2}\log\tau_n({\boldsymbol t})\Big|_{\boldsymbol{t}=0}.
$$
It can be calculated by combining Eq.~(\ref{2-Vir-q=-1})
differentiated over $t_2$ with Eqs.~(\ref{2-Vir-q=0}) and
(\ref{d-1}). The result reads:
\begin{eqnarray}\label{gue-TL-2} \fl
\widetilde{{\rm TL}}_2^{\rm G}: \qquad
(1 - {\hat {\mathcal B}}_{0}) {\hat {\mathcal B}}_{-1} \,
\log \Pi_{n|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa}) \nonumber \\
=
n\,
\frac{\Pi_{n+1|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa}) \,
\Pi_{n-1|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa})}{\Pi_{n|p}^2({\boldsymbol {\varsigma}};{\boldsymbol \kappa})}
\,{\hat {\mathcal B}}_{-1}
\log \left(
\frac{\Pi_{n+1|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa})}{\Pi_{n-1|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa})}
\right).
\end{eqnarray}
\newline\newline\noindent
Higher order equations of the TL hierarchy for the correlation
functions $\Pi_{n|p}$ can be derived in a similar fashion.
\newline\newline
{\it Remark.}---For $p=1$, the equations $\widetilde{{\rm
TL}}_1^{\rm G}$ and $\widetilde{{\rm TL}}_2^{\rm G}$ become
particularly simple:
\begin{eqnarray}
\label{TL-1-cc} \fl
\widetilde{{\rm TL}}_1^{\rm G}: \quad
\frac{\partial^2}{\partial \varsigma^2}\,
\log {\Pi}_{n}(\varsigma;\kappa) = 2 n\,\left(
\frac{{\Pi}_{n+1}(\varsigma;\kappa) \,
{ \Pi}_{n-1}(\varsigma;\kappa)}{{ \Pi}_{n}^2(\varsigma;\kappa)}-1\right), \\
\fl \label{TL-2-cc}
\widetilde{{\rm TL}}_2^{\rm G}: \quad
\left(1- \varsigma \frac{\partial}{\partial \varsigma} \right) \frac{\partial}{\partial \varsigma}\,
\log {\Pi}_{n}(\varsigma; \kappa)
=
n\,
\frac{{\Pi}_{n+1}(\varsigma;\kappa) \,
{\Pi}_{n-1}(\varsigma;\kappa)}{{\Pi}_{n}^2(\varsigma;\kappa)}
\,\frac{\partial}{\partial \varsigma}
\log \left(
\frac{{\Pi}_{n+1}(\varsigma;\kappa)}{{\Pi}_{n-1}(\varsigma;\kappa)}
\right). \nonumber \\
{}
\end{eqnarray}
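The $p=1$ equations lend themselves to a direct test for small $n$ and integer $\kappa$, where the defining integrals Eq.~(\ref{rpf-gue}) are elementary Gaussian moments. A minimal sketch of such a test for $\widetilde{{\rm TL}}_1^{\rm G}$ at $n=1$, $\kappa=2$ (Python with {\tt sympy}; $\Pi_0\equiv 1$; illustrative only):
\begin{verbatim}
# Test  d^2/ds^2 log Pi_1 = 2 (Pi_2 Pi_0 / Pi_1^2 - 1)  at kappa = 2.
import sympy as sp

lam, mu, s = sp.symbols('lam mu s', real=True)
N1, N2 = sp.sqrt(sp.pi), sp.pi        # normalisations for n = 1, 2

Pi0 = sp.Integer(1)
Pi1 = sp.integrate(sp.expand(sp.exp(-lam**2) * (s - lam)**2),
                   (lam, -sp.oo, sp.oo)) / N1
Pi2 = sp.integrate(sp.expand(sp.exp(-lam**2 - mu**2) * (s - lam)**2
                             * (s - mu)**2 * (lam - mu)**2),
                   (lam, -sp.oo, sp.oo), (mu, -sp.oo, sp.oo)) / N2

lhs = sp.diff(sp.log(Pi1), s, 2)
rhs = 2 * (Pi2 * Pi0 / Pi1**2 - 1)
assert sp.simplify(lhs - rhs) == 0    # Pi_1 = s^2 + 1/2, Pi_2 = s^4 + 3/4
\end{verbatim}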
\subsubsection{KP hierarchy and Painlev\'e IV equation}
\noindent
\newline\newline
The technology used in the previous subsection can equally be
employed to project the KP hierarchy Eq.~(\ref{kph}) onto the
hyperplane ${\boldsymbol t}={\boldsymbol 0}$. Below, only the first KP equation
\begin{eqnarray} \fl \label{kp1-exp-rep}
{\rm KP}_1:\quad \left(
\frac{\partial^4}{\partial t_1^4} + 3 \frac{\partial^2}{\partial t_2^2}
- 4 \frac{\partial^2}{\partial t_1 \partial t_3}
\right)\log\, \tau_n({\boldsymbol t}) + 6 \left(
\frac{\partial^2}{\partial t_1^2} \log\, \tau_n({\boldsymbol t})
\right)^2 = 0
\end{eqnarray}
will be treated. Notice that no superscript $(s)$ appears in
Eq.~(\ref{kp1-exp-rep}) as the GUE confinement potential does not
depend on the matrix size $n$. Proceeding along the lines of the
previous subsection, we make use of the three Virasoro constraints
Eqs.~(\ref{2-Vir-q=-1}) -- (\ref{2-Vir-q=+1}) to derive:
\begin{eqnarray} \fl \label{kp-e1}
16 \frac{\partial^4}{\partial t_1^4}\log\tau_n({\boldsymbol t})\Big|_{\boldsymbol{t}=0}
={\hat {\mathcal B}}_{-1}^4 \,
\log\tau_n({\boldsymbol 0}), \\ \fl \label{kp-e2}
4 \frac{\partial^2}{\partial t_2^2}\log\tau_n({\boldsymbol t})\Big|_{\boldsymbol{t}=0}
=2n\left( n+ \kappa \right)- ( 2 - {\hat {\mathcal B}}_{0} ){\hat {\mathcal B}}_{0} \,
\log\tau_n({\boldsymbol 0}), \\ \fl \label{kp-e3}
4 \frac{\partial^2}{\partial t_1 \partial t_3}\log\tau_n({\boldsymbol t})\Big|_{\boldsymbol{t}=0}
=n\left( 3 n+ 2 \kappa \right) \nonumber\\ - \left(
{\hat {\mathcal B}}_{0} - {\hat {\mathcal B}}_{1} {\hat {\mathcal B}}_{-1} -
\frac{1}{2} \left(
2 n+ \kappa
\right) {\hat {\mathcal B}}_{-1}^2
\right) \,
\log\tau_n({\boldsymbol 0}).
\end{eqnarray}
Substitution of Eqs.~(\ref{kp-e1}) -- (\ref{kp-e3}) and (\ref{d-11})
into Eq.~(\ref{kp1-exp-rep}) generates a closed nonlinear
differential equation for $\log \Pi_{n|p}({\boldsymbol \varsigma};{\boldsymbol
\kappa})$ in the form
\begin{eqnarray} \fl \label{ch-gue}
\widetilde{{\rm KP}}_1^{\rm G}: \quad
\left[
{\hat {\mathcal B}}_{-1}^4 + 8 (n-\kappa) {\hat {\mathcal B}}_{-1}^2
-4 ( 2{\hat {\mathcal B}}_{0} - 3 {\hat {\mathcal B}}_0^2 + 4 {\hat {\mathcal B}}_1 {\hat {\mathcal B}}_{-1} )
\right] \, \log \Pi_{n|p}({\boldsymbol \varsigma};{\boldsymbol \kappa}) \nonumber\\
\qquad \qquad\qquad\qquad+ 6 \left(
{\hat {\mathcal B}}_{-1}^2 \log \Pi_{n|p}({\boldsymbol \varsigma};{\boldsymbol \kappa})
\right)^2 = 8 n \kappa.
\end{eqnarray}
Notice that the appearance of the single parameter $\kappa$ instead of
the entire set ${\boldsymbol \kappa} = (\kappa_1,\dots,\kappa_p)$ in
Eq.~(\ref{ch-gue}) indicates that correlation functions
$\Pi_{n|p}({\boldsymbol \varsigma};{\boldsymbol \kappa})$ with different ${\boldsymbol
\kappa}$ but identical traces ${\rm tr}_p\,{\boldsymbol \kappa}$
satisfy the very same equation. It is the boundary conditions
\footnote[2]{Indeed, the boundary conditions at infinity,
$$
\Pi_{n|p}({\boldsymbol \varsigma};{\boldsymbol \kappa})\Big|_{|\varsigma_\alpha|\rightarrow \infty}
\sim \prod_{\alpha=1}^p \varsigma_\alpha^{n\kappa_\alpha},
$$
do distinguish between the correlation functions characterised by
different ${\boldsymbol \kappa}$'s, see Eq.~(\ref{rpf-gue}).} that pick out
the correct solution for the given set ${\boldsymbol \kappa} =
(\kappa_1,\dots,\kappa_p)$.
\newline\newline
{\it Remark.}---For $p=1$, the above equation reads:
\begin{eqnarray} \fl \label{ch-0}
\widetilde{{\rm KP}}_1^{\rm G}: \quad
\left[ \frac{\partial^4}{\partial \varsigma^4}
+ 4 \left[ 2(n-\kappa) - \varsigma^2\right]\frac{\partial^2}{\partial \varsigma^2}
+ 4 \varsigma \frac{\partial}{\partial \varsigma}
\right] \log \Pi_{n}(\varsigma;\kappa) \nonumber\\
\qquad \qquad\qquad\qquad
+ 6 \left(
\frac{\partial^2}{\partial \varsigma^2} \log \Pi_{n}( \varsigma;\kappa)
\right)^2 = 8 n \kappa.
\end{eqnarray}
This can be recognised as the Chazy I equation (see Appendix
\ref{App-chazy})
\begin{eqnarray}
\label{ek-ch-1}
\varphi^{\prime\prime\prime} + 6(\varphi^\prime)^2 + 4 \left[ 2(n-\kappa) - \varsigma^2 \right]\varphi^\prime
+ 4 \varsigma \varphi - 8n\kappa=0,
\end{eqnarray}
where
\begin{eqnarray}
\label{phi-def}
\varphi (\varsigma) =
\frac{\partial}{\partial \varsigma} \log \Pi_{n}\left( \varsigma;\kappa\right).
\end{eqnarray}
Equation (\ref{ek-ch-1}) can further be reduced to the fourth
Painlev\'e equation in the Jimbo-Miwa-Okamoto $\sigma$ form (Forrester and Witte 2001, Tracy and Widom 1994):
\begin{eqnarray}
\label{phi-piv}\fl
P_{\rm IV}: \qquad
(\varphi^{\prime\prime})^2 - 4 (\varphi - \varsigma \varphi^\prime)^2
+ 4 \varphi^\prime (\varphi^\prime+2n)(\varphi^\prime-2\kappa)=0,
\end{eqnarray}
see Appendix \ref{App-chazy} for more details. The boundary
condition to be imposed at infinity is
\begin{eqnarray}
\varphi(\varsigma)\Big|_{\varsigma\rightarrow \infty} \sim \frac{n\kappa}{\varsigma}\left(
1 + {\cal O}(\varsigma^{-1})
\right).
\end{eqnarray}
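{\it Remark.}---At $\kappa=1$, the classical Heine formula identifies $\Pi_n(\varsigma;1)$ with the monic Hermite polynomial $2^{-n}H_n(\varsigma)$, so that Eq.~(\ref{phi-piv}) can be probed symbolically. A sketch (Python with {\tt sympy}; the identification is standard, the snippet purely illustrative):
\begin{verbatim}
# Check that phi = d/ds log Pi_n(s;1), Pi_n(s;1) = 2^{-n} H_n(s),
# satisfies the sigma-form of Painleve IV at kappa = 1.
import sympy as sp

s = sp.symbols('s')
kappa = 1
for n in (1, 2, 3):
    Pi = sp.hermite(n, s) / 2**n          # monic Hermite polynomial
    phi = sp.diff(sp.log(Pi), s)
    p1 = sp.diff(phi, s)
    PIV = (sp.diff(phi, s, 2)**2 - 4*(phi - s*p1)**2
           + 4*p1*(p1 + 2*n)*(p1 - 2*kappa))
    assert sp.simplify(PIV) == 0
\end{verbatim}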
Equations (\ref{gue-TL-1}), (\ref{gue-TL-2}), (\ref{ch-gue}) and their one-point reductions Eqs.~(\ref{TL-1-cc}), (\ref{TL-2-cc}), (\ref{phi-def}) and (\ref{phi-piv}) are the main results of this subsection. They will play a central r\^ole in the forthcoming analysis of the replica approach to GUE.
\subsection{Laguerre Unitary Ensemble (LUE)}
The correlation function of characteristic polynomials in LUE
is defined by the formula
\begin{eqnarray}\fl
\label{rpf-Lue}
\Pi_{n|p}^{\rm L}({\boldsymbol {\varsigma}};{\boldsymbol \kappa}) =\frac{1}{{\cal N}_n^{\rm L}} \int_{{\mathbb R}_+^{\,n}} \prod_{j=1}^{n}
\left( d\lambda_j \, e^{-\lambda_j}\lambda_j^\nu\,\prod_{\alpha=1}^p (\varsigma_\alpha - \lambda_j)^{\kappa_\alpha} \right)
\cdot
\Delta_{n}^2({\boldsymbol \lambda}),
\end{eqnarray}
where
\begin{eqnarray}
\label{N-Lue} \fl
{\cal N}_n^{\rm L} = \int_{{\mathbb R}_+^{\,n}} \prod_{j=1}^{n}
\left( d\lambda_j \, e^{-\lambda_j} \lambda_j^\nu \right)
\cdot
\Delta_{n}^2({\boldsymbol \lambda}) = \prod_{j=1}^{n} \Gamma(j+1)\Gamma(j+\nu)
\end{eqnarray}
is the normalisation constant, and it is assumed that $\nu>-1$. The associated $\tau$ function equals
\begin{eqnarray}\fl
\label{tau-Lue}
\tau_{n}^{\rm L}({\boldsymbol {\varsigma}},{\boldsymbol \kappa};{\boldsymbol t}) =\frac{1}{n!} \int_{{\mathbb R}_+^{\,n}} \prod_{j=1}^{n}
\left( d\lambda_j \, e^{-\lambda_j + v({\boldsymbol t};\lambda_j)} \lambda_j^\nu
\prod_{\alpha=1}^p (\varsigma_\alpha - \lambda_j)^{\kappa_\alpha} \right)
\cdot
\Delta_{n}^2({\boldsymbol \lambda}).
\end{eqnarray}
In the above definitions, the superscript ${\rm L}$ stands for LUE but it will be omitted from now on.
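\newline\newline
{\it Remark.}---As in the GUE case, the normalisation constant Eq.~(\ref{N-Lue}) can be spot-checked at $n=2$ with symbolic $\nu$ (Python with {\tt sympy}; an illustrative sketch):
\begin{verbatim}
# n = 2: integrate e^{-l1-l2} (l1 l2)^nu (l1-l2)^2 over R_+^2 and
# compare with Gamma(2) Gamma(1+nu) Gamma(3) Gamma(2+nu).
import sympy as sp

nu = sp.symbols('nu', positive=True)   # nu > -1 suffices in general
l1, l2 = sp.symbols('l1 l2', positive=True)

integrand = sp.expand(sp.exp(-l1 - l2) * l1**nu * l2**nu
                      * (l1 - l2)**2)
val = sp.integrate(integrand, (l1, 0, sp.oo), (l2, 0, sp.oo))

formula = sp.gamma(2)*sp.gamma(1 + nu)*sp.gamma(3)*sp.gamma(2 + nu)
assert sp.simplify(val - formula) == 0
\end{verbatim}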
\subsubsection{Virasoro constraints}
\noindent\newline\newline In the notation of Section \ref{Sec-3},
the definition Eq.~(\ref{rpf-Lue}) implies that \footnote[4]{Notice that ${\rm dim}(\boldsymbol{c^\prime}) =0$ follows from Eq.~(\ref{c-redef}) in which ${\cal Z}_0 = \{0\}$.
}
\begin{eqnarray}
f(\lambda)=\lambda \;\; &\mapsto& \;\;\; a_k=\delta_{k,1},\\
g(\lambda) = \lambda-\nu\;\;&\mapsto& \;\;\;b_k = -\nu \delta_{k,0} + \delta_{k,1},\\
{\cal D} = {\mathbb R}_+ \;\; &\mapsto&\;\;\; {\rm dim}(\boldsymbol{c^\prime}) =0.
\end{eqnarray}
This brings the following Virasoro constraints Eqs.~(\ref{2-Vir}) --
(\ref{vo}) for the $\tau$ function Eq.~(\ref{tau-Lue}):
\begin{eqnarray}
\label{2-Vir-L}\fl
\left[
\hat{\cal L}_{q}({\boldsymbol t}) +\nu \frac{\partial}{\partial t_{q}}- \frac{\partial}{\partial t_{q+1}} + \sum_{m=0}^{q}
\vartheta_m({\boldsymbol \varsigma},{\boldsymbol \kappa})
\frac{\partial}{\partial t_{q-m}}
\right] \tau_n({\boldsymbol \varsigma},{\boldsymbol \kappa};{\boldsymbol t})
={\hat {\mathcal B}}_{q} \,\tau_n({\boldsymbol \varsigma},{\boldsymbol \kappa};{\boldsymbol
t}),
\end{eqnarray}
where ${\hat {\mathcal B}}_q$ is defined by Eq.~(\ref{Bq}) and $\hat{\cal L}_{q}({\boldsymbol t})$ is the Virasoro operator given by Eq.~(\ref{vo}).
In what follows, we need the three lowest Virasoro constraints for
$q=0$, $q=1$ and~$q=2$. Written for $\log \tau_n({\boldsymbol \varsigma},{\boldsymbol
\kappa};{\boldsymbol t})$, they read:
\begin{eqnarray}
\label{2-Vir-q=-1L} \fl
\left(
\sum_{j=1}^\infty jt_j\frac{\partial}{\partial t_{j}}
-
\frac{\partial}{\partial t_{1}} \right)\,\log \tau_n ({\boldsymbol \varsigma},{\boldsymbol \kappa};{\boldsymbol t})
+ n \left(n +\nu +\kappa\right) ={\hat {\mathcal B}}_0 \,\log\tau_n ({\boldsymbol \varsigma},{\boldsymbol \kappa};{\boldsymbol t}),
\end{eqnarray}
\begin{eqnarray}
\label{2-Vir-q=0L} \fl
\left(
\sum_{j=1}^\infty jt_j\frac{\partial}{\partial t_{j+1}}
-
\frac{\partial}{\partial t_{2}} +
\left(
2n + \nu + \kappa \right) \frac{\partial}{\partial t_{1}}
\right) \log\tau_n ({\boldsymbol \varsigma},{\boldsymbol \kappa};{\boldsymbol t}) \nonumber \\
\qquad \qquad\qquad\qquad
+ n \,\vartheta_1({\boldsymbol \varsigma},{\boldsymbol \kappa})
={\hat {\mathcal B}}_1 \,\log\tau_n({\boldsymbol \varsigma},{\boldsymbol \kappa};{\boldsymbol t}),
\end{eqnarray}
\begin{eqnarray}
\label{2-Vir-q=+1L} \fl
\Bigg( \sum_{j=1}^\infty jt_j\frac{\partial}{\partial t_{j+2}} -
\frac{\partial}{\partial t_{3}}
+ \left(
2n + \nu +\kappa \right)
\frac{\partial}{\partial t_{2}} \nonumber\\
\qquad\qquad\qquad +
\,\vartheta_1({\boldsymbol \varsigma},{\boldsymbol \kappa})
\frac{\partial}{\partial t_{1}}+
\frac{\partial^2}{\partial t_{1}^2}
\Bigg) \log\tau_n ({\boldsymbol \varsigma},{\boldsymbol \kappa};{\boldsymbol t}) \nonumber \\\fl
\qquad\qquad\qquad+\left(\frac{\partial}{\partial t_{1}}\log\tau_n ({\boldsymbol \varsigma},{\boldsymbol \kappa};{\boldsymbol t})\right)^2+n\,
\,\vartheta_2({\boldsymbol \varsigma},{\boldsymbol \kappa})
={\hat {\mathcal B}}_2 \,\log\tau_n({\boldsymbol \varsigma},{\boldsymbol \kappa};{\boldsymbol t}).
\end{eqnarray}
\subsubsection{Toda Lattice hierarchy}\noindent
\newline\newline\noindent
To generate the Toda Lattice hierarchy
for the correlation function $\Pi^{\rm L}_{n|p}({\boldsymbol
{\varsigma}};{\boldsymbol \kappa})$ [Eq.~(\ref{rpf-Lue})] of characteristic
polynomials we apply the projection formula
\begin{eqnarray}
\label{tau-pi-LUE}
\Pi^{\rm L}_{n|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa}) = \frac{n!}{{\cal N}_n^{\rm L}}\,
\tau^{\rm L}_n({\boldsymbol \varsigma},{\boldsymbol \kappa};{\boldsymbol t})\Big|_{{\boldsymbol t}={\rm 0}},
\end{eqnarray}
in which the $\tau$ function is defined by Eq.~(\ref{tau-Lue}), to
the first and second equation of the ${\boldsymbol t}$-dependent TL hierarchy:
\begin{eqnarray} \fl\label{tl-1-equiv-LUE}
{\rm TL}_1:\quad \frac{\partial^2}{\partial t_1^2}\log \tau_n({\boldsymbol t})= \frac{\tau_{n+1}({\boldsymbol t}) \,
\tau_{n-1} ({\boldsymbol t})}{\tau_{n}^2({\boldsymbol t})},
\\ \fl
\label{tl-2-equiv-LUE} {\rm TL}_2:\quad \frac{\partial^2}{\partial t_1 \partial t_2}
\log \tau_n({\boldsymbol t})
=
\frac{\tau_{n+1}({\boldsymbol t}) \,
\tau_{n-1} ({\boldsymbol t})}{\tau_{n}^2({\boldsymbol t})}
\,
\frac{\partial}{\partial t_1} \log \left(\frac{\tau_{n+1}({\boldsymbol t})}{\tau_{n-1}({\boldsymbol t})}\right),
\end{eqnarray}
see Eqs.~(\ref{tl-1-equiv}) and (\ref{tl-2-equiv}).
\noindent\newline\newline (i) To derive the first equation of the TL
hierarchy for $\Pi^{\rm L}_{n|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa})$
from Eqs.~(\ref{tau-pi-LUE}) and (\ref{tl-1-equiv-LUE}), we have to
determine
$$
\frac{\partial^2}{\partial t_1^2}\log\tau_n({\boldsymbol t})\Big|_{\boldsymbol{t}=0}
$$
with the help of the Virasoro constraints. First, differentiating Eq.~(\ref{2-Vir-q=-1L}) over $t_1$ and setting ${\boldsymbol t}={\boldsymbol 0}$ afterwards, we obtain:
\begin{eqnarray}
\label{d-11-LUE}
\frac{\partial^2}{\partial t_1^2}\log\tau_n({\boldsymbol t})\Big|_{\boldsymbol{t}=0}
= ( 1 -
{\hat {\mathcal B}}_{0}) \,\frac{\partial}{\partial t_1}
\log\tau_n({\boldsymbol t}) \Big|_{\boldsymbol{t}=0}.
\end{eqnarray}
Second, we set ${\boldsymbol t}={\boldsymbol 0}$ in Eq.~(\ref{2-Vir-q=-1L}) to
identify the relation
\begin{eqnarray}
\label{d-1-LUE}
\frac{\partial}{\partial t_1}\log\tau_n({\boldsymbol t})\Big|_{\boldsymbol{t}=0}
= n\left(
n+\nu+\kappa
\right) -
{\hat {\mathcal B}}_{0} \,
\log\tau_n({\boldsymbol 0}).
\end{eqnarray}
Combining Eqs.~(\ref{d-11-LUE}) and (\ref{d-1-LUE}), we conclude that
\begin{eqnarray}
\frac{\partial^2}{\partial t_1^2}\log\tau_n({\boldsymbol t})\Big|_{\boldsymbol{t}=0}
= n\left(
n+\nu+\kappa
\right) -
{\hat {\mathcal B}}_{0} (
1 - {\hat {\mathcal B}}_{0}
)\,
\log\tau_n({\boldsymbol 0}).
\end{eqnarray}
Finally, substituting this result back to Eq.~(\ref{tl-1-equiv-LUE}),
and taking into account Eqs.~(\ref{tau-pi-LUE}) and (\ref{N-Lue}), we
end up with the first TL equation
\begin{eqnarray}\label{TL-1-LUE}\fl
\widetilde{{\rm TL}}_1^{\rm L}: \qquad
{\hat {\mathcal B}}_{0} ({\hat {\mathcal B}}_{0}-1) \,
\log \Pi_{n|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa}) \nonumber\\
\quad= n\,(n +\nu) \left(
\frac{\Pi_{n+1|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa}) \,
\Pi_{n-1|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa})}{\Pi_{n|p}^2({\boldsymbol {\varsigma}};{\boldsymbol \kappa})} -1
\right) - n\kappa
\end{eqnarray}
written in the space of physical parameters ${\boldsymbol \varsigma}$.
Notice that the above equation becomes more symmetric if written for the correlation function
\begin{eqnarray}
\label{tilde-Pi-LUE}
\tilde{\Pi}_{n|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa}) = \Pi_{n|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa})\,\prod_{\alpha=1}^p \varsigma_\alpha^{ - n \kappa_\alpha}.
\end{eqnarray}
The corresponding TL equation reads:
\begin{eqnarray}\label{TL-1-LUE-alt}\fl
\widetilde{\widetilde{{\rm TL}}}_1^{\rm L}: \;
{\hat {\mathcal B}}_{0} ({\hat {\mathcal B}}_{0}-1) \,
\log \tilde{\Pi}_{n|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa})= n\,(n +\nu) \left(
\frac{\tilde{\Pi}_{n+1|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa}) \,
{\tilde \Pi}_{n-1|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa})}{{\tilde \Pi}_{n|p}^2({\boldsymbol {\varsigma}};{\boldsymbol \kappa})} -1
\right).
\end{eqnarray}
\noindent\newline\newline (ii) The second equation of the TL
hierarchy for $\Pi_{n|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa})$ can be
derived along the same lines. Equation (\ref{tl-2-equiv-LUE}) suggests
that, in addition to the derivative $\partial/\partial t_1
\log\tau_n$ at ${\boldsymbol t}={\boldsymbol 0}$ given by Eq.~(\ref{d-1-LUE}), one needs
to know the mixed derivative
$$
\frac{\partial^2}{\partial t_1\partial t_2}\log\tau_n({\boldsymbol t})\Big|_{\boldsymbol{t}=0}.
$$
It can be calculated by combining Eq.~(\ref{2-Vir-q=-1L})
differentiated over $t_2$ with Eqs.~(\ref{2-Vir-q=0L}) and
(\ref{d-1-LUE}). Straightforward calculations yield
\begin{eqnarray}
\frac{\partial^2}{\partial t_1\partial t_2}\log\tau_n({\boldsymbol t})\Big|_{\boldsymbol{t}=0} =
(2-{\hat {\mathcal B}}_{0})
\frac{\partial}{\partial t_2}\log\tau_n({\boldsymbol t})\Big|_{\boldsymbol{t}=0},
\end{eqnarray}
where
\begin{eqnarray} \fl
\frac{\partial}{\partial t_2}\log\tau_n({\boldsymbol t})\Big|_{\boldsymbol{t}=0}
= n
\left[
\left(
2n + \nu +\kappa\right) \left(
n + \nu +\kappa \right) + \,\vartheta_1({\boldsymbol \varsigma},{\boldsymbol \kappa})
\right] \nonumber\\
\qquad\qquad-
\left[
\left(
2n + \nu +\kappa \right) \, {\hat {\mathcal B}}_{0} + {\hat {\mathcal B}}_{1}
\right] \log \tau_n({\boldsymbol 0}).
\end{eqnarray}
The final result reads:
\begin{eqnarray} \label{TL-2-LUE}\fl
\widetilde{{\rm TL}}_2^{\rm L}: \quad
({\hat {\mathcal B}}_{0}-2)\left[
\left(
2 n + \nu + \kappa
\right) {\hat {\mathcal B}}_{0} + {\hat {\mathcal B}}_{1}
\right]\,
\log \Pi_{n|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa}) \nonumber \\
\fl \qquad \qquad=
n(n+\nu)\,
\frac{\Pi_{n+1|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa}) \,
\Pi_{n-1|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa})}{\Pi_{n|p}^2({\boldsymbol {\varsigma}};{\boldsymbol \kappa})} \nonumber\\
\times \left[
2 \left( 2 n +\nu+\kappa \right)
-\,{\hat {\mathcal B}}_{0}
\log \left(
\frac{\Pi_{n+1|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa})}{\Pi_{n-1|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa})}
\right)\right] \nonumber\\
\quad
- 2n \left( 2 n +\nu+ \kappa \right) \left( n +\nu+ \kappa \right) - n\,\vartheta_1({\boldsymbol \varsigma},{\boldsymbol \kappa}).
\end{eqnarray}
This equation takes a more compact form if written for the correlation function $\tilde{\Pi}_{n|p}$ defined by Eq.~(\ref{tilde-Pi-LUE}):
\begin{eqnarray} \label{TL-2-LUE-alt}\fl
\widetilde{\widetilde{{\rm TL}}}_2^{\rm L}: \quad
({\hat {\mathcal B}}_{0}-2)\left[
\left(
2 n + \nu + \kappa
\right) {\hat {\mathcal B}}_{0} + {\hat {\mathcal B}}_{1}
\right]\,
\log \tilde{\Pi}_{n|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa}) \nonumber \\
\fl \qquad \qquad=
n(n+\nu)\,
\frac{\tilde{\Pi}_{n+1|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa}) \,
\tilde{\Pi}_{n-1|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa})}{\tilde{\Pi}_{n|p}^2({\boldsymbol {\varsigma}};{\boldsymbol \kappa})} \nonumber\\
\times \left[
2 (2 n +\nu)
-\,{\hat {\mathcal B}}_{0}
\log \left(
\frac{{\tilde \Pi}_{n+1|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa})}{\tilde{\Pi}_{n-1|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa})}
\right)\right] \nonumber\\
\qquad
- 2n (n+\nu) \left( 2 n +\nu+\kappa \right).
\end{eqnarray}
\subsubsection{KP hierarchy and Painlev\'e V equation}
\noindent\newline\newline
The same technology is at work for projecting the KP hierarchy Eq.~(\ref{kph}) onto ${\boldsymbol t}={\boldsymbol 0}$. Below, only the first KP equation
\begin{eqnarray} \fl \label{kp1-exp-rep-LUE}
{\rm KP}_1:\quad \left(
\frac{\partial^4}{\partial t_1^4} + 3 \frac{\partial^2}{\partial t_2^2}
- 4 \frac{\partial^2}{\partial t_1 \partial t_3}
\right)\log\, \tau_n({\boldsymbol t}) + 6 \left(
\frac{\partial^2}{\partial t_1^2} \log\, \tau_n({\boldsymbol t})
\right)^2 = 0
\end{eqnarray}
will be treated. Notice that no superscript $(s)$ appears in
Eq.~(\ref{kp1-exp-rep-LUE}) as the LUE confinement potential does not
depend on the matrix size $n$. To make the forthcoming calculation more efficient, it is beneficial to introduce the notation
\begin{eqnarray}
\label{T-not}
T_{\ell_1 \ell_2 \dots \ell_k} = \left(
\prod_{j=1}^k \frac{\partial}{\partial t_{\ell_j}}
\right) \log \tau_n({\boldsymbol t})\Bigg|_{{\boldsymbol t}={\boldsymbol 0}},\;\; T = \log \tau_n({\boldsymbol 0}),
\end{eqnarray}
which brings the KP equation Eq.~(\ref{kp1-exp-rep-LUE}) projected onto ${\boldsymbol t}={\boldsymbol 0}$ to the form
\begin{eqnarray} \label{KPT}
T_{1111} + 3 T_{22} - 4T_{13} + 6T_{11}^2=0.
\end{eqnarray}
\noindent\newline
(i) First, we observe that $T_{11}$ and $T_{1111}$ can be determined from the following chain of relations, obtained by repeated differentiation of the first Virasoro constraint Eq.~(\ref{2-Vir-q=-1L}) with respect to $t_1$:
\begin{eqnarray}
\cases{
T_{1\phantom{1}\phantom{1}\phantom{1}} = n\left(
n+\nu +\kappa
\right) - {\hat {\mathcal B}}_{0} \, T, &\\
T_{11\phantom{1}\phantom{1}} = (1-{\hat {\mathcal B}}_{0}) \, T_1, &\\
T_{111\phantom{1}} = (2-{\hat {\mathcal B}}_{0}) \, T_{11}, &\\
T_{1111} = (3-{\hat {\mathcal B}}_{0}) \, T_{111}. &
}
\end{eqnarray}
Hence,
\begin{eqnarray} \label{T11-LUE}
T_{11} = n\left(
n+\nu +\kappa
\right) - (1-{\hat {\mathcal B}}_{0}) {\hat {\mathcal B}}_{0} \, T
\end{eqnarray}
and
\begin{eqnarray}
\label{T1111-LUE}
T_{1111}
= 3!\, n \left(
n+\nu+\kappa
\right) - ( 3 - {\hat {\mathcal B}}_{0} ) ( 2 - {\hat {\mathcal B}}_{0} ) ( 1 - {\hat {\mathcal B}}_{0} )
{\hat {\mathcal B}}_{0}\, T.
\end{eqnarray}
\noindent\newline
(ii) Second, to determine $T_{13}$, we differentiate the first Virasoro constraint Eq.~(\ref{2-Vir-q=-1L}) with respect to $t_3$, and make use of the second and third constraints as they stand [Eqs.~(\ref{2-Vir-q=0L}) and (\ref{2-Vir-q=+1L})] to obtain:
\begin{eqnarray}\fl \qquad
\label{eq463}
\cases{
T_{13} = (3-{\hat {\mathcal B}}_{0})\, T_3, &\\
T_{3\phantom{1}} = (2n+\nu+\kappa)\, T_2 + \,\vartheta_1({\boldsymbol \varsigma},{\boldsymbol \kappa}) T_1 + T_1^2 + T_{11} -
{\hat {\mathcal B}}_{2}\, T + n \,\vartheta_2({\boldsymbol \varsigma},{\boldsymbol \kappa}), &\\
T_{2\phantom{1}} = (2n+\nu+\kappa)\, T_1 -
{\hat {\mathcal B}}_{1}\, T + n\,\vartheta_1({\boldsymbol \varsigma},{\boldsymbol \kappa}). &}
\end{eqnarray}
Although easy to derive, an explicit expression for $T_{13}$ is too cumbersome to be stated here.
\noindent\newline\newline
(iii) To calculate $T_{22}$, the last unknown ingredient of Eq.~(\ref{KPT}), we differentiate the first and second Virasoro constraints
[Eqs.~(\ref{2-Vir-q=-1L}) and (\ref{2-Vir-q=0L})] with respect to $t_2$ to realise that
\begin{eqnarray}
\label{eq464}
\cases{
T_{22} = 2T_3 + (2n +\nu +\kappa)\, T_{12} - {\hat {\mathcal B}}_{1}\, T_2, & \\
T_{12} = (2-{\hat {\mathcal B}}_{0})\, T_2, &}
\end{eqnarray}
Combining Eqs.~(\ref{eq463}) and (\ref{eq464}), one readily derives a closed expression for $T_{22}$.
\noindent\newline\newline
Finally, we substitute the so-determined $T_{1111}$, $T_{11}$, $T_{13}$ and $T_{22}$ into Eq.~(\ref{KPT}) to generate a closed nonlinear
differential equation for $\log \Pi_{n|p}({\boldsymbol \varsigma};{\boldsymbol
\kappa})$ in the form
\begin{eqnarray} \fl \label{ch-Lue}
\widetilde{{\rm KP}}_1^{\rm L}: \quad
\Bigg[
{\hat {\mathcal B}}_0^4
-2\hat {\mathcal B}_{0}^3
-\left[ (\nu+\kappa)^2+4\,\vartheta_1({\boldsymbol \varsigma},{\boldsymbol \kappa}) -1\right] {\hat {\mathcal B}}_0^2
+ 2 \,\vartheta_1({\boldsymbol \varsigma},{\boldsymbol \kappa}) {\hat {\mathcal B}}_0
+3 {\hat {\mathcal B}}_1^2
\nonumber\\\fl
\qquad\qquad
+(2n+\nu+\kappa) {\hat {\mathcal B}}_1 (2{\hat {\mathcal B}}_0-1)
- 2 {\hat {\mathcal B}}_2 (2{\hat {\mathcal B}}_0+1)
\Bigg] \, \log \Pi_{n|p}({\boldsymbol \varsigma};{\boldsymbol \kappa}) \nonumber\\\fl\qquad
\qquad + 6 \left(
{\hat {\mathcal B}}_0^2 \log \Pi_{n|p}({\boldsymbol \varsigma};{\boldsymbol \kappa})
\right)^2
-4\left( {\hat {\mathcal B}}_0\log \Pi_{n|p}({\boldsymbol \varsigma};{\boldsymbol \kappa})\right)\left( {\hat {\mathcal B}}_0^2 \log \Pi_{n|p}({\boldsymbol \varsigma};{\boldsymbol \kappa})\right)\nonumber\\
\qquad\qquad\qquad= n\left[ \left(\nu+\kappa\right)\,\vartheta_1({\boldsymbol \varsigma},{\boldsymbol \kappa})+\vartheta_2({\boldsymbol \varsigma},{\boldsymbol \kappa}) \right].
\end{eqnarray}
\newline
{\it Remark.}---For $p=1$, the above equation reads:
\begin{eqnarray} \fl
\Bigg[
\varsigma^4 \frac{\partial^4}{\partial \varsigma^4}
+ 4 \varsigma^3 \frac{\partial^3}{\partial \varsigma^3}
+ 2 \varsigma^2 (1-\varsigma^2) \frac{\partial^2}{\partial \varsigma^2}
- (\nu+\kappa)^2 \left( \varsigma^2 \frac{\partial^2}{\partial \varsigma^2} +
\varsigma \frac{\partial}{\partial \varsigma}
\right) \nonumber \\
\fl
\quad \quad
+ (2n+\nu-\kappa)
\left(
2 \varsigma^3 \,\frac{\partial^2}{\partial \varsigma^2}
+ \varsigma^2\,\frac{\partial}{\partial \varsigma} \right)
\Bigg] \log \Pi_{n|p}(\varsigma;\kappa)\nonumber\\
\fl \quad \quad + 2 \varsigma^2
\left[
\left(
\frac{\partial}{\partial \varsigma} + \varsigma \frac{\partial^2}{\partial \varsigma^2}
\right)\, \log \Pi_{n|p}(\varsigma;\kappa)
\right]
\left[
\left(
\frac{\partial}{\partial \varsigma} + 3 \varsigma \frac{\partial^2}{\partial \varsigma^2}
\right)\, \log \Pi_{n|p}(\varsigma;\kappa)
\right]\nonumber \\
= n\kappa \varsigma (\nu+\kappa+\varsigma).
\end{eqnarray}
It can further be simplified if written for the function
\begin{eqnarray}
\label{pi-phi-link}
\varphi(\varsigma) = \varsigma \frac{\partial}{\partial \varsigma} \,\log \Pi_{n}(\varsigma;\kappa) - n\kappa.
\end{eqnarray}
Straightforward calculations yield:
\begin{eqnarray} \fl \label{pv-LUE-chazy}
\varsigma^2 \varphi^{\prime\prime\prime} + \varsigma \varphi^{\prime\prime} -
\left[
\varsigma^2 - 2 (2n+\nu -\kappa) \,\varsigma + 4 \kappa n + (\nu+\kappa)^2
\right]\, \varphi^\prime \nonumber\\
+ \left[ 2(2\kappa -n) -\nu -2\varsigma\right] \,\varphi + 6\, \varsigma (\varphi^\prime)^2 - 4 \varphi \varphi^\prime = 2n\kappa (n+\nu).
\end{eqnarray}
This can be recognised as the Chazy I form (see Appendix~\ref{App-chazy}) of the fifth Painlev\'e transcendent. Equivalently, $\varphi$ satisfies the Painlev\'e V equation in the
Jimbo-Miwa-Okamoto $\sigma$ form (Forrester and Witte 2002, Tracy and Widom 1994):
\begin{eqnarray}
\label{phi-pv}\fl
P_{\rm V}: \quad
\left(\varsigma \varphi^{\prime\prime} \right)^2
- \left[ \varphi- \varsigma \varphi^{\prime}
+ 2 (\varphi^\prime)^2 +
(2n+\nu-\kappa) \varphi^\prime
\right]^2 \nonumber \\
\hspace{2cm}
+ 4 \varphi^\prime
(\varphi^\prime+n)
(\varphi^\prime+n+\nu)
(\varphi^\prime-\kappa) = 0.
\end{eqnarray}
Both equations have to be supplemented by the boundary condition
\begin{eqnarray}
\varphi(\varsigma)\Big|_{\varsigma\rightarrow \infty} \sim \frac{n(n+\nu)\kappa}{\varsigma}\left(
1 + {\cal O}(\varsigma^{-1})
\right)
\end{eqnarray}
following from Eq.~(\ref{pi-phi-link}) and the asymptotic analysis of Eq.~(\ref{rpf-Lue}). Equations (\ref{TL-1-LUE}), (\ref{TL-1-LUE-alt}), (\ref{TL-2-LUE}), (\ref{TL-2-LUE-alt}), (\ref{ch-Lue}) and (\ref{phi-pv}) represent the main results of this subsection.
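\newline\newline
{\it Remark.}---At $\kappa=1$, Heine's formula gives $\Pi_n(\varsigma;1) = (-1)^n n!\, L_n^{(\nu)}(\varsigma)$, the monic Laguerre polynomial, so Eq.~(\ref{phi-pv}) admits a direct symbolic test. A sketch (Python with {\tt sympy}; the identification is standard, the snippet illustrative):
\begin{verbatim}
# Check that phi = s d/ds log Pi_n(s;1) - n kappa, with Pi_n(s;1)
# the monic Laguerre polynomial, satisfies the sigma-form of
# Painleve V at kappa = 1.
import sympy as sp

s, nu = sp.symbols('s nu')
kappa = 1
for n in (1, 2):
    Pi = (-1)**n * sp.factorial(n) * sp.assoc_laguerre(n, nu, s)
    phi = s*sp.diff(sp.log(Pi), s) - n*kappa
    p1 = sp.diff(phi, s)
    PV = ((s*sp.diff(phi, s, 2))**2
          - (phi - s*p1 + 2*p1**2 + (2*n + nu - kappa)*p1)**2
          + 4*p1*(p1 + n)*(p1 + n + nu)*(p1 - kappa))
    assert sp.simplify(PV) == 0
\end{verbatim}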
\subsection{Discussion}
This Section concludes the detailed exposition of the integrable theory of correlation functions of RMT characteristic polynomials. Among the main results derived are:
\begin{itemize}
\item The multivariate first [Eqs. (\ref{gue-TL-1}) and (\ref{TL-1-LUE-alt})] and second [Eqs. (\ref{gue-TL-2}) and (\ref{TL-2-LUE-alt})] equations of the Toda Lattice hierarchy \footnote{See also their single variable reductions Eqs. (\ref{TL-1-cc}) and (\ref{TL-2-cc}) derived for the GUE.} which establish nonlinear differential recurrence relations between ``nearest neighbor'' correlation functions ${\Pi}_{n|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa})$ and ${\Pi}_{n\pm 1|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa})$, and
\item The nonlinear multivariate differential equations [Eqs. (\ref{ch-gue}) and (\ref{ch-Lue})] satisfied by ${\Pi}_{n|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa})$ alone. These can be considered as multivariate generalisations of the corresponding Painlev\'e equations arising in the one-point setup $p=1$ [Eqs. (\ref{phi-piv}) and (\ref{phi-pv})].
\end{itemize}
Other nonlinear multivariate relations between the correlation functions ${\Pi}_{n|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa})$ and ${\Pi}_{n\pm q|p}({\boldsymbol {\varsigma}};{\boldsymbol \kappa})$ can readily be obtained from the {\it modified} Toda Lattice and Kadomtsev-Petviashvili hierarchies listed in Section \ref{Sec-3-4}.
Finally, let us stress that a similar calculational framework applies to other $\beta=2$ matrix integrals depending on one (Osipov and Kanzieper 2007) or more (Osipov and Kanzieper 2009; Osipov, Sommers and \.Zyczkowski 2010) parameters. The reader is referred to the above papers for further details.
\section{Integrability of Zero-Dimensional Replica Field Theories}
\label{Sec-5}
\subsection{Introduction}
In this Section, the integrable theory of CFCP will be utilised to present a tutorial exposition of the exact
approach to zero-dimensional replica field theories
formulated in a series of publications (Kanzieper 2002, Splittorff and Verbaarschot 2003, Osipov and
Kanzieper 2007). Focussing, for definiteness, on the calculation of the finite-$N$ average eigenlevel density in the GUE (whose exact form (Mehta 2004)
\begin{eqnarray}
\varrho_N(\epsilon) = \frac{1}{2^{N} \Gamma(N) \sqrt{\pi}}
\,e^{-\epsilon^2} \left[
H_N^\prime(\epsilon) H_{N-1}(\epsilon) -
H_N(\epsilon) H_{N-1}^\prime(\epsilon)
\right]\qquad \nonumber
\end{eqnarray}
has been known for decades), we shall put a special emphasis on a {\it comparative analysis}
of three alternative formulations -- fermionic, bosonic and supersymmetric -- of the replica method. This will allow us to meticulously analyse the {\it fermionic-bosonic factorisation phenomenon} of RMT spectral correlation functions in the {\it fermionic} and {\it bosonic} variations of the replica method, where its existence is not self-evident, to say the least.
\subsection{Density of eigenlevels in finite-$N$ GUE}
To determine the mean density of eigenlevels in the GUE, we define
the average one-point Green function
\begin{eqnarray}
G(z;N) = \left< {\rm tr} (z - {\boldsymbol {\mathcal H}})^{-1}\right>_{{\boldsymbol {\mathcal H}}\in {\rm GUE}_N}
\end{eqnarray}
that can be restored from the replica partition function ($n \in
{\mathbb R}^+$)
\begin{eqnarray}
\label{rpf-def}
{\mathcal Z}^{(\pm)}_{n}(z;N) = \left<
{\rm \det}^{\pm n} (z - {\boldsymbol {\mathcal H}})
\right>_{\boldsymbol {\mathcal H} \in {\rm GUE}_N}
\end{eqnarray}
through the replica limit
\begin{eqnarray}
G(z;N) = \pm \lim_{n\rightarrow 0} \frac{1}{n} \frac{\partial}{\partial z} {\mathcal Z}^{(\pm)}_{ n}(z;N).
\end{eqnarray}
Equation (\ref{rpf-def}) can routinely be mapped onto either
fermionic or bosonic replica field theories, the result being (see, e.g., Kanzieper 2010)
\begin{eqnarray}
\label{fer-pf}
{\mathcal Z}^{(+)}_{n}(z;N) = \frac{1}{c_n}\, \dot{\iota}^{-nN}
\int
({\cal D}_n{\boldsymbol {\mathcal Q}})\, \,e^{-{\rm tr}_n {\boldsymbol {\mathcal Q}}^2}\,
{\rm det}{}_n^{N} \left(
\dot{\iota} z - {\boldsymbol {\mathcal Q}}
\right)
\end{eqnarray}
and
\begin{eqnarray}
\label{bos-pf}
{\mathcal Z}^{(-)}_{n}(z;N) = \frac{1}{c_{n}}\,
\int
({\cal D}_{n}{\boldsymbol {\mathcal Q}})\, \,e^{-{\rm tr}_{n} {\boldsymbol {\mathcal Q}}^2}\,
{\rm det}{}_{n}^{-N} \left(
z - {\boldsymbol {\mathcal Q}}
\right).
\end{eqnarray}
Both integrals run over the $n\times n$ Hermitean matrix ${\boldsymbol
{\mathcal Q}}$; the normalisation constant $c_{n}$ equals
\begin{eqnarray}
c_{n} = \int
({\cal D}_{n}{\boldsymbol {\mathcal Q}})\, \,e^{-{\rm tr}_{n} {\boldsymbol {\mathcal Q}}^2}.
\end{eqnarray}
By derivation, the replica parameter $n$ in Eqs. (\ref{fer-pf}) and
(\ref{bos-pf}) is restricted to integers, $n \in {\mathbb Z}^+$.
Notably, Eqs.~(\ref{fer-pf}) and (\ref{bos-pf}) are particular cases
of the correlation function $\Pi_{n|p}({\boldsymbol
\varsigma};{\boldsymbol \kappa})$ studied in previous sections.
\subsubsection{Fermionic replicas}\label{Sec-FR}\noindent\newline\newline
Indeed, comparison of Eq.~(\ref{fer-pf}) with the definition
Eq.~(\ref{rpf-gue}) yields
\begin{eqnarray}
\label{ZPi-F}
{\mathcal Z}^{(+)}_{n}(z;N) = (-\dot{\iota})^{nN} \Pi_{n}(\dot{\iota} z; N),
\end{eqnarray}
where the shorthand notation $\Pi_{n}(z; N)$ is used to denote
$\Pi_{n|1}^{\rm G}(z; N)$, in accordance with the earlier notation in
Eqs.~(\ref{TL-1-cc}) and (\ref{TL-2-cc}). This observation results in the Painlev\'e IV representation of the
fermionic replica partition function [see Eqs.~(\ref{phi-def}) and
(\ref{phi-piv})]:
\begin{eqnarray}
\label{052}
\frac{\partial}{\partial z} \log {\mathcal Z}^{(+)}_{n}(z;N) =
\dot{\iota}\, \varphi(t;n,N)\Big|_{t=\dot{\iota} z},
\end{eqnarray}
where $\varphi(t;n,N)$ is the fourth Painlev\'e transcendent
satisfying the equation
\begin{eqnarray}
\label{phi-piv-00}
(\varphi^{\prime\prime})^2 - 4 (\varphi - t\, \varphi^\prime)^2
+ 4 \varphi^\prime (\varphi^\prime+2n)(\varphi^\prime-2N)=0
\end{eqnarray}
subject to the boundary conditions~\footnote{Equation (\ref{phi-bc-f}) follows from Eqs.~(\ref{052}), (\ref{ZPi-F}) and the footnote below Eq.~(\ref{ch-gue}).}
\begin{eqnarray}\label{phi-bc-f}
\varphi(t;n,N) \sim \frac{nN}{t}, \qquad |t| \rightarrow \infty, \qquad t\in {\mathbb C}.
\end{eqnarray}
Here and above, $n \in {\mathbb Z}_+$.
\newline\newline\noindent
Equations (\ref{052}) and (\ref{phi-piv-00}) open the way for calculating the average Green function $G(z;N)$ via the fermionic replica
limit
\begin{eqnarray}
\label{RL-f}
G(z;N) = \lim_{n\rightarrow 0} \frac{1}{n} \frac{\partial}{\partial z} {\mathcal Z}_n^{(+)}(z;N)
= \dot{\iota} \lim_{n\rightarrow 0} \frac{1}{n} \, \varphi(t;n,N)\Big|_{t=\dot{\iota} z}.
\end{eqnarray}
For the prescription Eq.~(\ref{RL-f}) to be operational, the Painlev\'e representation of ${\mathcal Z}^{(+)}_{n}(z;N)$ should hold \footnote{Previous studies (Kanzieper 2002, Osipov and Kanzieper 2007) suggest that this is indeed the case.}
for $n\in {\mathbb R}_+$. Notice that for generic real $n$, the fermionic replica partition function
${\mathcal Z}^{(+)}_{n}(z;N)$ is no longer an analytic function of $z$ and exhibits a discontinuity across the real axis. For this reason,
the Painlev\'e equation Eq.~(\ref{phi-piv-00}) should be solved separately for $\mathfrak{Re}\, t<0$ ($\mathfrak{Im}\, z >0$) and $\mathfrak{Re}\, t>0$ ($\mathfrak{Im}\, z <0$).
\newline\newline\noindent
{\it Replica limit and the Hamiltonian formalism.}---To implement the replica limit, we employ the Hamiltonian formulation of the Painlev\'e IV (Noumi 2004, Forrester and Witte 2001) which associates
$\varphi(t;n,N)$ with the polynomial Hamiltonian (Okamoto 1980a)
\begin{eqnarray}
\label{phi-H}
\varphi(t;n,N) \equiv H_{\rm f}\left\{P,Q,t\right\} = (2P + Q + 2t) P Q + 2 n P - N Q
\end{eqnarray}
of a dynamical system $\{Q,P,H_{\rm f}\}$, where $Q=Q(t;n,N)$ and $P=P(t;n,N)$ are the canonical coordinate and momentum. For such a system,
Hamilton's equations of motion read:
\begin{eqnarray}
\dot{Q} &=& + \frac{\partial H_{\rm f}}{\partial P} = Q (Q + 4 P + 2t) + 2n, \\
\dot{P} &=& - \frac{\partial H_{\rm f}}{\partial Q} = - P (2Q+2P+2t) +N.
\end{eqnarray}
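{\it Remark.}---Hamilton's equations above follow from $H_{\rm f}$ by straightforward differentiation; a two-line symbolic confirmation (Python with {\tt sympy}; illustrative):
\begin{verbatim}
# Confirm dQ/dt = +dH/dP and dP/dt = -dH/dQ for
# H_f = (2P + Q + 2t) P Q + 2nP - NQ.
import sympy as sp

P, Q, t, n, N = sp.symbols('P Q t n N')
H = (2*P + Q + 2*t)*P*Q + 2*n*P - N*Q

assert sp.expand(sp.diff(H, P) - (Q*(Q + 4*P + 2*t) + 2*n)) == 0
assert sp.expand(-sp.diff(H, Q) - (-P*(2*Q + 2*P + 2*t) + N)) == 0
\end{verbatim}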
Since
\begin{eqnarray}
\label{RL-h}
G(z;N) = \dot{\iota} \lim_{n\rightarrow 0} \frac{1}{n} \, H_{\rm f}\left\{P,Q,t\right\}\Big|_{t=\dot{\iota} z},
\end{eqnarray}
we need to develop a small-$n$ expansion for the Hamiltonian $H_{\rm f}\left\{P,Q,t\right\}$. Restricting ourselves to the terms linear in $n$,
\begin{eqnarray}
\label{H-exp}
H_{\rm f}\left\{P,Q,t\right\} = nH_1^{({\rm f})}(t;N) + {\mathcal O}(n^2)
\end{eqnarray}
and
\begin{eqnarray}
P(t;n,N) &=& p_0(t;N) + n p_1(t;N) + {\mathcal O}(n^2), \\
Q(t;n,N) &=& q_0(t;N) + n q_1(t;N) + {\mathcal O}(n^2),
\end{eqnarray}
we conclude that $q_0(t;N)=0$. This follows directly from the expansion Eq.~(\ref{H-exp}), in which the absence of a term of order ${\mathcal O}(n^0)$ is guaranteed by the normalisation condition ${\mathcal Z}_0^{(+)}(z;N)=1$. As a result \footnote[1]{We will drop the superscript $({\rm f})$ wherever this does not cause notational confusion.},
\begin{eqnarray}
\label{gh1}
G(z;N) = \dot{\iota} H_1^{({\rm f})} (\dot{\iota} z;N),
\end{eqnarray}
where
\begin{eqnarray} \label{h1}
H_1(t;N) &=& 2 p_0 q_1 (p_0+t) + 2 p_0 -N q_1, \\
\label{h1-dot}
\dot{H}_1(t;N) &=& 2 p_0 q_1.
\end{eqnarray}
Here, $p_0=p_0(t;N)$ and $q_1 = q_1(t;N)$ are solutions to the system of coupled first order equations:
\begin{eqnarray}
\label{system}
\left\{
\begin{array}{cll}
\dot{p}_0 &=& - 2 p_0^2 - 2 p_0 t + N,\\
\dot{q}_1 &=& 4 p_0 q_1 + 2 q_1 t + 2.
\end{array}\right.
\end{eqnarray}
Since the initial conditions are known for $H_1(t;N)$ rather than for $p_0(t;N)$ and $q_1(t;N)$ separately, below we determine these two functions up to integration constants.
The function $p_0(t;N)$ satisfies the Riccati differential equation whose solution is
\begin{eqnarray}
\label{p0-ans}
p_0(t;N) = \frac{1}{2} \left[ \frac{\dot{u}_+(t)}{u_+(t)} - t \right],
\end{eqnarray}
where
\begin{eqnarray}
\label{ut}
u_+(t) = c_1 D_{-N-1}(t\sqrt{2}) + c_2 (-\dot{\iota})^{N} D_{N}(it \sqrt{2})
\end{eqnarray}
is, in turn, a solution to the parabolic cylinder equation
\begin{eqnarray}
\ddot{u}_+(t) - (2N+1+t^2)u_+(t)=0.
\end{eqnarray}
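{\it Remark.}---That the ansatz Eq.~(\ref{p0-ans}) linearises the Riccati equation into the parabolic cylinder (Weber) equation is readily confirmed symbolically (Python with {\tt sympy}; $u$ an arbitrary smooth function; illustrative):
\begin{verbatim}
# Substitute p0 = (u'/u - t)/2 into p0' + 2 p0^2 + 2 p0 t - N and
# verify that the residue vanishes once u'' = (2N + 1 + t^2) u.
import sympy as sp

t, N = sp.symbols('t N')
u = sp.Function('u')(t)
p0 = (sp.diff(u, t)/u - t) / 2

riccati = sp.diff(p0, t) + 2*p0**2 + 2*p0*t - N
reduced = riccati.subs(sp.diff(u, t, 2), (2*N + 1 + t**2)*u)
assert sp.simplify(reduced) == 0
\end{verbatim}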
Two remarks are in order. First, factoring out $(-\dot{\iota})^{N}$ in the second term in Eq.~(\ref{ut}) will simplify the formulae to follow. Second, the solution Eq.~(\ref{p0-ans}) for $p_0(t;N)$ actually depends on a {\it single} constant (either $c_1/c_2$ or $c_2/c_1$), as it must.
To determine $q_1(t;N)$, we substitute Eq.~(\ref{p0-ans}) into the second formula of Eq.~(\ref{system}) to derive:
\begin{eqnarray}
\label{q1-int}
q_1(t;N) = 2 u_+^2(t) \int \frac{dt}{u_+^2(t)}.
\end{eqnarray}
Making use of the integration formula (see Appendix \ref{App-D-int})
\begin{eqnarray}
\label{osipov-integral}
\int \frac{dt}{u_+^2(t)} = \frac{1}{\sqrt{2}}\, \frac{\alpha_1 D_{-N-1}(t\sqrt{2}) + \alpha_2 (-\dot{\iota})^N D_{N}(it\sqrt{2})}{u_+(t)},
\end{eqnarray}
where two constants $\alpha_1$ and $\alpha_2$ are subject to the constraint
\begin{eqnarray}
\label{const-a12}
c_1 \alpha_2 - c_2 \alpha_1 = 1,
\end{eqnarray}
we further reduce Eq.~(\ref{q1-int}) to
\begin{eqnarray}
\label{q1-ans}
q_1(t;N) = \sqrt{2} u_+(t) \left[
\alpha_1 D_{-N-1}(t\sqrt{2}) + \alpha_2 (-\dot{\iota})^N D_{N}(it\sqrt{2})
\right].
\end{eqnarray}
Equations (\ref{h1-dot}), (\ref{p0-ans}), (\ref{q1-ans}) and the identity
\begin{eqnarray} \fl
\dot{u}_+(t) - t\, u_+(t) = -\sqrt{2} \left[
c_1 D_{-N}(t\sqrt{2}) - c_2 (-\dot{\iota})^{N-1} N D_{N-1}(\dot{\iota} t \sqrt{2})
\right]
\end{eqnarray}
(obtained from Eq.~(\ref{ut}) with the help of Eqs.~(\ref{d-rec-1}) and (\ref{d-rec-2})) yield $\dot{H}_1(t;N)$ in the form
\begin{eqnarray} \fl
\label{H1-sol}
\dot{H}_1(t;N) = -2
\left[
\alpha_1 D_{-N-1}(t\sqrt{2}) + \alpha_2 (-\dot{\iota})^N D_{N}(\dot{\iota} t\sqrt{2})
\right] \nonumber \\
\times
\left[
c_1 D_{-N}(t\sqrt{2}) - c_2 (-\dot{\iota})^{N-1} N D_{N-1}(\dot{\iota} t \sqrt{2})
\right].
\end{eqnarray}
Notice that the appearance of
four integration constants ($c_1$, $c_2$, $\alpha_1$ and $\alpha_2$) in Eq.~(\ref{H1-sol}) is somewhat illusory: a little thought shows that
only a pair of independent constants is involved, e.g. $(c_1/c_2,\,\alpha_2 c_2)$ or combinations thereof.\newline\newline
To determine the unknown constants in Eq.~(\ref{H1-sol}), we make use of the asymptotic formulae for the functions of parabolic cylinder (collected in Appendix \ref{App-D-int}) in an attempt to meet the boundary conditions~\footnote{~Equation (\ref{h1-as}) is straightforward to derive from Eqs.~(\ref{H-exp}), (\ref{phi-H}) and (\ref{phi-bc-f}).}
\begin{eqnarray}\label{h1-as}
H_1(t;N) \sim \frac{N}{t}, \quad \dot{H}_1(t;N) \sim -\frac{N}{t^2}, \qquad |t| \rightarrow \infty, \qquad t\in {\mathbb C}.
\end{eqnarray}
Following the discussion next to Eq.~(\ref{RL-f}), the two cases $\mathfrak{Re\,} t<0$ and $\mathfrak{Re\,} t>0$ will be treated separately.
\begin{itemize}
\item {\it The case} $\mathfrak{Re\,} t<0$. Asymptotic analysis of Eq.~(\ref{H1-sol}) at $t\rightarrow -\infty$ yields
\begin{eqnarray}
\frac{\alpha_2}{\alpha_1} = (-1)^{N-1} \frac{\sqrt{2\pi}}{N!},
\nonumber
\end{eqnarray}
so that
\begin{eqnarray} \fl
\label{H1-sol-step-10}
\dot{H}_1(t;N) = 2 D_{-N-1}(-t\sqrt{2}) \left[
\alpha_1 c_1 D_{-N}(-t\sqrt{2}) - \dot{\iota} ^{N-1} N D_{N-1}(\dot{\iota} t\sqrt{2})
\right].
\end{eqnarray}
Here, we have used Eq.~(\ref{relD}). To determine the remaining constant $\alpha_1 c_1$, we make use of the boundary
condition Eq.~(\ref{h1-as}) for $t \rightarrow \pm \dot{\iota} \infty - 0$. Straightforward
calculations yield $\alpha_1 c_1 = 0$. We then conclude that
\begin{eqnarray} \fl \label{h1t-neg}
\dot{H}_1(t;N) = - 2 (-\dot{\iota} )^{N-1} N\, D_{-N-1}(-t\sqrt{2}) D_{N-1}(-\dot{\iota} t \sqrt{2}),\qquad \mathfrak{Re\,} t<0.
\end{eqnarray}
\vspace{0.2cm}
\item {\it The case} $\mathfrak{Re\,} t>0$.
Asymptotic analysis of Eq.~(\ref{H1-sol}) at $t\rightarrow +\infty$ yields $\alpha_2=0$ so that
\begin{eqnarray} \fl
\label{H1-sol-step-1}
\dot{H}_1(t;N) = 2
D_{-N-1}(t\sqrt{2})
\left[
\frac{c_1}{c_2} D_{-N}(t\sqrt{2}) - (-\dot{\iota})^{N-1} N D_{N-1}(\dot{\iota} t \sqrt{2})
\right].
\end{eqnarray}
To determine the remaining constant $c_1/c_2$, we make use of the boundary condition Eq.~(\ref{h1-as}) for $t \rightarrow \pm \dot{\iota} \infty +0$. Straightforward
calculations yield $c_1/c_2=0$. We then conclude that
\begin{eqnarray} \fl \label{h1t-pos}
\dot{H}_1(t;N) = -2 (-\dot{\iota})^{N-1} N\, D_{-N-1}(t\sqrt{2}) D_{N-1}(\dot{\iota} t \sqrt{2}),\qquad \mathfrak{Re\,} t>0.
\end{eqnarray}
\end{itemize}
\noindent\newline
The calculation of $\dot{H}_1(t;N)$ can be summarised in a single formula
\begin{eqnarray}
\label{h1d-answer}
\dot{H}_1(t;N) = - 2 (-\dot{\iota})^{N-1} N D_{-N-1}(\sigma_{\dot{\iota} t} t \sqrt{2}) D_{N-1}(\dot{\iota} \sigma_{\dot{\iota} t} t \sqrt{2}),
\end{eqnarray}
where $\sigma_{\dot{\iota} t}={\rm sgn}\,\mathfrak{Im}\, (\dot{\iota} t)={\rm sgn}\,\mathfrak{Re}\, t$. In terms of the canonical variables $p_0(t;N)$ and $q_1(t;N)$, this result translates to
\begin{eqnarray}
\label{p0-fer}
p_0(t;N) &=& \frac{\dot{\iota} N \sigma_{\dot{\iota} t}}{\sqrt{2}} \frac{D_{N-1}(\dot{\iota} t \sigma_{\dot{\iota} t}\sqrt{2})}{D_N(\dot{\iota} t \sigma_{\dot{\iota} t}\sqrt{2})}, \\
\label{q1-fer}
q_1(t;N) &=& - \sqrt{2} \sigma_{\dot{\iota} t} (-\dot{\iota})^N D_{-N-1}( t \sigma_{\dot{\iota} t} \sqrt{2}) D_N(\dot{\iota} t \sigma_{\dot{\iota} t} \sqrt{2}).
\end{eqnarray}
Now $H_1(t;N)$ can readily be restored by integrating Eq.~(\ref{h1d-answer}). We proceed in three steps. (i) First, we make use of the differential recurrence relations Eqs.~(\ref{d-rec-1}) and (\ref{d-rec-2}) and the Wronskian Eq.~(\ref{wronsk_D}) to prove the identity
\begin{eqnarray} \fl
\dot{\iota} N D_{-N-1}(\sigma_{\dot{\iota} t} t \sqrt{2}) D_{N-1}(\dot{\iota} \sigma_{\dot{\iota} t} t \sqrt{2}) = \dot{\iota}^N - D_{-N}(\sigma_{\dot{\iota} t} t \sqrt{2}) D_{N}(\dot{\iota} \sigma_{\dot{\iota} t} t \sqrt{2}).
\end{eqnarray}
The latter allows us to write down $\dot{H}_1(t;N)$ as
\begin{eqnarray}
\label{h1d-equiv}
\dot{H}_1(t;N) = - 2 + 2 (-\dot{\iota})^{N} D_{-N}(\sigma_{\dot{\iota} t} t \sqrt{2}) D_{N}(\dot{\iota} \sigma_{\dot{\iota} t} t \sqrt{2}).
\end{eqnarray}
(ii) Second, it is beneficial to employ the differential equation Eq.~(\ref{deq}) to derive
\begin{eqnarray}
\frac{d^2}{dt^2} D_{-N}(\sigma_{\dot{\iota} t} t \sqrt{2}) &=& (2N-1+t^2) D_{-N}(\sigma_{\dot{\iota} t} t \sqrt{2}), \\
\frac{d^2}{dt^2} D_{N}(\dot{\iota} \sigma_{\dot{\iota} t} t \sqrt{2}) &=& (2N+1+t^2) D_{N}(\dot{\iota} \sigma_{\dot{\iota} t} t \sqrt{2}).
\end{eqnarray}
These two relations imply
\begin{eqnarray} \fl
2 D_{-N}(\sigma_{\dot{\iota} t} t \sqrt{2}) D_{N}(\dot{\iota} \sigma_{\dot{\iota} t} t \sqrt{2}) = D_{-N}(\sigma_{\dot{\iota} t} t \sqrt{2}) \frac{d^2}{dt^2}D_{N}(\dot{\iota} \sigma_{\dot{\iota} t} t \sqrt{2}) \nonumber\\
\qquad \qquad \qquad- D_{N}(\dot{\iota} \sigma_{\dot{\iota} t} t \sqrt{2}) \frac{d^2}{dt^2} D_{-N}(\sigma_{\dot{\iota} t} t \sqrt{2})
\end{eqnarray}
so that
\begin{eqnarray} \fl
\dot{H}_1(t;N) = - 2 \nonumber\\
\fl \qquad \qquad + (-\dot{\iota})^{N} \left[
D_{-N}(\sigma_{\dot{\iota} t} t \sqrt{2}) \frac{d^2}{dt^2}D_{N}(\dot{\iota} \sigma_{\dot{\iota} t} t \sqrt{2}) - D_{N}(\dot{\iota} \sigma_{\dot{\iota} t} t \sqrt{2}) \frac{d^2}{dt^2} D_{-N}(\sigma_{\dot{\iota} t} t \sqrt{2})
\right].\nonumber\\
{}
\end{eqnarray}
(iii) Third, we integrate the above equation to obtain
\begin{eqnarray}
H_1(t;N) = - 2 t + (-\dot{\iota})^{N} \hat{{\mathcal W}}_t \left[
D_{-N}(\sigma_{\dot{\iota} t} t \sqrt{2}), D_{N}(\dot{\iota} \sigma_{\dot{\iota} t} t \sqrt{2})
\right].
\end{eqnarray}
Here, the integration constant was set to zero in order to meet the boundary conditions Eq.~(\ref{h1-as}) at infinities. The notation $\hat{{\mathcal W}}_t$ stands for the Wronskian
\begin{eqnarray}
\label{wd}
\hat{{\mathcal W}}_t[f,g] = f \frac{\partial g}{\partial t} - \frac{\partial f}{\partial t} g.
\end{eqnarray}
\newline\noindent
{\it Average Green function and eigenlevel density.}---Now, the average one-point Green function readily follows from Eq.~(\ref{gh1}):
\begin{eqnarray} \label{gf-054-fermions}
G(z;N) = 2 z + (-\dot{\iota})^N \hat{{\mathcal W}}_z
\left[
D_{-N}(-\dot{\iota} z \sigma_z \sqrt{2}),\, D_{N}(z \sigma_z \sqrt{2})
\right].
\end{eqnarray}
Here, $\sigma_z={\rm sgn}\,\mathfrak{Im}\,z$ denotes the sign of the imaginary part of $z=-\dot{\iota} t$.
\newline\noindent\newline
The average density of eigenlevels can be restored from Eq.~(\ref{gf-054-fermions}) and the relation
\begin{eqnarray}
\label{doe-gue-f}
\varrho_N(\epsilon) = -\frac{\sigma_z}{\pi} \, \mathfrak{Im}\, G(\epsilon+ \dot{\iota} \sigma_z 0;N).
\end{eqnarray}
Indeed, noticing from Eqs.~(\ref{ir-minus}) and (\ref{gf-054-fermions}) that
\begin{eqnarray} \fl
\mathfrak{Im} \left[ (-\dot{\iota})^N D_{-N}(-\dot{\iota}\epsilon \sigma_z \sqrt{2})\right] = -\frac{(-\dot{\iota})^{N-1}}{2^{N/2+1}\Gamma(N)}
\, e^{\epsilon^2/2} \int_{\mathbb R} d\tau \, \tau^{N-1} e^{-\tau^2/4 + \dot{\iota} \epsilon \sigma_z \tau},
\end{eqnarray}
we conclude, with the help of Eq.~(\ref{ir-plus}), that
\begin{eqnarray}
\mathfrak{Im} \left[ (-\dot{\iota})^N D_{-N}(-\dot{\iota} \epsilon\sigma_z \sqrt{2})\right] =
- \frac{\sqrt{\pi} \sigma_z^{N-1}}{2^{N/2} \Gamma(N)}\, e^{-\epsilon^2/2}
H_{N-1}(\epsilon).
\end{eqnarray}
Here, $H_{N-1}(\epsilon)$ is the Hermite polynomial appearing by virtue of the relation
\begin{eqnarray}
D_N(z\sqrt{2}) = e^{-z^2/2} \frac{H_N(z)}{2^{N/2}}.
\end{eqnarray}
Consequently,
\begin{eqnarray} \fl
\mathfrak{Im}\, \left\{(-\dot{\iota})^N \hat{{\mathcal W}}_z
\left[
D_{-N}(-\dot{\iota} z \sigma_z \sqrt{2}),\, D_{N}(z \sigma_z \sqrt{2})
\right]\right\}\qquad\qquad \nonumber \\
\qquad\qquad = - \frac{\sqrt{\pi} \sigma_z}{2^N \Gamma(N)}\,
\hat{{\mathcal W}}_\epsilon \left[
e^{-\epsilon^2/2} H_{N-1}(\epsilon), e^{-\epsilon^2/2} H_{N}(\epsilon)
\right].
\end{eqnarray}
Taken together with Eqs.~(\ref{doe-gue-f}) and (\ref{gf-054-fermions}), this equation yields the finite-$N$ average density of eigenlevels in the GUE:
\begin{eqnarray}
\label{doe-gue-fin} \fl
\varrho_N(\epsilon) = \frac{1}{2^N \Gamma(N) \sqrt{\pi}} \,
\hat{{\mathcal W}}_\epsilon \left[
e^{-\epsilon^2/2} H_{N-1}(\epsilon), e^{-\epsilon^2/2} H_{N}(\epsilon)
\right] \nonumber\\
\qquad\qquad = \frac{1}{2^N \Gamma(N) \sqrt{\pi}} \,e^{-\epsilon^2}
\hat{{\mathcal W}}_\epsilon \left[
H_{N-1}(\epsilon), H_{N}(\epsilon)
\right].
\end{eqnarray}
While this result, obtained via the {\it fermionic} replica limit, is seen to coincide with the celebrated finite-$N$ formula (Mehta 2004)
\begin{eqnarray}
\label{m-res}
\varrho_N(\epsilon) = \frac{1}{2^{N} \Gamma(N) \sqrt{\pi}}
\,e^{-\epsilon^2} \left[
H_N^\prime(\epsilon) H_{N-1}(\epsilon) -
H_N(\epsilon) H_{N-1}^\prime(\epsilon)
\right]\qquad
\end{eqnarray}
originally derived within the orthogonal polynomial technique, the factorisation phenomenon (as defined in Section \ref{Sec-1-2}) did not show up explicitly at any stage of the calculation of either $G(z;N)$ or $\varrho_N(\epsilon)$. We shall return to this point in Section \ref{Sec-5-3}.
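As an independent consistency check of Eqs.~(\ref{doe-gue-fin}) and (\ref{m-res}), note that at $N=1$ they collapse to $\varrho_1(\epsilon) = \pi^{-1/2} e^{-\epsilon^2}$. For higher $N$, the formula can be confronted with direct diagonalisation of sampled GUE matrices. The Python sketch below is a numerical illustration only (the matrix dimension, sample size and binning are arbitrary choices of ours); it assumes the convention $P({\boldsymbol{\mathcal H}}) \propto e^{-{\rm Tr}\,{\boldsymbol{\mathcal H}}^2}$ implicit in the weight $e^{-\epsilon^2}$:
\begin{verbatim}
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, sqrt, pi

def H(n, x):                        # physicists' Hermite polynomial H_n(x)
    return hermval(x, np.eye(n + 1)[n])

def rho(N, x):                      # finite-N GUE density; integrates to N
    wr = 2*N*H(N-1, x)**2 - (2*(N-1)*H(N-2, x)*H(N, x) if N > 1 else 0)
    return np.exp(-x**2) * wr / (2**N * factorial(N - 1) * sqrt(pi))

rng = np.random.default_rng(0)
N, ev = 4, []
for _ in range(20000):              # sample GUE with weight exp(-Tr H^2)
    A = rng.standard_normal((N, N)) + 1j*rng.standard_normal((N, N))
    ev.extend(np.linalg.eigvalsh((A + A.conj().T) / (2*sqrt(2))))
hist, edges = np.histogram(ev, bins=60, density=True)
x = 0.5*(edges[1:] + edges[:-1])
print(np.max(np.abs(N*hist - rho(N, x))))  # small, up to sampling noise
\end{verbatim}
Here the Wronskian has been evaluated with the help of the derivative identity $H_N^\prime(\epsilon) = 2N\, H_{N-1}(\epsilon)$.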
\subsubsection{Bosonic replicas}\label{Sec-5-2-2}\noindent\newline\newline
Comparing Eq.~(\ref{bos-pf}) with the definition Eq.~(\ref{rpf-gue}),
we conclude that
\begin{eqnarray}
\label{054}
{\mathcal Z}^{(-)}_{n}(z;N) = \Pi_{n}(z; -N),
\end{eqnarray}
where $\mathfrak{Im\,}z \neq 0$. The shorthand notation $\Pi_{n}(z; -N)$ is used to denote
$\Pi_{n|1}^{\rm G}(z; -N)$, in accordance with the earlier notation in
Eqs.~(\ref{TL-1-cc}) and (\ref{TL-2-cc}). Consequently, the Painlev\'e IV representation of the bosonic
replica partition function reads [see Eqs.~(\ref{phi-def}) and
(\ref{phi-piv})]:
\begin{eqnarray}
\frac{\partial}{\partial z} \log {\mathcal Z}^{(-)}_{n}(z;N) =
\varphi(t;n,-N)\Big|_{t=z},
\end{eqnarray}
where $\psi(t;n,N)=\varphi(t;n,-N)$ is the fourth Painlev\'e transcendent
satisfying the equation
\begin{eqnarray}
\label{phi-piv-00-bos}
(\psi^{\prime\prime})^2 - 4 (\psi - t\, \psi^\prime)^2
+ 4 \psi^\prime (\psi^\prime+2n)(\psi^\prime+2N)=0
\end{eqnarray}
subject to the boundary conditions
\begin{eqnarray}\label{phi-bc-b}
\psi(t;n,N) \sim -\frac{nN}{t}, \qquad |t| \rightarrow \infty, \qquad t\in {\mathbb C}\setminus {\mathbb R}.
\end{eqnarray}
Here and above, $n \in {\mathbb Z}_+$.
\newline\newline\noindent
The average Green function $G(z;N)$ we are after is given by the bosonic replica
limit
\begin{eqnarray}
\label{RL-bos}
G(z;N) = - \lim_{n\rightarrow 0} \frac{1}{n} \frac{\partial}{\partial z} {\mathcal Z}_n^{(-)}(z;N)
= - \lim_{n\rightarrow 0} \frac{1}{n} \, \psi(t;n,N)\Big|_{t=z}.
\end{eqnarray}
To implement it, we assume that the Painlev\'e representation of ${\mathcal Z}^{(-)}_{n}(z;N)$ holds for $n\in {\mathbb R}_+$.
\newline\newline\noindent
{\it Replica limit and the Hamiltonian formalism.}---Similarly to our treatment of the fermionic case, we employ the Hamiltonian formulation of the Painlev\'e IV (Noumi 2004, Forrester and Witte 2001) which associates
$\psi(t;n,N)$ with the polynomial Hamiltonian (Okamoto 1980a)
\begin{eqnarray}
\label{phi-H-bos}
\psi(t;n,N) \equiv H_{\rm b}\{P,Q,t\} = (2P + Q + 2t) PQ + 2 n P + N Q
\end{eqnarray}
of a dynamical system $\{Q,P,H_{\rm b}\}$, where $Q=Q(t;n,N)$ and $P=P(t;n,N)$ are canonical coordinate and momentum. For such a system,
Hamilton's equations of motion read:
\begin{eqnarray}
\dot{Q} &=& + \frac{\partial H_{\rm b}}{\partial P} = Q (Q+ 4 P + 2t) + 2n, \\
\dot{P} &=& - \frac{\partial H_{\rm b}}{\partial Q} = - P (2Q+2P+2t) - N.
\end{eqnarray}
Owing to Eq.~(\ref{RL-bos}), we need to develop a small-$n$ expansion for the Hamiltonian $H_{\rm b}\{P,Q,t\}$:
\begin{eqnarray}
\label{H-exp-bos}
H_{\rm b}\{P,Q,t\} = nH_1^{({\rm b})}(t;N) + {\mathcal O}(n^2).
\end{eqnarray}
When combined with the companion expansions
\begin{eqnarray}
P(t;n,N) &=& p_0(t;N) + np_1(t;N) + {\mathcal O}(n^2), \\
Q(t;n,N) &=& nq_1(t;N) + {\mathcal O}(n^2),
\end{eqnarray}
it results in the relation \footnote[1]{We will drop the superscript $({\rm b})$ wherever this does not cause a notational confusion.}
\begin{eqnarray}
\label{gh1-bos}
G(z;N) = - H_1^{({\rm b})} (z;N),
\end{eqnarray}
where
\begin{eqnarray} \label{h1-bos}
H_1(t;N) &=& 2 p_0 q_1 (p_0+t) + 2 p_0 + N q_1, \\
\label{h1-dot-bos}
\dot{H}_1(t;N) &=& 2 p_0 q_1.
\end{eqnarray}
Here, $p_0=p_0(t;N)$ and $q_1 = q_1(t;N)$ are solutions to the system of coupled first-order equations, obtained by collecting the ${\mathcal O}(1)$ and ${\mathcal O}(n)$ terms in Hamilton's equations:
\begin{eqnarray}
\label{system-bos}
\left\{
\begin{array}{cll}
\dot{p}_0 &=& - 2 p_0^2 - 2 p_0 t - N,\\
\dot{q}_1 &=& 4 p_0 q_1 + 2 q_1 t + 2.
\end{array}\right.
\end{eqnarray}
Since the initial conditions are known for $H_1(t;N)$, rather than for $p_0(t;N)$ and $q_1(t;N)$ separately, below we determine these two functions up to integration constants.
The function $p_0(t;N)$ satisfies a Riccati differential equation whose solution is
\begin{eqnarray}
\label{p0-ans-bos}
p_0(t;N) = \frac{1}{2} \left[ \frac{\dot{u}_-(t)}{u_-(t)} - t \right],
\end{eqnarray}
where
\begin{eqnarray}
\label{ut-bos}
u_-(t) = c_1 \dot{\iota}^N D_{N-1}(t\sqrt{2}) + c_2 \dot{\iota}^N D_{-N}(\dot{\iota} t \sqrt{2})
\end{eqnarray}
is, in turn, a solution to the parabolic cylinder equation
\begin{eqnarray}
\ddot{u}_-(t) + (2N-1-t^2)u_-(t)=0.
\end{eqnarray}
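The logarithmic-derivative substitution can be made explicit: inserting Eq.~(\ref{p0-ans-bos}) into the Riccati equation for $p_0$, one finds
\begin{eqnarray}
\dot{p}_0 + 2 p_0^2 + 2 p_0 t + N = \frac{1}{2} \left[
\frac{\ddot{u}_-(t)}{u_-(t)} + 2N - 1 - t^2
\right],
\end{eqnarray}
so that the nonlinear Riccati flow $\dot{p}_0 = -2p_0^2 - 2p_0 t - N$ holds if and only if $u_-(t)$ obeys the linear parabolic cylinder equation above.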
The factors of $\dot{\iota}^{N}$ in Eq.~(\ref{ut-bos}) have been made explicit in order to simplify the formulae to follow.
To determine $q_1(t;N)$, we substitute Eq.~(\ref{p0-ans-bos}) into the second formula of Eq.~(\ref{system-bos}) to derive:
\begin{eqnarray}
\label{q1-int-bos}
q_1(t;N) = 2 u^2_-(t) \int \frac{dt}{u_-^2(t)}.
\end{eqnarray}
Making use of the integration formula (see Appendix \ref{App-D-int})
\begin{eqnarray}
\label{osipov-integral-bos}
\int \frac{dt}{u^2_-(t)} = \frac{1}{\sqrt{2}}\, \frac{\alpha_1 D_{N-1}(t\sqrt{2}) + \alpha_2 D_{-N}(\dot{\iota} t\sqrt{2})}{u_-(t)},
\end{eqnarray}
where two constants $\alpha_1$ and $\alpha_2$ are subject to the constraint
\begin{eqnarray}
\label{const-a12-bos}
c_1 \alpha_2 - c_2 \alpha_1 = 1,
\end{eqnarray}
we further reduce Eq.~(\ref{q1-int-bos}) to
\begin{eqnarray}
\label{q1-ans-bos}
q_1(t;N) = \sqrt{2} u_-(t) \left[
\alpha_1 D_{N-1}(t\sqrt{2}) + \alpha_2 D_{-N}(\dot{\iota} t\sqrt{2})
\right].
\end{eqnarray}
Equations (\ref{h1-dot-bos}), (\ref{p0-ans-bos}), (\ref{q1-ans-bos}) and the identity
\begin{eqnarray} \fl
\dot{u}_-(t) - t\, u_-(t) = -\sqrt{2} \left[
c_1 \dot{\iota}^N D_{N}(t\sqrt{2}) + c_2 \dot{\iota}^{N+1} N D_{-N-1}(\dot{\iota} t \sqrt{2})
\right]
\end{eqnarray}
(obtained from Eq.~(\ref{ut-bos}) with the help of Eqs.~(\ref{d-rec-1}) and (\ref{d-rec-2})) yield $\dot{H}_1(t;N)$ in the form
\begin{eqnarray} \fl
\label{H1-sol-dot-bos}
\dot{H}_1(t;N) = -2
\left[
\alpha_1 D_{N-1}(t\sqrt{2}) + \alpha_2 D_{-N}(\dot{\iota} t\sqrt{2})
\right] \nonumber \\
\times
\left[
c_1 \dot{\iota}^N D_{N}(t\sqrt{2}) + c_2 \dot{\iota}^{N+1} N D_{-N-1}(\dot{\iota} t \sqrt{2})
\right].
\end{eqnarray}
To determine the unknown constants in Eq.~(\ref{H1-sol-dot-bos}), we make use of the asymptotic formulae for the parabolic cylinder functions (collected in Appendix \ref{App-D-int}) to satisfy the boundary conditions [see Eq.~(\ref{phi-bc-b})]
\begin{eqnarray}\label{h1-as-bos} \fl
\qquad \qquad H_1(t;N) \sim - \frac{N}{t}, \quad \dot{H}_1(t;N) \sim \frac{N}{t^2}, \qquad |t| \rightarrow \infty, \qquad t\in {\mathbb C} \setminus {\mathbb R}.
\end{eqnarray}
The two cases $\mathfrak{Im\,} t<0$ and $\mathfrak{Im\,} t>0$ should be treated separately.
\begin{itemize}
\item {\it The case} $\mathfrak{Im\,} t<0$. Asymptotic analysis of Eq.~(\ref{H1-sol-dot-bos}) at $t\rightarrow -\dot{\iota} \infty$ yields $c_1 =0$,
so that
\begin{eqnarray} \fl
\label{H1-sol-step-10-bos}
\dot{H}_1(t;N) = 2 \dot{\iota}^{N+1} N D_{-N-1}(\dot{\iota} t\sqrt{2}) \left[
D_{N-1}(t\sqrt{2}) - \alpha_2 c_2 D_{-N}(\dot{\iota} t\sqrt{2})
\right].
\end{eqnarray}
To determine the remaining constant $\alpha_2 c_2$, we make use of the boundary
condition Eq.~(\ref{h1-as-bos}) for $t \rightarrow \pm \infty - \dot{\iota} 0$. Straightforward
calculations yield $\alpha_2 c_2 = 0$. We then conclude that
\begin{eqnarray} \fl \label{h1t-neg-bos}
\dot{H}_1(t;N) = 2\, \dot{\iota}^{N+1} N\, D_{-N-1}(\dot{\iota} t\sqrt{2}) D_{N-1}(t \sqrt{2}),\qquad \mathfrak{Im\,} t<0.
\end{eqnarray}
\vspace{0.2cm}
\item {\it The case} $\mathfrak{Im\,} t>0$.
Asymptotic analysis of Eq.~(\ref{H1-sol-dot-bos}) at $t\rightarrow + \dot{\iota} \infty$ yields
\begin{eqnarray} \label{cond-1-bos}
\frac{c_1}{c_2} = - (-\dot{\iota})^{N-1}\frac{\sqrt{2\pi}}{(N-1)!}
\end{eqnarray}
so that
\begin{eqnarray}
\label{H1-sol-step-1-bos} \fl
\dot{H}_1(t;N) = -N! \sqrt{\frac{2}{\pi}}
D_{-N-1}(-\dot{\iota} t\sqrt{2}) \nonumber\\ \fl
\qquad\qquad \times
\left[
D_{-N}(\dot{\iota} t\sqrt{2}) + \alpha_1 c_1 (-\dot{\iota})^{N-1} \frac{(N-1)!}{\sqrt{2\pi}} D_{-N}(-\dot{\iota} t \sqrt{2})
\right].
\end{eqnarray}
To determine the remaining constant $\alpha_1 c_1$, we make use of the boundary condition Eq.~(\ref{h1-as-bos}) for $t \rightarrow \pm \infty + \dot{\iota} 0$. Straightforward
calculations yield
\begin{eqnarray}
\alpha_1 c_1 = (-\dot{\iota})^{N-1} \frac{\sqrt{2\pi}}{(N-1)!}.
\end{eqnarray}
We then conclude that
\begin{eqnarray} \fl \label{h1t-pos-bos}
\dot{H}_1(t;N) = 2\, \dot{\iota}^{N+1} N\, D_{-N-1}(-\dot{\iota} t\sqrt{2}) D_{N-1}(- t \sqrt{2}),\qquad \mathfrak{Im\,} t>0.
\end{eqnarray}
\end{itemize}
\noindent\newline
The calculation of $\dot{H}_1(t;N)$ can be summarised in a single formula
\begin{eqnarray}
\label{h1d-answer-bos}
\dot{H}_1(t;N) = - 2\, \dot{\iota}^{N-1} N\, D_{-N-1}(-\dot{\iota} t \sigma_t\sqrt{2}) D_{N-1}(- t \sigma_t \sqrt{2}),
\end{eqnarray}
where $\sigma_t={\rm sgn}\,\mathfrak{Im}\, t$ denotes the sign of $\mathfrak{Im}\, t$. In terms of canonical variables $p_0(t;N)$ and $q_1(t;N)$, this result translates to
\begin{eqnarray}
\label{p0-bos}
p_0(t;N) &=& \frac{\dot{\iota} N \sigma_t}{\sqrt{2}} \frac{D_{-N-1}(-\dot{\iota} t \sigma_t \sqrt{2})}{D_{-N}(-\dot{\iota} t \sigma_t\sqrt{2})}, \\
\label{q1-bos}
q_1(t;N) &=& \sqrt{2} \sigma_t \dot{\iota}^N D_{-N}(- \dot{\iota} t \sigma_t \sqrt{2}) D_{N-1}(-t \sigma_t \sqrt{2}).
\end{eqnarray}
In view of Eq.~(\ref{gh1-bos}), the latter result is equivalent to the statement
\begin{eqnarray} \fl
\frac{\partial}{\partial z} G(z;N)
= -\dot{H}_1^{({\rm b})}(t;N)\Big|_{t=z} = 2 \, \dot{\iota}^{N-1} N D_{-N-1}(-\dot{\iota} z\sigma_z \sqrt{2}) D_{N-1}(-z\sigma_z \sqrt{2}).
\end{eqnarray}
This expression, obtained within the {\it bosonic} replicas, must be compared with its counterpart derived via the {\it fermionic} replicas [Eqs.~(\ref{gh1}) and (\ref{h1d-answer})]:
\begin{eqnarray} \fl
\frac{\partial}{\partial z} G(z;N)
= -\dot{H}_1^{({\rm f})}(t;N)\Big|_{t=i z} = 2 \, (-\dot{\iota})^{N-1} N D_{-N-1}(-\dot{\iota} z\sigma_z \sqrt{2}) D_{N-1}(z\sigma_z \sqrt{2}).
\end{eqnarray}
As the two expressions coincide [by virtue of the parity relation $D_{N-1}(-x) = (-1)^{N-1} D_{N-1}(x)$, valid for integer order], we are led to conclude that the bosonic version of the replica limit reproduces the correct finite-$N$
results for the average Green function and the average density of eigenlevels as given by Eqs.~(\ref{gf-054-fermions}) and (\ref{m-res}), respectively. Again, as in the fermionic calculation carried out in Section \ref{Sec-FR}, the factorisation property did not show up explicitly in the above bosonic calculation. We defer discussing this point
until Section \ref{Sec-5-3}.
\subsubsection{Supersymmetric replicas}\noindent\newline\newline
The very same integrable theory of characteristic polynomials is at
work for a ``supersymmetric'' variation of replicas invented by
Splittorff and Verbaarschot (2003). These authors suggested that the
fermionic and bosonic replica partition functions (satisfying the
fermionic and bosonic Toda Lattice equations \footnote[1]{Notice that
Splittorff and Verbaarschot (2003) use the term ``Toda Lattice
equation'' for the first equation of the TL hierarchy.},
respectively) can be seen as two different branches of a single,
{\it graded} Toda Lattice equation. Below we show that the above
statement, considered in the context of GUE, is also valid {\it beyond}
the first equation of the Toda Lattice
hierarchy.\newline\newline\noindent {\it First (graded) TL
equation.}---Equations (\ref{ZPi-F}) and (\ref{TL-1-cc}) imply that
the fermionic replica partition function ${\mathcal
Z}^{(+)}_{n}(z;N)$ satisfies the first TL equation in the form:
\begin{eqnarray}
\label{TL-1-fer}
\frac{\partial^2}{\partial z^2}\,
\log {\mathcal Z}^{(+)}_{n}(z;N) = - 2 n\,\left(
\frac{{\mathcal
Z}^{(+)}_{n-1}(z;N) \,{\mathcal
Z}^{(+)}_{n+1}(z;N) }{{\mathcal
Z}^{(+)\, 2}_{n}(z;N)}-1\right).
\end{eqnarray}
Together with the initial conditions ${\mathcal Z}^{(+)}_{0}(z;N)=1$
and
\begin{eqnarray} \label{Z-1-fer} \fl
{\mathcal Z}^{(+)}_{1}(z;N) = (-\dot{\iota})^N \Pi_{1|1}^{\rm G}(\dot{\iota} z;N) &=& \frac{1}{\sqrt{\pi}}
\int_{\mathbb R} d\lambda\, e^{-\lambda^2} (z+\dot{\iota}\lambda)^N \nonumber\\
&=& 2^{-N/2} e^{z^2/2} D_N(z\sqrt{2}),
\end{eqnarray}
this equation uniquely determines fermionic replica partition
functions of any order ($n \ge 2$). Here, $D_N$ is the parabolic cylinder function of positive order (see Appendix~\ref{App-D-int}).
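As a simple illustration, at $N=1$ the Gaussian integral in Eq.~(\ref{Z-1-fer}) can be evaluated by inspection,
\begin{eqnarray}
{\mathcal Z}^{(+)}_{1}(z;1) = \frac{1}{\sqrt{\pi}} \int_{\mathbb R} d\lambda\, e^{-\lambda^2} (z + \dot{\iota} \lambda) = z,
\end{eqnarray}
in agreement with the closed form $2^{-1/2} e^{z^2/2} D_1(z\sqrt{2}) = z$, which follows from $D_1(z\sqrt{2}) = \sqrt{2}\, z\, e^{-z^2/2}$.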
The first TL equation for the bosonic replica partition function
${\mathcal Z}^{(-)}_{n}(z;N)$ follows from Eqs.~(\ref{054}) and
(\ref{TL-1-cc}),
\begin{eqnarray}
\label{TL-1-bos}
\frac{\partial^2}{\partial z^2}\,
\log {\mathcal Z}^{(-)}_{n}(z;N) = + 2 n\,\left(
\frac{{\mathcal
Z}^{(-)}_{n-1}(z;N) \,{\mathcal
Z}^{(-)}_{n+1}(z;N) }{{\mathcal
Z}^{(-)\, 2}_{n}(z;N)}-1\right).
\end{eqnarray}
Together with the initial conditions ${\mathcal Z}^{(-)}_{0}(z;N)=1$
and
\begin{eqnarray} \label{Z-1-bos} \fl
{\mathcal Z}^{(-)}_{1}(z;N) = \Pi_{1|1}^{\rm G}(z;-N) &=& \frac{1}{\sqrt{\pi}}
\int_{\mathbb R} d\lambda\, e^{-\lambda^2} (z- \lambda)^{-N} \nonumber\\
&=& (-\dot{\iota} \sigma_z)^N 2^{N/2} e^{-z^2/2} \, D_{-N}(-\dot{\iota} z \sigma_z \sqrt{2}),
\end{eqnarray}
where $\sigma_z={\rm sgn}\, \mathfrak{Im}\,z$ denotes the sign of $\mathfrak{Im}\,z$, this equation uniquely determines bosonic replica partition
functions of any order ($n \ge 2$). Here, $D_{-N}$ is the parabolic cylinder function of negative order (see Appendix~\ref{App-D-int}).
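The non-compact content of ${\mathcal Z}^{(-)}_{n}$ is visible already at $n=N=1$: for $\mathfrak{Im}\, z > 0$, Eq.~(\ref{Z-1-bos}) reduces to ${\mathcal Z}^{(-)}_{1}(z;1) = -\dot{\iota} \sqrt{\pi}\, w(z)$, where $w(z) = e^{-z^2} {\rm erfc}(-\dot{\iota} z)$ is the Faddeeva function. This can be confirmed numerically; in the Python sketch below (an illustration only; the evaluation point and quadrature grid are arbitrary, and {\tt scipy.special.wofz} implements $w(z)$), the integral in Eq.~(\ref{Z-1-bos}) is computed by direct quadrature:
\begin{verbatim}
import numpy as np
from scipy.special import wofz

z = 0.7 + 0.5j                            # any point with Im z > 0
lam = np.linspace(-40.0, 40.0, 400001)    # exp(-lam^2) negligible at ends
Z1m = np.trapz(np.exp(-lam**2)/(z - lam), lam) / np.sqrt(np.pi)
print(Z1m, -1j*np.sqrt(np.pi)*wofz(z))    # the two values agree
\end{verbatim}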
Further, defining the {\it graded} replica partition function as
\begin{eqnarray}
\label{graded-pf-1}
{\mathcal Z}_{n}(z;N) = \cases{
{\mathcal Z}^{(-)}_{|n|}(z;N), & $n \in {\mathbb Z}^-$\\
1, & $n = 0$ \\
{\mathcal Z}^{(+)}_{|n|}(z;N), & $n \in {\mathbb Z}^+$,
}
\end{eqnarray}
we spot from Eqs.~(\ref{TL-1-fer}) and (\ref{TL-1-bos}) that it satisfies the single (graded) TL equation
\begin{eqnarray}
\label{TL-1-graded}
\frac{\partial^2}{\partial z^2}\,
\log {\mathcal Z}_{n}(z;N) = - 2 n\,\left(
\frac{{\mathcal
Z}_{n-1}(z;N) \,{\mathcal
Z}_{n+1}(z;N) }{{\mathcal
Z}^{2}_{n}(z;N)}-1\right).
\end{eqnarray}
Here, the index $n$ is an arbitrary integer, be it positive or negative. The first graded TL equation must be supplemented by {\it two} initial conditions given by Eqs.~(\ref{Z-1-fer}) and (\ref{Z-1-bos}).
\newline\newline\noindent {\it Second (graded) TL
equation.}---Equations (\ref{ZPi-F}), (\ref{054}) and (\ref{TL-2-cc}) imply that
both fermionic and bosonic replica partition functions ${\mathcal
Z}^{(\pm)}_{n}(z;N)$ additionally satisfy the second TL equation
\begin{eqnarray}
\label{TL-2-fer-bos} \fl
\left(1- z \frac{\partial}{\partial z} \right) \frac{\partial}{\partial z}\,
\log {\mathcal Z}^{(\pm)}_{n}(z;N)
=
n\,
\frac{{\mathcal Z}^{(\pm)}_{n+1}(z;N) \,
{\mathcal Z}^{(\pm)}_{n-1}(z;N)}{{\mathcal Z}^{(\pm)\, 2}_{n}(z;N)}
\,\frac{\partial}{\partial z}
\log \left(
\frac{{\mathcal Z}^{(\pm)}_{n+1}(z;N)}{{\mathcal Z}^{(\pm)}_{n-1}(z;N)}
\right). \nonumber \\
{}
\end{eqnarray}
Consequently, the graded replica partition function ${\mathcal Z}_{n}(z;N)$ defined by Eq.~(\ref{graded-pf-1}) is determined by the second {\it graded} TL equation
\begin{eqnarray}
\label{TL-2-graded} \fl
\left(1- z \frac{\partial}{\partial z} \right) \frac{\partial}{\partial z}\,
\log {\mathcal Z}_{n}(z;N)
=
n\,
\frac{{\mathcal Z}_{n+1}(z;N) \,
{\mathcal Z}_{n-1}(z;N)}{{\mathcal Z}^{2}_{n}(z;N)}
\,\frac{\partial}{\partial z}
\log \left(
\frac{{\mathcal Z}_{n+1}(z;N)}{{\mathcal Z}_{n-1}(z;N)}
\right) \nonumber \\
{}
\end{eqnarray}
supplemented by two initial conditions Eqs.~(\ref{Z-1-fer}) and (\ref{Z-1-bos}).
\newline\newline\noindent {\it Replica limit of graded TL
equations.}---To determine the one-point Green function $G(z;N)$ within the framework of supersymmetric replicas, one has to consider the replica limit
\begin{eqnarray}
G(z;N) = \lim_{n\rightarrow 0} \frac{1}{n} \frac{\partial}{\partial z} \, {\mathcal Z}_n(z;N).
\end{eqnarray}
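To see how this limit acts on the graded hierarchy, write ${\mathcal Z}_n(z;N) = 1 + n\, w(z;N) + {\mathcal O}(n^2)$, so that $G(z;N) = \partial_z w(z;N)$ and $\partial_z \log {\mathcal Z}_n = n\, G + {\mathcal O}(n^2)$. Since ${\mathcal Z}_0 = 1$ and ${\mathcal Z}_{n \pm 1} \rightarrow {\mathcal Z}_{\pm 1}$ as $n \rightarrow 0$, the first graded TL equation Eq.~(\ref{TL-1-graded}) linearises to
\begin{eqnarray}
n\, \frac{\partial}{\partial z} G(z;N) = - 2 n \left( {\mathcal Z}_{-1}(z;N)\, {\mathcal Z}_{1}(z;N) - 1 \right) + {\mathcal O}(n^2);
\end{eqnarray}
the second graded TL equation Eq.~(\ref{TL-2-graded}) can be linearised in the same manner.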
The first and second graded TL equations yield
\begin{eqnarray}
\label{g-prime}
G^\prime(z;N) = 2 - 2\, {\mathcal Z}_{-1}(z;N) \,{\mathcal Z}_{1}(z;N),
\end{eqnarray}
and
\begin{eqnarray} \label{G-deq} \fl
G(z;N)= z G^\prime (z;N) + {\mathcal Z}_{-1}(z;N) {\mathcal Z}_{1}^\prime(z;N)
- {\mathcal Z}_{-1}^\prime(z;N) {\mathcal Z}_{1}(z;N),
\end{eqnarray}
respectively. Combining the two equations, we derive
\begin{eqnarray}
\label{g-factor}
G(z;N) = 2 z - 2 z\, {\mathcal Z}_{-1} \,{\mathcal Z}_{1} +
{\hat{\mathcal W}}_z\left[
{\mathcal Z}_{-1}, \,{\mathcal Z}_{1}
\right],
\end{eqnarray}
where ${\hat{\mathcal W}}_z$ is the Wronskian defined in Eq.~(\ref{wd}); the prime ${}^\prime$ stands for the derivative $\partial/\partial z$. Interestingly, the second graded TL equation has allowed us to integrate Eq.~(\ref{g-prime}) at once, with no quadrature left to perform!
The resulting Eq.~(\ref{g-factor}) is remarkable: it shows that the average Green function can be expressed solely in terms of the bosonic ${\mathcal Z}_{-1}(z;N)$ and fermionic ${\mathcal Z}_{1}(z;N)$ replica partition functions with only one flavor. This structural phenomenon, known as the {\it `factorisation property'}, was first observed by Splittorff and Verbaarschot (2003) in the context of the GUE density-density correlation function. Striking at first sight, the factorisation property appears to be very natural after
recognising that the fermionic and bosonic replica partition functions are members of a single graded TL hierarchy.
To make Eq.~(\ref{g-factor}) explicit, we utilise Eqs.~(\ref{Z-1-fer}) and (\ref{Z-1-bos}) to observe the identity
\begin{eqnarray} \fl
{\hat{\mathcal W}}_z\left[
{\mathcal Z}_{-1}, \,{\mathcal Z}_{1}
\right] = 2z\, {\mathcal Z}_{-1} \,{\mathcal Z}_{1} + (-\dot{\iota})^N {\hat{\mathcal W}}_z
\left[
D_{-N}(-\dot{\iota} z \sigma_z \sqrt{2}),\, D_{N}(z \sigma_z \sqrt{2})
\right].
\end{eqnarray}
Consequently,
\begin{eqnarray} \label{gf-054}
G(z;N) = 2 z + (-\dot{\iota})^N {\hat{\mathcal W}}_z
\left[
D_{-N}(-\dot{\iota} z \sigma_z \sqrt{2}),\, D_{N}(z \sigma_z \sqrt{2})
\right].
\end{eqnarray}
This expression for the average Green function, derived within the framework of {\it supersymmetric replicas}, coincides with the one obtained {\it separately} by means of fermionic and bosonic replicas (see, e.g., Eq.~(\ref{gf-054-fermions})). Hence, the result Eq.~(\ref{m-res}) for the finite-$N$ average density of eigenlevels readily follows.
\subsection{Factorisation property in fermionic and bosonic replicas}\label{Sec-5-3}
The factorisation property naturally appearing in the supersymmetric variation of the replica method suggests that a generic correlation function should contain both {\it compact} (fermionic) and {\it non-compact} (bosonic) contributions. Below, the fermionic-bosonic factorisation property is discussed separately in the context of fermionic and bosonic replicas, where its presence is far less obvious, even though the enlightened reader might have anticipated it from Eqs.~(\ref{gh1}) and (\ref{h1-dot}) for fermionic replicas and from Eqs.~(\ref{gh1-bos}) and (\ref{h1-dot-bos}) for bosonic replicas.\noindent\newline\newline
{\it Fermionic replicas.}---The Hamiltonian formulation of the fourth Painlev\'e transcendent employed in Section \ref{Sec-FR} is the key. It follows from Eqs.~(\ref{gh1}) and (\ref{h1-dot}) that the average Green function $G(z;N)$ is expressed in terms of canonical variables $p_0$ and $q_1$ as
\begin{eqnarray}
\frac{\partial}{\partial z} \, G(z;N) = - 2p_0(\dot{\iota} z)\, q_1(\dot{\iota} z),
\end{eqnarray}
where
\begin{eqnarray}
p_0(\dot{\iota} z) = -\frac{\dot{\iota}}{2}\left[ z +
\frac{\partial}{\partial z} \log D_N(z \sqrt{2})
\right]
\end{eqnarray}
and
\begin{eqnarray} \fl
q_1(\dot{\iota} z) = -\frac{\dot{\iota}}{N} (-\dot{\iota} \sigma_z)^N e^{z^2/2} D_N(z\sqrt{2}) \frac{\partial}{\partial z} \left[
e^{-z^2/2} D_{-N}(-\dot{\iota} z\sigma_z \sqrt{2})
\right].
\end{eqnarray}
To derive the last two equations, we have used Eqs.~(\ref{p0-fer}) and (\ref{q1-fer}) in conjunction with Eqs.~(\ref{d-rec-1}) and (\ref{d-rec-2}).
With the help of Eq.~(\ref{Z-1-fer}), the canonical ``momentum'' $p_0$ can be related to the {\it fermionic} partition function for one flavor,
\begin{eqnarray}
\label{p0-fer-Z}
p_0(\dot{\iota} z) = -\frac{\dot{\iota}}{2} \frac{\partial}{\partial z} \log {\mathcal Z}^{(+)}_{1}(z;N).
\end{eqnarray}
This is a {\it compact contribution} to the average Green function. A {\it non-compact contribution} is encoded in the canonical ``coordinate'' $q_1$
which can be related to the {\it bosonic} partition function via Eq.~(\ref{Z-1-bos}):
\begin{eqnarray}
\label{q1-fer-Z}
q_1(\dot{\iota} z) = -\frac{\dot{\iota}}{N} \, {\mathcal Z}^{(+)}_{1}(z;N) \,
\frac{\partial}{\partial z} {\mathcal Z}^{(-)}_{1}(z;N),
\end{eqnarray}
so that
\begin{eqnarray}
\label{g-prime-fact-alt}
\frac{\partial}{\partial z} \, G(z;N) = \frac{1}{N} \frac{\partial}{\partial z} {\mathcal Z}^{(+)}_{1}(z;N)\frac{\partial}{\partial z} {\mathcal Z}^{(-)}_{1}(z;N).
\end{eqnarray}
This is yet another factorised representation for $G^\prime(z;N)$ [compare to Eq.~(\ref{g-prime})].
\noindent\newline\newline
{\it Bosonic replicas.}---To identify both compact and non-compact contributions to the average Green function, we turn to Eqs.~(\ref{gh1-bos}) and (\ref{h1-dot-bos}) to represent
the derivative of the average Green function $G(z;N)$ in terms of canonical variables $p_0$ and $q_1$ as
\begin{eqnarray}
\label{dpG}
\frac{\partial}{\partial z} \, G(z;N) = - 2p_0(z)\, q_1(z),
\end{eqnarray}
where
\begin{eqnarray}
p_0(z) = -\frac{1}{2}\left[ z -
\frac{\partial}{\partial z} \log D_{-N}(- \dot{\iota} z \sigma_z \sqrt{2})
\right]
\end{eqnarray}
and
\begin{eqnarray} \fl
q_1(z) = -\frac{1}{N} (-\dot{\iota} \sigma_z)^N e^{-z^2/2} D_{-N}(-\dot{\iota} z \sigma_z \sqrt{2}) \frac{\partial}{\partial z} \left[
e^{z^2/2} D_{N}(z \sqrt{2})
\right].
\end{eqnarray}
To derive the last two equations, we have used Eqs.~(\ref{p0-bos}) and (\ref{q1-bos}) in conjunction with Eqs.~(\ref{d-rec-1}) and (\ref{d-rec-2}).
With the help of Eq.~(\ref{Z-1-bos}), the canonical ``momentum'' $p_0$ can be related to the {\it bosonic} partition function for one flavor,
\begin{eqnarray}
\label{p0-bos-Z}
p_0(z) = \frac{1}{2} \frac{\partial}{\partial z} \log {\mathcal Z}^{(-)}_{1}(z;N).
\end{eqnarray}
This is a {\it non-compact contribution} to the average Green function. A {\it compact contribution} comes from the canonical ``coordinate'' $q_1$
which can be related to the {\it fermionic} partition function via Eq.~(\ref{Z-1-fer}):
\begin{eqnarray}
\label{q1-bos-Z}
q_1(z) = -\frac{1}{N} \, {\mathcal Z}^{(-)}_{1}(z;N) \,
\frac{\partial}{\partial z} {\mathcal Z}^{(+)}_{1}(z;N),
\end{eqnarray}
so that
\begin{eqnarray}
\frac{\partial}{\partial z} \, G(z;N) = \frac{1}{N} \frac{\partial}{\partial z} {\mathcal Z}^{(+)}_{1}(z;N)\frac{\partial}{\partial z} {\mathcal Z}^{(-)}_{1}(z;N),
\end{eqnarray}
agreeing with the earlier result Eq.~(\ref{g-prime-fact-alt}).
\noindent\newline\newline
{\it Brief summary.}---The detailed analysis of fermionic and bosonic replica limits performed in the context of the GUE averaged one-point Green function $G(z;N)$
has convincingly demonstrated that the Hamiltonian formulation of the fourth Painlev\'e transcendent provides a natural and, perhaps, the most adequate language to identify the factorisation phenomenon. In particular, we have shown that the derivative $G^\prime (z;N)$ of the one-point Green function factorises into a product of the canonical ``momentum''
\begin{eqnarray}
\label{p0-limit}
p_0(t;N) = \lim_{n\rightarrow 0} P(t;n,N)
\end{eqnarray}
and canonical ``coordinate''
\begin{eqnarray}
\label{q1-limit}
q_1(t;N) = \lim_{n\rightarrow 0} \frac{1}{n} Q(t;n,N).
\end{eqnarray}
As suggested by Eqs.~(\ref{p0-fer-Z}) and (\ref{p0-bos-Z}), the momentum contribution $p_0$ to the average Green function is a regular one; it corresponds to a compact contribution in fermionic replicas and to a non-compact contribution in bosonic replicas:
\begin{eqnarray}
p_0 \sim \frac{\partial}{\partial z} \log
\cases{
{\mathcal Z}^{(+)}_{1}(z;N) & (fermionic)\\
{\mathcal Z}^{(-)}_{1}(z;N) & (bosonic)
}
\end{eqnarray}
On the contrary, the coordinate contribution $q_1$ is of a complementary nature: defined by a replica-like limit [Eq.~(\ref{q1-limit})], it brings in a non-compact contribution in fermionic replicas [Eq.~(\ref{q1-fer-Z})] and a compact contribution in bosonic replicas [Eq.~(\ref{q1-bos-Z})]:
\begin{eqnarray}
q_1 \sim \exp\left(\int p_0\, dz\right) \times \frac{\partial}{\partial z}
\cases{
{\mathcal Z}^{(-)}_{1}(z;N) & (fermionic)\\
{\mathcal Z}^{(+)}_{1}(z;N) & (bosonic)
}
\end{eqnarray}
We close this section by noting that the very same calculational framework should be equally effective in performing the replica limit for other random-matrix ensembles and/or spectral observables.
\section{Conclusions}
\label{Sec-6}
In this paper, we have used the ideas of integrability to formulate a theory of the correlation function
\begin{eqnarray}
\Pi_{n|p} ({\boldsymbol \varsigma}; {\boldsymbol \kappa}) =
\int d\mu_n({\boldsymbol {\cal H}})\,
\prod_{\alpha=1}^{p} {\rm det}_n^{\kappa_\alpha}(\varsigma_\alpha-{\boldsymbol{\cal H}}) \nonumber
\end{eqnarray}
of characteristic polynomials for invariant non-Gaussian ensembles of Hermitean random matrices characterised by the probability measure $d\mu_n({\boldsymbol {\cal H}})$, which may well depend on the matrix dimensionality $n$. Contrary to other approaches based on various duality relations, our theory does not assume ``integerness'' of replica parameters ${\boldsymbol \kappa} = (\kappa_1,\cdots,\kappa_p)$, which are allowed to span ${\boldsymbol \kappa}\in{\mathbb R}^p$. One of the consequences of lifting the restriction ${\boldsymbol \kappa}\in {\mathbb Z}^p$ is that we were unable to represent the CFCP {\it explicitly} in a closed determinant form; instead, we have shown that the correlation function $\Pi_{n|p} ({\boldsymbol \varsigma}; {\boldsymbol \kappa})$ satisfies an infinite set of {\it nonlinear, hierarchically structured differential} relations. While such a description is, to a large extent, {\it implicit}, it does provide a useful nonperturbative characterisation of $\Pi_{n|p} ({\boldsymbol \varsigma}; {\boldsymbol \kappa})$, which turns out to be highly beneficial for an in-depth analysis of the mathematical foundations of zero-dimensional replica field theories.
Certainly, replicas are not the only application of the nonperturbative approach to CFCP developed in this paper. With minor modifications, its potential readily extends to various problems of charge transfer through quantum chaotic structures (Osipov and Kanzieper 2008, Osipov and Kanzieper 2009), the stochastic theory of density matrices (Osipov, Sommers and \.Zyczkowski 2010), and the random matrix theory approach to QCD physics (Verbaarschot 2010), to name a few. An extension of the above formalism to the CFCP of unitary matrices may bring a useful calculational tool for generating conjectures on the behaviour of the Riemann zeta function on the critical line (Keating and Snaith 2000a, 2000b).
Finally, we wish to mention that an integrable theory of CFCP for the $\beta=1$ and $\beta=4$ Dyson symmetry classes is very much called for. Building on the insightful work by Adler and van Moerbeke (2001), a solution of this challenging problem seems to be within reach.
\section*{Acknowledgements}
The authors thank Nicholas Witte for providing references to the early works by Uvarov (1959, 1969). This work was supported by Deutsche Forschungsgemeinschaft SFB/Tr
12, and by the Israel Science Foundation through the grants No
286/04 and No 414/08.
\smallskip\smallskip\smallskip
\newpage
\section{Introduction}
We consider the motion of a rigid planar body, whose shape is not necessarily circular, immersed in a two-dimensional perfect fluid. We assume that the vorticity of the fluid vanishes, but we allow for a non-zero amount of circulation around the rigid body. This dynamical system is of fundamental importance in aerodynamics and was studied, among others, by Chaplygin, Kutta, Lamb, and Zhukowski; see \cite{Ko1993}, \cite{Ko2003}, and \cite{BoMa06} for a detailed overview of the literature and background information on this subject. A consequence of the non-zero circulation is the presence of a gyroscopic \emph{lift force} acting on the rigid body, which is proportional to the circulation. This force is referred to as the \emph{Kutta-Zhukowski force}.
While various aspects of this dynamical system have been discussed throughout the fluid-dynamical literature (see for instance \cite{MiTh1968} or \cite{Batchelor} for a modern account), deeper insight into the Hamiltonian structure of this system had to wait until the work of \cite{Ko1993} and \cite{BoMa06}. These authors identified a Hamiltonian structure for the equations of motion found by Chaplygin and used this structure to shed further light on the integrability of the system and to investigate chaoticity. This pioneering work raised several questions worth investigating in their own right. First and foremost, the non-canonical Hamiltonian structure of the rigid body with circulation is obtained \emph{by inspection}, which immediately raises the question of whether there are other, more fundamental reasons why this should be so. In this paper, we address this question following the approach of \cite{Ar66}. In particular, we model the dynamics of the rigid body moving in a perfect fluid as a geodesic flow on an infinite-dimensional manifold. On this space, several symmetry groups act, and by performing successive reductions with respect to each of these groups, we obtain the Hamiltonian structure and equations of motion for the rigid body with circulation in a geometric way. This approach is also used in \cite{VaKaMa2009} to derive the equations of motion for a rigid body interacting with point vortices.
In this geometric approach, the symmetry groups of the fluid-solid problem are the \emph{\textbf{particle relabeling symmetry}} group (the group of volume-preserving diffeomorphisms of the fluid reference space), and the \emph{\textbf{special Euclidian group}} $SE(2)$ of uniform solid-fluid translations and rotations. Symplectic reduction with respect to the former group is equivalent to the specification of the vorticity field of the fluid: in the problem considered here, this amounts to requiring that the external vorticity vanishes and that there is a given amount of circulation around the body. The group $SE(2)$ acts on the system by combined solid-fluid transformations. After Poisson reduction with respect to this group, we obtain the Hamiltonian structure of \cite{BoMa06}.
One straightforward advantage of the geometric description is that it provides immediate insight into the dynamics, which might be harder to obtain by non-geometric means. As an illustration, we show that the symplectic leaves for the Poisson structure obtained by \cite{BoMa06} are orbits of a certain \emph{\textbf{affine action}} of $SE(2)$ onto $\mathfrak{se}(2)^\ast$, which we compute explicitly. We show that these leaves are paraboloids of revolution, hence providing a geometric interpretation of a result of Chaplygin.
An additional result of using geometric reduction is that we obtain a new interpretation for the classical
Kutta-Zhukowski force. In particular, we show that the Kutta-Zhukowski force is proportional to the curvature of a natural fluid-dynamical connection, called the Neumann connection, which encodes the influence of the rigid body on the surrounding fluid. In this way, we exhibit interesting parallels between the dynamics of the rigid body with circulation and that of a charged particle in a magnetic field: both systems are acted upon by a gyroscopic force, the Kutta-Zhukowski force and the Lorentz force, respectively. The reduction procedure to recover the Hamiltonian description of \cite{BoMa06} then turns out to be similar to the way in which \cite{Sternberg1977} derives the equations for a charged particle by including the magnetic field directly into the Poisson structure.
Lastly, we also establish an analogy between the \emph{\textbf{Kaluza-Klein construction}} for magnetic particles and the dynamics of the rigid body with circulation. In the conventional Kaluza-Klein description, the dynamics of the magnetic particle is turned into geodesic motion by extending the configuration space and including the magnetic potential in the metric. Given the similarity between the Lorentz and the Kutta-Zhukowski force, a natural question is then whether a similar geometric description exists for the rigid body with circulation. It turns out that this is indeed so: the extended configuration space in this case is a central extension of $SE(2)$ by means of a natural cocycle, which is again related to the curvature giving rise to the Kutta-Zhukowski force. This extension of $SE(2)$ is known as the \emph{\textbf{oscillator group}} in quantum mechanics; see \cite{Streater1967}. The rigid body with circulation is then described by geodesic motion on the oscillator group.
Our description of the rigid body with circulation as a geodesic flow on a central extension is similar to the work of \cite{OvKh1987}, who showed that the Korteweg-de Vries equation can be interpreted as a geodesic motion on the Virasoro group, an extension of the diffeomorphism group of the circle. It is also worth stressing that the similarity with the classical Kaluza-Klein picture cannot be taken too far: in the Kaluza-Klein description, one modifies the metric to take into account the magnetic field, whereas we leave the metric on $SE(2)$ essentially unchanged and instead we deform the multiplication structure by means of a cocycle, giving rise to a central extension of $SE(2)$.
\paragraph{Outline of this Paper.} We begin the paper by giving an overview of the classical literature on fluid-structure interactions in section~\ref{sec:prel}. In section~\ref{sec:fsint} we recall some of the geometric concepts that arise in this context, most notably the particle relabeling symmetry group and the Neumann connection, and we give a brief summary of cotangent bundle reduction, which we use in section~\ref{sec:red} to derive the Chaplygin-Lamb equations describing a rigid body with circulation. The geometry of the oscillator group and the link with the rigid body with circulation are explained in section~\ref{sec:geodosc}, and we finish the paper in section~\ref{sec:outlook} with a brief discussion of future work. In appendices~\ref{appendix:rigidgroup} and \ref{appendix:diffgroup}, we collect some elementary results about the geometry of the special Euclidian group $SE(2)$ and the group $\mathrm{Diff}_{\mathrm{vol}}$ of volume-preserving diffeomorphisms.
\paragraph{Acknowledgements.} We would like to thank Scott Kelly, Jair Koiller, Tudor Ratiu and Banavara Shashikanth for useful suggestions and interesting discussions.
J. Vankerschaver is supported through a postdoctoral fellowship from the Research Foundation
-- Flanders (FWO-Vlaanderen). Additional financial
support from the Fonds Professor Wuytack is gratefully acknowledged.
E. Kanso and J. E. Marsden would like to acknowledge the support of the National Science Foundation through the grants CMMI 07-57092 and CMMI 07-57106, respectively.
\section{Body-Fluid Interactions: Classical Formulation}
\label{sec:prel}
Consider a planar body moving in an infinitely large volume of an incompressible and
inviscid fluid $\mathcal{F}$ at rest at infinity. The body $\mathcal{B}$
is assumed to occupy a simply connected region whose boundary
can be conformally mapped to a unit circle, and it is considered to be uniform
and neutrally buoyant (the body weight is balanced by the force of buoyancy).
Introduce an orthonormal inertial
frame $\{{\bf e}_{1,2,3} \} $ where $\{{\bf e}_1,{\bf e}_2\}$ span the plane
of motion and ${\bf e}_3$ is the unit normal to this plane.
The configuration of the submerged
rigid body can then be described by a rotation $\theta$ about
$\mathbf{e}_3$ and a translation $\mathbf{x}_o =
x_o\mathbf{e}_1 + y_o\mathbf{e}_2$
of a point $O$ (often chosen to coincide with the conformal center of the body).
The angular and translational velocities expressed relative to the inertial
frame are of the form
$ \dot{\theta}\,\mathbf{e}_3$ and $\mathbf{v} = v_x \,\mathbf{e}_1 + v_y\, \mathbf{e}_2$
where $v_x = \dot{x}_o$, $v_y = \dot{y}_o$ (the dot denotes derivative with respect to time $t$).
It is convenient for the following development to
introduce a moving frame $\{{\bf b}_{1,2,3} \} $ attached to the body.
The point transformation from the body frame to the inertial frame can be represented as
\begin{equation}\label{eq:rigidmotion}
\mathbf{x} = R_{\theta} \mathbf{X} + \mathbf{x}_o, \qquad R_{\theta} =
\begin{pmatrix}
\cos\theta & -\sin\theta \\ \sin\theta & \cos\theta
\end{pmatrix},
\end{equation}
where $\mathbf{x} = x\,\mathbf{e}_1 + y\, \mathbf{e}_2$ and
$\mathbf{X} = X\, \mathbf{b}_1 + Y\, \mathbf{b}_2,$ while vectors
transform as $\mathbf{v} = R_{\theta} \mathbf{V}.$ The angular and translational
velocities expressed in the body frame take the form
$\boldsymbol{\Omega} = \Omega \, \mathbf{b}_3$ (where $\Omega = \dot{\theta}$)
and $\mathbf{V}=V_x\mathbf{b}_1 + V_y \mathbf{b}_2$ (where
$V_x = \dot{x}_o\cos{\theta} + \dot{y}_o\sin\theta$
and $V_y = -\dot{x}_o\sin{\theta} + \dot{y}_o\cos\theta$).
Note that the orientation
and position $(\theta,x_o,y_o)$ form an element of $SE(2)$, the group
of rigid body motions in $\mathbb{R}^2$. The velocity in the body frame
$\zeta = (\Omega, V_{x},V_{y})^T$, where $()^T$ denotes the transpose operation,
is an element of the
vector space $\mathfrak{se}(2)$ which is the space of infinitesimal rotations and translations
in $\mathbb{R}^2$ and is referred to as the Lie algebra of $SE(2)$; for more details on the rigid
body group and its Lie algebra, see Appendix~\ref{appendix:rigidgroup} and references therein.
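For concreteness, these kinematic maps are easily implemented; the following Python fragment is a minimal illustration (function names and the sample values are ours), encoding the rotation matrix of Eq.~(\ref{eq:rigidmotion}) and the body-frame velocity map $\mathbf{V} = R_\theta^T \mathbf{v}$:
\begin{verbatim}
import numpy as np

def R(theta):                    # rotation matrix R_theta
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def body_velocity(theta, theta_dot, xo_dot, yo_dot):
    # map spatial velocities to the body frame: zeta = (Omega, V)
    V = R(theta).T @ np.array([xo_dot, yo_dot])   # V = R^T v
    return theta_dot, V

# translation along e_1, seen from a body rotated by 90 degrees:
Omega, V = body_velocity(np.pi/2, 0.0, 1.0, 0.0)
print(Omega, V)                  # 0.0, approximately (0, -1)
\end{verbatim}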
\paragraph{Fluid Motion.} Let the fluid fill the complement of the body in $\mathbb{R}^2$.
The reference configuration of the fluid will be denoted by $\mathcal{F}_0$,
and that of the body by $\mathcal{B}_0$. The space taken by the fluid at a generic
time $t$ will be denoted by $\mathcal{F}$. Note however that as time
progresses, the position of the body changes and hence so does its
complement $\mathcal{F}$. In geometric mechanics, the mapping from
the reference configuration $\mathcal{F}_0$ to the fluid domain $\mathcal{F}$ can be
expressed as an element of the group of volume-preserving diffeomorphisms
reviewed in Appendix~\ref{appendix:diffgroup}. In Section~\ref{sec:fsint},
we present an extension of this geometric approach to the solid-fluid
problem considered here; beforehand, we briefly describe,
using the classical vector calculus approach, the fluid motion
and the equations governing the motion of the submerged body.
The fluid velocity $\mathbf{u}$ can be written using the
Helmholtz-Hodge decomposition as follows
\begin{equation}\label{eq:u}
\mathbf{u} = \nabla \Phi_\zeta \ + \ \mathbf{u}_{\rm v} ,
\end{equation}
and has to satisfy the impermeability boundary condition on the boundary of the solid body. Now we explain the two terms in this decomposition.
The potential function $\Phi_\zeta$ is harmonic
and represents the irrotational motion of the fluid generated by
motion of the body.
The subscript $\zeta$ refers to the fact that $\Phi_\zeta$ is determined by the body velocity $\zeta$.
We emphasize that the motion
of the body inside the fluid does not generate vorticity and only causes
irrotational motion in the fluid. The potential function $\Phi_\zeta$ is a
solution to Laplace's equation $\Delta \Phi_\zeta = 0,$
subject to the boundary conditions
\begin{equation} \label{Neumann}
\Delta \Phi_\zeta = 0, \qquad
\left. \frac{\partial \Phi_\zeta}{\partial n} \right|_{\partial \mathcal{B}} =
(\boldsymbol{\Omega} \times \mathbf{X} + \mathbf{V}) \cdot \mathbf{n}, \qquad
\left. \nabla \Phi_\zeta \right|_{\infty} = 0.
\end{equation}
By linearity of Laplace's equation, one can write, following Kirchhoff (see~\cite{lamb}),
\begin{equation}
\label{eq:velpot}
\Phi_\zeta = \Omega \Phi_{\Omega} + V_x \Phi_x + V_y \Phi_y,
\end{equation}
where
$\Phi_\Omega, \Phi_x,\Phi_y$ are called velocity potentials
and are solutions to Laplace's equation subject to
the boundary conditions on $\partial \mathcal{B}$
\begin{equation}\label{eq:neumann1}
\begin{split}
\left.\dfrac{\partial\Phi_\Omega}{\partial n}\right|_{\partial \mathcal{B}} \ = \
(\mathbf{X} \times \mathbf{n} )\cdot \mathbf{b}_3 \ ,
\quad
\left.\dfrac{\partial\Phi_x}{\partial n} \right|_{\partial \mathcal{B}} \ = \ \mathbf{n} \cdot \mathbf{b}_{1} \ ,
\quad \ \
\left.\dfrac{\partial\Phi_y}{\partial n} \right|_{\partial \mathcal{B}} \ = \ \mathbf{n} \cdot\mathbf{b}_{2} .
\end{split}
\end{equation}
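As an elementary illustration (a standard textbook example; see, e.g., \cite{lamb}), for a circular body of unit radius centered at the origin of the body frame, these boundary-value problems are solved by the dipole potentials
\begin{equation}
\Phi_\Omega = 0, \qquad \Phi_x = -\frac{X}{X^2 + Y^2}, \qquad \Phi_y = -\frac{Y}{X^2 + Y^2}.
\end{equation}
The rotational potential vanishes identically because a rotating circular cylinder does not displace any fluid.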
The velocity $\mathbf{u}_{\rm v}$
is a divergence-free vector field and can be written
as $\mathbf{u}_{\rm v}= \nabla \times \boldsymbol{\Psi} + \mathbf{u}_\Gamma$,
where $\nabla \times \boldsymbol{\Psi}$ describes the fluid velocity due to ambient
vorticity and $\mathbf{u}_\Gamma$ describes the fluid velocity due to a net circulatory flow
around the submerged body.
The vector potential $\boldsymbol{\Psi}$
satisfies $\Delta \boldsymbol{\Psi} = -\boldsymbol{\omega}$, where $\boldsymbol{\omega}
= \nabla \times \mathbf{u}_{\rm v}$ is the vorticity field,
subject to the boundary conditions $(\nabla \times \boldsymbol{\Psi}) \cdot
\mathbf{n}=0$ on $\partial \mathcal{B}$ and $\nabla \times \boldsymbol{\Psi}=0$ at infinity.
Clearly, in the absence of ambient vorticity, $\boldsymbol{\Psi}$ is harmonic
and can be written for planar flows as $\boldsymbol{\Psi}= \Psi \mathbf{e}_3$,
where $\Psi$ is referred to as the stream function and satisfies the boundary conditions
(see~\cite[\S9.40]{MiTh1968})
\begin{equation} \label{boundary}
\left. \Psi \right|_{\partial \mathcal{B}} = V_x Y - V_y X - \frac{\Omega}{2}(X^2 + Y^2) =
\ \text{function of time only}.
\end{equation}
The harmonic vector field $ \mathbf{u}_\Gamma$ is non-zero only when
there is a net circulatory flow around the body; it satisfies $\nabla
\cdot {\bf u}_{\Gamma} = 0$ and $\nabla \times {\bf u}_{\Gamma}= 0$
(i.e., $\Delta \mathbf{u}_\Gamma=0$) and the boundary conditions $
{\bf u}_{\Gamma} \cdot {\bf n} = 0$ on $\partial \mathcal{B}$ and
${\bf u}_{\Gamma} = 0$ at infinity. Note that, in three-dimensional
flows, one does not need the harmonic vector field
$\mathbf{u}_\Gamma$.\footnote{ In three dimensions, any closed curve
in the exterior of a {\it bounded} body is contractible, so the
harmonic vector field $\mathbf{u}_\Gamma$ may be set to zero. This
result is due to the {\em Poincar\'{e} Lemma} which can be alternatively
stated as follows: a closed one-form on a
(sub)-manifold with trivial first cohomology is globally exact.
}
A harmonic stream function $\Psi_\Gamma$ associated with the circulation around the planar body
can be defined such that $\left. \Psi_\Gamma\right|_{\partial \mathcal{B}}=$ constant.
The function $\Psi_\Gamma$ can be found using a conformal transformation
that relates the flow field in the region
exterior to the body to that in the region exterior to the unit circle. For concreteness,
let $(\tilde{X},\tilde{Y})$ denote the body coordinates in the circle plane (which are measured relative
to a frame attached to the center of the circle as a result of choosing
the origin of the body frame in the physical plane to be placed at the conformal center).
The stream function $\Psi_\Gamma$ can be readily obtained by observing that
the effect of having a net circulation $\Gamma$ around the body is equivalent to placing a point
vortex of strength $\Gamma$ at the center of mass of the body; namely,
\begin{equation} \label{circulation}
\mathbf{u}_\Gamma = \nabla \times (\Psi_\Gamma \, \mathbf{e}_3), \qquad
\Psi_\Gamma = \frac{\Gamma}{4\pi} \log (\tilde{X}^2 + \tilde{Y}^2).
\end{equation}
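For the unit circular body, for which the conformal map is the identity ($\tilde{X} = X$, $\tilde{Y} = Y$), Eq.~(\ref{circulation}) yields explicitly
\begin{equation}
\mathbf{u}_\Gamma = \frac{\Gamma}{2\pi \left( X^2 + Y^2 \right)} \left( Y \, \mathbf{b}_1 - X \, \mathbf{b}_2 \right),
\end{equation}
so that $\mathbf{u}_\Gamma \cdot \mathbf{n} = 0$ on $\partial \mathcal{B}$, the speed $\left\Vert \mathbf{u}_\Gamma \right\Vert = \Gamma/(2\pi r)$ decays at infinity, and the circulation around the body equals $\Gamma$ in magnitude.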
Alternatively, the circulatory flow could be obtained as the gradient of a harmonic potential $\Phi_\Gamma$
(the harmonic conjugate to $\Psi_\Gamma$ satisfying the Cauchy-Riemann relations). Note that
$\Phi_\Gamma$ would have to satisfy $\left. \nabla \Phi_\Gamma \cdot \mathbf{n} \right|_{\partial \mathcal{B}} = 0$.
\paragraph{Kinetic Energy.} The kinetic energy of the fluid-solid system is simply the
sum of the kinetic energies of its two constituents, the body and the fluid:
\begin{equation} \label{Tkin}
T = T_{\mathrm{fluid}} + T_{\mathrm{body}} =
\frac{1}{2}\int_{\mathcal{F}} \left\Vert \mathbf{u} \right\Vert^2 dV +
\frac{1}{2} m \left\Vert \mathbf{V}\right\Vert^2 + \frac{1}{2} \mathbb{I} \Omega^2.
\end{equation}
where $dV$ is a standard volume (more precisely, area) element on $\mathbb{R}^2$. The kinetic energy
of the rigid body can be readily rewritten in the form
\begin{equation}\label{eq:Tbody}
T_{\mathrm{body}} = \dfrac{1}{2} \zeta^T \mathbb{M}_{b} \zeta, \qquad
\mathbb{M}_b := \begin{pmatrix}
\mathbb{I} & 0 \\
0 & m \mathbf{I}
\end{pmatrix}.
\end{equation}
where $\mathbf{I}$ is the $2$-by-$2$ identity matrix. The kinetic energy of the fluid
can be written, in the absence of ambient vorticity (so that $\mathbf{u} = \nabla(\Phi_\zeta + \Phi_\Gamma)$), as
\begin{equation} \label{eq:T}
\begin{split}
T_{\mathrm{fluid}} & = \dfrac{1}{2}\int_{\mathcal{F}} \left\Vert \mathbf{u} \right\Vert^2 dV =
\dfrac{1}{2}\int_{\mathcal{F}} \nabla(\Phi_\zeta + \Phi_\Gamma)\cdot \nabla(\Phi_\zeta + \Phi_\Gamma) \, dV \\
& = \dfrac{1}{2}\int_{\mathcal{F}} \nabla\Phi_\zeta \cdot \nabla\Phi_\zeta \, dV
+ \int_{\mathcal{F}} \nabla\Phi_\zeta \cdot \nabla \Phi_\Gamma \, dV
+ \dfrac{1}{2}\int_{\mathcal{F}} \nabla \Phi_\Gamma \cdot \nabla \Phi_\Gamma \, dV . \\
\end{split}
\end{equation}
The first term in \eqref{eq:T} can be rewritten using the divergence theorem, then employing~\eqref{eq:velpot}
and~\eqref{eq:neumann1}, as follows
\begin{equation} \label{eq:Tf_zeta}
\begin{split}
\dfrac{1}{2}\int_{\mathcal{F}} \nabla\Phi_\zeta \cdot \nabla\Phi_\zeta \, dV =
\dfrac{1}{2} \oint_{\partial \mathcal{B}} \Phi_\zeta \dfrac{\partial \Phi_\zeta}{\partial n} \, dS =
\dfrac{1}{2} \zeta^T \mathbb{M}_{f} \zeta,
\end{split}
\end{equation}
where $\mathbb{M}_{f}$ is a $3\times3$ added-mass matrix. Next, we show that the cross term in \eqref{eq:T} vanishes, i.e., that $\nabla \Phi_\zeta$ and $\nabla \Phi_\Gamma$ are $L_2$-orthogonal: since $\nabla \Phi_\Gamma = \nabla \times (\Psi_\Gamma \, \mathbf{e}_3)$, where $\Psi_\Gamma$ is single-valued, we have
\begin{align*}
\int_{\mathcal{F}} \nabla \Phi_\zeta \cdot \nabla \Phi_\Gamma \, dV & =
\int_{\mathcal{F}} \nabla \Phi_\zeta \cdot \nabla \times (\Psi_\Gamma \, \mathbf{e}_3) \, dV
= \oint_{\partial \mathcal{B}} \Phi_\zeta \frac{\partial \Psi_\Gamma}{\partial s} \, dS = 0,
\end{align*}
since $\Psi_\Gamma$ is constant on the boundary $\partial\mathcal{B}$, so that the tangential derivative $\partial \Psi_\Gamma/ \partial s$ vanishes. The last term in \eqref{eq:T} can be treated as follows: in polar coordinates, $\Phi_\Gamma(r, \theta) = \Gamma \theta/2\pi$ (consistent with a net circulation $\Gamma$), so that if we enclose the fluid-solid system in a large circular box of radius $\Lambda$,
\[
\int_{\mathcal{F}} \nabla \Phi_\Gamma \cdot \nabla \Phi_\Gamma \, dV = \frac{\Gamma^2}{2\pi} \int_R^\Lambda \frac{dr}{r} = \frac{\Gamma^2}{2\pi} \log \frac{\Lambda}{R}.
\]
This constant term diverges logarithmically as $\Lambda \rightarrow +\infty$. To remedy this, we regularize the kinetic energy by discarding this infinite contribution -- a remedy also used in~\cite{lamb,BoMa06}. That is, we consider the kinetic energy of the solid-fluid system to be given by
\begin{equation}
\label{eq:T_solidfluid}
T = \dfrac{1}{2} \zeta^T (\mathbb{M}_b + \mathbb{M}_f) \zeta .
\end{equation}
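For reference, the added-mass matrix entering Eq.~(\ref{eq:T_solidfluid}) is known in closed form for simple geometries. For an elliptic body with semi-axes $a \geq b$ aligned with $\mathbf{b}_1$ and $\mathbf{b}_2$, and with the fluid density normalized to one as in \eqref{Tkin}, the classical result reads (see, e.g., \cite{lamb})
\begin{equation}
\mathbb{M}_f = \mathrm{diag}\left( \frac{\pi}{8}\left(a^2 - b^2\right)^2, \; \pi b^2, \; \pi a^2 \right),
\end{equation}
in the ordering $(\Omega, V_x, V_y)$ of $\zeta$; for a circular body ($a = b = R$) this reduces to $\mathbb{M}_f = \mathrm{diag}(0, \pi R^2, \pi R^2)$.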
\paragraph{Equations of Motions.}
The equations governing the motion of the body in potential flow with non-zero circulation around the body
but in the absence of ambient vorticity are of the form
\begin{equation}
\begin{split} \label{EoM1}
\dot{\Pi} & = (\mathbf{P} \times \mathbf{V})\cdot \mathbf{b}_3 \\
\dot{\mathbf{P}} & = \mathbf{P} \times \boldsymbol{\Omega} + \Gamma \mathbf{b}_3 \times \mathbf{V}
\end{split}
\end{equation}
Here, $\Pi$ and $\mathbf{P}$ denote the angular and linear momenta of the solid-fluid system
expressed in the body frame. They are given in terms of the body-frame velocity by (here, $T$ is given by~\eqref{eq:T_solidfluid})
\begin{equation}
\Pi = \dfrac{\partial T}{\partial \Omega}, \qquad
\mathbf{P} = \dfrac{\partial T}{\partial \mathbf{V}}.
\end{equation}
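Before deriving Eqs.~(\ref{EoM1}) geometrically, it is instructive to integrate them numerically in the simplest setting. The Python sketch below is an illustration only: it assumes a circular body of radius $R$, for which $\mathbb{M}_f = \mathrm{diag}(0, \pi R^2, \pi R^2)$ and hence $\dot{\Pi} = 0$, and all parameter values are arbitrary. It recovers the classical fact (see, e.g., \cite{lamb}) that the center of a circular body with circulation traces a circle of radius $\Vert\mathbf{P}\Vert/\Gamma$:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

m, I, R, Gamma = 1.0, 0.1, 0.5, 2.0
M = m + np.pi*R**2                  # body mass plus added mass (circle)

def rhs(t, s):                      # s = (theta, x_o, y_o, Pi, P_x, P_y)
    th, x, y, Pi, Px, Py = s
    Om, Vx, Vy = Pi/I, Px/M, Py/M
    return [Om,
            np.cos(th)*Vx - np.sin(th)*Vy,   # xdot:  v = R_theta V
            np.sin(th)*Vx + np.cos(th)*Vy,   # ydot
            Px*Vy - Py*Vx,                   # Pidot (= 0 for a circle)
            Py*Om - Gamma*Vy,                # Pxdot
            -Px*Om + Gamma*Vx]               # Pydot

sol = solve_ivp(rhs, (0.0, 20.0), [0, 0, 0, 0, M, 0],  # V(0) = (1, 0)
                rtol=1e-10, atol=1e-12, dense_output=True)
xs, ys = sol.sol(np.linspace(0.0, 20.0, 2000))[1:3]
print("predicted radius:", M/Gamma)          # = |P|/Gamma for |V(0)| = 1
print("measured radius :", 0.5*np.ptp(xs), 0.5*np.ptp(ys))
\end{verbatim}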
One of the main objectives of this paper is to use the methods of
geometric mechanics (particularly the \textit{reduction by stages} approach)
to derive equations~\eqref{EoM1} governing the motion of
the body in potential flow and with non-zero circulation.
The case of a body of arbitrary geometry interacting with external
point vortices is addressed in~\cite{VaKaMa2009}. It would be of interest to extend these results to the case of a rigid body with circulation moving in the field of point vortices.
\section{Body-Fluid Interactions: Geometric Approach}
\label{sec:fsint}
In this section, we first establish the structure of the fluid-solid configuration space as a principal fiber bundle and show that there exists a distinguished connection on this bundle. We then recall
some general results from cotangent bundle reduction that will be useful to reduce the system and derive the equations of motion \eqref{EoM1}.
\subsection{Geometric Fluid-Solid Interactions}
\paragraph{The Configuration Space.} We describe the configurations of the body-fluid system by means of pairs $(g, \varphi)$, where $g$ is an element of $SE(2)$ describing the body motion and $\varphi: \mathcal{F}_0 \rightarrow \mathbb{R}^2$ is an embedding of the fluid reference space $\mathcal{F}_0$ into $\mathbb{R}^2$ describing the fluid. The pairs $(g, \varphi)$ have to satisfy the impermeability boundary conditions dictated by the inviscid fluid model. The embedding $\varphi$ represents the configuration of an incompressible
fluid and has therefore to be volume-preserving, i.e.,
$\varphi^\ast( dV ) = dV_0$, where $dV_0$ and $dV$ are volume forms on $\mathcal{F}_0$ and $\mathbb{R}^2$, respectively. We denote the space of all such volume-preserving embeddings by $\mathrm{Emb}_{\mathrm{vol}}(\mathcal{F}_0, \mathbb{R}^2)$. We denote by $Q$ the space of all pairs $(g, \varphi)$, $g \in SE(2)$ and $\varphi \in
\mathrm{Emb}_{\mathrm{vol}}(\mathcal{F}_0, \mathbb{R}^2)$ that satisfy the appropriate boundary
conditions. That is, the configuration manifold $Q$ of the body-fluid system is a submanifold of the product space $SE(2) \times \mathrm{Emb}_{\mathrm{vol}}(\mathcal{F}_0, \mathbb{R}^2)$.
\paragraph{Tangent and Cotangent Spaces.} At each $(g, \varphi) \in Q$, the tangent space $T_{(g, \varphi)} Q$
is a subspace of $T_g SE(2) \times T_{\varphi} \mathrm{Emb}_{\mathrm{vol}}(\mathcal{F}_0, \mathbb{R}^2)$ whose elements we denote by $(g, \varphi, \dot{g}, \dot{\varphi})$. Here, $\dot{g}$ is an element of $T_g SE(2)$ and $\dot{\varphi}$ is a map from $\mathcal{F}_0$ to $T\mathbb{R}^2$ such that $\dot{\varphi}(x) \in T_{\varphi(x)} \mathbb{R}^2$ for all $x \in \mathcal{F}_0$. Note that $\dot{g}$ represents the angular and linear velocity of the rigid body relative to the inertial frame while $\dot{\varphi}$ represents the \textbf{\emph{material}} or \textbf{\emph{Lagrangian velocity}} of the fluid. It is easier however to represent the elements of $TQ$ using
the rigid-body velocity $\zeta$ expressed in the body frame and the Eulerian fluid velocity $\mathbf{u}$.
Note that, in the group theoretic notation, the body velocity may be defined as $\zeta = g^{-1} \dot{g}$ and the \emph{\textbf{spatial}} or \emph{\textbf{Eulerian velocity field}} of the fluid may be defined as $\mathbf{u} := \dot{\varphi} \circ \varphi^{-1}$. The vector field $\textbf{u}$ is a vector field on $\mathcal{F} := \varphi(\mathcal{F}_0)$, in contrast to $\dot{\varphi}$, which is merely a map from $\mathcal{F}_0$ to $T\mathbb{R}^2$. We emphasize that $\zeta$ and $\mathbf{u}$ cannot be chosen arbitrarily, but have to satisfy the
impermeability boundary conditions.
The cotangent space $T^\ast_{(g, \varphi)} Q$ at a point $(g, \varphi) \in Q$
consists of elements $(g, \varphi, \pi, \alpha)$, where $\pi = \mathbb{M}_b \zeta \in \mathfrak{se}(2)^\ast$
is the momentum of the submerged body and $\alpha \in \Omega^1(\mathcal{F})$ is the one-form dual
of the velocity field $\mathbf{u}$, see~Appendices~\ref{appendix:rigidgroup} and~\ref{appendix:diffgroup}
for more details.
\paragraph{Kinetic Energy on $T^\ast Q$.} The kinetic energy in~\eqref{eq:T_solidfluid} can be used to define a Hamiltonian on the cotangent bundle $T^\ast Q$. To make this explicit, it is instructive to recall here the
\emph{\textbf{Hodge decomposition}} of differential forms (see \cite{AbMaRa1988}). Any one-form $\alpha
\in \Omega^1(\mathcal{F})$ can be decomposed in a unique way as $\alpha = \mathbf{d}\Phi + \delta \Psi + \alpha_\Gamma$, where $\Phi \in \Omega^0(\mathcal{F})$, $\Psi \in \Omega^2(\mathcal{F})$, and $\alpha_\Gamma$ is a harmonic form: $\mathbf{d}\alpha_\Gamma = \delta \alpha_\Gamma = 0$.
The Hamiltonian of the solid-fluid system can now be defined as the function $H: T^\ast Q \rightarrow \mathbb{R}$ given by
\begin{align*}
H(g, \varphi, \pi_b, \alpha) & =
\frac{1}{2} \left\Vert \alpha\right\Vert^2 +
\frac{1}{2} \pi_b^T \, \mathbb{M}_b^{-1} \, \pi_b
= \frac{1}{2} \left\Vert \mathbf{d} \Phi_\zeta\right\Vert^2 +
\frac{1}{2} \left\Vert \delta \Psi\right\Vert^2 +
\frac{1}{2} \pi_b^T \, \mathbb{M}_b^{-1} \, \pi_b,
\end{align*}
where the norm in $\left\Vert \alpha\right\Vert^2$ is that in (\ref{formnorm}) on one-forms induced by the Euclidean metric, where the cross terms vanish by the $L_2$-orthogonality of the three Hodge summands, and where we have again discarded the infinite constant $\left\Vert \alpha_\Gamma \right\Vert^2$.
Here we have denoted the momentum by $\pi_b$ to emphasize the fact that this variable encodes the momentum of the rigid body only. Later on, we will consider the momentum of the combined solid-fluid system, which will be denoted by $\pi = (\Pi, \mathbf{P})$.
The first term on
the right-hand side of the Hamiltonian can be expressed in terms of the added mass matrix:
\begin{align*}
\left\Vert \mathbf{d} \Phi_\zeta\right\Vert^2 & =
\int_{\mathcal{F}} \mathbf{d} \Phi_\zeta \wedge \ast \mathbf{d} \Phi_\zeta
= \oint_{\partial \mathcal{B}} \Phi_\zeta \ast \mathbf{d} \Phi_\zeta
- \int_{\mathcal{F}} \Phi_\zeta \mathbf{d} \ast \mathbf{d} \Phi_\zeta \\
& = \oint_{\partial \mathcal{B}} \Phi_\zeta \frac{\partial \Phi_\zeta}{\partial n} \, dS
- \int_{\mathcal{F}} \Phi_\zeta \nabla^2 \Phi_\zeta \, dV
= \zeta^T \, \mathbb{M}_f \zeta,
\end{align*}
so that the Hamiltonian becomes
\begin{equation} \label{almostredham}
H(g, \varphi, \pi_b, \alpha) = \frac{1}{2} \left\Vert \delta \Psi\right\Vert^2
+ \frac{1}{2} \pi_b^T \, \mathbb{M}_b^{-1}(\mathbb{M}_b + \mathbb{M}_f) \mathbb{M}_b^{-1} \, \pi_b
\end{equation}
using $\zeta = \mathbb{M}_b^{-1} \pi_b$.
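Explicitly, the substitution $\zeta = \mathbb{M}_b^{-1} \pi_b$ combines the two kinetic terms as
\[
\frac{1}{2} \left\Vert \mathbf{d} \Phi_\zeta \right\Vert^2 + \frac{1}{2} \pi_b^T \, \mathbb{M}_b^{-1} \, \pi_b
= \frac{1}{2} \zeta^T (\mathbb{M}_b + \mathbb{M}_f) \zeta
= \frac{1}{2} \pi_b^T \, \mathbb{M}_b^{-1} (\mathbb{M}_b + \mathbb{M}_f) \mathbb{M}_b^{-1} \, \pi_b,
\]
which is the second term in \eqref{almostredham}.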
The term involving $\delta \Psi$ yields the kinetic energy due to vortical structures present in the fluid
and is zero for the rigid body with circulation.
\paragraph{The Action of the Group of Volume-Preserving Diffeomorphisms.}
The group of all volume-preserving diffeomorphisms $\mathrm{Diff}_{\mathrm{vol}}(\mathcal{F}_0)$ of the reference space
acts from the right on $Q$ by composition:
for any $(g, \varphi) \in Q$ and $\phi \in \mathrm{Diff}_{\mathrm{vol}}(\mathcal{F}_0)$ we define
\[
(g, \varphi) \cdot \phi := (g, \varphi \circ \phi).
\]
This action leaves the kinetic energy \eqref{almostredham} invariant, since the Eulerian velocity field $\mathbf{u}$ is itself invariant. The $\mathrm{Diff}_{\mathrm{vol}}(\mathcal{F}_0)$-invariance
represents the \textbf{\emph{particle relabeling symmetry}}.
The manifold $Q$ is hence the total space of a principal fiber bundle with structure group $\mathrm{Diff}_{\mathrm{vol}}(\mathcal{F}_0)$ over $SE(2)$. Here, the bundle projection $\mathrm{pr}: Q \rightarrow SE(2)$ is simply the projection onto the first factor: $\mathrm{pr}(g, \varphi) = g$. One can readily show that the infinitesimal generator $X_Q$ corresponding to an element $X \in \mathfrak{X}_{\mathrm{vol}}(\mathcal{F}_0)$ is given by
\begin{equation}
X_Q(g, \varphi) := (0, \varphi_\ast X) \in T_{(g, \varphi)} Q.
\end{equation}
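This follows from a direct computation: if $\phi_t$ denotes the flow of $X$ on $\mathcal{F}_0$, then
\[
X_Q(g, \varphi) = \frac{d}{dt}\Big|_{t=0} (g, \varphi \circ \phi_t) = (0, T\varphi \circ X),
\]
and $T\varphi \circ X$ is the material representation of the push-forward $\varphi_\ast X$.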
\paragraph{The Momentum Map of the Particle Relabeling Symmetry.}
The group $\mathrm{Diff}_{\mathrm{vol}}(\mathcal{F}_0)$ acts on $Q$ and hence on $T^\ast Q$ by the cotangent lifted action. We now compute the momentum map corresponding to this action, \emph{i.e.} the particle relabeling symmetry.
This is a map $J$ from $T^\ast Q$ to $\mathfrak{X}^\ast_{\mathrm{vol}}(\mathcal{F}_0)$, and we recall from appendix~\ref{appendix:diffgroup} that $\mathfrak{X}^\ast_{\mathrm{vol}}(\mathcal{F}_0) = \mathbf{d}\Omega^1(\mathcal{F}_0) \times \mathbb{R}$. Consequently, the momentum map has two components, corresponding to circulation and vorticity (pulled back to the reference configuration $\mathcal{F}_0$). The statement that the momentum map is conserved then translates into \textbf{\emph{Kelvin's theorem}} that the vorticity is advected with the fluid, and that the circulation around each material loop is conserved.
\begin{proposition} \label{prop:vortmom}
The momentum map $J$ of the $\mathrm{Diff}_{\mathrm{vol}}(\mathcal{F}_0)$-action on $T^\ast Q$ is given by
\begin{equation} \label{mommap}
J(\pi_b, \alpha) = (\mathbf{d}\varphi^\ast\alpha, \Gamma), \quad \text{where} \quad \Gamma = \oint_{\partial \mathcal{B}} \varphi^\ast\alpha.
\end{equation}
\end{proposition}
\begin{proof}
We use the well-known formula for the momentum map of a cotangent lifted action
(see \cite{AbMa78}). For each $X \in \mathfrak{X}_{\mathrm{vol}}(\mathcal{F}_0)$, we have
\begin{align*}
\left<J(\pi_b, \alpha), X\right> &= \left<( \pi_b, \alpha), X_Q(g, \varphi)\right>
= \int_{\mathcal{F}} \left<\alpha, \varphi_\ast X\right> dV
= \int_{\mathcal{F}_0} \left<\varphi^\ast \alpha, X\right> dV_0,
\end{align*}
so that $J( \pi_b, \alpha) = [\varphi^\ast \alpha] \in \mathfrak{X}^\ast_{\mathrm{vol}}(\mathcal{F}_0)$. After composition with the isomorphism (\ref{iso}) we obtain the desired form (\ref{mommap}).
\end{proof}
\paragraph{The One-Form $\alpha_\Gamma$ with Circulation $\Gamma$.}
Recall from section~\ref{sec:prel} that having circulation $\Gamma$ around the rigid body is equivalent to placing a point vortex of strength $\Gamma$ at the conformal center of the body, whose velocity field $\mathbf{u}_\Gamma$ was given in \eqref{circulation}.
In the remainder of this paper it will be easier to work with the one-form $\alpha_\Gamma$ on $\mathcal{F}$ given by $\alpha_\Gamma = \mathbf{u}_\Gamma^\flat$, or explicitly by
\begin{equation} \label{vortalpha}
\alpha_\Gamma = \delta (\Psi_\Gamma dV),
\end{equation}
where $\Psi_\Gamma$ is the stream function \eqref{circulation}, and $dV$ is the volume form on $\mathbb{R}^2$.
\begin{proposition}
The one-form $\alpha_\Gamma$ is a harmonic one-form on $\mathcal{F}$ and satisfies
\begin{equation} \label{contint}
\oint_{\partial \mathcal{B}} \alpha_\Gamma = \Gamma.
\end{equation}
In particular, $\alpha_\Gamma$ is $L_2$-orthogonal to the space of exact one-forms.
\end{proposition}
\begin{proof}
The form $\alpha_\Gamma$ satisfies $\delta \alpha_\Gamma = 0$ by definition, and we have that $\mathbf{d} \alpha_\Gamma = \Gamma \delta(\mathbf{X})\, dV$, where $\delta(\mathbf{X})$ denotes the Dirac distribution supported at the conformal center of the body, so that $\mathbf{d} \alpha_\Gamma = 0$ in the fluid domain $\mathcal{F}$. Hence, $\alpha_\Gamma$ is harmonic and (by means of the Hodge theorem) $L_2$-orthogonal to the space of exact one-forms. The line integral \eqref{contint} follows from the expression \eqref{circulation} for the stream function $\Psi_\Gamma$.
\end{proof}
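For orientation, we record the explicit form of $\alpha_\Gamma$ in the simplest situation: if the conformal center sits at the origin and the stream function carries the common normalization $\Psi_\Gamma = -\frac{\Gamma}{2\pi} \ln \left\Vert \mathbf{x} \right\Vert$ (sign conventions may differ), then
\[
\alpha_\Gamma = \frac{\Gamma}{2\pi} \, \frac{-y\, dx + x\, dy}{x^2 + y^2} = \frac{\Gamma}{2\pi} \, d\theta,
\]
whose integral along any loop around the body is $\Gamma$, in agreement with \eqref{contint}.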
\paragraph{The Action of the Euclidean Symmetry Group.}
In addition to the right principal action of $\mathrm{Diff}_{\mathrm{vol}}(\mathcal{F}_0)$ on $Q$ described before,
the special Euclidean group $SE(2)$ acts on $Q$ by bundle automorphisms from the \emph{left}. In other words, there is an action $\psi : SE(2) \times Q \rightarrow Q$ given by
\begin{equation} \label{seaction}
\psi(h, (g, \varphi)) = h \cdot (g, \varphi) = (hg, h\varphi),
\end{equation}
for all $h \in SE(2)$ and $(g, \varphi) \in Q$. The embedding $h\varphi$ is defined by $(h\varphi)(x) = h \cdot \varphi(x)$, where the action on the right-hand side is just the standard action of $SE(2)$ on $\mathbb{R}^2$. From a physical point of view, the $SE(2)$-action corresponds to the invariance of the combined solid-fluid system under arbitrary rotations and translations.
\paragraph{The Neumann Connection.}
The bundle $\mathrm{pr} : Q \rightarrow SE(2)$ (defined by the $\mathrm{Diff}_{\mathrm{vol}}(\mathcal{F}_0)$-invariance) is equipped with a principal fiber bundle connection, which was termed the \textbf{\emph{Neumann connection}} in \cite{VaKaMa2009}. This connection seems to have first appeared in \cite{freeboundary} (see also \cite{Koiller1987}) and essentially encodes the effects of the rigid body on the ambient fluid.
The connection one-form $\mathcal{A}: TQ \rightarrow \mathfrak{X}_{\mathrm{vol}}(\mathcal{F}_0)$ of the Neumann connection is defined in terms of the Helmholtz-Hodge decomposition \eqref{eq:u} of vector fields: if $(g, \varphi, \zeta, \mathbf{u})$ is an element of $T_{(g, \varphi)} Q$, then
\begin{equation} \label{connform}
\mathcal{A}(g, \varphi, \zeta, \mathbf{u}) = \varphi^\ast \mathbf{u}_{\mathrm{v}},
\end{equation}
where $\mathbf{u}_{\mathrm{v}}$ is the divergence-free part of the Eulerian velocity $\mathbf{u}$ in the Helmholtz-Hodge decomposition \eqref{eq:u}. It can be shown (see \cite{VaKaMa2009}) that $\mathcal{A}$ satisfies the requirements of a connection one-form, and that $\mathcal{A}$ is invariant under the action of $SE(2)$ on $Q$:
\[
\mathcal{A}_{(g, \varphi)}( T\psi_h(\zeta, \mathbf{u})) = \mathcal{A}_{(g, \varphi)}(\zeta, \mathbf{u})
\]
for all $(\zeta, \mathbf{u}) \in T_{(g, \varphi)}Q$. Here $\psi_h = \psi(h, \cdot)$, with $\psi$ the $SE(2)$-action \eqref{seaction}. Given the exact form of the Neumann connection, one can compute its curvature, which is a two-form $\mathcal{B}$ on the total space $Q$ with values in $\mathfrak{X}_{\mathrm{vol}}(\mathcal{F}_0)$. It turns out that there exists a closed-form formula for the curvature, which was first determined by \cite{MontgomeryThesis}
and further generalized by \cite{VaKaMa2009}. More precisely, we compute an expression for the contraction $\left< \mu, \mathcal{B} \right>$, where $\mu$ is an arbitrary element of the dual space $\mathfrak{X}^\ast_{\mathrm{vol}}(\mathcal{F}_0)$.
\begin{proposition} \label{prop:curv} Let $(\zeta_1, \mathbf{u}_1)$ and $(\zeta_2, \mathbf{u}_2)$ be elements of
$T_{(g, \varphi)} Q$ and denote the solutions of \eqref{Neumann} associated to $\zeta_1$ and $\zeta_2$ by $\Phi_1$ and $\Phi_2$, respectively. Let $\mu$ be an arbitrary element of $\mathfrak{X}^\ast_{\mathrm{vol}}(\mathcal{F}_0)$. Then the $\mu$-component of
the curvature $\mathcal{B}$ is given by
\begin{equation} \label{curvature} \left<\mu, \mathcal{B}_{(g,
\varphi)}((\zeta_1, \mathbf{u}_1), (\zeta_2, \mathbf{u}_2))\right> = \left\langle \! \left\langle\mu, \mathbf{d} \Phi_1 \wedge \mathbf{d}
\Phi_2\right\rangle \!
\right\rangle-\oint_{\partial \mathcal{B}} \alpha \wedge \ast(\mathbf{d}
\Phi_1 \wedge \mathbf{d} \Phi_2),
\end{equation}
where $\left\langle \! \left\langle\cdot, \cdot\right\rangle \!
\right\rangle$ is the metric on the space of
forms on $\mathcal{F}$ defined in (\ref{formnorm}).
\end{proposition}
In what follows, it will be necessary to have an expression for the curvature in terms of the elementary stream functions, rather than the elementary velocity potentials. Such an expression can be easily obtained by noting that the stream function $\Psi$ is a harmonic conjugate to the velocity potential $\Phi$, so that $\mathbf{d}\Psi = \ast \mathbf{d}\Phi$. In particular, if $\Phi_1, \Phi_2$ are the velocity potentials introduced in the statement of proposition~\ref{prop:curv}, with their associated stream functions $\Psi_1, \Psi_2$, then
$\mathbf{d}\Psi_1 \wedge \mathbf{d}\Psi_2 = \mathbf{d}\Phi_1 \wedge \mathbf{d}\Phi_2$, since $\ast a \wedge \ast b = a \wedge b$ for one-forms $a, b$ in two dimensions. The $\mu$-component of the curvature (\ref{curvature}) hence becomes
\begin{align}
\left<\mu, \mathcal{B}\right>
& = \left\langle \! \left\langle\mu, \mathbf{d} \Psi_1 \wedge \mathbf{d}
\Psi_2\right\rangle \!
\right\rangle -\oint_{\partial \mathcal{B}} \alpha \wedge \ast(\mathbf{d}
\Psi_1 \wedge \mathbf{d} \Psi_2). \label{curvaturestream}
\end{align}
We established the structure of the fluid-solid configuration space $Q$ as the total space of a principal fiber bundle with structure group $\mathrm{Diff}_{\mathrm{vol}}(\mathcal{F}_0)$. We also showed that there exists a distinguished connection on this bundle, which is invariant under the action of $SE(2)$ on $Q$. In order to reduce the system and derive the equations of motion \eqref{EoM1}, we will need the framework of \emph{cotangent bundle reduction}, which we now describe. More information and proofs of the results quoted below can be found in \cite{MaPe2000} and \cite{MarsdenHamRed}.
\subsection{Cotangent Bundle Reduction: Some General Results}
We describe cotangent bundle reduction in a general context. Let the unreduced configuration space be denoted by $Q$ and assume that a Lie group $G$ acts freely and properly on $Q$ from the right, so that the quotient space $Q/G$ is a manifold. Furthermore, we assume that the quotient space $Q/G$ is equal to a second Lie group $H$, which acts on $Q$ from the left. In Section~\ref{sec:red}, $Q$ will be the fluid-solid configuration space, $G$ will be the group
$\mathrm{Diff}_{\mathrm{vol}}(\mathcal{F}_0)$ and $H$ will be the special Euclidean group $SE(2)$. Our first goal is to reduce by the $G$-action and describe the reduced phase space; this is done in theorem~\ref{thm:cotbundle}. In the second stage of the reduction, we then perform Poisson reduction with respect to the residual group $H$ in order to obtain a Poisson structure on the twice-reduced space; this is described in theorem~\ref{thm:poisson}.
Recall that the quotient projection $\mathrm{pr}_{Q, G}: Q \rightarrow Q/G$ defines a right principal fiber bundle, and assume that a connection on this fiber bundle is given, with connection one-form $\mathcal{A}: TQ \rightarrow \mathfrak{g}$. For more information about principal fiber bundles and connections, see \cite{KN1}.
The group $G$ acts on $T^\ast Q$ by cotangent lifts, and we denote the momentum map of this action by $J: T^\ast Q\rightarrow \mathfrak{g}^\ast$.
The next theorem characterizes the reduced phase space $(T^\ast Q)_\mu := J^{-1}(\mu)/G_\mu$, where $\mu$ is an arbitrary element of $\mathfrak{g}^\ast$. Here, $G_\mu$ is the isotropy subgroup of $\mu$, defined as follows: $g \in G_\mu$ if $\mathrm{Ad}_g^\ast \mu = \mu$, where $\mathrm{Ad}_g^\ast$ denotes the coadjoint action of $G$ on $\mathfrak{g}^\ast$. The reduced phase space $(T^\ast Q)_\mu$ can be described in full generality, but we focus here on the special case where the isotropy group $G_\mu$ is the full symmetry group: $G_\mu = G$. We will see in section~\ref{sec:diffred} that this is the relevant case to consider for the rigid body with circulation.
\begin{theorem} \label{thm:cotbundle}
Let $G$ be a group acting freely and properly from the right on a manifold $Q$ so that $\mathrm{pr}_{Q, G}: Q \rightarrow Q/G$ is a principal fiber bundle. Let $\mathcal{A}: TQ \rightarrow \mathfrak{g}$ be a connection one-form on this bundle. Let $\mu \in \mathfrak{g}^\ast$ and assume that $G_\mu = G$.
Then there is a symplectic diffeomorphism between $(T^\ast Q)_\mu$ and $T^\ast (Q/G)$, the latter with symplectic form $\omega_{\mathrm{can}} - B_\mu$; here $\omega_{\mathrm{can}}$ is the canonical symplectic form on $T^\ast (Q/G)$ and $B_\mu = \mathrm{pr}^\ast_{Q/G} \beta_\mu$, where
$\mathrm{pr}_{Q/G}: T^\ast (Q/G) \rightarrow Q/G$ is the cotangent bundle projection, and $\beta_\mu$
is determined through
\begin{equation} \label{magform}
\mathrm{pr}^\ast_{Q, G} \beta_\mu = \mathbf{d}\left\langle \mu, \mathcal{A}\right\rangle.
\end{equation}
\end{theorem}
\begin{proof}[\textrm{\textbf{Outline of the Proof}}]
This is a special case of theorem~2.3.3 in \cite{MarsdenHamRed}. We just recall the explicit form of the isomorphism between $(T^\ast Q)_\mu$ and $T^\ast (Q/G)$; the proof that this map also preserves the relevant symplectic structures can be found in \cite{MarsdenHamRed}.
The isomorphism $\varphi_\mu : (T^\ast Q)_\mu \rightarrow T^\ast (Q/G)$ is the composition of the map ${\mathrm{shift}}_\mu : (T^\ast Q)_\mu \rightarrow (T^\ast Q)_0$ and the map $\varphi_0: (T^\ast Q)_0 \rightarrow T^\ast (Q/G)$:
\begin{equation} \label{rediso}
\varphi_\mu = \varphi_0 \circ {\mathrm{shift}}_\mu.
\end{equation}
Both of these constitutive maps are isomorphisms. The map ${\mathrm{shift}}_\mu$ is defined as follows: we first introduce a map ${\mathrm{Shift}}_\mu : J^{-1}(\mu) \rightarrow J^{-1}(0)$ by
\[
{\mathrm{Shift}}_\mu(\alpha_q) = \alpha_q - \left< \mu, \mathcal{A}(q) \right>.
\]
It can easily be verified that ${\mathrm{Shift}}_\mu$ is $G$-equivariant, so that it drops to a quotient map ${\mathrm{shift}}_\mu: (T^\ast Q)_\mu \rightarrow (T^\ast Q)_0$.
Secondly, the map $\varphi_0: (T^\ast Q)_0 \rightarrow T^\ast (Q/G)$ is defined by noting that
\[
J^{-1}(0) = \{ \alpha_q \in T^\ast Q: \left< \alpha_q , \xi_Q(q) \right> = 0 \quad \text{for all $\xi \in \mathfrak{g}$} \}
\]
so that the map $\bar{\varphi}_0 : J^{-1}(0) \rightarrow T^\ast(Q/G)$ given by
\begin{equation} \label{barphi}
\left< \bar{\varphi}_0(\alpha_q) , T \mathrm{pr}_{Q, G}(v_q) \right> = \left<\alpha_q, v_q \right>
\end{equation}
is well defined. The map $\bar{\varphi}_0$ is easily seen to be $G$-invariant and surjective, and hence induces a quotient map $\varphi_0: (T^\ast Q)_0 \rightarrow T^\ast (Q/G)$.
\end{proof}
Note that the isomorphism between $(T^\ast Q)_\mu$ and $T^\ast (Q/G)$ is connection-dependent. As a result, the reduced symplectic form on $T^\ast (Q/G)$ is modified by the two-form $\beta_\mu$, which is traditionally referred to as a \emph{\textbf{magnetic term}} since it also appears in the description of a charged particle in a magnetic field (see \cite{GuSt1984}).
Having described the reduced phase space $(T^\ast Q)_\mu$, we now take into account the assumption made earlier that the base space $Q/G$ has the structure of a second Lie group $H$, which acts on $Q$ from the \emph{left} and leaves the connection one-form $\mathcal{A}$ invariant. In this case, the reduced phase space $(T^\ast Q)_\mu$ is equal to $T^\ast H$, equipped with the magnetic symplectic structure described before. As a result, $H$ acts on $T^\ast H$, and it can be checked that this action leaves the symplectic structure invariant. It would now be possible to do symplectic reduction as above for the $H$-action as well to obtain a fully reduced symplectic structure. However, all that is needed in section~\ref{sec:eucsymm} is an expression for the reduced Poisson structure, described in the following theorem.
\begin{theorem}[Theorem~7.2.1 in \cite{MarsdenHamRed}] \label{thm:poisson}
The Poisson reduced space for the left cotangent lifted action of
$H$ on $(T^\ast H, \omega_{\mathrm{can}} - B_\mu)$ is $\mathfrak{h}^\ast$ with
Poisson bracket given by
\begin{equation} \label{bracket}
\{f, g\}_\mathcal{B}(\mu) = -\left< \mu, \left[ \frac{\delta f}{\delta \mu},
\frac{\delta g}{\delta \mu} \right] \right> - {B}_\mu(e)\left(
\frac{\delta f}{\delta \mu}, \frac{\delta g}{\delta \mu} \right)
\end{equation}
for $f, g \in C^\infty(\mathfrak{h}^\ast)$.
\end{theorem}
The theorem in \cite{MarsdenHamRed} is proved for right actions,
whereas the action of $H$ here is assumed to be from the left. However, the
same proof continues to hold, \emph{mutatis mutandis}.
Lastly, we recall the definition of the $\mathcal{B}\mathfrak{h}$-potential (see \cite{MarsdenHamRed}), which plays the role of a momentum map relative to the magnetic term. While this potential can be defined on an arbitrary manifold with a magnetic symplectic form, we treat only the case of a Lie group $H$ with left-invariant symplectic form $\omega_{\mathrm{can}} - B_\mu$ as before.
\begin{definition} \label{def:bgpot} Let $H$ be a Lie group with left-invariant symplectic form $\omega_{\mathrm{can}} - B_\mu$.
Suppose there exists a smooth map $\psi: H \rightarrow \mathfrak{h}^\ast$ such that
\begin{equation} \label{defbgpot}
\mathbf{i}_{\xi_H} B_\mu = \mathbf{d} \left<\psi, \xi \right>,
\end{equation}
for all $\xi \in \mathfrak{h}$. Then the map $\psi$ is called the $\mathcal{B}\mathfrak{h}$-potential of the $H$-action, relative to the magnetic term $B_\mu$.
\end{definition}
In what follows, we will always assume that $\psi$ exists. In this case, $\psi$ is defined up to an arbitrary constant, which we normalize by assuming that $\psi(e) = 0$. Under this assumption, the \emph{non-equivariance one-cocycle} $\sigma: H \rightarrow \mathfrak{h}^\ast$ associated to $\psi$,
\begin{equation} \label{onecocycle}
\sigma(g) = \psi(g) - \mathrm{Ad}^\ast_{g^{-1}} \psi(e),
\end{equation}
coincides with $\psi$. We will make no further distinction between $\psi$ and $\sigma$. The importance of $\psi$ lies in the fact that we may characterize the symplectic leaves of the magnetic bracket \eqref{bracket} in $\mathfrak{h}^\ast$ as orbits of a certain affine action of $H$ on $\mathfrak{h}^\ast$.
\begin{proposition} \label{prop:afforbit}
Let $H$ be a Lie group and equip $T^\ast H$ with the magnetic symplectic form $\omega_{\mathrm{can}} - B_\mu$.
The symplectic leaves in $\mathfrak{h}^\ast$ of the magnetic Poisson structure \eqref{bracket} are the orbits of the following affine action:
\begin{equation} \label{affaction}
g \cdot \mu = \mathrm{Ad}^\ast_{g^{-1}} \mu + \psi(g),
\end{equation}
where $g \in H$ and $\mu \in \mathfrak{h}^\ast$.
\end{proposition}
\begin{proof}
See theorem~7.2.2 in \cite{MarsdenHamRed}.
\end{proof}
\section{Derivation of the Chaplygin-Lamb Equations via Reduction by Stages}
\label{sec:red}
\subsection{Reduction by the Diffeomorphism Group}
\label{sec:diffred}
We impose the condition that the fluid has constant circulation $\Gamma$ by considering the symplectic reduced space $J^{-1}(0, \Gamma)/\mathrm{Diff}_{\mathrm{vol}}(\mathcal{F}_0)$. Our goal is to use cotangent bundle
reduction to express this space in a more manageable form (by establishing a connection-dependent isomorphism) and to obtain an explicit expression for the reduced symplectic form on this space.
The reduction theorem~\ref{thm:cotbundle} deals with the case where the isotropy group coincides with the full group. It can be easily verified that this condition is satisfied for the rigid body with circulation: here, the relevant momentum value is $(0, \Gamma)$, and by \eqref{coad} we have that for all $\phi \in \mathrm{Diff}_{\mathrm{vol}}(\mathcal{F}_0)$,
\[
\mathrm{CoAd}_\phi(0, \Gamma) = (0, \Gamma),
\]
so that $(\mathrm{Diff}_{\mathrm{vol}}(\mathcal{F}_0))_{(0, \Gamma)} = \mathrm{Diff}_{\mathrm{vol}}(\mathcal{F}_0)$.
\paragraph{The Reduced Phase Space.}
As an introduction to the methods of this section, we establish an isomorphism $\varphi_\Gamma$ between the reduced phase space $J^{-1}(0, \Gamma)/\mathrm{Diff}_{\mathrm{vol}}(\mathcal{F}_0)$ and $T^\ast SE(2)$ in a relatively ad hoc manner. We then show that cotangent bundle reduction yields precisely this isomorphism.
Let us first introduce the linear isomorphism
$\mathfrak{m} : \mathfrak{se}(2)^\ast \rightarrow \mathfrak{se}(2)^\ast$ given by
\begin{align} \label{mapm}
\mathfrak{m}(\pi_b) & = \pi_b + \mathbb{M}_f \mathbb{M}_b^{-1} \pi_b
= ( \mathbb{M}_b + \mathbb{M}_f ) \mathbb{M}_b^{-1} \pi_b
\end{align}
where $\mathbb{M}_b$ and $\mathbb{M}_f$ are the body mass matrix and the added mass matrix, respectively. The map $\mathfrak{m}$ will prove to be crucial later on; its effect is to redefine the momentum of the rigid body in order to take into account the added mass effects.
We now derive an explicit expression for the level sets $J^{-1}(0)$ and $J^{-1}(0, \Gamma)$ of the vorticity momentum map. Let $(g, \varphi, \pi_b, \alpha)$ be an element of $T^\ast Q$: the requirement that $J(\pi_b, \alpha) = (0, \Gamma)$ is equivalent to
\[
\mathbf{d} \alpha = 0 \quad \text{and} \quad \oint_{\partial \mathcal{B}} \varphi^\ast \alpha = \Gamma.
\]
By means of the Hodge decomposition, this implies that
\[
\alpha = \alpha_\Gamma + \mathbf{d}\Phi_\zeta,
\]
where $\alpha_\Gamma$ is given by \eqref{vortalpha} and $\Phi_\zeta$ is the solution of the Neumann problem \eqref{Neumann} with boundary data $\zeta = \mathbb{M}_b^{-1} \pi_b$. Hence, we may define an isomorphism $\psi_\Gamma$ from $J^{-1}(0, \Gamma)$ to $Q \times \mathfrak{se}(2)^\ast$, given by
\[
\psi_\Gamma: (g, \varphi, \pi_b, \alpha) \mapsto (g, \varphi, \mathfrak{m}(\pi_b)) \in Q \times \mathfrak{se}(2)^\ast,
\]
where $\mathfrak{m}$ is the map \eqref{mapm}. Likewise, there is an isomorphism $\psi_0 : J^{-1}(0) \rightarrow Q \times \mathfrak{se}(2)^\ast$, obtained by realizing that $ J^{-1}(0)$ consists of elements of the form $(g, \varphi, \pi_b, \mathbf{d} \Phi_\zeta)$, where $\Phi_\zeta$ has the same interpretation as above.
The map $\psi_\Gamma$ is easily seen to be $\mathrm{Diff}_{\mathrm{vol}}(\mathcal{F}_0)$-equivariant and hence drops to a quotient isomorphism $\varphi_\Gamma$ between $ J^{-1}(0, \Gamma)/\mathrm{Diff}_{\mathrm{vol}}(\mathcal{F}_0) $ and $T^\ast SE(2)$, given by
\[
\varphi_\Gamma: [(g, \varphi, \pi_b, \alpha)] \mapsto (g, \mathfrak{m}(\pi_b)).
\]
We see that the effect of the map $\varphi_\Gamma$ is to eliminate the influence of the fluid entirely, except for the added mass effects, which are encoded by $\mathfrak{m}$.
\paragraph{The $\Gamma$-Component of the Connection One-Form.}
In order to compute the magnetic term \eqref{magform}, we need an expression for the connection one-form, contracted with the momentum value at which we do reduction. For the rigid body with circulation, the connection is the Neumann connection, and the momentum value is $(0, \Gamma)$. Therefore, let $\mathcal{A}$ be the connection one-form of the Neumann connection as in \eqref{connform} and define the $\Gamma$-component of $\mathcal{A}$ to be the one-form $\mathcal{A}_\Gamma: TQ \rightarrow \mathbb{R}$ given by
\[
\mathcal{A}_\Gamma(g, \varphi, \zeta, \mathbf{u}) :=
\left< (0, \Gamma), \mathcal{A}(g, \varphi, \zeta, \mathbf{u})\right>,
\]
where on the right-hand side we interpret $(0, \Gamma)$ as an element of $\mathfrak{X}_{\mathrm{vol}}(\mathcal{F}_0)^\ast$. In other words,
$\mathcal{A}_\Gamma(g, \varphi, \zeta, \mathbf{u})$ computes the divergence-free part of $\mathbf{u}$ and contracts it with the element $(0, \Gamma)$ of $\mathfrak{X}_{\mathrm{vol}}(\mathcal{F}_0)^\ast$. This hints at the fact that $\mathcal{A}_\Gamma$ is nothing but the one-form $\alpha_\Gamma$ of \eqref{vortalpha}, which we prove in the next proposition.
\begin{proposition} \label{prop:compalpha}
The $\Gamma$-component of the connection one-form $\mathcal{A}$ is related to the form $\alpha_\Gamma$ by the following relation: for all $(g, \varphi) \in Q$ and $(\zeta, \mathbf{u}) \in T_{(g, \varphi)} Q$, we have that
\begin{equation} \label{Aalpha}
\mathcal{A}_\Gamma(\zeta, \mathbf{u}) = \int_{\mathcal{F}} \alpha_\Gamma(\mathbf{u}) \, dV.
\end{equation}
\end{proposition}
\begin{proof}
By definition, $\mathcal{A}_\Gamma$ is given by
\[
\mathcal{A}_\Gamma(\zeta, \mathbf{u}) = \left< (0, \Gamma), \mathcal{A}(\zeta, \mathbf{u}) \right>
= \left<\alpha_\Gamma, \mathbf{u}_{\mathrm{v}} \right> = \int_{\mathcal{F}} \alpha_\Gamma(\mathbf{u}) \, dV,
\]
where we have used the
Helmholtz-Hodge decomposition $\mathbf{u} = \mathbf{u}_{\mathrm{v}} + \nabla \Phi_\zeta$,
together with the
fact that $\alpha_\Gamma$ satisfies \eqref{contint} and is $L_2$-orthogonal to gradient vector fields.
\end{proof}
\paragraph{The $\Gamma$-Component of the Curvature.} As a second step towards the computation of the magnetic symplectic form \eqref{magform}, we need a convenient expression for the exterior derivative $\mathbf{d} \mathcal{A}_\Gamma$. While it is in principle possible to compute the derivative directly from the expression \eqref{Aalpha} for $\mathcal{A}_\Gamma$, we can avoid much of the technical detail by means of the following insight: for the rigid body with circulation, $\mathbf{d} \mathcal{A}_\Gamma$ is nothing but the $\Gamma$-component of the curvature of the Neumann connection, as defined below. Recall that an explicit expression for the latter was established in proposition~\ref{prop:curv}. We define the $\Gamma$-component of the curvature $\mathcal{B}$ as before by the following prescription:
\[
\mathcal{B}_\Gamma((\zeta, \mathbf{u}), (\xi, \mathbf{v})) :=
\left< (0, \Gamma), \mathcal{B}((\zeta, \mathbf{u}), (\xi, \mathbf{v})) \right>,
\]
again relying on the fact that $(0, \Gamma) \in \mathfrak{X}_{\mathrm{vol}}(\mathcal{F}_0)^\ast$, while $\mathcal{B}((\zeta, \mathbf{u}), (\xi, \mathbf{v}))$ is an element of $\mathfrak{X}_{\mathrm{vol}}(\mathcal{F}_0)$.
\begin{proposition} \label{prop:gammacurv}
The $\Gamma$-component of the curvature is
a left $SE(2)$-invariant two-form on $SE(2)$ given at the identity by
\begin{equation}\label{curvB}
\mathcal{B}_{\Gamma}(e) = \Gamma \mathbf{e}_x^\ast \wedge \mathbf{e}_y^\ast.
\end{equation}
\end{proposition}
\begin{proof}
The left-invariance of $\mathcal{B}_{\Gamma}$ follows from the fact that the Neumann connection itself is invariant under the $SE(2)$-action on $Q$. We may hence restrict our attention to the value of the curvature at a point $(e, \varphi) \in Q$, where $e$ is the identity in $SE(2)$. The fact that $\mathcal{B}_{\Gamma}$ drops to $SE(2)$ will be obvious once we determine its exact form, but can also be proved directly; see for instance \cite{VaKaMa2009}.
The first integral in the expression (\ref{curvaturestream}) for the curvature vanishes since the integration is over the fluid domain, where $\mathbf{d}\alpha_\Gamma = 0$. The second integral can be computed explicitly, since we only need the expression (\ref{boundary}) for the stream function on the boundary.
Let $(g, \varphi, \zeta_i, \mathbf{u}_i)$, $i = 1, 2$, be elements of $T_{(g, \varphi)} Q$ and write the velocity as $\zeta_i = (\Omega_i, \mathbf{V}_i)$, where $\mathbf{V}_i = U_i(\cos \alpha_i, \sin \alpha_i)$. Consider the stream functions $\Psi_i$ given in \eqref{boundary} corresponding to the rigid body motions $(\Omega_i, \mathbf{V}_i)$. On the boundary $\partial \mathcal{B}$, we then have that
\begin{align*}
\ast \left( \mathbf{d}\Psi_1 \wedge \mathbf{d}\Psi_2 \right) =
U_1 U_2 \sin(\alpha_1 - \alpha_2) & + \Omega_1 U_2 (x \cos\alpha_2 + y \sin\alpha_2) \\
& - \Omega_2 U_1 (x\cos\alpha_1 + y \sin\alpha_1),
\end{align*}
so that the curvature is given by
\[
\mathcal{B}_{\Gamma}((\zeta_1, \mathbf{u}_1), (\zeta_2, \mathbf{u}_2)) =
- \oint_{\partial \mathcal{B}} \alpha_\Gamma \wedge \ast(\mathbf{d}\Psi_1 \wedge \mathbf{d}\Psi_2) = -\Gamma U_1 U_2 \sin(\alpha_1 - \alpha_2).
\]
Here, we have used the fact that
\[
\oint_{\partial\mathcal{B}} \alpha_\Gamma = \Gamma \quad \text{and} \quad \oint_{\partial\mathcal{B}} x \alpha_\Gamma = \oint_{\partial\mathcal{B}} y \alpha_\Gamma = 0.
\]
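(For instance, for a circular body of radius $R$ centered at the conformal center, $\alpha_\Gamma$ restricts to $\frac{\Gamma}{2\pi}\, d\theta$ on $\partial\mathcal{B}$, and with $x = R\cos\theta$ the second integral becomes $\frac{\Gamma R}{2\pi} \oint \cos\theta \, d\theta = 0$.)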
Note that the value of $\mathcal{B}_{\Gamma}$ does not depend on $\mathbf{u}_1, \mathbf{u}_2$ so that $\mathcal{B}_{\Gamma}$ drops to $SE(2)$ as claimed. Finally, using the basis (\ref{dualbasis}) of $\mathfrak{se}(2)^\ast$, we may write the curvature at the identity as
\[
\mathcal{B}_{\Gamma}(e) = \Gamma \mathbf{e}_x^\ast \wedge \mathbf{e}_y^\ast.
\]
This concludes the proof.
\end{proof}
We now prove the main result of this section, that the exterior derivative $\mathbf{d}\mathcal{A}_\Gamma$ is equal to the $\Gamma$-component $\mathcal{B}_\Gamma$ of the curvature. More precisely, recalling the projection $\mathrm{pr} : Q \rightarrow SE(2)$, we have
\begin{equation} \label{curveq}
\mathrm{pr}^\ast \mathcal{B}_\Gamma = \mathbf{d} \mathcal{A}_\Gamma.
\end{equation}
Note that this is not true for arbitrary reduced cotangent bundles, and that this
is highly specific to the case of rigid bodies with circulation. The expression \eqref{curveq} can be proved as follows: because of the Cartan structure formula $\mathcal{B} = \mathbf{d}\mathcal{A} + [\mathcal{A}, \mathcal{A}]$ for right actions, we have that
\begin{equation} \label{csf}
\mathbf{d} \mathcal{A}_\Gamma = \mathrm{pr}^\ast \mathcal{B}_\Gamma -\left\langle (0, \Gamma), [\mathcal{A}, \mathcal{A}]\right\rangle.
\end{equation}
It remains to show that the second term on the right-hand side vanishes. It can be shown (see the proof of
theorem~4.2 in \cite{mw1983}) that, for divergence-free
vector fields $\mathbf{u}_{\mathrm{v}, 1}$ and $\mathbf{u}_{\mathrm{v}, 2}$ tangent to $\partial \mathcal{F}$
and arbitrary one-forms $\alpha$, the following holds:
\begin{equation} \label{magic}
\int_\mathcal{F} \alpha([\mathbf{u}_{\mathrm{v}, 1}, \mathbf{u}_{\mathrm{v}, 2}]) \,d V =
\int_\mathcal{F} \mathbf{d} \alpha(\mathbf{u}_{\mathrm{v}, 1}, \mathbf{u}_{\mathrm{v}, 2}) \,d V.
\end{equation}
If we now put $\mathbf{u}_{\mathrm{v}, i} := \mathcal{A}(\zeta_i, \mathbf{u}_i)$, $i = 1,2$, we may rewrite the second term in (\ref{csf}) as follows:
\begin{align*}
\left\langle (0, \Gamma), [\mathcal{A}(\zeta_1, \mathbf{u}_1), \mathcal{A}(\zeta_2, \mathbf{u}_2)] \right\rangle & = \left\langle (0, \Gamma), [\mathbf{u}_{\mathrm{v},1}, \mathbf{u}_{\mathrm{v},2}] \right\rangle \\
& = \int_{\mathcal{F}} \alpha_\Gamma([\mathbf{u}_{\mathrm{v},1}, \mathbf{u}_{\mathrm{v},2}]) \,dV
= \int_{\mathcal{F}} \mathbf{d}\alpha_\Gamma(\mathbf{u}_{\mathrm{v},1}, \mathbf{u}_{\mathrm{v},2}) \,dV,\end{align*}
where $\alpha_\Gamma$ is the one-form given by (\ref{vortalpha}). However, the integral on the right-hand side is again zero because the integration is over $\mathcal{F}$, while the support of $\mathbf{d}\alpha_\Gamma$ is concentrated inside the rigid body.
This shows that $\mathbf{d}\mathcal{A}_\Gamma$ in (\ref{csf}) is nothing but the $\Gamma$-component of the curvature of the Neumann connection, as calculated in proposition~\ref{prop:gammacurv}.
\paragraph{The Reduced Phase Space Revisited.}
We formally establish the isomorphism $\varphi_\Gamma$ of theorem~\ref{thm:cotbundle} between the reduced configuration space $J^{-1}(0, \Gamma)/\mathrm{Diff}_{\mathrm{vol}}(\mathcal{F}_0)$ and $T^\ast SE(2)$. Recall that $\varphi_\Gamma$ is defined as the composition $\varphi_0 \circ {\mathrm{shift}}_\Gamma$, where $\varphi_0$ and ${\mathrm{shift}}_\Gamma$ were defined in the proof of theorem~\ref{thm:cotbundle}. The following two lemmas are devoted to the explicit form of these two constitutive maps for the rigid body with circulation.
\begin{lemma}
Consider the map $\mathfrak{m} : \mathfrak{se}(2)^\ast \rightarrow \mathfrak{se}(2)^\ast$
defined in \eqref{mapm}. The map $\varphi_0: (T^\ast Q)_0 \rightarrow T^\ast SE(2)$ defined
in the proof of theorem~\ref{thm:cotbundle} is given in terms of $\mathfrak{m}$ by the following expression:
\begin{equation} \label{multmap}
\varphi_0( [g, \varphi, \pi_b, \mathbf{d}\Phi_\zeta]) = (g, \mathfrak{m}(\pi_b)).
\end{equation}
\end{lemma}
\begin{proof}
Consider an element $(g, \varphi, \pi_b, \mathbf{d}\Phi_\zeta)$ of $J^{-1}(0)$.
In a left trivialization,
the map $\bar{\varphi}_0: J^{-1}(0) \rightarrow T^\ast SE(2)$ defined in \eqref{barphi} is given by
\begin{align*}
\left< \bar{\varphi}_0(\pi_b, \mathbf{d}\Phi_\zeta), \xi \right>
= \left< (\pi_b, \mathbf{d}\Phi_\zeta), (\xi, \nabla \Phi_\xi) \right>
= \left< \pi_b,\xi \right> + \left<\mathbf{d}\Phi_\zeta, \nabla \Phi_\xi \right>,
\end{align*}
where now $\Phi_\xi$ is the solution of \eqref{Neumann} with boundary data $\xi$.
The right-hand side can then be written as follows, using the definition of the added-mass matrices:
\begin{align*}
\left< (\pi_b, \mathbf{d}\Phi_\zeta), (\xi, \nabla \Phi_\xi) \right>
& = \left<\pi_b, \xi \right> + \int_{\mathcal{F}} \mathbf{d}\Phi_\zeta \cdot \nabla \Phi_\xi \, dV
= \zeta^T (\mathbb{M}_b + \mathbb{M}_f )\xi \\
& = \left< (\mathbb{I} + \mathbb{M}_f \mathbb{M}_b^{-1}) \pi_b, \xi \right>
\end{align*}
since $\pi_b = \mathbb{M}_b \zeta$. In other words, we have that
\[
\left< \bar{\varphi}_0(g, \varphi, \pi_b, \mathbf{d}\Phi_\zeta), \xi\right>
= \left< \mathfrak{m}(\pi_b), \xi \right>,
\]
so that $\bar{\varphi}_0(g, \varphi, \pi_b, \mathbf{d}\Phi_\zeta) = (g,\mathfrak{m}(\pi_b))$.
\end{proof}
We now determine the map ${\mathrm{shift}}_\Gamma : (T^\ast Q)_{(0, \Gamma)} \rightarrow (T^\ast Q)_0$. Because of proposition~\ref{prop:compalpha}, the map ${\mathrm{Shift}}_\Gamma : J^{-1}(0, \Gamma) \rightarrow J^{-1}(0)$ is in our case a simple shift by $\alpha_\Gamma$:
\[
{\mathrm{Shift}}_\Gamma(g, \varphi, \pi_b, \alpha) = (g, \varphi, \pi_b, \alpha-\alpha_\Gamma).
\]
Physically speaking, we may think of $\alpha \in J^{-1}(0, \Gamma)$ as a one-form with circulation $\Gamma$; by subtracting $\alpha_\Gamma$ from $\alpha$, we obtain a one-form with zero circulation. For the quotient map
${\mathrm{shift}}_\Gamma: (T^\ast Q)_{(0, \Gamma)} \rightarrow (T^\ast Q)_0$ we then have the following result.
\begin{lemma}
The quotient map
${\mathrm{shift}}_\Gamma: (T^\ast Q)_{(0, \Gamma)} \rightarrow (T^\ast Q)_0$ is given by
\[
{\mathrm{shift}}_\Gamma([(g, \varphi, \pi_b, \alpha)]) = [(g, \varphi, \pi_b, \alpha-\alpha_\Gamma)].
\]
\end{lemma}
By concatenating these two maps, it is now straightforward to find the explicit form of the isomorphism $\varphi_\Gamma$ between $(T^\ast Q)_{(0, \Gamma)}$ and $T^\ast SE(2)$: from \eqref{rediso} we have that the isomorphism is given by
\begin{equation} \label{dirmap}
\varphi_\Gamma: [(g, \varphi, \pi_b, \alpha)] \mapsto (g, \mathfrak{m}(\pi_b)).
\end{equation}
The general theory of cotangent bundle reduction guarantees that this map is an isomorphism, but
it is instructive to construct the inverse mapping explicitly. For every $(g, \pi) \in T^\ast SE(2)$, put
$\pi_b := \mathfrak{m}^{-1}(\pi)$ and set $\alpha := \alpha_\Gamma + \mathbf{d}\Phi_\zeta$, where $\Phi_\zeta$ is the solution of the Neumann problem \eqref{Neumann} with boundary data $\zeta = \mathbb{M}_b^{-1} \pi_b$. Furthermore, choose an arbitrary fluid embedding $\varphi$ such that $(g, \varphi) \in Q$. The inverse mapping $\varphi_\Gamma^{-1}$ is then given by
\begin{equation} \label{invmap}
\varphi_\Gamma^{-1}(g, \pi) = [(g, \varphi, \pi_b, \alpha)], \quad
\text{where $\pi_b = \mathfrak{m}^{-1}(\pi)$ and $\alpha = \alpha_\Gamma + \mathbf{d}\Phi_\zeta$.}
\end{equation}
It is straightforward to check that $\varphi_\Gamma^{-1} \circ \varphi_\Gamma = \varphi_\Gamma \circ \varphi_\Gamma^{-1} = \mathrm{id}$.
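Indeed, $\varphi_\Gamma(\varphi_\Gamma^{-1}(g, \pi)) = (g, \mathfrak{m}(\mathfrak{m}^{-1}(\pi))) = (g, \pi)$, while the composition in the other order reconstructs the class $[(g, \varphi, \pi_b, \alpha)]$, since $\alpha$ is determined by $\pi_b$ and $\Gamma$ through the Hodge decomposition and the choice of representative $\varphi$ is immaterial in the quotient.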
\paragraph{The Reduced Hamiltonian.}
As a last step towards establishing a reduced Hamiltonian formulation for the rigid body with circulation, we need to find an appropriate expression for the Hamiltonian on this space. This can be done by computing first the Hamiltonian on the unreduced space from the kinetic energy (\ref{Tkin}) and then noting that the result induces a well-defined function on the reduced phase space.
\begin{proposition}
The Hamiltonian on the reduced phase space $T^\ast SE(2)$ is given by
\begin{equation} \label{redhamil}
H_{\mathrm{red}}(g, \pi) = \frac{1}{2} \pi^T\, \mathbb{M}^{-1} \,\pi,
\end{equation}
where $\mathbb{M} = \mathbb{M}_b + \mathbb{M}_f$.
\end{proposition}
\begin{proof}
The proof relies on the explicit form of the isomorphism $\varphi_\Gamma$ in \eqref{dirmap}. Let $(g, \pi)$ be an element of $T^\ast SE(2)$ and consider an arbitrary element $(g, \varphi, \pi_b, \alpha) \in J^{-1}(0, \Gamma)$ such that
$\varphi_\Gamma^{-1}(g, \pi) = [(g, \varphi, \pi_b, \alpha)]$. Recall from the discussion following \eqref{invmap} that $\pi_b$ and $\alpha$ are given by
\begin{equation} \label{momrel}
\pi_b = \mathfrak{m}^{-1}(\pi)
\quad \text{and} \quad
\alpha = \alpha_\Gamma + \mathbf{d}\Phi_\zeta,
\end{equation}
where $\Phi_\zeta$ is the solution of the Neumann problem \eqref{Neumann} with boundary data $\zeta = \mathbb{M}_b^{-1} \pi_b$. The relation between the reduced Hamiltonian $H_{\mathrm{red}}$ on $T^\ast SE(2)$ and the Hamiltonian \eqref{almostredham} is then written as
\begin{align*}
H_{\mathrm{red}}(g, \pi) & = H(g, \varphi, \pi_b, \alpha)
= \frac{1}{2} \pi_b^T \, \mathbb{M}_b^{-1}(\mathbb{M}_b + \mathbb{M}_f) \mathbb{M}_b^{-1} \, \pi_b,
\end{align*}
where we have used the fact that the fluid is irrotational, so that $\Psi = 0$ in \eqref{almostredham}.
Keeping in mind that $\pi = \mathfrak{m}(\pi_b)$, or alternatively that $\pi_b = \mathbb{M}_b (\mathbb{M}_b + \mathbb{M}_f)^{-1} \pi$, substitution finally yields the following expression for the reduced Hamiltonian:
\[
H_{\mathrm{red}}(g, \pi) = \frac{1}{2} \pi^T \, (\mathbb{M}_b + \mathbb{M}_f)^{-1} (\mathbb{M}_b + \mathbb{M}_f) (\mathbb{M}_b + \mathbb{M}_f)^{-1} \, \pi = \frac{1}{2} \pi^T \, (\mathbb{M}_b + \mathbb{M}_f)^{-1} \, \pi,
\]
which is precisely \eqref{redhamil}.
\end{proof}
\paragraph{Summary.} In this section, we used the framework of cotangent bundle reduction to obtain expressions for the reduced phase space, its symplectic structure, and the reduced Hamiltonian after reducing with respect to the particle relabeling symmetry. We summarize these results in the theorem below.
\begin{theorem} \label{thm:reddiff}
Let $\Gamma \in \mathbb{R}$ and consider the associated element $(0, \Gamma) \in \mathfrak{X}^\ast_{\mathrm{vol}}(\mathcal{F}_0)$. Then the following properties hold:
\begin{itemize}
\item The isotropy group $\mathrm{Diff}_{\mathrm{vol}}(\mathcal{F}_0)_{(0, \Gamma)}$ of $(0, \Gamma)$ is the whole of the group $\mathrm{Diff}_{\mathrm{vol}}(\mathcal{F}_0)$.
\item The reduced phase space $J^{-1}(0, \Gamma)/\mathrm{Diff}_{\mathrm{vol}}(\mathcal{F}_0)$ is symplectically isomorphic to $T^\ast SE(2)$, equipped with the left-invariant symplectic form given at the identity by
\begin{equation} \label{sympstruct}
\Omega_\Gamma(e) := \omega_{\mathrm{can}}(e) - \Gamma \mathbf{e}^\ast_x \wedge \mathbf{e}^\ast_y,
\end{equation}
where $\omega_{\mathrm{can}}$ is the canonical symplectic form on $T^\ast SE(2)$.
\item The reduced kinetic energy Hamiltonian on $T^\ast SE(2)$ is given by
\begin{equation} \label{redhamilt}
H_{\mathrm{red}}(g, \pi) = \frac{1}{2} \pi^T \, \mathbb{M}^{-1} \, \pi,
\end{equation}
where $\mathbb{M} = \mathbb{M}_b + \mathbb{M}_f$ is the full mass matrix.
\end{itemize}
\end{theorem}
\subsection{Reduction by the Euclidean Symmetry Group} \label{sec:eucsymm}
In the previous section, we have eliminated the particle relabeling symmetry by restricting to the fluid configurations that had circulation $\Gamma$ and no external vorticity. However, after reducing by $\mathrm{Diff}_{\mathrm{vol}}(\mathcal{F}_0)$, the solid-fluid system is still invariant under global translations and rotations. In other words, the group $SE(2)$ acts as a symmetry group for the reduced equations. This is clear from Theorem~\ref{thm:reddiff}: the group $SE(2)$ acts on $T^\ast SE(2)$ by left translations, and both the reduced symplectic form as well as the Hamiltonian are invariant under that action.
\paragraph{The Reduced Poisson Structure.} The symplectic structure \eqref{sympstruct} on $T^\ast SE(2)$ is invariant under the left action of $SE(2)$ on itself, and hence induces a Poisson bracket on the dual space $\mathfrak{se}(2)^\ast$. If the original symplectic structure were the canonical one, the reduced Poisson bracket would simply be the (minus) Lie-Poisson bracket \eqref{lpbracket}. However, because of the magnetic term in the symplectic structure, additional terms arise in the expression for the reduced Poisson bracket. The general form of the magnetic Poisson bracket is described in Theorem~\ref{thm:poisson}. In the case of the rigid body with circulation, the first term in (\ref{bracket}) is the Lie-Poisson
bracket on $\mathfrak{se}(2)^\ast$, given by (\ref{liepoisson}). The second term in (\ref{bracket}) is due to the magnetic two-form.
The entire Poisson bracket is then given by
\begin{equation} \label{eucpoisson}
\{F, G\}_\mathcal{B} = \{F, G\}_{\mathfrak{se}(2)^\ast} - \Gamma \left(
\frac{\partial F}{\partial P_x} \frac{\partial G}{\partial P_y} -
\frac{\partial F}{\partial P_y} \frac{\partial G}{\partial P_x} \right).
\end{equation}
To make this bracket more explicit,
we may evaluate the Poisson bracket on the coordinate functions on $\mathfrak{se}(2)^\ast$:
\begin{equation} \label{pbcoord}
\{P_x, P_y\}_\mathcal{B} = -\Gamma,
\quad
\{\Pi, P_x\}_\mathcal{B} = -P_y,
\quad
\{\Pi, P_y\}_\mathcal{B} = P_x.
\end{equation}
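For instance, taking $F = P_x$ and $G = P_y$ in \eqref{eucpoisson}, the Lie-Poisson term vanishes (the translations commute in $\mathfrak{se}(2)$) and the magnetic term contributes $-\Gamma$; the remaining two brackets come entirely from the Lie-Poisson part.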
This emphasizes another advantage of deriving the Poisson brackets through reduction: any reduced Poisson bracket automatically satisfies the Jacobi identity, obviating the need for any explicit computations. While these computations would have been straightforward in this case, this is not always so (see \cite{MarsdenRatiu} for examples).
\paragraph{The Equations of Motion.}
Having established the expression for the reduced Poisson bracket on $\mathfrak{se}(2)^\ast$, we now turn to the reduced Hamiltonian. Since $H_{\mathrm{red}}$ on $T^\ast SE(2)$ in \eqref{redhamilt} is written in terms of a left trivialization of $T^\ast SE(2)$, we immediately obtain that the reduced Hamiltonian on $\mathfrak{se}(2)^\ast$ is given by
\begin{equation} \label{redham}
H(\pi) = \frac{1}{2} \pi^T \, \mathbb{M}^{-1} \, \pi.
\end{equation}
The equations of motion relative to the Poisson bracket \eqref{eucpoisson} hence take exactly the form
given in~\eqref{EoM1}, which we repeat here for completeness:
\begin{equation} \label{EoM}
\begin{split}
\dot{\Pi} & = (\mathbf{P} \times \mathbf{V}) \cdot \mathbf{b}_3 \\
\dot{\mathbf{P}} & = \mathbf{P} \times \Omega\mathbf{b}_3 + \Gamma \mathbf{b}_3 \times \mathbf{V}
\end{split}
\end{equation}
where we have used the fact that
\[
\frac{\partial H}{\partial \Pi} = \Omega \quad \text{and} \quad
\frac{\partial H}{\partial \mathbf{P}} = \mathbf{V}.
\]
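As a check, the $x$-component of the momentum equation can be read off directly from the brackets \eqref{pbcoord}:
\[
\dot{P}_x = \{P_x, H\}_\mathcal{B} = \{P_x, \Pi\}_\mathcal{B} \, \Omega + \{P_x, P_y\}_\mathcal{B} \, V_y = P_y \Omega - \Gamma V_y,
\]
in agreement with the first component of $\mathbf{P} \times \Omega\mathbf{b}_3 + \Gamma \mathbf{b}_3 \times \mathbf{V}$.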
As mentioned in the introduction, these equations first appeared in the work of Chaplygin and Lamb, and were given a sound Hamiltonian foundation by \cite{BoMa06} and \cite{BoKoMa2007}. Moreover, the equations \eqref{EoM} are a special case of the equations of motion derived by \cite{Sh2005}, \cite{BoMaRa2007} and \cite{KaOs08} for a cylinder of arbitrary shape with circulation interacting with point vortices.
Our equations are slightly different from the ones derived by \cite{Ch1933} and used in \cite{BoMa06}, but can be brought into that form by diagonalizing the mass matrix $\mathbb{M}$ and rewriting the equations \eqref{EoM} in terms of velocities rather than the momenta.
\paragraph{The Kutta-Zhukowski Force.} It is worthwhile to point out the geometric significance of the equations of motion. For $\Gamma = 0$, the equations \eqref{EoM} reduce to the classical Kirchhoff equations, which are seen to be Lie-Poisson equations on $\mathfrak{se}(2)^\ast$ (a more general version of this result already appears in \cite{Leonard1997}). When the circulation $\Gamma$ is non-zero, an additional gyroscopic force appears in the equations of motion, which is traditionally referred to as the \emph{\textbf{Kutta-Zhukowski force}}. In coordinates, this force is given by
\[
\Gamma \mathbf{b}_3 \times \mathbf{V} = \Gamma (-V_y, V_x, 0),
\]
and it follows that the force is proportional to $\Gamma$ and at right angles to $\mathbf{V}$. From a geometric point of view, the Kutta-Zhukowski force arises because of the magnetic terms in the Poisson bracket \eqref{eucpoisson}. Since the magnetic terms are in turn generated by the curvature of the Neumann connection, we have hence established the Kutta-Zhukowski force as a curvature-related effect.
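Note that, like any gyroscopic force, the Kutta-Zhukowski force does no work on the body:
\[
(\Gamma \mathbf{b}_3 \times \mathbf{V}) \cdot \mathbf{V} = \Gamma(-V_y V_x + V_x V_y) = 0,
\]
consistent with the conservation of the kinetic energy \eqref{redham} along the flow.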
A similar effect arises in the dynamics of a charged particle in a magnetic field. Here, the particle moves under the influence of the Lorentz force, which is again a gyroscopic force whose magnitude is proportional to the charge $e$ of the particle. In the Kaluza-Klein approach, however, the Lorentz force becomes part of the geometry when the magnetic potential is interpreted as a connection whose curvature is precisely the magnetic field strength tensor.
\paragraph{The $\mathcal{B}_\Gamma$-potential.} We now turn to the computation of the $\mathcal{B}\mathfrak{h}$-potential $\psi$ as in definition~\ref{def:bgpot}. For the rigid body with circulation, we will refer to $\psi : SE(2) \rightarrow \mathfrak{se}(2)^\ast$ as the $\mathcal{B}_\Gamma$-potential, in order to emphasize the underlying magnetic form $\mathcal{B}_\Gamma$.
\begin{proposition}
The $\mathcal{B}_\Gamma$-potential $\psi : SE(2) \rightarrow \mathfrak{se}(2)^\ast$ is given by
\begin{equation} \label{psiexpr}
\psi(R_\theta, \mathbf{x}_0) =
\left(-\frac{\Gamma}{4} \left\Vert \mathbf{x}_0 \right\Vert^2,
\frac{\Gamma}{2} \mathbb{J} \mathbf{x}_0
\right).
\end{equation}
\end{proposition}
\begin{proof}
By \eqref{defbgpot}, we have that $\mathcal{B}_\Gamma(\xi_{SE(2)}, \eta_{SE(2)}) = \mathbf{d} \psi_\xi \cdot \eta_{SE(2)}$ for $\xi, \eta \in \mathfrak{se}(2)$, where $\psi_\xi := \left<\psi, \xi \right>$. The infinitesimal generators for the left action of $SE(2)$ on itself are defined by $\xi_{SE(2)}(g) := \xi \cdot g$, and similarly for $\eta_{SE(2)}$. Because of the left $SE(2)$-invariance of $\mathcal{B}_\Gamma$, we have that this is equivalent to
\begin{equation} \label{bgpot}
\mathcal{B}_\Gamma(e) ( \mathrm{Ad}_{g^{-1}} \xi, \mathrm{Ad}_{g^{-1}} \eta )
= \left< \mathbf{d} \psi_\xi , \eta_{SE(2)}(g) \right>.
\end{equation}
We now write $g = (R_\theta, \mathbf{x}_0)$, $\xi := (\Omega, \mathbf{V})$ and $\eta = (\bar{\Omega}, \bar{\mathbf{V}})$. The vector $\mathrm{Ad}_{g^{-1}} \xi$ is then given by
\[
\mathrm{Ad}_{(R_\theta, \mathbf{x}_0)^{-1}} (\Omega, \mathbf{V}) = (\Omega, R_\theta^T ( \mathbf{V} + \Omega \mathbf{b}_3 \times \mathbf{x}_0 )),
\]
while the infinitesimal generator $\eta_{SE(2)}$ is given by
\[
(\bar{\Omega}, \bar{\mathbf{V}})_{SE(2)}(R_\theta, \mathbf{x}_0) =
\bar{\Omega} \frac{\partial}{\partial \theta}
+ \left( \bar{\mathbf{V}} - \bar{\Omega} \mathbb{J} \mathbf{x}_0 \right) \cdot \nabla.
\]
Substituting these expressions into the definition \eqref{bgpot} then yields
\[
\psi_{(\Omega, \mathbf{V})}(R_\theta, \mathbf{x}_0) =
-\Omega \frac{\Gamma}{4} \left\Vert \mathbf{x}_0 \right\Vert^2
+ \frac{\Gamma}{2} \mathbf{V}^T \mathbb{J} \mathbf{x}_0,
\]
(up to an arbitrary constant, which we set to zero)
which is equivalent to \eqref{psiexpr}.
\end{proof}
\paragraph{The Symplectic Leaves in $\mathfrak{se}(2)^\ast$.}
Proposition~\ref{prop:afforbit} offers a simple prescription for the symplectic leaves of the magnetic Poisson structure \eqref{eucpoisson}. Using the expression \eqref{psiexpr} for the $\mathcal{B}_\Gamma$-potential, the affine action of $SE(2)$ on $\mathfrak{se}(2)^\ast$ is given by
\begin{equation} \label{affactioncirc}
(R_\theta, \mathbf{x}_0) \cdot (\Pi, \mathbf{P}) =
\left( \Pi - \mathbf{P}^T R_\theta^T \mathbb{J} \mathbf{x}_0 -
\frac{\Gamma}{4} \left\Vert \mathbf{x}_0 \right\Vert^2,
R_\theta \mathbf{P} + \frac{\Gamma}{2} \mathbb{J} \mathbf{x}_0 \right),
\end{equation}
where $(R_\theta, \mathbf{x}_0) \in SE(2)$ and $(\Pi, \mathbf{P}) \in \mathfrak{se}(2)^\ast$. The orbits of this action are paraboloids of revolution with symmetry axis the $\Pi$-axis (see figure~\ref{fig:orbits}): they are level sets of the Casimir function
\begin{equation} \label{orbits}
\Phi = \Pi + \frac{1}{\Gamma} \left\Vert \mathbf{P} \right\Vert^2.
\end{equation}
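One verifies directly that $\Phi$ is invariant under the affine action \eqref{affactioncirc}: acting with $(R_\theta, \mathbf{x}_0)$ gives
\[
\Pi' + \frac{1}{\Gamma} \left\Vert \mathbf{P}' \right\Vert^2
= \Pi - \mathbf{P}^T R_\theta^T \mathbb{J} \mathbf{x}_0 - \frac{\Gamma}{4} \left\Vert \mathbf{x}_0 \right\Vert^2
+ \frac{1}{\Gamma} \left( \left\Vert \mathbf{P} \right\Vert^2
+ \Gamma \, \mathbf{P}^T R_\theta^T \mathbb{J} \mathbf{x}_0
+ \frac{\Gamma^2}{4} \left\Vert \mathbf{x}_0 \right\Vert^2 \right)
= \Pi + \frac{1}{\Gamma} \left\Vert \mathbf{P} \right\Vert^2,
\]
where we used $\left\Vert R_\theta \mathbf{P} \right\Vert = \left\Vert \mathbf{P} \right\Vert$ and $\left\Vert \mathbb{J} \mathbf{x}_0 \right\Vert = \left\Vert \mathbf{x}_0 \right\Vert$.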
Note how the case with zero circulation is a singular limit of \eqref{orbits}, and compare this with the picture of the standard coadjoint orbits in $\mathfrak{se}(2)^\ast$ (which can be obtained by setting $\Gamma = 0$ in \eqref{affactioncirc}), which are cylinders around the $\Pi$-axis, together with the individual points of the $\Pi$-axis. The trajectories of the system lie in the intersection between the symplectic leaves and the level surfaces of the Hamiltonian; see \cite{BoMa06}. An explicit integration by quadratures was obtained by \cite{Ch1933}.
\begin{figure}
\begin{center}
\includegraphics[scale=.5]{Figures/orbits}
\end{center}
\caption{Intersection with the $(\Pi, P_x)$-plane of the orbits of the affine action \eqref{affactioncirc} in $\mathfrak{se}(2)^\ast$.} \label{fig:orbits}
\end{figure}
\section{Geodesic Flow on the Oscillator Group}
\label{sec:geodosc}
The dynamics of a rigid body with circulation bears a strong resemblance to the motion of a charged particle in a magnetic field. In both cases, the Hamiltonian is the kinetic energy and the system is acted upon by a gyroscopic force, either the Kutta-Zhukowski force or the Lorentz force. In the case of a magnetic particle, the Lorentz force can be made geometric by means of the \emph{\textbf{Kaluza-Klein}} description. The trajectory of a magnetic particle is then a geodesic in a higher-dimensional space which is the product of the original configuration space $M$ and the group $U(1)$. See \cite{MarsdenRatiu} for an overview and \cite{Sternberg1977} for the extension to Yang-Mills fields.
One can now ask whether a similar description exists for the rigid body with circulation, such that its motion is geodesic in an appropriate higher-dimensional space. Surprisingly, the answer turns out to be positive. The relevant space is the \textbf{\emph{oscillator group}} $\mathrm{Osc}$, which is a central extension of $SE(2)$ by the real line $\mathbb{R}$. Below, we analyze the structure of this group and we give an explicit expression for the Lie-Poisson bracket on the dual of its Lie algebra. We then show that the equations of motion induced by this bracket are closely related to the equations of motion for the rigid body with circulation.
The use of central extensions in the study of mechanical systems is not new. We mention here in particular the work of \cite{OvKh1987} on the KdV equation as a geodesic equation on the Virasoro group, and the work of \cite{Vi2001}, in which the structure of the geodesic equations on an extension of a Lie group is studied in general. A detailed account of the geometry of central extensions, including the oscillator group, can be found in \cite{MarsdenHamRed}.
\paragraph{The Oscillator Group.}
The oscillator group $\mathrm{Osc}$ is a central extension of $SE(2)$, \emph{i.e.} there is an exact sequence
\[
0 \rightarrow \mathbb{R} \rightarrow \mathrm{Osc} \rightarrow SE(2) \rightarrow \{e\}.
\]
The multiplication in $\mathrm{Osc}$ is determined by the specification of a real-valued $SE(2)$-two-cocycle $B : SE(2) \times SE(2) \rightarrow \mathbb{R}$:
\begin{equation} \label{groupcocycle}
B( (R_\theta, \mathbf{x}_0), (R_\psi, \mathbf{y}_0) ) := \omega_{\mathbb{R}^2}(\mathbf{x}_0, R_\theta \mathbf{y}_0) = \mathbf{x}_0 \cdot \mathbb{J} R_\theta \mathbf{y}_0.
\end{equation}
Here, $\omega_{\mathbb{R}^2}$ is the standard symplectic area form on $\mathbb{R}^2$:
\[
\omega_{\mathbb{R}^2}(\mathbf{x}, \mathbf{y}) := \mathbf{x}\cdot \mathbb{J} \mathbf{y},
\]
where $\mathbb{J}$ is the symplectic matrix \eqref{sympmat}.
The Lie algebra of the oscillator group is denoted as $\mathrm{osc}$ and has as its underlying vector space the space $\mathfrak{se}(2) \times \mathbb{R}$. The Lie bracket is determined by the Lie algebra two-cocycle $C : \mathfrak{se}(2) \times \mathfrak{se}(2) \rightarrow \mathbb{R}$ induced by $B$ as follows
\[
C(v_1, v_2) := \frac{\partial^2}{\partial s \partial t} \Big|_{s, t = 0} ( B(g(t), h(s)) - B(h(s), g(t)) ),
\]
where $g(t)$ and $h(s)$ are smooth curves in $SE(2)$ such that $\dot{g}(0) = v_1$ and $\dot{h}(0) = v_2$. Explicitly, $C$ is given by
\begin{equation} \label{algcocycle}
C((\Omega_1, \mathbf{V}_1), (\Omega_2, \mathbf{V}_2)) = 2 \omega_{\mathbb{R}^2}(\mathbf{V}_1, \mathbf{V}_2) = 2 \mathbf{V}_1 \cdot \mathbb{J} \mathbf{V}_2.
\end{equation}
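This can be seen by taking the curves $g(t) = (R_{t\Omega_1}, t\mathbf{V}_1)$ and $h(s) = (R_{s\Omega_2}, s\mathbf{V}_2)$ in \eqref{groupcocycle}: then $B(g(t), h(s)) = ts\, \mathbf{V}_1 \cdot \mathbb{J} \mathbf{V}_2$ up to higher-order terms, so that the antisymmetrized mixed derivative at $s = t = 0$ yields $\mathbf{V}_1 \cdot \mathbb{J} \mathbf{V}_2 - \mathbf{V}_2 \cdot \mathbb{J} \mathbf{V}_1 = 2\, \mathbf{V}_1 \cdot \mathbb{J} \mathbf{V}_2$, by the antisymmetry of $\mathbb{J}$.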
The bracket of the oscillator algebra is then given by
\begin{align*}
[(\Omega_1, \mathbf{V}_1, a), (\Omega_2, \mathbf{V}_2, b)] & = ([(\Omega_1, \mathbf{V}_1), (\Omega_2, \mathbf{V}_2)],
C((\Omega_1, \mathbf{V}_1), (\Omega_2, \mathbf{V}_2))) \\
& = (0, -\Omega_1 \mathbb{J}\mathbf{V}_2 + \Omega_2 \mathbb{J}\mathbf{V}_1,
2 \mathbf{V}_1 \cdot \mathbb{J} \mathbf{V}_2).
\end{align*}
Further information about the oscillator group and its algebra can be found in \cite{Streater1967} and
\cite{MarsdenHamRed}.
\paragraph{The Lie-Poisson Bracket on $\mathrm{osc}^\ast$.}
We now determine the Lie-Poisson bracket on the dual Lie algebra $\mathrm{osc}^\ast$ and relate this expression with the Poisson bracket for the rigid body with circulation.
Note first that as a vector space, $\mathrm{osc}^\ast$ is just $\mathfrak{se}(2)^\ast \times \mathbb{R}$, so that an element $\nu$ of $\mathrm{osc}^\ast$ can be written as $\nu := (\pi, p )$, where $\pi = (\Pi, \mathbf{P}) \in \mathfrak{se}(2)^\ast$ and $p \in \mathbb{R}$.
\begin{proposition} \label{prop:osc}
The Lie-Poisson bracket on $\mathrm{osc}^\ast$ is given by
\begin{equation} \label{oscbracket}
\{ f, g \}_{\mathrm{osc}^\ast}(\pi, p) =
\{ f, g \}_{\mathfrak{se}(2)^\ast}(\pi) - p C\left(\frac{\delta f}{\delta \pi}, \frac{\delta g}{\delta \pi} \right),
\end{equation}
where $C$ is the $\mathfrak{se}(2)$-two-cocycle given by (\ref{algcocycle}).
\end{proposition}
\begin{proof}
As usual, it is sufficient to determine the Lie-Poisson bracket on linear functions $f, g$ on $\mathrm{osc}^\ast$, for which
\[
f(\nu) = \left\langle \nu, \frac{\delta f}{\delta \nu} \right\rangle, \quad \text{for $\frac{\delta f}{\delta \nu} \in \mathrm{osc}$},
\]
and similarly for $g$. Since $\nu \in \mathrm{osc}^\ast$, we may write $\nu = (\pi, p)$, where $\pi \in \mathfrak{se}(2)^\ast$ and $p \in \mathbb{R}$. Likewise, we may decompose the variational derivative as
\[
\frac{\delta f}{\delta \nu} = \left(\frac{\delta f}{\delta \pi}, \frac{\delta f}{\delta p} \right)
\in \mathfrak{se}(2) \times \mathbb{R}.
\]
The minus Lie-Poisson bracket on $\mathrm{osc}^\ast$ is then given by a formula analogous to \eqref{lpbracket}, and we get
\begin{align*}
\{f, g\}_{\mathrm{osc}^\ast} & = -\left<\nu, \left[\frac{\delta f}{\delta \nu}, \frac{\delta g}{\delta \nu} \right] \right> \\
& = -\left< (\pi, p), \left( \left[\frac{\delta f}{\delta \pi}, \frac{\delta g}{\delta \pi} \right], C\left(\frac{\delta f}{\delta \pi}, \frac{\delta g}{\delta \pi} \right) \right) \right> \\
& = -\left< \pi, \left[\frac{\delta f}{\delta \pi}, \frac{\delta g}{\delta \pi} \right] \right>
- p C\left(\frac{\delta f}{\delta \pi}, \frac{\delta g}{\delta \pi} \right),
\end{align*}
so that we obtain the expression (\ref{oscbracket}).
\end{proof}
In coordinates, the Poisson structure of proposition~\ref{prop:osc} is given by
\begin{equation} \label{poissosc}
\{ f, g \}_{\mathrm{osc}^\ast}(\pi, p) =
(\nabla_{(\pi, p)} f)^T
\begin{pmatrix}
0 & -P_y & P_x & 0 \\
P_y & 0 & -p & 0 \\
-P_x & p & 0 &0 \\
0 & 0 & 0 & 0
\end{pmatrix}
(\nabla_{(\pi, p)} g).
\end{equation}
These coordinate expressions also serve to clarify the link between the description of the rigid body on the oscillator group and the Euclidean group: each of the subspaces $\mathfrak{se}(2)^\ast \times \{p\}$ is a Poisson submanifold of $\mathrm{osc}^\ast$, as in the following proposition.
\begin{proposition} \label{prop:inc}
Let $\Gamma \in \mathbb{R}$. The inclusion $\iota_\Gamma : \mathfrak{se}(2)^\ast \hookrightarrow \mathrm{osc}^\ast$ given by $\iota_\Gamma(\pi) = (\pi, \Gamma)$ is a Poisson map.
\end{proposition}
\begin{proof}
This can easily be seen from the coordinate expressions above: for $p = \Gamma$, the Poisson structure \eqref{oscbracket} coincides with \eqref{eucpoisson} on the submanifold $\mathfrak{se}(2)^\ast$.
\end{proof}
\paragraph{The Equations of Motion.} The reduced kinetic energy Hamiltonian \eqref{redham} on $\mathfrak{se}(2)^\ast$ gives rise to a Hamiltonian ${H}_{\mathrm{osc}^\ast}$ on $\mathrm{osc}^\ast$ as follows:
\begin{align*}
{H}_{\mathrm{osc}^\ast}(\pi, p) & = H(\pi) + \frac{p^2}{2} = \frac{1}{2} \pi^T \, \mathbb{M}^{-1} \, \pi + \frac{p^2}{2}.
\end{align*}
The equations of motion for this Hamiltonian and the bracket \eqref{oscbracket} are then given in coordinates by
\begin{align} \label{eomosc}
\dot{\Pi} & = (\mathbf{P} \times \mathbf{V}) \cdot \mathbf{b}_3, \nonumber \\
\dot{\mathbf{P}} & = \mathbf{P} \times \Omega\mathbf{b}_3 + p \mathbf{b}_3 \times \mathbf{V}, \\
\dot{p} & = 0. \nonumber
\end{align}
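These equations are also convenient for numerical experiments. The Python sketch below (ours, purely illustrative; the inverse inertia matrix is an arbitrary diagonal placeholder, not the physical $\mathbb{M}^{-1}$ of any specific body) integrates Hamilton's equations $\dot{x} = B(x)\nabla H(x)$ with the structure matrix of \eqref{poissosc} and confirms that $p$ and the Hamiltonian are conserved, in agreement with \eqref{eomosc}:
\begin{verbatim}
import numpy as np

def B(x):
    # structure matrix of the bracket in coordinates (Pi, Px, Py, p)
    _, Px, Py, p = x
    return np.array([[0.0, -Py,  Px, 0.0],
                     [ Py, 0.0,  -p, 0.0],
                     [-Px,   p, 0.0, 0.0],
                     [0.0, 0.0, 0.0, 0.0]])

Minv = np.diag([1.0, 0.5, 0.5, 1.0])   # placeholder inverse inertia

def rhs(x):
    # Hamilton's equations xdot = B(x) grad H, with H = 0.5 x^T Minv x
    return B(x) @ (Minv @ x)

x = np.array([0.2, 1.0, 0.0, 0.7])     # (Pi, Px, Py, p = Gamma)
dt = 1e-3
H0, p0 = 0.5 * x @ Minv @ x, x[3]
for _ in range(20000):                 # classical RK4
    k1 = rhs(x); k2 = rhs(x + 0.5*dt*k1)
    k3 = rhs(x + 0.5*dt*k2); k4 = rhs(x + dt*k3)
    x = x + dt/6.0 * (k1 + 2*k2 + 2*k3 + k4)

print("p drift:", abs(x[3] - p0))                # ~0: pdot = 0 exactly
print("H drift:", abs(0.5 * x @ Minv @ x - H0))  # tiny: RK4 error only
\end{verbatim}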
\paragraph{Conclusions.} We summarize the conclusions of this section in the following theorem.
\begin{theorem} Let $\Gamma$ be an element of $\mathbb{R}$.
\begin{itemize}
\item The Lie-Poisson structure on the oscillator algebra is given by \eqref{oscbracket} and the resulting equations of motion by \eqref{eomosc}.
\item The dynamics of a rigid body with circulation $\Gamma$ takes place on the Poisson submanifold $p = \Gamma$ of $\mathrm{osc}^\ast$.
\end{itemize}
\end{theorem}
\section{Conclusions and Outlook}
\label{sec:outlook}
In this paper, we established the non-canonical Hamiltonian structure of \cite{BoMa06} by geometric means, and we showed how the geometric description gives new insight into classical results such as the Kutta-Zhukowski force. We now discuss a number of open questions related to this description.
\paragraph{Cocycles.} The problem considered here seems to fit in a general class of systems where a remarkable interaction between cocycles and curvature forms is at play; examples include the work of \cite{HoKu1988} and \cite{CeMaRa2004} on the analogy between spin glasses and Yang-Mills fluids. Recently, a comprehensive reduction theory has been developed for such systems (see \cite{GaRa2008} and \cite{GaRa2009}). It would be interesting to see whether this reduction theory can be directly applied to the rigid body with circulation considered here.
\paragraph{The Oscillator Group and Reduction.} It is still not entirely clear why the description in terms of the oscillator group, highlighted in proposition~\ref{prop:inc}, should exist. Is there some way of deriving the oscillator group immediately from the unreduced space $Q$ and the particle relabeling symmetry?
One way to do this goes back to the work of Kelvin, who dealt with circulation by placing a fixed surface in the fluid so that the fluid domain becomes simply connected. The flow across the surface then appears as a new variable, and the reduced variables are precisely the coordinates on $\mathrm{Osc}$. Kelvin's argument originally only held for fluids in a bounded container, but can be extended to unbounded flows as well. This approach will be the subject of a future work.
\paragraph{Point Vortices and Vortical Structures.} In \cite{VaKaMa2009}, the dynamics of a rigid body interacting with point vortices was investigated from a geometric point of view.
It would be of interest to extend the results of this paper to the case of a rigid body with circulation moving in a field of point vortices or a general distribution of vorticity and, in particular, it is interesting to investigate whether one could
generalize the oscillator group description to the case of non-zero vorticity.
Lastly, we also intend to study the geometry of controlled articulated bodies moving in the field of point vortices or other vortical structures. This setup has important applications in the theory of locomotion; see \cite{KMlocomotion} and~\cite{Kanso2009}.
\section{Introduction}
We assume that the audience is familiar with the concept of a
Courant-Friedrichs-Lewy (CFL) condition
\cite{wikipedia-CFL-condition}. Loosely speaking, the CFL condition
states: When a partial differential equation, for example the wave
equation
\begin{eqnarray}
\label{eq:wave}
\partial_t^2 u & = & c^2\, \Delta u \quad\textrm{,}
\end{eqnarray}
is integrated numerically, then the time step size $\delta t$ is
limited by the spatial resolution $\delta x$ and the maximum
propagation speed $c$ by
\begin{eqnarray}
\label{eq:cfl}
\delta t & < & Q\, \frac{\delta x}{c} \quad\textrm{.}
\end{eqnarray}
Here $Q$ is a constant of order $1$ that depends on the time
integration method (and details of the spatial discretisation).
Choosing a time step size larger than this is unstable and must
therefore be avoided. (There are time integration methods that do not
have such a stability limit, but these are expensive and not commonly
used in numerical relativity, so we will ignore them here.)
\section{Example: Exponential Decay}
In real-world equations, there are also other restrictions which limit
the time step size, and which may be independent of the spatial
resolution. One simple example for this is the exponential decay
\begin{eqnarray}
\label{eq:decay}
\partial_t u & = & - \lambda\, u
\end{eqnarray}
where $\lambda > 0$ is the decay constant. Note that this equation is
an ordinary differential equation, as there are no spatial
derivatives. The solutions of (\ref{eq:decay}) are given by
\begin{eqnarray}
u(t) & = & A\, \exp\{ - \lambda t \}
\end{eqnarray}
with amplitude $A$.
The decay constant $\lambda$ has dimension $1/T$. The time step size
is limited by
\begin{eqnarray}
\delta t & < & Q'\, \frac{1}{\lambda}
\end{eqnarray}
where $Q'$ is a constant of order $1$ that depends on the time
integration method. Choosing a time step size larger than this is
unstable and must therefore be avoided. (As with the CFL criterion,
there are time integration methods that do not have such a stability
limit.)
As an example, let us consider the forward Euler scheme with a step
size $\delta t$. This leads to the discrete time evolution equation
\begin{eqnarray}
\frac{u^{n+1} - u^n}{\delta t} & = & - \lambda\, u^n
\end{eqnarray}
or
\begin{eqnarray}
u^{n+1} & = & (1 - \delta t\, \lambda)\, u^n \quad\textrm{.}
\end{eqnarray}
This system is unstable e.g.\ if $|u^{n+1}| > |u^n|$ (there are also
other definitions of stability), or if
\begin{eqnarray}
|1 - \delta t\, \lambda| & > & 1 \quad\textrm{,}
\end{eqnarray}
which is the case for $\delta t > 2 / \lambda$ (and also for $\delta t
< 0$). In this case, the solution oscillates between positive and
negative values with an exponentially growing amplitude.
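This behaviour is easy to reproduce numerically. The following Python
snippet (a minimal sketch of our own, not production code) integrates
the decay equation with forward Euler for step sizes below and above
the limit $\delta t = 2/\lambda$:
\begin{verbatim}
def euler_decay(lam, dt, n_steps, u0=1.0):
    # forward Euler for du/dt = -lam * u
    u = u0
    for _ in range(n_steps):
        u = (1.0 - dt * lam) * u
    return u

lam = 1.0                       # stability limit: dt < 2/lam = 2
for dt in (0.5, 1.9, 2.1):
    u = euler_decay(lam, dt, n_steps=50)
    print("dt = %.1f: |u| after 50 steps = %.3e" % (dt, abs(u)))
# dt = 0.5 and 1.9 decay towards zero; dt = 2.1 gives
# |1 - dt*lam| = 1.1, so |u| grows like 1.1^50 ~ 117 while
# oscillating in sign.
\end{verbatim}
The same experiment applied to $v \equiv \partial_t \beta^i$
anticipates the Gamma Driver time step limit discussed in the next
section.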
\section{Gamma Driver}
The BSSN \cite{Alcubierre99d} Gamma Driver condition is a time
evolution equation for the shift vector $\beta^i$, given by (see e.g.\
(43) in \cite{Alcubierre02a})
\begin{eqnarray}
\label{eq:gamma-driver}
\partial_t^2 \beta^i & = & F\, \partial_t \tilde \Gamma^i -
\eta\, \partial_t \beta^i \quad\textrm{.}
\end{eqnarray}
There exist variations of the Gamma Driver condition, but the
fundamental form of the equation remains the same. The term
$F\, \partial_t \tilde \Gamma^i$ contains second spatial derivatives
of the shift $\beta^i$ and renders this a hyperbolic, wave-type
equation for the shift. The parameter $\eta>0$ is a damping
parameter, very similar to $\lambda$ in (\ref{eq:decay}) above. It
drives $\partial_t \beta^i$ to zero, so that the shift $\beta^i$ will
tend to a constant in stationary spacetimes. (This makes this a
\emph{symmetry-seeking} gauge condition, since $\partial_t$ will then
tend to the corresponding Killing vector.)
Let us now consider a simple spacetime which is spatially homogeneous,
i.e.\ where all spatial derivatives vanish. In this case (see e.g.\
(40) in \cite{Alcubierre02a}), $\partial_t \tilde \Gamma^i = 0$, and
only the damped oscillator equation
\begin{eqnarray}
\partial_t^2 \beta^i & = & - \eta\, \partial_t \beta^i
\end{eqnarray}
remains. As we have seen above, solving this equation numerically
still imposes a time step size limit, even though there is no length
scale introduced by the spatial discretisation, so the spatial
resolution can be chosen to be arbitrarily large; there is therefore
no CFL limit. This demonstrates that the damping time scale set by
the parameter $\eta$ introduces a resolution-independent time step
size limit.
This instability was e.g.\ reported in \cite{Sperhake:2006cy}, below
(13) there, without explaining its cause. The authors state that the
choice $\eta=2$ is unstable near the outer boundary, and they
therefore choose $\eta=1$ instead. Decreasing $\eta$ by a factor of
$2$ increases the time step size limit correspondingly.
The explanation presented above was first brought forth by Carsten
Gundlach \cite{Gundlach2008a} and Ian Hawke \cite{Hawke2008a}. To our
knowledge, it has not yet been discussed in the literature elsewhere.
Harmonic formulations of the Einstein equations have driver parameters
similar to the BSSN Gamma Driver parameter $\eta$. Spatially varying
parameters were introduced in harmonic formulations to simplify the
gauge dynamics in the wave extraction zone far away from the origin
(see e.g.\ (8) in \cite{Scheel:2008rj}). \cite{Palenzuela:2009hx}
uses a harmonic formulation with mesh refinement, and describes using
this spatial dependence also to avoid time stepping instabilities (see
(45) there).
\section{Mesh Refinement}
When using mesh refinement to study compact objects, such as black
holes, neutron stars, or binary systems of these, one generally uses a
grid structure that has a fine resolution near the centre and
successively coarser resolutions further away from the centre. With
full Berger-Oliger AMR that uses sub-cycling in time, the CFL factors
on all refinement levels are the same, and thus the time step sizes
increase as one moves away from the centre. This makes it possible
that the time step size on the coarsest grids does not satisfy the
stability condition for the Gamma Driver damping parameter $\eta$ any
more.
One solution to this problem is to omit sub-cycling in time for the
coarsest grids by choosing the same time step size for some of the
coarsest grids. This was first advocated by \cite{Bruegmann:2003aw},
although it was introduced there to allow large shift vectors near the
outer boundary as necessary for a co-rotating coordinate system. It
was later used in \cite{Brugmann:2008zz} (see section IV there) to
avoid an instability near the outer boundary, although the instability
is there not attributed to the Gamma Driver. Omitting sub-cycling in
time on the coarsest grids often increases the computational cost only
marginally, since most of the computation time is spent on the finest
levels.
Another solution is to choose a spatially varying parameter $\eta$,
e.g.\ based on the coordinate radius and mimicking the temporal
resolution of the grid structure, which may grow linearly with the
radius. This follows the interpretation of $\eta$ setting the damping
timescale, which must not be larger than the timescale set by the time
discretisation.
One possible spatially varying definition for $\eta$ could be
\begin{eqnarray}
\label{eq:varying}
\eta(r) & := & \eta^*\; \frac{R^2}{r^2 + R^2} \quad,
\end{eqnarray}
where $r$ is the coordinate distance from the centre of the black
hole. The parameter $R$ defines a transition radius between an inner
region, where $\eta$ is approximately equal to $\eta^*$, and an outer
region, where $\eta$ gradually decreases to zero. This definition is
simple, smooth, and differentiable, and mimics a ``typical'' mesh
refinement setup, where the resolution $h$ grows approximately
linearly with the radius $r$.
Another, simpler definition for $\eta$ (which is not smooth -- but
smoothness is not necessary; $\eta$ could even be discontinuous) is
\begin{eqnarray}
\label{eq:varying-simple}
\eta(r) & := & \eta^*\; \left\{
\begin{array}{llll}
1 & \mathrm{for} & r \le R & \textrm{(near the origin)}
\\
\frac{R}{r} & \mathrm{for} & r \ge R & \textrm{(far away)}
\end{array}
\right. ,
\end{eqnarray}
which is e.g.\ implemented in the \texttt{McLachlan} code
\cite{ES-mclachlanweb}.
If there are multiple black holes, possibly with differing resolution
requirements, then prescriptions such as (\ref{eq:varying}) or
(\ref{eq:varying-simple}) need to be suitably generalised, e.g.\ via
\begin{eqnarray}
\label{eq:multiple}
\frac{1}{\eta(r)} & := & \frac{1}{\eta_1(r_1)} +
\frac{1}{\eta_2(r_2)} \quad,
\end{eqnarray}
where $\eta_1$ and $\eta_2$ are the contributions from the individual
black holes, with $r_1$ and $r_2$ the distances to their centres.
This form of (\ref{eq:multiple}) is motivated by the dimension of
$\eta$, which is $1/M$, so that two superposed black holes of masses
$m_1$ and $m_2$ lead to the same definition of $\eta$ as a single
black hole with mass $m_1+m_2$.
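For concreteness, here is a minimal Python sketch (ours; the function
names are illustrative and do not correspond to \texttt{McLachlan} or
any other existing code) of the profiles (\ref{eq:varying}) and
(\ref{eq:varying-simple}), together with the harmonic combination
(\ref{eq:multiple}) for two black holes:
\begin{verbatim}
import numpy as np

def eta_smooth(r, eta_star, R):
    # smooth profile: eta* R^2 / (r^2 + R^2)
    return eta_star * R**2 / (r**2 + R**2)

def eta_simple(r, eta_star, R):
    # piecewise profile: eta* inside R, eta* R / r outside
    r = np.asarray(r, dtype=float)
    return eta_star * np.where(r <= R, 1.0, R / np.maximum(r, R))

def eta_multiple(x, centres, eta_stars, radii):
    # 1/eta = sum_i 1/eta_i(r_i), r_i = distance to the i-th centre
    inv = 0.0
    for c, es, R in zip(centres, eta_stars, radii):
        inv += 1.0 / eta_simple(np.linalg.norm(np.asarray(x) - c),
                                es, R)
    return 1.0 / inv

# eta has dimension 1/M, so the heavier hole gets the smaller eta*
centres = [np.array([-3.0, 0.0, 0.0]), np.array([3.0, 0.0, 0.0])]
print(eta_multiple([0.0, 0.0, 0.0], centres,
                   eta_stars=[1.0, 2.0], radii=[2.0, 1.0]))
\end{verbatim}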
Another prescription for a spatially varying $\eta$ has been suggested
in \cite{Mueller:2009jx}. In this prescription, $\eta$ depends on the
determinant of the three-metric, and it thus takes the masses of the
black hole(s) automatically into account. This prescription is
motivated by binary systems of black holes with unequal masses, where
$\eta$ near the individual black holes should be adapted to the
individual black holes' masses, and it may be more suitable to use
this instead of (\ref{eq:multiple}).
There can be other limitations of the time step size near the outer
boundary, coming e.g.\ from the boundary condition itself. In
particular, radiative boundary conditions impose a CFL limit that may
be stricter than the CFL condition from the time evolution equations
in the interior.
\begin{acknowledgements}
We thank Peter Diener, Christian D. Ott, and Ulrich Sperhake for
valuable input, and for suggesting and implementing the spatially
varying beta driver in (\ref{eq:varying-simple}). We also thank
Bernd Brügmann for his comments.
%
This work was supported by the NSF awards \#0721915 and \#0905046.
It used computational resources provided by LSU, LONI, and the NSF
TeraGrid allocation TG-MCA02N014.
\end{acknowledgements}
\bibliographystyle{bibtex/unsrt-url}
\section{Introduction}
\label{s:intro}
Pulsar Timing Arrays (PTAs), such as the Parkes PTA (PPTA) ~\cite{man08}, the European PTA (EPTA)~\cite{jan08}, Nanograv~\cite{NANOGrav}, the International Pulsar Timing Array (IPTA) project~\cite{HobbsEtAl:2009}, and in the future the Square Kilometre Array (SKA)~\cite{laz09} provide a unique means to study the population of massive black hole (MBH) binary systems with masses above $\sim 10^7\,M_\odot$ by monitoring stable radio pulsars: in fact, gravitational waves (GWs) generated by MBH binaries (MBHBs) affect the propagation of electromagnetic signals and leave a distinct signature on the time of arrival of the radio pulses~\cite{EstabrookWahlquist:1975,Sazhin:1978,Detweiler:1979,HellingsDowns:1983}. MBH formation and evolution scenarios~\cite{vhm03, kz06, mal07, yoo07} predict the existence of a large number of MBHBs. Whereas the high redshift, low(er) mass systems will be targeted by the planned Laser Interferometer Space Antenna ({\it LISA}~\cite{bender98})~\cite{enelt,uaiti,ses04,ses05,ses07}, massive and lower redshift ($z\lower.5ex\hbox{\ltsima} 2$) binaries radiating in the (gravitational) frequency range $\sim 10^{-9}\,\mathrm{Hz} - 10^{-6} \,\mathrm{Hz}$ will be directly accessible to PTAs. These systems imprint a typical signature on the time-of-arrival of radio-pulses at a level of $\approx 1-100$ ns~\cite{papII}, which is comparable with the timing stability of several pulsars~\cite{Hobbs:2009}, with more expected to be discovered and monitored in the future. PTAs therefore provide a direct observational window onto the MBH binary population, and can contribute to address a number of astrophysical open issues, such as the shape of the bright end of the MBH mass function, the nature of the MBH-bulge relation at high masses, and the dynamical evolution at sub-parsec scales of the most massive binaries in the Universe (particularly relevant to the so-called ``final parsec problem''~\cite{mm03}).
Gravitational radiation from the cosmic population of MBHBs produces two classes of signals in PTA data: (i) a stochastic GW background generated by the incoherent superposition of radiation from the whole MBHB population~\cite{rr95, phi01, jaffe, jen05, jen06, papI} and (ii) individually resolvable, deterministic signals produced by single sources that are sufficiently massive and/or close so that the gravitational signal stands above the root-mean-square (rms) level of the background~\cite{papII}. In~\cite{papII} (SVV, hereafter) we explored a comprehensive range of MBH population models and found that, assuming a simple order-of-magnitude criterion to estimate whether sources are resolvable above the background level, $\approx 1$-to-10 individual MBHBs could be observed by future PTA surveys. The observation of GWs from individual systems would open a new avenue for a direct census of the properties of MBHBs, offering invaluable new information about galaxy formation scenarios. The observation of systems at this stage along their merger path would also provide key insights into the understanding of the interaction between MBHBs and the stellar/gaseous environment~\cite{KocsisSesana}, and how these interactions affect the black hole-bulge correlations during the merger process. If an electro-magnetic counterpart of a MBHB identified with PTAs were to be found, such a system could offer a unique laboratory for both accretion physics (on small scales) and the interplay between black holes and their host galaxies (on large scales).
The prospects of achieving these scientific goals raise the question of what astrophysical information could be extracted from PTA data and the need to quantify the typical statistical errors that will affect the measurements, their dependence on the total number and spatial distribution of pulsars in the array (which affects the surveys observational strategies), and the consequences for multi-band observations. In this paper we estimate the statistical errors that affect the measurements of the source parameters focusing on MBHBs with no spins, in circular orbits, that are sufficiently far from coalescence so that gravitational radiation can be approximated as producing a signal with negligible frequency drift during the course of the observation time, $T \approx 10$ yr ("monochromatic" signal). This is the class of signals that in SVV we estimated to produce the bulk of the observational sample. The extension to eccentric binaries and systems with observable frequency derivative is deferred to a future work. GWs from monochromatic circular binaries constituted by non-spinning MBHs are described by seven independent parameters. We compute the expected statistical errors on the source parameters by evaluating the variance-covariance matrix -- the inverse of the Fisher information matrix -- of the observable parameters. The diagonal elements of such a matrix provide a robust lower limit to the statistical uncertainties (the so-called Cramer-Rao bound~\cite{JaynesBretthorst:2003, Cramer:1946}), which in the limit of high signal-to-noise ratio (SNR) tend to the actual statistical errors. Depending on the actual structure of the signal likelihood function and the SNR this could underestimate the actual errors, see \emph{e.g.}~\cite{NicholsonVecchio:1998,BalasubramanianDhurandhar:1998,Vallisneri:2008} for a discussion in the context of GW observations. Nonetheless, this analysis serves as an important benchmark and can then be refined by carrying out actual analyses on mock data sets and by estimating the full set of (marginalised) posterior density functions of the parameters. The main results of the paper can be summarised as follows:
\begin{itemize}
\item At least three (not co-aligned) pulsars in the PTA are necessary to fully resolve the source parameters;
\item The statistical errors on the source parameters, at \emph{fixed} SNR, decrease as the number of pulsars in the array increases. The typical accuracy greatly improves by adding pulsars up to $\approx 20$; for larger arrays, the actual gain becomes progressively smaller because the pulsars ``fill the sky'' and the effectiveness of further triangulation saturates. In particular, for a fiducial case of an array of 100 pulsars randomly and uniformly distributed in the sky with optimal coherent SNR = 10 -- which may be appropriate for the SKA -- we find a typical GW source error box in the sky $\approx 40$ deg$^2$ and a fractional amplitude error of $\approx$ 30\%. The inclination and polarization angles can be determined within an error of $\sim 0.3$ rad, and the (constant) frequency is determined to sub-frequency resolution bin accuracy. These results are independent of the source gravitational-wave frequency.
\item When an anisotropic distribution of pulsars is considered, the typical source sky location accuracy improves linearly with the array sky coverage. The statistical errors on all the other parameters are essentially insensitive to the PTA sky coverage, as long as it covers more than $\sim 1$ srad.
\item The ongoing Parkes PTA aims at monitoring 20 pulsars with a 100 ns timing noise; the targeted pulsars are mainly located in the southern sky. A GW source in that part of the sky could be localized down to a precision of $\lesssim 10$ deg$^2$ at SNR$=10$, whereas in the northern hemisphere, the lack of monitored pulsars limits the error box to $\lower.5ex\hbox{\gtsima} 200$ deg$^2$. The median of the Parkes PTA angular resolution is $\approx 130\,(\mathrm{SNR}/10)^{-2}$ deg$^2$.
\end{itemize}
The paper is organised as follows. In Section II we describe the GW signal relevant to PTA and we introduce the quantities that come into play in the parameter estimation problem. A review of the Fisher information matrix technique and its application to the PTA case are provided in Section III. Section IV is devoted to the detailed presentation of the results, and in Section V we summarize the main findings of this study and point to future work. Unless otherwise specified, throughout the paper we use geometric units $G=c=1$.
\section{The signal}
\label{s:signal}
Observations of GWs using PTAs exploit the regularity of the time of arrival of radio pulses from pulsars. Gravitational radiation affects the arrival time of the electromagnetic signal by perturbing the null geodesics of photons traveling from a pulsar to the Earth. This was realized over thirty years ago~\cite{EstabrookWahlquist:1975,Sazhin:1978,Detweiler:1979,HellingsDowns:1983}, and the number and timing stability of radio pulsars known today and expected to be monitored with future surveys~\cite{man08,jan08,NANOGrav,laz09} make ensembles of pulsars -- PTAs -- ``cosmic detectors'' of gravitational radiation in the frequency range $\sim 10^{-9}\,\mathrm{Hz} - 10^{-6}\,\mathrm{Hz}$. Here we review the signal produced by a GW source in PTA observations.
Let us consider a GW metric perturbation $h_{ab}(t)$ in the transverse and traceless gauge (TT) described by the two independent (and time-dependent) polarisation amplitudes $h_+(t)$ and $h_\times(t)$ that carry the information about the GW source. Let us also indicate with $\hat\Omega$ the unit vector that identifies the direction of GW propagation (conversely, the direction to the GW source position in the sky is $-\hat\Omega$). The metric perturbation can therefore be written as:
\begin{equation}
h_{ab}(t,\hat\Omega) = e_{ab}^+(\hat\Omega) h_+(t,\hat\Omega) + e_{ab}^{\times}(\hat\Omega)\, h_\times(t,\hat\Omega),
\label{e:hab}
\end{equation}
where $e_{ab}^A(\hat\Omega)$ ($A = +\,,\times$) are the polarisation tensors, which are uniquely defined once one specifies the wave principal axes described by the unit vectors $\hat{m}$ and $\hat{n}$ as,
\begin{subequations}
\begin{align}
e_{ab}^+(\hat{\Omega}) &= \hat{m}_a \hat{m}_b - \hat{n}_a \hat{n}_b\,,
\label{e:e+}
\\
e_{ab}^{\times}(\hat{\Omega}) &= \hat{m}_a \hat{n}_b + \hat{n}_a \hat{m}_b\,.
\label{e:ex}
\end{align}
\end{subequations}
Let us now consider a pulsar emitting radio pulses with a frequency $\nu_0$. Radio waves propagate along the direction described by the unit vector $\hat{p}$, and the metric perturbation $h_{ab}$ along their path shifts the frequency of the pulses. For an observer at Earth (or at the Solar System Barycentre), the frequency is shifted according to the characteristic two-pulse function~\cite{EstabrookWahlquist:1975}
\begin{eqnarray}
z(t,\hat{\Omega}) & \equiv & \frac{\nu(t) - \nu_0}{\nu_0}
\nonumber\\
& = & \frac{1}{2} \frac{\hat{p}^a\hat{p}^b}{1+\hat{p}^a\hat{\Omega}_a}\Delta h_{ab}(t;\hat{\Omega})\,.
\label{e:z}
\end{eqnarray}
Here $\nu(t)$ is the received frequency (say, at the Solar System Barycentre), and
\begin{equation}
\Delta h_{ab}(t) \equiv h_{ab}(t_\mathrm{p},\hat{\Omega}) - h_{ab}(t,\hat{\Omega})
\label{e:deltah}
\end{equation}
is the difference between the metric perturbation at the pulsar -- with spacetime coordinates $(t_\mathrm{p},\vec{x}_p)$ -- and at the receiver -- with spacetime coordinates $(t,\vec{x})$. The quantity that is actually observed is the time-residual $r(t)$, which is simply the time integral of Eq.~(\ref{e:z}),
\begin{equation}
r(t) = \int_0^t dt' z(t',\hat{\Omega})\,.
\label{e:r}
\end{equation}
We can re-write Eq.~(\ref{e:z}) in the form
\begin{equation}
z(t,\hat{\Omega}) = \sum_A F^A(\hat{\Omega}) \Delta h_{A}(t;\hat{\Omega})\,,
\label{e:z1}
\end{equation}
where
\begin{equation}
F^A(\hat{\Omega}) \equiv \frac{1}{2} \frac{\hat{p}^a\hat{p}^b}{1+\hat{p}^a\hat{\Omega}_a} e_{ab}^A(\hat{\Omega})
\label{e:FA}
\end{equation}
is the ``antenna beam pattern'', see Eqs.~(\ref{e:hab}), (\ref{e:e+}) and~(\ref{e:ex}); here we use the Einstein summation convention for repeated indices. Using the definitions~(\ref{e:e+}) and~(\ref{e:ex}) for the wave polarisation tensors, it is simple to show that the antenna beam patterns depend on the three direction cosines $\hat{m} \cdot \hat{p}$, $\hat{n} \cdot \hat{p}$ and $\hat{\Omega} \cdot \hat{p}$:
\begin{subequations}
\begin{align}
F^+(\hat{\Omega}) & = \frac{1}{2} \frac{(\hat{m} \cdot \hat{p})^2 - (\hat{n} \cdot \hat{p})^2}{1 + \hat{\Omega} \cdot \hat{p}}\,,\\
F^\times(\hat{\Omega}) & = \frac{(\hat{m} \cdot \hat{p})\,(\hat{n} \cdot \hat{p})}{1 + \hat{\Omega} \cdot \hat{p}}\,.
\end{align}
\end{subequations}
Let us now consider a reference frame $(x,y,z)$ fixed to the Solar System Barycentre. The source location in the sky is defined by the usual polar angles $(\theta,\phi)$. The unit vectors that define the wave principal axes are given by (cf. Eqs. (B4) and (B5) in Appendix B of~\cite{Anderson-et-al:2001}; here we adopt the same convention used in high-frequency laser interferometric observations)
\begin{subequations}
\begin{align}
\vec{m} & =
(\sin\phi \cos\psi - \sin\psi \cos\phi \cos\theta) \hat{x}
\nonumber\\
& -
(\cos\phi \cos\psi + \sin\psi \sin\phi \cos\theta) \hat{y}
\nonumber\\
& +
(\sin\psi \sin\theta) \hat{z}\,,
\label{e:m}\\
\vec{n} & =
(-\sin\phi \sin\psi - \cos\psi \cos\phi \cos\theta) \hat{x}
\nonumber\\
& +
(\cos\phi \sin\psi - \cos\psi \sin\phi \cos\theta) \hat{y}
\nonumber\\
& +
(\cos\psi \sin\theta) \hat{z}\,,
\label{e:n}
\end{align}
\end{subequations}
where $\hat{x}$, $\hat{y}$ and $\hat{z}$ are the unit vectors along the axis of the reference frame, $x$, $y$, and $z$, respectively.
The angle $\psi$ is the wave polarisation angle, defined as the angle counter-clockwise about the direction of propagation from the line of nodes to the axis described by $\vec{m}$. The wave propagates in the direction $\hat\Omega = \vec{m} \times \vec{n}$, which is explicitly given by:
\begin{equation}
\hat{\Omega} =
- (\sin\theta \cos\phi)\, \hat{x}
- (\sin\theta \sin\phi)\, \hat{y}
- \cos\theta \hat{z}\,.
\end{equation}
Analogously, the unit vector
\begin{equation}
\hat{p}_\alpha =
(\sin\theta_\alpha \cos\phi_\alpha)\, \hat{x}
+ (\sin\theta_\alpha \sin\phi_\alpha)\, \hat{y}
+ \cos\theta_\alpha \hat{z}
\end{equation}
identifies the position in the sky of the $\alpha$-th pulsar using the polar angles $(\theta_\alpha,\phi_\alpha)$.
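These geometric quantities translate directly into code. The following Python sketch (ours, for illustration only; it is not part of any existing timing package) evaluates the antenna beam patterns $F^+$ and $F^\times$ of Eq.~(\ref{e:FA}) for given source angles $(\theta,\phi,\psi)$ and pulsar angles $(\theta_\alpha,\phi_\alpha)$, following the conventions of Eqs.~(\ref{e:m}) and~(\ref{e:n}):
\begin{verbatim}
import numpy as np

def antenna_patterns(theta, phi, psi, theta_p, phi_p):
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(phi), np.sin(phi)
    cps, sps = np.cos(psi), np.sin(psi)
    m = np.array([ sp*cps - sps*cp*ct,
                  -(cp*cps + sps*sp*ct),
                   sps*st])
    n = np.array([-sp*sps - cps*cp*ct,
                   cp*sps - cps*sp*ct,
                   cps*st])
    Omega = np.array([-st*cp, -st*sp, -ct])   # propagation direction
    p = np.array([np.sin(theta_p)*np.cos(phi_p),
                  np.sin(theta_p)*np.sin(phi_p),
                  np.cos(theta_p)])
    mp, nP, Op = m @ p, n @ p, Omega @ p
    Fplus = 0.5 * (mp**2 - nP**2) / (1.0 + Op)
    Fcross = mp * nP / (1.0 + Op)
    return Fplus, Fcross

print(antenna_patterns(1.0, 2.0, 0.3, 2.5, 5.0))
\end{verbatim}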
We will now derive the expression of the PTA signal, Eq.~(\ref{e:z}), produced by a circular, non-precessing binary system of MBHs emitting almost monochromatic radiation, \emph{i.e.}\ with negligible frequency drift during the observation time, $T\approx 10$ yr. The results are presented in Section~\ref{ss:timing-residuals}. In the next sub-section, we first justify the astrophysical assumptions.
\subsection{Astrophysical assumptions}
\label{ss:astrophysics}
Let us justify (and discuss the limitations of) the assumptions that we have made on the nature of the sources, which lead us to consider circular, non-precessing binary systems generating quasi-monochromatic radiation, before providing the result in Eq.~(\ref{researth}). We derive general expressions for the phase displacement introduced by the frequency drift and by the eccentricity-induced periastron precession, and the change in the orbital angular momentum direction caused by the precession induced by spin-orbit coupling. The size of each of these effects is then evaluated by considering a realistic (within our current astrophysical understanding) selected population of resolvable MBHBs taken from SVV. Throughout the paper we will consider binary systems with masses $m_1$ and $m_2$ ($m_2 \le m_1$), and \emph{chirp mass} ${\cal M}=m_1^{3/5}m_2^{3/5}/(m_1+m_2)^{1/5}$, emitting at a GW frequency $f$. We also define $M = m_1 + m_2$, $\mu = m_1 m_2/M$ and $q=m_2/m_1$, the total mass, the reduced mass and the mass ratio, respectively. Our notation is such that all the quantities are the observed (redshifted) ones, such that \emph{e.g.} the {\it intrinsic} (rest-frame) mass of the primary MBH is $m_{1,r}=m_1/(1+z)$ and the {\it rest frame} GW frequency is $f_r=f(1+z)$. We normalize all the results to
\begin{align}
& M_9 = \frac{M}{10^9\,M_{\odot}}\,,
\nonumber\\
& {\cal M}_{8.5} = \frac{{\cal M}}{10^{8.5}\,M_{\odot}}\,,
\nonumber\\
& f_{50} = \frac{f}{50\,{\rm nHz}}\,,
\nonumber\\
& T_{10} = \frac{T}{10\,{\rm yr}}\,,
\nonumber
\end{align}
which are the typical values for individually resolvable sources found in SVV, and the typical observation timespan.
\subsubsection{Gravitational wave frequency evolution}
A binary with the properties defined above, evolves due to radiation reaction through an adiabatic in-spiral phase, with \emph{GW frequency} $f(t)$ changing at a rate (at the leading Newtonian order)
\begin{equation}
\frac{df}{dt} = \frac{96}{5}\pi^{8/3} {\cal M}^{5/3} f^{11/3}\,.
\label{e:dfdt}
\end{equation}
The in-spiral phase terminates at the last stable orbit (LSO), that for a Schwarzschild black hole in circular orbit corresponds to the frequency
\begin{equation}
f_\mathrm{LSO} = 4.4\times 10^{-6}\,M_9^{-1}\,\,\mathrm{Hz}\,.
\label{e:flso}
\end{equation}
The observational window of PTAs is set at low frequency by the overall duration of the monitoring of pulsars $T \approx 10$ yr, and at high frequency by the cadence of the observation, $\approx 1$ week: the PTA observational window is therefore in the range $\sim 10^{-9} - 10^{-6}$ Hz. In SVV we explored the physical properties of MBHBs that are likely to be observed in this frequency range: PTAs will resolve binaries with $m_{1,2} \lower.5ex\hbox{\gtsima} 10^8 M_\odot$ and in the frequency range $\approx 10^{-8} - 10^{-7}$ Hz. In this mass-frequency region, PTAs will observe the in-spiral portion of the coalescence of a binary system and one can ignore post-Newtonian corrections to the amplitude and phase evolution, as the velocity of the binary is:
\begin{align}
v & = (\pi f M)^{2/3}\,,
\nonumber\\
& = 1.73\times 10^{-2} M_9^{2/3}f_{50}^{2/3}\,.
\label{e:v}
\end{align}
Stated in different terms, the systems will be far from plunge, as the time to coalescence for a binary radiating at frequency $f$ is (at the leading Newtonian quadrupole order and for a circular orbit system)
\begin{equation}
t_\mathrm{coal} \simeq 4\times 10^3\,{\cal M}_{8.5}^{-5/3}\,f_{50}^{-8/3}\,\mathrm{yr}\,.
\label{e:tcoal}
\end{equation}
As a consequence the frequency evolution during the observation time is going to be small, and can be neglected. In fact, it is simple to estimate the total frequency shift of radiation over the observation period
\begin{equation}
\Delta f \approx \dot{f} T \approx 0.05\,{\cal M}_{8.5}^{5/3}\,f_{50}^{11/3}\,T_{10}\,\,\, \mathrm{nHz}\,,
\label{e:fdrift}
\end{equation}
which is negligible with respect to the frequency resolution bin $\approx 3 T_{10}^{-1}$ nHz; correspondingly, the additional phase contribution
\begin{equation}
\Delta \Phi \approx \pi \dot{f} T^2 \approx 0.04\,{\cal M}_{8.5}^{5/3}\,f_{50}^{11/3}\,T_{10}^2\,\,\, \mathrm{rad},
\label{e:phasedrift}
\end{equation}
is much smaller than 1 rad. Eqs.~(\ref{e:fdrift}) and~(\ref{e:phasedrift}) clearly show that it is more than legitimate in this initial study to ignore any frequency derivative, and treat gravitational radiation as \emph{monochromatic} over the observational period.
\subsubsection{Spin effects}
We now justify our assumption of neglecting the spins in the modelling of the waveform. From an astrophysical point of view, very little precise information about the spin of MBHs can be extracted directly from observations. However, several theoretical arguments support the existence of a population of rapidly spinning MBHs. If coherent accretion from a thin disk \cite{ss73} is the dominant growth mechanism, then MBH spin-up is inevitable \cite{thorne74}; jet production in active galactic nuclei is best explained by the presence of rapidly spinning MBHs \cite{nemmen07}; in the hierarchical formation context, though MBHB mergers tend to spin down the remnant \cite{hughes03}, detailed growth models that take into account both mergers and accretion lead to populations of rapidly spinning MBHs \cite{vp05,bv08}. Spins have two main effects on the gravitational waveforms emitted during the in-spiral: (i) they affect the phase evolution~\cite{BlanchetEtAl:1995}, and (ii) they cause the orbital plane to precess through spin-orbit and spin-spin coupling~\cite{ApostolatosEtAl:1994,Kidder:1995}. The effect of the spins on the phase evolution is completely negligible for the astrophysical systems observable by PTAs: the lowest-order spin contribution to the phase enters at the post$^{1.5}$-Newtonian order, i.e.\ it is suppressed by a factor $v^3$, and we have already shown that $v \ll 1$, see Eq.~(\ref{e:v}). Precession would provide a characteristic imprint on the signal through amplitude and phase modulations produced by the orbital plane precession and, as a consequence, the time-dependent polarisation of the waves as observed by a PTA. It is fairly simple to quantify the change of the orientation of the orbital angular momentum unit vector $\hat{L}$ during a typical observation. The rate of change of the precession angle is at the leading order:
\begin{equation}
\frac{d\alpha_p}{dt} = \left(2 + \frac{3 m_2}{2 m_1}\right) \frac{L + S}{a^3}
\label{e:dalphadt}
\end{equation}
where $L = \sqrt{a \mu^2 M}$ is the magnitude of the orbital angular momentum and $S$ is the total intrinsic spin of the black holes. As long as $\mu/M\gg v/c$, we have that $L \gg S$. This is always the case for resolvable MBHBs; we find indeed that these systems are in general characterised by $q\gtrsim0.1$ (therefore $\mu/M\gtrsim0.1$), while from Eq.~(\ref{e:v}) we know that in general $v/c\sim0.01$. In this case, from Eq.~(\ref{e:dalphadt}) one obtains
\begin{eqnarray}
\Delta \alpha_p & \approx& 2\pi^{5/3} \left(1 + \frac{3 m_2}{4 m_1}\right) \mu M^{-1/3} f^{5/3} T
\nonumber\\
& \approx & 0.8 \left(1 + \frac{3 m_2}{4 m_1}\right)\left(\frac{\mu}{M}\right)M_9^{2/3}f_{50}^{5/3}T_{10}\,\mathrm{rad}\,,
\label{spin}
\end{eqnarray}
which is independent of $S$. The effect is maximum for equal mass binaries, $m_1 = m_2$, ${\mu}/{M} = 0.25$; in this case $\Delta \alpha_p \approx 0.3$ rad. It is therefore clear that in general spins will not play an important role, and we will neglect their effect in the modeling of signals at the PTA output. It is however interesting to notice that for a $10^9 M_\odot$ binary system observed for 10 years at $\approx 10^{-7}$ Hz, which is consistent with astrophysical expectations (see SVV) the orientation of the orbital angular momentum would change by $\Delta \alpha_p \approx 1$ rad. The Square-Kilometre-Array has therefore a concrete chance of detecting this signature, and to provide direct insights onto MBH spins.
\subsubsection{Eccentricity of the binary}
Let us finally consider the assumption of circular orbits, and the possible effects of neglecting eccentricity in the analysis. The presence of a residual eccentricity at orbital separations corresponding to the PTA observational window has two consequences on the observed signal: (i) the power of radiation is not confined to the harmonic at twice the orbital frequency but is spread on the (in principle infinite) set of harmonics at integer multiples of the inverse of the orbital period, and (ii) the source periapse precesses in the plane of the orbit at a rate
\begin{align}
\frac{d\gamma}{dt} & = 3\pi f \frac{\left(\pi f M\right)^{2/3}}{\left(1 - e^2\right)}\,,
\nonumber\\
& \simeq 3.9\times10^{-9} \left(1 - e^2\right)^{-1}\,M_{9}^{2/3}\,f_{50}^{5/3}\,\mathrm{rad\,\,s}^{-1}
\label{e:dgammadt}
\end{align}
which introduces additional modulations in phase and (as a consequence) amplitude in the signal recorded at the Earth. In Eq.~(\ref{e:dgammadt}) $\gamma(t)$ is the angle of the periapse measured with respect to a fixed frame attached to the source. We now briefly consider the two effects in turn. The presence of eccentricity "splits" each polarisation amplitude $h_+(t)$ and $h_\times(t)$ into harmonics according to (see \emph{e.g.} Eqs.~(5-6) in Ref.~\cite{WillemsVecchioKalogera:2008} and references therein):
\begin{eqnarray}
h^{+}_n(t) & = & A \Bigl\{-(1 + \cos^2\iota)u_n(e) \cos\left[\frac{n}{2}\,\Phi(t) + 2 \gamma(t)\right]
\nonumber \\
& & -(1 + \cos^2\iota) v_n(e) \cos\left[\frac{n}{2}\,\Phi(t) - 2 \gamma(t)\right]
\nonumber \\
& & + \sin^2\iota\, w_n(e) \cos\left[\frac{n}{2}\,\Phi(t)\right] \Bigr\},
\label{e:h+}\\
h^{\times}_{n}(t) & = & 2 A \cos\iota \Bigl\{u_n(e) \sin\left[\frac{n}{2}\,\Phi(t) + 2 \gamma(t)\right]
\nonumber\\
& & + v_n(e) \sin\left[\frac{n}{2}\,\Phi(t) - 2 \gamma(t)\right] \Bigr\}\,,
\label{e:hx}
\end{eqnarray}
where
\begin{equation}
\Phi(t) = 2\pi\int^t f(t') dt'\,,
\label{e:Phi}
\end{equation}
is the GW phase and $f(t)$ the instantaneous GW frequency corresponding to twice the inverse of the orbital period. The source inclination angle $\iota$ is defined as $\cos\iota = -\hat{\Omega}^a {\hat L}_a$, where ${\hat L}$ is the unit vector that describes the orientation of the source orbital plane, and the amplitude coefficients $u_n(e)$, $v_n(e)$, and $w_n(e)$ are linear combinations of the Bessel functions of the first kind $J_{n}(ne)$, $J_{n\pm 1}(ne)$ and $J_{n\pm 2}(ne)$. For an astrophysically plausible range of eccentricities $e\lower.5ex\hbox{\ltsima} 0.3$ -- see Fig.~\ref{fig1a} and the discussion below -- $|u_n(e)| \gg |v_n(e)|\,,|w_n(e)|$ and most of the power will still be confined to the $n=2$ harmonic at twice the orbital frequency, see \emph{e.g.} Fig.~3 of Ref.~\cite{PetersMathews:1963}. On the other hand, the change of the periapse position even for low eccentricity values may introduce significant phase shifts over coherent observations lasting several years. In fact, the phase of the recorded signal is shifted by an additional contribution $2\gamma(t)$. This means that the frequency of the signal recorded at the instrument corresponds to $f(t) + {\dot{\gamma}}/{\pi}$ and differs by a measurable amount from $f(t)$. Nonetheless, one can still model the radiation observed at the PTA output as monochromatic, as long as the periapse precession term ${\dot{\gamma}}/{\pi}$ introduces a phase shift $\Delta \Phi_\gamma$ quadratic in time that is $\ll 1$ rad, which is equivalent to the condition that we have imposed on the change of the phase produced by the frequency shift induced by radiation reaction, see Eqs.~(\ref{e:fdrift}) and~(\ref{e:phasedrift}). From Eqs.~(\ref{e:dgammadt}) and~(\ref{e:dfdt}), this condition yields:
\begin{align}
\Delta \Phi_\gamma & \approx \frac{d^2\gamma}{dt^2} T^2 = \frac{96\pi^{13/3}}{\left(1 - e^2\right)} M^{2/3}{\cal M}^{5/3} f^{13/3} T^2
\nonumber\\
& \approx 2\times10^{-3} \left(1 - e^2\right)^{-1} M_9^{2/3}{\cal M}_{8.5}^{5/3}\,f_{50}^{13/3}\,T_{10}^2\,\mathrm{rad}\,.
\label{e:dgamma}
\end{align}
We therefore see that the effect of the eccentricity will be in general negligible.
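The three scalings above are straightforward to evaluate at the fiducial values. The short Python sketch below (ours; it simply re-evaluates Eqs.~(\ref{e:phasedrift}), (\ref{spin}) and~(\ref{e:dgamma}) in geometric units) reproduces the quoted order-of-magnitude estimates for an equal-mass, circular binary:
\begin{verbatim}
import numpy as np

MSUN = 4.925e-6                 # G Msun / c^3 in seconds
YR = 3.156e7                    # one year in seconds

M = 1e9 * MSUN                  # total mass [s]
Mc = 10**8.5 * MSUN             # chirp mass [s]
f, T = 50e-9, 10 * YR           # GW frequency [Hz], time span [s]

fdot = 96.0/5.0 * np.pi**(8/3) * Mc**(5/3) * f**(11/3)
dPhi = np.pi * fdot * T**2                       # frequency-drift phase
dAlpha = (2 * np.pi**(5/3) * (1 + 0.75) * 0.25   # equal masses:
          * M**(2/3) * f**(5/3) * T)             # (1+3q*3/4) mu/M terms
dPhiG = (96 * np.pi**(13/3) * M**(2/3)           # periastron advance,
         * Mc**(5/3) * f**(13/3) * T**2)         # e = 0

print("frequency drift  dPhi       ~ %.3f rad" % dPhi)    # ~0.04
print("spin precession  dAlpha_p   ~ %.3f rad" % dAlpha)  # ~0.3
print("periastron       dPhi_gamma ~ %.1e rad" % dPhiG)   # ~2e-3
\end{verbatim}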
\subsubsection{Tests on a massive black hole population}
\begin{figure}
\centerline{\psfig{file=f1.ps,width=84.0mm}}
\caption{Testing the circular monochromatic non--spinning binary approximation. Upper left panel: distribution of the phase displacement $\Delta \Phi$ introduced by the frequency drift of the binaries. Upper right panel: change in the orbital angular momentum direction $\Delta \alpha_p$ introduced by the spin-orbit coupling. Lower left panel: eccentricity distribution of the systems. Lower right panel: distribution of phase displacement $\Delta \Phi_\gamma$ induced by relativistic periastron precession due to non-zero eccentricity of the binaries. The distributions are constructed considering all the resolvable MBHBs with residuals $>1$ns (solid lines), 10ns (long--dashed lines) and 100ns (short--dashed lines), found in 1000 Monte Carlo realizations of the Tu-SA models described in SVV, and they are normalised so that their integrals are unity.}
\label{fig1a}
\end{figure}
We can quantify more rigorously whether the assumption of a monochromatic signal at the PTA output is justified, by evaluating the distributions of $\Delta \Phi$, $\Delta \alpha_p$ and $\Delta \Phi_\gamma$ on an astrophysically motivated population of resolvable MBHBs. We consider the Tu-SA MBHB population model discussed in SVV (see Section 2.2 of SVV for a detailed description) and we explore the orbital evolution, including a possible non-zero eccentricity of the observable systems. The binaries are assumed to be in circular orbit at the moment of pairing and are self-consistently evolved taking into account stellar scattering and GW emission~\cite{Sesana-prep}. We generate 1000 Monte Carlo realisations of the entire population of GW signals in the PTA band and we collect the individually resolvable sources generating coherent timing residuals greater than 1, 10 and 100 ns, respectively, over 10 years. In Fig.~\ref{fig1a} we plot the distributions relevant to this analysis. We see from the two upper panels that, in general, treating the system as "monochromatic" with negligible spin effects is a good approximation. If we consider a 1 ns threshold (solid lines), the phase displacement $\Delta \Phi$ introduced by the frequency drift and the orbital angular momentum direction change $\Delta \alpha_p$ due to spin-orbit coupling are always $<1$ rad, and in $\sim 80$\% of the cases are $<0.1$ rad. The lower left panel of Fig.~\ref{fig1a} shows the eccentricity distribution of the same sample of individually resolvable sources. Almost all the sources are characterised by $e \lower.5ex\hbox{\ltsima} 0.1$ with a long tail extending down to $e \lower.5ex\hbox{\ltsima} 10^{-3}$ in the PTA band. The typical periastron-precession-induced additional phase $2\dot{\gamma}T$ can be larger than 1 rad. However, this additional contribution grows linearly with time and, as discussed before, will result in a measured frequency which differs from the intrinsic one by a small amount $\dot{\gamma}/\pi\lesssim 1$ nHz. The ``non-monochromatic'' phase contribution $\Delta \Phi_\gamma$ that changes quadratically with time and is described by Eq.~(\ref{e:dgamma}) is instead plotted in the lower right panel of Fig.~\ref{fig1a}. Values of $\Delta \Phi_\gamma$ are typically of the order $10^{-3}$, completely negligible in the context of our analysis. Note that, as a general trend, when the threshold in the source-induced timing residuals is increased to 10 and 100 ns, all the effects tend to be suppressed. This is because resolvable sources generating larger residuals are usually found at lower frequencies, and all the effects have a steep dependence on frequency -- see Eqs.~(\ref{e:phasedrift}),~(\ref{spin}) and~(\ref{e:dgamma}). This means that none of the effects considered above should be an issue for ongoing PTA campaigns, which aim to reach a total sensitivity of $\gtrsim30$ ns, but they may possibly play a role in recovering sources at the level of a few ns, which is relevant for the planned SKA. Needless to say, a residual eccentricity at the time of pairing may result in larger values of $e$ than those shown in Fig.~\ref{fig1a}~\cite{Sesana-prep}, causing a significant scatter of the signal power among several different harmonics; however, the presence of gas may lead to circularization before the binaries reach a frequency $\approx 10^{-9}$ Hz (see, e.g.,~\cite{dot07}).
Unfortunately, little is known about the eccentricity of subparsec massive binaries, and here we tackle the case of circular systems, deferring the study of precessing eccentric binaries to future work.
\subsection{Timing residuals}
\label{ss:timing-residuals}
\begin{figure}
\centerline{\psfig{file=f2.ps,width=84.0mm}}
\caption{Normalized distribution of $\Delta f_{\alpha}$ (see text) for the same sample of MBHBs considered in Fig. \ref{fig1a}, assuming observations with 100 isotropically distributed pulsars in the sky at a distance of 1 kpc. The vertical dotted line marks the width of the array's frequency resolution bin $\Delta f_r=1/T$ ($\approx 3\times 10^{-9}$Hz for $T=10$yr).}
\label{fig1b}
\end{figure}
We have shown that the assumption of a circular, monochromatic, non-precessing binary is astrophysically reasonable, at least for this initial exploratory study. We now specify the signal observed at the output, Eq.~(\ref{e:r}), in this approximation. The two independent polarisation amplitudes generated by a binary system, Eqs.~(\ref{e:h+}) and~(\ref{e:hx}), can be written as:
\begin{subequations}
\begin{align}
h_+(t) & = A_\mathrm{gw} a(\iota) \cos\Phi(t)\,,
\label{e:h+1}
\\
h_{\times}(t) &= A_\mathrm{gw} b(\iota) \sin\Phi(t)\,,
\label{e:hx1}
\end{align}
\end{subequations}
where
\begin{equation}
A_\mathrm{gw}(f) = 2 \frac{{\cal M}^{5/3}}{D}\,\left[\pi f(t)\right]^{2/3}
\label{e:Agw}
\end{equation}
is the GW amplitude, $D$ the luminosity distance to the GW source, $\Phi(t)$ is the GW phase given by Eq. (\ref{e:Phi}), and $f(t)$ the instantaneous GW frequency (twice the inverse of the orbital period). The two functions
\begin{subequations}
\begin{align}
a(\iota) & = 1 + \cos^2 \iota
\label{e:aiota}
\\
b(\iota) &= -2 \cos\iota
\label{e:biota}
\end{align}
\end{subequations}
depend on the source inclination angle $\iota$, defined in the previous Section.
As described in Section II, Eqs.~(\ref{e:deltah}) and~(\ref{e:r}), the response function of each individual pulsar $\alpha$ consists of two terms, namely, the perturbation registered at the Earth at the time $t$ of data collection ($h_{ab}(t,\hat{\Omega})$), and the perturbation registered at the pulsar at a time $t-\tau_\alpha$ ($h_{ab}(t-\tau_\alpha,\hat{\Omega})$), where $\tau_\alpha$ is the light-travel-time from the pulsar to the Earth given by:
\begin{eqnarray}
\tau_\alpha & = & L_\alpha (1 + \hat{\Omega} \cdot \hat{p}_\alpha)
\nonumber\\
& \simeq & 1.1\times 10^{11}\,\frac{L_\alpha}{1\,\mathrm{kpc}}\,(1 + \hat{\Omega} \cdot \hat{p}_\alpha)\,\mathrm{s},
\end{eqnarray}
where $L_\alpha$ is the distance to the pulsar. We can therefore formally write the observed timing residuals, Eq.~(\ref{e:r}), for each pulsar $\alpha$ as:
\begin{equation}
r_\alpha(t) = r_\alpha^{(P)}(t) + r_\alpha^{(E)}(t)\,,
\label{e:r1}
\end{equation}
where $P$ and $E$ label the ``pulsar'' and ``Earth'' contribution, respectively. During the time $\tau_\alpha$ the frequency of the source -- although "monochromatic" over the time of observation $T$ of several years -- changes by
\begin{equation}
\Delta f_{\alpha}=\int_{t-\tau_\alpha}^{t} \frac{df}{dt} dt \sim \frac{df}{dt}\tau_\alpha \approx 15\,{\cal M}_{8.5}^{5/3}f_{50}^{11/3}\tau_{\alpha,1}\,\,\,\mathrm{nHz},
\end{equation}
where $\tau_{\alpha,1}$ is the pulsar-Earth light-travel-time normalized to a distance of 1 kpc. The frequency shift $\Delta f_{\alpha}$ depends both on the parameters of the source (emission frequency and chirp mass) and the properties of the pulsar (distance and sky location with respect to the source). We can quantify this effect over an astrophysically plausible sample of GW sources by considering the population shown in Fig.~\ref{fig1a}. Let us consider the same set of resolvable sources as above, and assume detection with a PTA of 100 pulsars randomly distributed in the sky, but all at a distance of 1 kpc. For each source we consider all the $\Delta f_{\alpha}$ related to each pulsar and we plot the results in Fig.~\ref{fig1b}. The distribution has a peak around $\sim5\times 10^{-8}$ Hz, which is $\sim 10$ times larger than the typical frequency resolution bin for an observing time $T\approx 10$ yr. This means that \emph{the signal associated with each pulsar generates at the PTA output two monochromatic terms at two distinct frequencies.} All the "Earth-terms" corresponding to each individual pulsar share the same frequency and phase. They can therefore be coherently summed across the array, building up a distinct monochromatic peak which is not affected by the pulsar terms (also known as "self-noise"), which usually fall at much lower frequencies. The contribution to the Earth term from each individual pulsar can be written as
\begin{eqnarray}
r_\alpha^{(E)}(t) & = & R \,[a\, F^+_\alpha\,(\sin\Phi(t)-\sin\Phi_0)
\nonumber\\
& - & b\, F^\times_\alpha(\cos\Phi(t)-\cos\Phi_0)\,] ,
\label{researth}
\end{eqnarray}
with
\begin{equation}
R=\frac{A_{\rm gw}}{2\pi f}
\label{erre}
\end{equation}
and $\Phi(t)$ given by Eq. (\ref{e:Phi}). The Earth timing residuals are therefore described by a 7-dimensional vector encoding all (and only) the parameters of the source:
\begin{equation}
\vec{\lambda} = \{R,\theta,\phi,\psi,\iota,f,\Phi_0\}\,.
\label{par}
\end{equation}
Conversely, each individual pulsar term is characterized by a different amplitude, frequency and phase, which crucially \emph{depend also on the poorly constrained distance $L_\alpha$ to the pulsar}. In order to take advantage of the power contained in the pulsar term, one needs to introduce an additional parameter for each pulsar in the PTA. As a consequence, this turns a 7-parameter reconstruction problem into a $7+M$ parameter problem. More details about the PTA response to GWs are given in Appendix A. In this paper, we consider only the Earth-term (at the expense of a modest loss in total SNR), given by Eq.~(\ref{researth}), which is completely specified by the 7-parameter vector~(\ref{par}). At present, it is not clear whether it would also be advantageous to include the pulsar terms in the analysis, since they require the addition of $M$ unknown search parameters. This is an open issue that deserves further investigation and will be considered in a future paper.
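As an illustration of how compact the Earth-term model is, the Python sketch below (ours; the sampling cadence and the fiducial parameter values are arbitrary assumptions) evaluates Eq.~(\ref{researth}) for a single pulsar with precomputed antenna patterns:
\begin{verbatim}
import numpy as np

def earth_term_residual(t, R, f, Phi0, iota, Fplus, Fcross):
    # Earth-term timing residual for one pulsar
    a = 1.0 + np.cos(iota)**2            # a(iota)
    b = -2.0 * np.cos(iota)              # b(iota)
    Phi = 2.0 * np.pi * f * t + Phi0     # monochromatic phase
    return R * (a * Fplus * (np.sin(Phi) - np.sin(Phi0))
                - b * Fcross * (np.cos(Phi) - np.cos(Phi0)))

# ten years of roughly weekly sampling; R = 100 ns, f = 50 nHz
t = np.arange(0.0, 10 * 3.156e7, 7 * 86400.0)
r = earth_term_residual(t, R=1e-7, f=50e-9, Phi0=0.3,
                        iota=0.5, Fplus=0.2, Fcross=-0.1)
print("rms residual: %.1f ns over %d epochs"
      % (np.std(r) * 1e9, len(t)))
\end{verbatim}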
\section{Parameter estimation}
\label{s:fim}
In this section we briefly review the basic theory and key equations regarding the estimate of the statistical errors that affect the measurements of the source parameters. For a comprehensive discussion of this topic we refer the reader to~\cite{JaynesBretthorst:2003}.
The whole data set collected using a PTA consisting of $M$ pulsars can be schematically represented as a vector
\begin{equation}
\vec{d} = \left\{d_1, d_2, \dots, d_M\right\}\,,
\label{e:vecd}
\end{equation}
where the data from the monitoring of each pulsar $(\alpha = 1,\dots,M)$ are given by
\begin{equation}
d_\alpha(t) = n_\alpha(t) + r_\alpha(t;\vec{\lambda})\,.
\label{e:da}
\end{equation}
In the previous equation $r_\alpha(t;\vec{\lambda})$, given by Eq.~(\ref{researth}), is the GW contribution to the timing residuals of the $\alpha$-th pulsar (the signal) -- to simplify notation we have dropped (and will do so from now on) the index "E", but it should be understood that, as stressed in the previous section, we consider only the Earth-term in the analysis -- and $n_\alpha(t)$ is the noise that affects the observations. For this analysis we make the usual (simplifying) assumption that $n_\alpha$ is a zero-mean Gaussian and stationary random process characterised by the one-sided power spectral density $S_\alpha(f)$.
The inference process in which we are interested in this paper is how well one can infer the actual value of the unknown parameter vector $\vec\lambda$, Eq.~(\ref{par}), based on the data $\vec{d}$, Eq.~(\ref{e:vecd}), and any prior information on $\vec\lambda$ available before the experiment. Within the Bayesian framework, see \emph{e.g.}~\cite{bayesian-data-analysis}, one is therefore interested in deriving the posterior probability density function (PDF) $p(\vec\lambda | \vec d)$ of the unknown parameter vector given the data set and the prior information. Bayes' theorem yields
\begin{equation}
p(\vec\lambda | \vec d) = \frac{p(\vec\lambda)\,p(\vec d|\vec\lambda)}{p(\vec d)}\,,
\label{e:posterior}
\end{equation}
where $p(\vec d|\vec\lambda)$ is the likelihood function, $p(\vec\lambda)$ is the prior probability density of $\vec\lambda$, and $p(\vec d)$ is the marginal likelihood or evidence. In the neighborhood of the maximum-likelihood estimate $\hat{{\vec \lambda}}$, the likelihood function can be approximated as a multi-variate Gaussian distribution,
\begin{equation}
p(\vec\lambda | \vec d) \propto p(\vec\lambda)
\exp{\left[-\frac{1}{2}\Gamma_{ab} \Delta\lambda_a \Delta\lambda_b\right]}\,,
\end{equation}
where $ \Delta\lambda_a = \hat{\lambda}_a - {\lambda}_a$ and the matrix $\Gamma_{ab}$ is the Fisher information matrix; here the indices $a,b = 1,\dots, 7$ label the components of $\vec{\lambda}$. Note that we have used Einstein's summation convention (and we do not distinguish between covariant and contravariant indices). In the limit of large SNR, $\hat{{\vec \lambda}}$ tends to ${{\vec \lambda}}$, and the inverse of the Fisher information matrix provides a lower limit to the error covariance of unbiased estimators of ${{\vec \lambda}}$, the so-called Cramer-Rao bound~\cite{Cramer:1946}. The variance-covariance matrix is simply the inverse of the Fisher information matrix, and its elements are
\begin{subequations}
\begin{eqnarray}
\sigma_a^2 & = & \left(\Gamma^{-1}\right)_{aa}\,,
\label{e:sigma}
\\
c_{ab} & = & \frac{\left(\Gamma^{-1}\right)_{ab}}{\sqrt{\sigma_a^2\sigma_b^2}}\,,
\label{e:cab}
\end{eqnarray}
\end{subequations}
where $-1\le c_{ab} \le +1$ ($\forall a,b$) are the correlation coefficients. We can therefore interpret $\sigma_a^2$ as a way of quantifying the expected uncertainties on the measurements of the source parameters. We refer the reader to~\cite{Vallisneri:2008} and references therein for an in-depth discussion of the interpretation of the inverse of the Fisher information matrix in the context of assessing the prospects for the estimation of the source parameters in GW observations. Here it suffices to point out that MBHBs will likely be observed at the detection threshold (see SVV), and the results presented in Section~\ref{s:results} should indeed be regarded as lower limits to the statistical errors that one can expect to obtain in real observations, see \emph{e.g.}~\cite{NicholsonVecchio:1998,BalasubramanianDhurandhar:1998,Vallisneri:2008}.
One of the parameters that is of particular interest is the source sky location, and we will discuss in the next Section the ability of PTAs to define an error box in the sky. Following Ref.~\cite{Cutler:1998}, we define the PTA angular resolution, or source error box, as
\begin{equation}
\Delta \Omega=2\pi\sqrt{(\sin\theta\, \Delta \theta\, \Delta \phi)^2-(\sin\theta\, c^{\theta\phi})^2}\,;
\label{domega}
\end{equation}
with this definition, the probability for a source to lie \emph{outside} the solid angle $\Delta \Omega_0$ is $e^{-\Delta \Omega_0/\Delta \Omega}$~\cite{Cutler:1998}.
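A minimal numerical check of Eq.~(\ref{domega}) can be written as follows; the values of $\theta$, of the $1\sigma$ errors and of the $\theta$--$\phi$ covariance (which plays the role of $c^{\theta\phi}$ in the equation) are invented for the purpose of the example.
\begin{verbatim}
import numpy as np

theta         = np.pi / 3     # illustrative source colatitude [rad]
dtheta, dphi  = 0.05, 0.08    # illustrative 1-sigma errors [rad]
cov_theta_phi = 1.5e-3        # illustrative theta-phi covariance

domega = 2.0 * np.pi * np.sqrt((np.sin(theta) * dtheta * dphi)**2
                               - (np.sin(theta) * cov_theta_phi)**2)

# probability that the source lies outside a solid angle Omega_0
omega_0 = 2.0 * domega
p_outside = np.exp(-omega_0 / domega)
\end{verbatim}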
We now turn to the actual computation of the Fisher information matrix $\Gamma_{ab}$. First of all we note that in observations of multiple pulsars in the array one can safely consider the data from different pulsars as independent, and the likelihood function of $\vec{d}$ is therefore
\begin{eqnarray}
p(\vec d|\vec\lambda) & = & \prod_\alpha p(d_\alpha|\vec\lambda)
\nonumber\\
& \propto & \exp{\left[-\frac{1}{2}\Gamma_{ab} \Delta\lambda_a \Delta\lambda_b\right]}\,,
\end{eqnarray}
where the Fisher information matrix that characterises the \emph{joint} observations in the equation above is simply given by
\begin{equation}
\Gamma_{ab} = \sum_\alpha \Gamma_{ab}^{(\alpha)}\,.
\end{equation}
$\Gamma_{ab}^{(\alpha)}$ is the Fisher information matrix relevant to the observation of the $\alpha$-th pulsar, and is simply related to the derivatives of the GW signal with respect to the unknown parameters integrated over the observation:
\begin{equation}
\Gamma_{ab}^{(\alpha)} = \left(\frac{\partial r_\alpha(t; \vec\lambda)}{\partial\lambda_a} \Biggl|\Biggr.\frac{\partial r_\alpha(t; \vec\lambda)}{\partial\lambda_b}
\right)\,,
\label{e:Gamma_ab_a}
\end{equation}
where the inner product between two functions $x(t)$ and $y(t)$ is defined as
\begin{subequations}
\begin{eqnarray}
(x|y) & = & 2 \int_{0}^{\infty} \frac{\tilde x^*(f) \tilde y(f) + \tilde x(f) \tilde y^*(f)}{S_n(f)} df\,,
\label{e:innerxy}
\\
& \simeq & \frac{2}{S_0}\int_0^{T} x(t) y(t) dt\,,
\label{e:innerxyapprox}
\end{eqnarray}
\end{subequations}
and
\begin{equation}
\tilde x(f) = \int_{-\infty}^{+\infty} x(t)\, e^{-2\pi i f t}\, dt
\label{e:tildex}
\end{equation}
is the Fourier transform of a generic function $x(t)$. The second equality, Eq.~(\ref{e:innerxyapprox}), holds only in the case in which the noise spectral density is approximately constant (with value $S_0$) across the frequency region that provides support for the two functions $\tilde x(f)$ and $\tilde y(f)$. Eq.~(\ref{e:innerxyapprox}) is appropriate for computing the scalar product for observations of gravitational radiation from MBHBs whose frequency evolution is negligible during the observation time, which is astrophysically justified as we have shown in Section~\ref{s:intro}.
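In practice, the derivatives entering Eq.~(\ref{e:Gamma_ab_a}) and the inner product of Eq.~(\ref{e:innerxyapprox}) can be evaluated numerically. The following deliberately simplified Python sketch uses a toy monochromatic residual model (a stand-in for Eq.~(\ref{researth})) and central finite differences; the parameter values and the noise level are invented for illustration.
\begin{verbatim}
import numpy as np

T, S0 = 5.0 * 3.15e7, 1.0e-14     # observation time [s], flat PSD (illustrative)
t = np.linspace(0.0, T, 4096)

def residual(lam, t):
    # Toy model r(t; lambda) = A sin(2 pi f t + phi); lam = (A, f, phi).
    A, f, phi = lam
    return A * np.sin(2.0 * np.pi * f * t + phi)

def inner(x, y):
    # Eq. (e:innerxyapprox): (x|y) ~ (2/S0) * int_0^T x(t) y(t) dt
    return 2.0 / S0 * np.trapz(x * y, t)

def fisher(lam, eps=1.0e-6):
    n, dr = len(lam), []
    for a in range(n):               # central-difference derivatives
        lp, lm = np.array(lam), np.array(lam)
        h = eps * max(abs(lam[a]), 1.0)
        lp[a] += h; lm[a] -= h
        dr.append((residual(lp, t) - residual(lm, t)) / (2.0 * h))
    return np.array([[inner(dr[a], dr[b]) for b in range(n)]
                     for a in range(n)])

gamma = fisher((1.0e-7, 5.0e-8, 1.0))  # illustrative (A [s], f [Hz], phi)
\end{verbatim}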
In terms of the inner product $(.|.)$ -- Eqs.~(\ref{e:innerxy}) and~(\ref{e:innerxyapprox}) -- the optimal SNR at which a signal can be observed using the $\alpha$-th pulsar is
\begin{equation}
{\rm SNR}_\alpha^2 = (r_\alpha | r_\alpha)\,,
\label{e:rhoalpha}
\end{equation}
and the total coherent SNR produced by timing an array of $M$ pulsars is:
\begin{equation}
{\rm SNR}^2 = \sum_{\alpha = 1}^M {\rm SNR}_\alpha^2\,.
\label{e:rho}
\end{equation}
\section{Results}
\label{s:results}
\begin{table*}
\begin{center}
\begin{tabular}{ll|cccccc}
\hline
$M$ $\,\,$& $\Delta \Omega_\mathrm{PTA} [{\rm srad}]$ $\,\,$& $\,\,\Delta\Omega$ [deg$^2$] $\,\,$& $\,\,\Delta R/R$ $\,\,$& $\,\,\Delta \iota$ [rad] $\,\,$& $\,\,\Delta \psi$ [rad] $\,\,$& $\,\,\Delta f/(10^{-10}{\rm Hz})$ $\,\,$& $\,\,\Delta \Phi_0$ [rad] $\,\,$\\
\hline
3 & $4\pi$ & $2858^{+5182}_{-1693}$ & $2.00^{+4.46}_{-1.21}$ & $1.29^{+5.02}_{-0.92}$ & $2.45^{+9.85}_{-1.67}$ & $1.78^{+0.46}_{-0.40}$ & $3.02^{+16.08}_{-2.23}$\\
4 & $4\pi$ & $804^{+662}_{-370}$ & $0.76^{+1.19}_{-0.39}$ & $0.55^{+1.79}_{-0.36}$ & $0.89^{+2.90}_{-0.54}$ & $1.78^{+0.41}_{-0.33}$ & $1.29^{+5.79}_{-0.88}$\\
5 & $4\pi$ & $495^{+308}_{-216}$ & $0.54^{+0.84}_{-0.25}$ & $0.43^{+1.35}_{-0.28}$ & $0.65^{+2.10}_{-0.39}$ & $1.78^{+0.36}_{-0.30}$ & $0.98^{+4.27}_{-0.62}$\\
10 & $4\pi$ & $193^{+127}_{-92}$ & $0.36^{+0.57}_{-0.17}$ & $0.30^{+0.93}_{-0.19}$ & $0.42^{+1.49}_{-0.25}$ & $1.78^{+0.26}_{-0.23}$ & $0.71^{+3.01}_{-0.41}$\\
20 & $4\pi$ & $99.1^{+65.3}_{-44.6}$ & $0.31^{+0.51}_{-0.15}$ & $0.27^{+0.83}_{-0.16}$ & $0.35^{+1.34}_{-0.21}$ & $1.78^{+0.22}_{-0.20}$ & $0.65^{+2.66}_{-0.36}$\\
50 & $4\pi$ & $55.8^{+30.5}_{-23.0}$ & $0.30^{+0.49}_{-0.14}$ & $0.25^{+0.80}_{-0.15}$ & $0.31^{+1.26}_{-0.19}$ & $1.78^{+0.17}_{-0.16}$ & $0.60^{+2.56}_{-0.33}$\\
100 & $4\pi$ & $41.3^{+18.4}_{-15.3}$ & $0.29^{+0.48}_{-0.14}$ & $0.25^{+0.77}_{-0.15}$ & $0.31^{+1.24}_{-0.19}$ & $1.78^{+0.13}_{-0.12}$ & $0.60^{+2.49}_{-0.33}$\\
200 & $4\pi$ & $32.8^{+13.5}_{-11.1}$ & $0.29^{+0.48}_{-0.14}$ & $0.24^{+0.75}_{-0.15}$ & $0.29^{+1.21}_{-0.18}$ & $1.78^{+0.13}_{-0.12}$ & $0.59^{+2.50}_{-0.31}$\\
500 & $4\pi$ & $26.7^{+8.4}_{-8.2}$ & $0.29^{+0.48}_{-0.14}$ & $0.24^{+0.75}_{-0.15}$ & $0.29^{+1.21}_{-0.18}$ & $1.78^{+0.08}_{-0.08}$ & $0.59^{+2.50}_{-0.31}$\\
1000 & $4\pi$ & $23.2^{+6.7}_{-6.8}$ & $0.29^{+0.48}_{-0.14}$ & $0.24^{+0.73}_{-0.15}$ & $0.29^{+1.19}_{-0.18}$ & $1.78^{+0.08}_{-0.08}$ & $0.59^{+2.36}_{-0.31}$\\
\hline
100 & $0.21$ & $3675^{+3019}_{-2536}$ & $1.02^{+0.76}_{-0.34}$ & $0.47^{+1.44}_{-0.29}$ & $0.59^{+2.29}_{-0.34}$ & $1.78^{+0.56}_{-0.40}$ & $1.07^{+4.68}_{-0.68}$\\
100 & $0.84$ & $902^{+633}_{-635}$ & $0.51^{+0.44}_{-0.16}$ & $0.29^{+0.88}_{-0.18}$ & $0.34^{+1.44}_{-0.19}$ & $1.78^{+0.31}_{-0.27}$ & $0.68^{+2.87}_{-0.38}$\\
100 & $1.84$ & $403^{+315}_{-300}$ & $0.38^{+0.43}_{-0.13}$ & $0.25^{+0.80}_{-0.15}$ & $0.31^{+1.27}_{-0.18}$ & $1.78^{+0.17}_{-0.16}$ & $0.60^{+2.56}_{-0.32}$\\
100 & $\pi$ & $227^{+216}_{-184}$ & $0.33^{+0.46}_{-0.12}$ & $0.25^{+0.77}_{-0.15}$ & $0.31^{+1.24}_{-0.19}$ & $1.78^{+0.13}_{-0.16}$ & $0.60^{+2.49}_{-0.33}$\\
100 & $2\pi$ & $65.6^{+156.2}_{-38.3}$ & $0.29^{+0.48}_{-0.13}$ & $0.25^{+0.77}_{-0.15}$ & $0.31^{+1.24}_{-0.18}$ & $1.78^{+0.13}_{-0.12}$ & $0.59^{+2.50}_{-0.31}$\\
100 & $4\pi$ & $41.3^{+18.4}_{-15.3}$ & $0.29^{+0.48}_{-0.14}$ & $0.25^{+0.77}_{-0.15}$ & $0.30^{+1.24}_{-0.19}$ & $1.78^{+0.13}_{-0.12}$ & $0.60^{+2.49}_{-0.32}$\\
\hline
\end{tabular}
\end{center}
\caption{Typical uncertainties in the measurement of the GW source parameters as a function of the total number of pulsars in the array $M$ and their sky coverage $\Delta \Omega_\mathrm{PTA}$ (the portion of the sky over which the pulsars are uniformly distributed). For each PTA configuration we consider $2.5\times10^4$--to--$1.6\times10^6$ (depending on the number of pulsars in the array) GW sources with random parameters. The GW source location is drawn uniformly in the sky, the other parameters are drawn uniformly over the full range of $\psi$, $\phi_0$ and $\cos\iota$, and $f_0$ is fixed at $5\times10^{-8}$\,Hz. In every Monte Carlo realisation, the optimal SNR is equal to 10. The table reports the median of the statistical errors $\Delta \lambda$ -- where $\lambda$ is a generic source parameter -- and the 25$^{{\rm th}}$ and 75$^{{\rm th}}$ percentiles of the distributions obtained from the Monte Carlo samplings. Note that the errors $\Delta R/R$, $\Delta \iota$, $\Delta \psi$, $\Delta f$ and $\Delta \Phi_0$ all scale as SNR$^{-1}$, while the error $\Delta\Omega$ scales as SNR$^{-2}$.}
\label{tab:summary}
\end{table*}
In this section we present and discuss the results of our analysis aimed at determining the uncertainties surrounding the estimates of the GW source parameters. We focus in particular on the sky localization of a MBHB, which is of particular interest for possible identifications of electromagnetic counterparts, including the host galaxy and/or galactic nucleus in which the MBHB resides. For the case of binaries in circular orbit whose gravitational radiation does not produce a measurable frequency drift, the mass and distance are degenerate and cannot be individually measured: one can only measure the combination ${\cal M}^{5/3}/D_L$. This prevents measurements of MBHB masses, which would be of great interest. On the other hand, the orientation of the orbital angular momentum -- through measurements of the inclination angle $\iota$ and the polarisation angle $\psi$ -- can be determined (although only with modest accuracy, as we will show below), which may be useful in determining the geometry of the system, if a counterpart is detected.
The uncertainties on the source parameters depend on a number of factors, including the actual MBHB parameters, the SNR, the total number of pulsars and their location in the sky with respect to the GW source. It is therefore impossible to provide a single figure of merit that quantifies how well PTAs will be able to do GW astronomy. One can however derive some general trends and scalings, in particular how the results depend on the number of pulsars and their distribution in the sky, which we call the {\em sky coverage of the array}; this is of particular importance for designing observational campaigns and for exploring tradeoffs in the observation strategy. In the following subsections, by means of extensive Monte Carlo simulations, we study the parameter estimation accuracy as a function of the number of pulsars in the array, the total SNR of the signal, and the array sky coverage. All our major findings are summarised in Table \ref{tab:summary}.
\subsection{General behavior}
Before considering the details of the results we discuss conceptually the process by which the source parameters can be measured. Our discussion is based on the assumption that the processing of the data is done through a coherent analysis. The frequency of the signal is trivially measured, as this is the key parameter that needs to be matched in order for a template to remain in phase with the signal throughout the observation period. Furthermore, the amplitude of the GW signal determines the actual SNR, and is measured in a straightforward way. The amplitude $R$, or equivalently $A_\mathrm{gw}$, see Eqs.~(\ref{e:Agw}) and~(\ref{erre}), provides a constraint on the chirp mass and distance combination ${\cal M}^{5/3}/D_L$. However, in the case of monochromatic signals, these two parameters of great astrophysical interest cannot be measured independently. If the frequency derivative $\dot{f}$ were also observable -- this case is not considered in this paper, as it likely pertains only to a small fraction of detectable binaries, see Section~\ref{s:signal} and Fig.~\ref{fig1b} -- then one would be able to measure independently both the luminosity distance and the chirp mass. In fact, from the measurement of $\dot{f} \propto {\cal M}^{5/3} f^{11/3}$, which can be evaluated from the phase evolution of the timing residuals, one can measure the chirp mass, which in turn, from the observation of the amplitude, would yield an estimate of the luminosity distance\footnote{We note that a direct measurement of the chirp mass would be possible if one could detect both the Earth- and pulsar-terms, \emph{provided that the distance to the pulsar was known}. In this case one has the GW frequency at Earth, the GW frequency at the pulsar, and the Earth-pulsar light-travel-time, which in turn provides a direct measure of $\dot{f}$, and as a consequence of the chirp mass.}. The remaining parameters -- those that determine the geometry of the binary, i.e. the source location in the sky and the orientation of the orbital plane -- and the initial phase $\phi_0$ can be determined only if the PTA contains at least three (not co-aligned) pulsars. The source location in the sky is simply reconstructed through geometrical triangulation, because the PTA signal for each pulsar encodes the source coordinates in the sky in the relative amplitude of the sine and cosine terms of the response or, equivalently, the overall phase and amplitude of the sinusoidal PTA output signal, see Eqs.~(\ref{e:r}),~(\ref{e:z1}),~(\ref{e:FA}) and~(\ref{researth}). For the reader familiar with GW observations with {\it LISA}, we highlight a fundamental difference between {\it LISA} and PTAs in the determination of the source position in the sky. With {\it LISA}, the error box decreases as the signal frequency increases (everything else being equal), because the source location in the sky is reconstructed (primarily) through the location-dependent Doppler effect produced by the motion of the instrument during the observation, which is proportional to the signal frequency. This is not the case for PTAs, where the error box is independent of the GW frequency. It depends however on the number of pulsars in the array -- as the number of pulsars increases, one has to select with increasingly higher precision the actual value of the angular parameters, in order to ensure that the same GW signal fits correctly the timing residuals of all the pulsars -- and on the location of the pulsars in the sky.
\begin{figure}
\centerline{\psfig{file=f3.ps,width=84.0mm}}
\caption{The statistical errors that affect the determination of the source location $\Delta\Omega$, see Eq.~(\ref{domega}) (upper panels), and of the signal amplitude $R$ (lower panels) for four randomly selected sources (corresponding to the different line styles). We increase the number of pulsars in the array keeping the total SNR fixed at 10, and we plot the results as a function of the number of pulsars $M$. In the left panels we consider selected edge-on ($\iota=\pi/2$) sources, while in the right panels we plot sources with an intermediate inclination of $\iota=\pi/4$.}
\label{fig2a}
\end{figure}
We first consider how the parameter estimation depends on the total number of pulsars $M$ at fixed SNR. We consider a GW source with random parameters and we evaluate the inverse of the Fisher information matrix as we progressively add pulsars to the array. The pulsars are added randomly from a uniform distribution in the sky and the noise has the same spectral density for each pulsar. We also keep the total coherent SNR fixed, at the value SNR = 10. It is clear that in a real observation the SNR actually increases approximately as $\sqrt{M}$, and therefore depends on the number of pulsars in the array. However, by normalising our results to a constant total SNR, we are able to disentangle the change in the parameter-estimation uncertainty that depends on the number of pulsars from the change due simply to the SNR. The results are shown in Fig. \ref{fig2a}. The main effect of adding pulsars to the PTA is to improve the power of triangulation and to reduce the correlations between the source parameters. At least three pulsars in the array are needed to formally resolve all the parameters; however, given the strong correlations, in particular amongst $R$, $\iota$ and $\psi$ (which will be discussed later in more detail), a SNR $\sim100$ is needed to locate the source in the sky with an accuracy $\lesssim 50$ deg$^2$ in this case. It is clear that the need to maintain phase coherency between the timing residuals from several pulsars leads to a steep (by orders of magnitude) increase in accuracy from $M=3$ to $M\approx 20$ (note that the current Parkes PTA counts 20 pulsars). Adding more pulsars to the array reduces the size of the uncertainty region in the sky $\Delta \Omega$ by a further factor of $\approx 5$ going from 20 to 1000 pulsars, but has almost no impact on the determination of the other parameters (the bottom panels of Fig. \ref{fig2a} show that $\Delta R/R$ is essentially constant for $M \lower.5ex\hbox{\gtsima} 20$).
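The normalisation to a fixed total SNR used above amounts to rescaling the signal amplitude as pulsars are added; a minimal sketch is shown below, where \texttt{snr2\_unit} is a hypothetical placeholder for the squared single-pulsar SNR at unit amplitude (in the real calculation it depends on the beam patterns $F^{+,\times}$ and on the pulsar noise).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def snr2_unit(pulsar):
    # Placeholder for (r_alpha|r_alpha) at unit amplitude.
    return rng.uniform(0.5, 2.0)

def amplitude_for_fixed_snr(pulsars, snr_target=10.0):
    # Amplitude factor fixing the total coherent SNR to snr_target.
    snr2 = sum(snr2_unit(p) for p in pulsars)
    return snr_target / np.sqrt(snr2)

scale = amplitude_for_fixed_snr(range(20))   # e.g. an array of 20 pulsars
\end{verbatim}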
\begin{figure}
\centerline{\psfig{file=f4.ps,width=84.0mm}}
\caption{Same as Fig. \ref{fig2a}, but here, as we add pulsars to the PTA, we consistently take into account the effect on the total coherent SNR, and accordingly we plot the results as a function of the SNR. In the left panels we plot selected edge-on ($\iota=\pi/2$) sources, while in the right panels we consider selected sources with an intermediate inclination of $\iota=\pi/4$. The dotted--dashed thin lines in the upper panels follow the scaling $\Delta\Omega \propto \mathrm{SNR}^{-2}$.}
\label{fig2b}
\end{figure}
Now that we have explored the effect of the number of pulsars alone (at fixed SNR) on the parameter errors, we can consider the case in which we also let the SNR change. We repeat the analysis described above, but now the SNR is not kept fixed and we let it vary self-consistently as pulsars are added to the array. The results, plotted as a function of the total coherent SNR, are shown in Fig. \ref{fig2b}. Once more, we concentrate in particular on the measurement of the amplitude $R$ and the error box in the sky $\Delta\Omega$. For $M \gg 1$, the error box in the sky and the amplitude measurements scale as expected according to $\Delta \Omega\propto\mathrm{SNR}^{-2}$ and $\Delta R/R \propto \mathrm{SNR}^{-1}$ (and so do all the other parameters not shown here). However, for $\mathrm{SNR} \lower.5ex\hbox{\ltsima} 10$ the uncertainties depart quite dramatically from the scalings above, simply due to the fact that with only a handful of pulsars in the array the strong correlations amongst the parameters degrade the measurements. We stress that the results shown here are independent of the GW frequency; we directly checked this property by performing several tests, in which the source's frequency is drawn randomly in the range $10^{-8}$--$10^{-7}$ Hz.
\begin{figure}
\centerline{\psfig{file=f5.ps,width=84.0mm}}
\caption{The effect of the source orbital inclination $\iota$ on the estimate of the signal parameters. Upper panels: The correlation coefficients $c^{R\iota}$ (left) and $c^{\psi\Phi_0}$ (right) as a function of $\iota$. Middle and bottom panels: the statistical errors in the measurement of the amplitude $R$, polarisation angle $\psi$, inclination angle $\iota$ and initial phase $\Phi_0$ for a fixed PTA coherent SNR = 10, making clear the connection between inclination, correlation (degeneracy) and parameter estimation. Each asterisk on the plots is a randomly generated source.}
\label{fig3}
\end{figure}
\begin{figure}
\centerline{\psfig{file=f6.ps,width=84.0mm}}
\caption{The distributions of the statistical errors of the source parameter measurements using a sample of 25000 randomly distributed sources (see text for more details), divided into three different inclination intervals: $\iota \in [0,\pi/6]\cup[5/6\pi,\pi]$ (dotted), $\iota \in [\pi/6,\pi/3]\cup[2/3\pi, 5/6\pi]$ (dashed) and $\iota\in [\pi/3, 2/3\pi]$ (solid). In each panel, the sum of the distribution's integrals performed over the three $\iota$ bins is unity.}
\label{fig4}
\end{figure}
The source inclination angle $\iota$ is strongly correlated with the signal amplitude $R$, and the polarisation angle $\psi$ is correlated with both $\iota$ and $\Phi_0$. The results are indeed affected by the actual value of the source inclination. The left panels in Figs. \ref{fig2a} and \ref{fig2b} refer to four different edge-on sources (i.e. $\iota=\pi/2$, for which the radiation is linearly polarised). In this case, the parameters have the least correlation, and $\Delta R/R=$SNR$^{-1}$. The right panels in Figs. \ref{fig2a} and \ref{fig2b} refer to sources with an ``intermediate'' inclination $\iota=\pi/4$; here degeneracies start to play a significant role and cause a factor of $\approx 3$ degradation in the $\Delta R/R$ estimation (still scaling as SNR$^{-1}$). Note, however, that the sky position accuracy is independent of $\iota$ (upper panels in Figs. \ref{fig2a} and \ref{fig2b}), because the sky coordinates $\theta$ and $\phi$ are only weakly correlated with the other source parameters. We further explore this point by considering the behaviour of the correlation coefficients ($c^{R\iota}$ and $c^{\psi\Phi_0}$) as a function of $\iota$. Fig. \ref{fig3} shows the correlation coefficients and statistical errors in the source's parameters for a sample of 1000 individual sources using a PTA with $M=100$ and total SNR$=10$, as a function of $\iota$. For a face-on source ($\iota=0, \pi$), both polarisations contribute equally to the signal, and any polarisation angle $\psi$ can be perfectly `reproduced' by tuning the source phase $\Phi_0$, i.e. the two parameters are completely degenerate and cannot be determined. Moving towards edge-on sources progressively changes the relative contribution of the two polarisations, breaking the degeneracy with the phase. Fig. \ref{fig4} shows the statistical error distributions for the different parameters over a sample of 25000 sources divided into three different $\iota$ bins. The degradation in the determination of $R$, $\iota$ and $\psi$ moving towards face-on sources is clear. Conversely, $\theta$ and $\phi$ do not have any strong correlation with the other parameters; the estimation of $\Delta\Omega$ is therefore independent of the source inclination (lower right panel in Fig. \ref{fig4}).
\subsection{Isotropic distribution of pulsars}
\begin{figure}
\centerline{\psfig{file=f7.ps,width=84.0mm}}
\caption{Median expected statistical errors on the source parameters. Each point (asterisk or square) is obtained by averaging over a large Monte Carlo sample of MBHBs (the sample size ranges from $2.5\times 10^4$ when considering 1000 pulsars to $1.6\times10^6$ when using 3 pulsars). In each panel, solid lines (squares) represent the median statistical error as a function of the total coherent SNR, assuming 100 randomly distributed pulsars in the sky; the thick dashed lines (asterisks) represent the median statistical error as a function of the number of pulsars $M$ for a fixed total SNR$=10$. In this latter case, thin dashed lines label the 25$^{\rm th}$ and the 75$^{\rm th}$ percentiles of the error distributions.}
\label{fig5}
\end{figure}
\begin{figure}
\centerline{\psfig{file=f8.ps,width=84.0mm}}
\caption{Distributions normalised to unity of the size of the error-box in the sky assuming an isotropic random distribution of pulsars in the array. Upper panel: from right to left the number of pulsars considered is $M=3, 5, 20, 100, 1000$, and we fixed a total SNR$=10$ in all cases. Lower panel: from right to left we consider SNR$=5, 10, 20, 50, 100$, and we fixed $M=100$.}
\label{fig6}
\end{figure}
In this Section we study the parameter estimation for a PTA whose pulsars are \emph{isotropically} distributed in the sky, and investigate how the results depend on the number $M$ of pulsars in the array and on the SNR. Current PTAs have pulsars that are far from being isotropically located on the celestial sphere -- the anisotropic distribution of pulsars is discussed in the next Section -- but the isotropic case is useful to develop an understanding of the key factors that impact the PTA performance for astronomy. It can also be considered representative of future PTAs, such as SKA, where many stable pulsars are expected to be discovered all over the sky.
We begin by fixing the total coherent SNR at which the GW signal is observed, setting SNR$= 10$ regardless of the number of pulsars in the array, and explore the dependence of the results on the number of pulsars $M$ in the range 3-to-1000. We then consider a fiducial `SKA-configuration' by fixing the total number of pulsars to $M=100$, and we explore how the results depend on the SNR for values $5 \le \mathrm{SNR} \le 100$. Throughout this analysis we assume that the timing noise is exactly the same for each pulsar and that the observations of each neutron star cover the same time span. The relative contribution of each of the pulsars in the PTA to the SNR is therefore solely dictated by the geometry of the pulsar-Earth-source system, that is, the specific value of the beam pattern functions $F^{+,\times}(\theta, \phi,\psi)$. In total we consider 14 $M$-SNR combinations, and for each of them we generate $2.5\times 10^4$-to-$1.6\times10^6$ (depending on the total number of pulsars in the array) random sources in the sky. Each source is determined by the seven parameters described by Eq. (\ref{par}), which, in all the Monte Carlo simulations presented from now on, are chosen as follows. The angles $\theta$ and $\phi$ are randomly sampled from a uniform distribution in the sky; $\Phi_0$ and $\psi$ are drawn from a uniform distribution over their relevant intervals, [0,2$\pi$] and [0,$\pi$] respectively; $\iota$ is sampled according to a probability distribution $p(\iota)= \sin\iota/2$ in the interval $[0, \pi]$; and the frequency is fixed at $f=5\times 10^{-8}$ Hz. Finally, the amplitude $R$ is set in such a way as to normalise the signal to the pre-selected value of the SNR. For each source we generate $M$ pulsars randomly located in the sky and we calculate the Fisher information matrix and its inverse as detailed in Section~\ref{s:fim}. We also performed trial runs considering $f=10^{-7}$ Hz and $f=10^{-8}$ Hz (not shown here) to further cross-check that the results do not depend on the actual GW frequency.
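The parameter sampling described above is straightforward to reproduce; for instance (a sketch in Python, with $\iota$ drawn by inverting its cumulative distribution):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)
N = 10000

theta = np.arccos(rng.uniform(-1.0, 1.0, N))   # uniform on the sphere
phi   = rng.uniform(0.0, 2.0 * np.pi, N)
Phi0  = rng.uniform(0.0, 2.0 * np.pi, N)
psi   = rng.uniform(0.0, np.pi, N)
iota  = np.arccos(1.0 - 2.0 * rng.uniform(size=N))  # p(iota) = sin(iota)/2
f     = np.full(N, 5.0e-8)                     # fixed GW frequency [Hz]
\end{verbatim}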
Fig. \ref{fig5} shows the median statistical errors as a function of $M$ and SNR for all six relevant source parameters ($\theta$ and $\phi$ are combined into the single quantity $\Delta\Omega$, according to Eq.~(\ref{domega})). Let us focus on the $M$ dependence at a fixed SNR$=10$. The crucial astrophysical quantity is the sky location accuracy, which ranges from $\approx 3000$ deg$^2$ for $M=3$ -- approximately 10\% of the whole sky -- to $\approx 20$ deg$^2$ for $M=1000$. A PTA of 100 pulsars would be able to locate a MBHB within a typical error box of $\approx 40$ deg$^2$. The statistical errors on the other parameters depend only very weakly on $M$ for $M\lower.5ex\hbox{\gtsima} 20$. The fractional error on the source amplitude is typically $\approx 30\%$, which unfortunately prevents us from constraining an astrophysically meaningful ``slice'' in the ${\cal M}-D_L$ plane. The frequency of the source, which in this case was chosen to be $f = 5\times 10^{-8}$ Hz, is determined at the $\sim 0.1$ nHz level. Errors on the inclination and polarisation angles are typically $\approx 0.3$ rad, which may provide useful information about the orientation of the binary orbital plane.
All the results display the expected scaling with respect to the SNR, i.e. $\Delta\Omega \propto 1/\mathrm{SNR}^2$, while for all the other parameters shown in Fig.~\ref{fig5} the uncertainties scale as $1/\mathrm{SNR}$. A typical source with SNR$=100$ (which our current astrophysical understanding suggests is fairly unlikely, see SVV) would be located in the sky within an error box $\lower.5ex\hbox{\ltsima} 1\,\mathrm{deg}^2$ for $M \lower.5ex\hbox{\gtsima} 10$, which would likely enable the identification of any potential electromagnetic counterpart.
Distributions (normalised to unity) of $\Delta \Omega$ are shown in Fig. \ref{fig6}. The lower panel shows the dependence on SNR (at a fixed number of pulsars in the PTA, here set to 100), whose effect is to shift the distributions to smaller values of $\Delta \Omega$ as the SNR increases, without modifying the shape of the distribution. The upper panel shows the effectiveness of triangulation; by increasing the number of pulsars at fixed coherent SNR, not only does the peak of the distribution shift towards smaller values of $\Delta \Omega$, but the whole distribution becomes progressively narrower. If they yield the same SNR, PTAs containing a larger number of pulsars (sufficiently evenly distributed in the sky) with higher intrinsic noise are more powerful than PTAs containing fewer pulsars with very good timing stability, as they allow a more accurate parameter reconstruction (in particular for the sky position) and they minimise the chance of GW sources being located in ``blind spots'' in the sky (see next Section).
\subsection{Anisotropic distribution of pulsars}
\begin{figure}
\centerline{\psfig{file=f9.ps,width=84.0mm}}
\caption{Median statistical error in the source's parameter estimation as a function of the sky-coverage of the pulsar distribution composing the array. Each triangle is obtained averaging over a Monte Carlo generated sample of $1.6\times10^5$ sources. In each panel, solid lines (triangles) represent the median error, assuming $M=100$ and a total SNR$=10$ in the array; thin dashed lines label the 25$^{\rm th}$ and the 75$^{\rm th}$ percentile in the statistical error distributions.}
\label{fig7}
\end{figure}
\begin{figure*}
\centerline{\psfig{file=f10_color.ps,width=160.0mm}}
\caption{Sky maps of the median sky location accuracy for an anisotropic distribution of pulsars in the array. Contour plots are generated by dividing the sky into 1600 ($40\times40$) cells and considering all the randomly sampled sources falling within each cell; SNR$=10$ is considered. The pulsar distribution progressively fills the sky starting from the top left, eventually reaching an isotropic distribution in the bottom right panel (in this case, no distinctive features are present in the sky map). In each panel, 100 black dots label an indicative distribution of 100 pulsars used to generate the maps, to highlight the sky coverage. Labels on the contours refer to the median sky location accuracy expressed in square degrees, and the color--scale is given by the bars located on the right of each map.}
\label{fig8}
\end{figure*}
\begin{figure}
\centerline{\psfig{file=f11.ps,width=84.0mm}}
\caption{Normalized distributions of the statistical errors in sky position accuracy corresponding to the six sky maps shown in Fig. \ref{fig8}. Each distribution is generated using a random subsample of $2.5\times10^4$ sources.}
\label{fig9}
\end{figure}
\begin{figure*}
\centerline{\psfig{file=f12_color.ps,width=160.0mm}}
\caption{Sky maps of the median sky location accuracy for the Parkes PTA. Contour plots are generated as in Fig. \ref{fig8}. Top panel: we fix the source SNR$=10$ over the whole sky; in this case the sky position accuracy depends only on the different triangulation effectiveness as a function of the source sky location. Bottom panel: we fix the source chirp mass and distance to give a sky and polarization averaged SNR$=10$, and we consistently compute the mean SNR as a function of the sky position. The sky map is the result of the combination of triangulation efficiency and SNR as a function of the source sky location. The color--scale is given by the bars on the right, with solid angles expressed in deg$^2$.}
\label{fig10}
\end{figure*}
The sky distribution of the pulsars in a PTA is not necessarily isotropic. This is in fact the case for present PTAs, and it is likely to remain the norm rather than the exception until SKA comes on-line. It is therefore useful -- as it also sheds new light on the ability to reconstruct the source parameters based on the crucial location of the pulsars of the array with respect to a GW source -- to explore the dependence of the results on what we call the ``PTA sky coverage'' $\Delta \Omega_\mathrm{PTA}$, i.e. the minimum solid angle in the sky enclosing the whole population of the pulsars in the array. We consider as a study case a `polar' distribution of 100 pulsars; the location in the sky of each pulsar is drawn from a uniform distribution in $\phi$ and $\cos\theta$ with parameters in the range $\phi \in [0,2\pi]$ and $\theta \in [0,\theta_{{\rm max}}]$, respectively. We then generate a random population of GW sources in the sky and proceed exactly as described in the previous section. We consider six different values of $\Delta \Omega_\mathrm{PTA}$, progressively increasing the sky coverage. We choose $\theta_\mathrm{max} = \pi/12, \pi/6, \pi/4, \pi/3, \pi/2, \pi$, corresponding to $\Delta \Omega_\mathrm{PTA}=0.21, 0.84, 1.84, \pi, 2\pi, 4\pi$ srad. As we are interested in investigating the geometry effects, we fix in each case the total optimal SNR to 10. We dedicate the next section to the specific case of the 20 pulsars that are currently part of the Parkes PTA.
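Such `polar' pulsar distributions are easily generated, and the corresponding sky coverage is $\Delta\Omega_\mathrm{PTA}=2\pi(1-\cos\theta_\mathrm{max})$; a short sketch (illustrative only) is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def polar_pulsars(M, theta_max):
    # M pulsars uniform in phi and cos(theta) over the cap [0, theta_max].
    theta = np.arccos(rng.uniform(np.cos(theta_max), 1.0, M))
    phi   = rng.uniform(0.0, 2.0 * np.pi, M)
    return theta, phi

for tmax in [np.pi/12, np.pi/6, np.pi/4, np.pi/3, np.pi/2, np.pi]:
    coverage = 2.0 * np.pi * (1.0 - np.cos(tmax))
    # gives 0.21, 0.84, 1.84, pi, 2*pi, 4*pi srad, as quoted in the text
\end{verbatim}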
The median statistical errors on the source parameters as a function of the PTA sky coverage are shown in Fig. \ref{fig7}. As one would expect, the errors decrease as the sky coverage increases, even if the SNR is kept constant. This is due to the fact that, as the pulsars in the array populate the sky more evenly, they place increasingly more stringent constraints on the relative phase differences amongst the same GW signal measured at each pulsar, which depend on the geometrical factors $F^{+,\times}$. The most important effect is that the sky position is pinned down with greater accuracy; at the same time, correlations between the sky location parameters and the other parameters, in particular the amplitude and the inclination angle, are reduced. $\Delta \Omega$ scales linearly (at fixed SNR) with $\Delta \Omega_\mathrm{PTA}$, but the other parameters do not experience such a drastic improvement. The statistical uncertainty on the amplitude improves as $\sqrt{\Delta \Omega_\mathrm{PTA}}$ for $\Delta \Omega_\mathrm{PTA} \lower.5ex\hbox{\ltsima} 1$ srad, then saturates. All the other parameters are much less sensitive to the sky coverage, showing only a mild improvement (a factor $\lesssim 2$) with increasing $\Delta \Omega_\mathrm{PTA}$ up to $\sim 1$ srad.
When one considers an anisotropic distribution of pulsars, the median values computed over a random uniform distribution of GW sources in the sky do not, however, carry the full set of information. In particular, the error box in the sky strongly depends on the actual source location. To show and quantify this effect, we use the outputs of the Monte Carlo runs to build sky maps of the median of $\Delta \Omega$, which we show in Fig. \ref{fig8}. When the pulsars are clustered in a small $\Delta \Omega_\mathrm{PTA}$, the properties of the signals coming from that spot in the sky (and from the diametrically opposite one) are more susceptible to small variations with the propagation direction (due to the structure of the response functions $F^{+}$ and $F^{\times}$); the sky location can then be determined with a much better accuracy, $\Delta \Omega \sim 2$ deg$^2$. Conversely, triangulation is much less effective for sources located at right angles with respect to the bulk of the pulsars. For a polar $\Delta \Omega_\mathrm{PTA}=0.21$ srad, we find a typical $\Delta \Omega \gtrsim 5000$ deg$^2$ for equatorial sources; i.e., their sky location is basically undetermined. Increasing the sky coverage of the array obviously mitigates this effect, and in the limit $\Delta \Omega_\mathrm{PTA}=4\pi$ srad (which corresponds to an isotropic pulsar distribution), we find a smooth homogeneous sky map without any recognisable feature (bottom right panel of Fig. \ref{fig8}). In this case the sky location accuracy is independent of the source sky position and, for $M = 100$ and $\mathrm{SNR} = 10$, we find $\Delta \Omega \sim 40$ deg$^2$. Fig. \ref{fig9} shows the normalised distributions of the statistical errors corresponding to the six sky maps shown in Fig. \ref{fig8}. It is interesting to note the bimodality of the distribution for intermediate values of $\Delta \Omega_\mathrm{PTA}$, due to the fact that there is a sharp transition between sensitive and insensitive areas of the sky (this is particularly evident looking at the contours in the bottom left panels of Fig. \ref{fig8}).
We also checked another anisotropic situation of potential interest: a distribution of pulsars clustered in the Galactic plane. We considered a distribution of pulsars covering a ring in the sky, with $\phi_\alpha$ randomly sampled in the interval [0,$2\pi$] and latitude in the range [$-\pi/12, \pi/12$] around the equatorial plane, corresponding to a solid angle of $\Delta \Omega_\mathrm{PTA}=3.26$ srad. Assuming a source SNR$=10$, the median statistical error on the source sky location is $\sim 100$ deg$^2$, ranging from $\sim 10$ deg$^2$ in the equatorial plane to $\sim 400$ deg$^2$ at the poles. Median errors on the other parameters are basically the same as in the isotropic case.
\subsection{The Parkes Pulsar Timing Array}
We finally consider the case that is most relevant to present observations: the potential capabilities of the Parkes Pulsar Timing Array. The goal of the survey is to monitor 20 millisecond pulsars for five years with timing residuals $\approx 100$ ns~\cite{man08}. This may be sufficient to enable the detection of the stochastic background generated by the whole population of MBHBs~\cite{papI}, but according to our current astrophysical understanding (see SVV) it is unlikely to lead to the detection of radiation from individual resolvable MBHBs, although there is still a non-negligible chance of detection. It is therefore interesting to investigate the potential of such a survey.
In our analysis we fix the locations of the pulsars in the PTA to the coordinates of the 20 millisecond pulsars in the Parkes PTA, obtained from~\cite{ATNF-catalogue}; however, for this exploratory analysis we set the noise spectral density of the timing residuals to be the same for each pulsar, \emph{i.e.} we do not take into account the different timing stability of the pulsars. We then generate a Monte Carlo sample of GW sources in the sky with the usual procedure. We consider two different approaches. Firstly, we explore the parameter estimation accuracy as a function of the GW source sky location for selected fixed array coherent SNRs (5, 10, 20, 50 and 100). Secondly, we fix the source chirp mass, frequency and distance (so that the sky and polarisation averaged coherent SNR is 10) and we explore the parameter estimation accuracy as a function of the sky location. Sky maps of the statistical error on the sky location are shown in Fig. \ref{fig10}. In the top panel we fix SNR$=10$, independently of the source position in the sky; the median error in the sky location accuracy is $\Delta \Omega \sim 130$ deg$^2$, but it ranges from $\sim 10$ deg$^2$ to $\sim400$ deg$^2$ depending on the source's sky location. The median statistical errors that affect the determination of all the other source parameters are very similar to those for the isotropic pulsar distribution case when considering $M=20$, since the pulsar array covers almost half of the sky, see Fig. \ref{fig7}. In the bottom panel, we show the results when we fix the source parameters, so that the total SNR in the array does depend on the source sky location. In the southern hemisphere, where almost all the pulsars are concentrated, the SNR can be as high as 15, while in the northern hemisphere it can easily drop below 6. The general shape of the sky map is mildly affected, and shows an even larger imbalance between the two hemispheres. In this case, the median error is $\Delta \Omega \sim 160$ deg$^2$, ranging from $\sim 3$ deg$^2$ to $\sim900$ deg$^2$. It is fairly clear that adding a small number ($\lower.5ex\hbox{\ltsima} 10$) of pulsars in the northern hemisphere to the pulsars already part of the Parkes PTA would significantly improve the uniformity of the array sensitivity and parameter estimation capability, reducing the risk of potentially detectable GW sources ending up in a ``blind spot'' of the array.
\section{Conclusions}
In this paper we have studied the expected uncertainties in the measurements of the parameters of massive black hole binary systems by means of gravitational wave observations with Pulsar Timing Arrays. We have investigated how the results vary as a function of the signal-to-noise ratio, the number of pulsars in the array and their location in the sky with respect to a gravitational wave source. Our analysis is focused on MBHBs in circular orbit with negligible frequency evolution during the observation time (``monochromatic sources''), which we have shown to represent the majority of the observable sample for sensible models of sub--parsec MBHB eccentricity evolution. The statistical errors are evaluated by computing the variance-covariance matrix of the observable parameters, assuming a coherent analysis of the Earth-term only in the timing residuals of the pulsars in the array (see Section II B).
For a fiducial case of an array of 100 pulsars randomly distributed in the sky, assuming a coherent total SNR = 10, we find a typical error box in the sky $\Delta \Omega \approx 40$ deg$^2$ and a fractional amplitude error of $\approx 0.3$. The latter places only very weak constraints on the chirp mass-distance combination ${\cal M}^{5/3}/D_L$. At fixed SNR, the typical parameter accuracy is a very steep function of the number of pulsars in the PTA up to $\approx 20$. For PTAs containing more pulsars, the actual gain becomes progressively smaller because the pulsars ``fill the sky'' and the effectiveness of further triangulation weakens. We also explored the impact of having an anisotropic distribution of pulsars, finding that the typical source sky location accuracy improves linearly with the array sky coverage. For the specific case of the Parkes PTA, where all the pulsars are located in the southern sky, the sensitivity and sky localisation are significantly better (by an order of magnitude) in the southern hemisphere, where the error box is $\lesssim 10 \,\mathrm{deg}^2$ for a total coherent SNR = 10. In the northern hemisphere, the lack of monitored pulsars prevents a source from being located within an uncertainty region $\lesssim 200\,\mathrm{deg}^2$. The monitoring of a handful of pulsars in the northern hemisphere would significantly increase both the SNR and the parameter recovery of GW sources, and the International PTA~\cite{HobbsEtAl:2009} will provide such a capability in the short-term future.
The main focus of our analysis is on the sky localisation, because sufficiently small error boxes in the sky may allow the identification of an electromagnetic counterpart to a GW source. Even for error boxes of the order of tens-to-hundreds of square degrees (much larger than, \emph{e.g.}, the typical {\it LISA} error boxes~\cite{v04,k07,lh08}), the typical sources are expected to be massive (${\cal M} \lower.5ex\hbox{\gtsima} 10^{8}M_{\odot}$) and at low redshift ($z\lower.5ex\hbox{\ltsima} 1.5$), and therefore the number of associated massive galaxies in the error box should be limited to a few hundred. Signs of a recent merger, like the presence of tidal tails or irregularities in the galaxy luminosity profile, may help in the identification of potential counterparts. Furthermore, if nuclear activity is present, \emph{e.g.} in the form of some accretion mechanism, the number of candidate counterparts would shrink to a handful, and periodic variability \cite{hkm07} could help in associating the correct galaxy host. We are currently investigating the astrophysical scenarios and possible observational signatures, and we plan to come back to this important point in the future. The advantage of a counterpart is obvious: the redshift measurement would allow us, by assuming the standard concordance cosmology, to measure the luminosity distance to the GW source, which in turn would break the degeneracy in the amplitude of the timing residuals $R \propto {\cal M}^{5/3}/(D_L f^{1/3})$ between the chirp mass and the distance, providing therefore a direct measure of ${\cal M}$.
The study presented in this paper deals with monochromatic signals. However, the detection of MBHBs which exhibit a measurable frequency drift would give significant payoffs, as it would allow one to break the degeneracy between distance and chirp mass, and enable the direct measurement of both parameters. Such systems may be observable with the Square-Kilometre-Array. In the future, it is therefore important to extend the present analysis to these more general signals. However, as the frequency derivative has only modest correlations with the sky position parameters, we expect that the results for the determination of the error box in the sky discussed in this paper will still hold. A further extension of the work, currently in progress, is to consider MBHBs characterised by non-negligible eccentricity. Another extension of our present study is to consider both the Earth- and pulsar-terms in the analysis of the data and to investigate the possible benefits of such a scheme, assuming that the pulsar distance is not known to sufficient accuracy. This also raises the issue of possible observation campaigns that could yield an accurate (to better than 1 pc) determination of the distances to the pulsars used in PTAs. In this case the use of the pulsar-term in the analysis would not require the introduction of (many more) unknown parameters and would have the great benefit of breaking the degeneracy between chirp mass and distance.
A final word of caution goes to the interpretation of the results that we have presented in this paper. The approach based on the computation of the Fisher information matrix is powerful and straightforward, and is justified at this stage to understand the broad capabilities of PTAs and to explore the impact on astronomy of different observational strategies. However, the statistical errors that we compute are strictly \emph{lower limits} to the actual errors obtained in a real analysis; the fact that, at least until SKA comes on line, a detection of a MBHB will be at moderate-to-low SNR should induce caution in the way in which the results presented here are interpreted. Moreover, in our current investigation we have not dealt with a number of important effects that in real life play a significant role, such as different calibrations of different data sets, the change of systematic factors that affect the noise, possible non-Gaussianity and non-stationarity of the noise, etc. These (and other) important issues for the study of MBHBs with PTAs should be addressed more thoroughly in the future by performing actual mock analyses and developing suitable analysis algorithms.
\section{Introduction}
The origin of the extragalactic gamma-ray background (EGB) at GeV $\gamma$-rays
is one of the fundamental unsolved problems in astrophysics.
The EGB was first detected by the SAS-2 mission
\citep{fichtel95} and its spectrum was measured with good accuracy by
the Energetic Gamma Ray Experiment Telescope
\citep[EGRET,][]{sreekumar98,strong04} on board the Compton Observatory.
These observations by themselves do not provide much insight into the
sources of the EGB.
Blazars, active galactic nuclei
(AGN) with a relativistic jet pointing close to our line of sight,
represent the most numerous
population detected by EGRET \citep{hartman99}
and their flux constitutes 15\,\% of the total EGB intensity
(resolved sources plus diffuse emission). Therefore,
undetected blazars (e.g. all the blazars
under the sensitivity level of EGRET) are the most likely
candidates for the origin of the bulk of the EGB emission.
Studies of the luminosity function of blazars showed that
the contribution of blazars to the EGRET EGB could be in the range
from 20\,\% to 100\,\% \citep[e.g.][]{stecker96,chiang98,muecke00}, although
the newest derivations suggest that
blazars are responsible for only $\sim20$--$40$\,\%
of the EGB \citep[e.g.][]{narumoto06,dermer07,inoue09}.
It is thus possible that the EGB emission encrypts in itself the signature
of some of the most powerful and interesting phenomena in astrophysics.
Intergalactic shocks produced by the assembly of Large Scale Structures
\citep[e.g.][]{loeb00,miniati02,keshet03,gabici03}, $\gamma$-ray
emission from galaxy clusters \citep[e.g.][]{berrington03,pfrommer08},
emission from starburst as well as normal
galaxies \citep[e.g.][]{pavlidou02,thompson07}, are among
the most likely candidates for the generation of the diffuse GeV emission.
Dark matter (DM) which constitutes more than 80\,\% of the matter in
the Universe can also provide a diffuse, cosmological, background
of $\gamma$-rays. Indeed, supersymmetric theories with R-parity
predict that the lightest DM particles
(i.e., the neutralinos) are stable and can annihilate into GeV
$\gamma$-rays \citep[e.g.][]{jungman96,bergstrom00,ullio02,ahn07}.
With the advent of the {\it Fermi} Large Area Telescope (LAT) a better
understanding of the origin of the GeV diffuse emission becomes possible.
{\it Fermi} has recently performed a new measurement of the EGB spectrum
\citep[also called isotropic diffuse background,][]{lat_edb}. This
has been found to be consistent with a featureless power law with
a photon index of $\sim$2.4 in the 0.2--100\,GeV energy range.
The integrated flux (E$\geq$100\,MeV) of
1.03$(\pm0.17)\times10^{-5}$\,ph cm$^{-2}$ s$^{-1}$ sr$^{-1}$ has been
found to be significantly lower than the one of
1.45($\pm0.05$)$\times10^{-5}$\,ph cm$^{-2}$ s$^{-1}$ sr$^{-1}$ determined from EGRET data \citep[see][]{sreekumar98}.
In this study we address the contribution of {\it unresolved} point sources
to the GeV diffuse emission and we discuss the implications.
Early findings on the integrated emission
of {\it unresolved} blazars were already reported in \cite{lat_lbas} using
a sample of bright AGN detected in the first three months of {\it Fermi}
observations.
The present work represents a large advance, with $\sim$4 times more blazars and a detailed investigation of selection effects in source detection.
This work is organized as follows. In $\S$~\ref{sec:spec} the
intrinsic spectral properties of the {\it Fermi} sources are determined.
In $\S$~\ref{sec:sim} the Monte Carlo simulations used for
this analysis are outlined, together with the inherent systematic
uncertainties (see $\S$~\ref{sec:syst}). Finally the source
counts distributions are derived in $\S$~\ref{sec:logn} and
$\S$~\ref{sec:bands} while
the contribution of point sources to the GeV diffuse background
is determined in $\S$~\ref{sec:edb}. $\S$~\ref{sec:discussion}
discusses and summarizes our findings.
Since the final goal of this work is deriving the contribution of
sources to the EGB, we will only use physical quantities (i.e. source flux
and photon index) averaged over the time (11 months) included in the
analysis for the First {\it Fermi}-LAT catalog \citep[1FGL,][]{cat1}.
\section{Terminology}
\label{sec:term}
Throughout this paper we use a few terms which might not be familiar
to the reader. In this section meanings of the most often
used are clarified.
\begin{itemize}
\item {\it spectral bias} (or photon index bias): the
selection effect which allows {\it Fermi}-LAT to detect spectrally
hard sources at fluxes generally fainter than those of soft sources.
\item {\it flux-limited} sample: refers to a sample which is
selected uniformly, according solely to the source flux. If the
flux limit is chosen to be bright enough (as is the case in this paper),
then the selection effects affecting any other property
of the sample (e.g. the source spectrum) are negligible.
This is a truly uniformly selected sample.
\item {\it diffuse emission from unresolved point sources}:
represents a measurement of the integrated emission from sources
which have not been detected by {\it Fermi}.
As will be shown in the next sections, for each source detected at low
fluxes, there is a large number of sources which have not been detected because
of selection effects (e.g. the local background was too large
or the photon index was too soft, or a combination of both).
The diffuse emission from {\it unresolved} point
sources (computed in this work)
addresses the contribution of
all those sources which have not been detected because of these
selection effects,
but which have a flux formally larger than that of the faintest {\it detected}
source.
\end{itemize}
\section{Average Spectral Properties}
\label{sec:spec}
\subsection{Intrinsic Photon index distributions}
\label{sec:photon}
As shown already in \cite[][but see also Fig.~\ref{fig:idx_f}]{lat_lbas},
at faint fluxes the LAT detects hard-spectrum sources more easily
than sources with a soft spectrum. Sources with a photon index
(i.e. the exponent of the power-law fit to the source photon spectrum)
of 1.5 can be detected down to fluxes which are a factor $>20$ fainter
than those at which a source with a photon index of 3.0 can be detected
\citep[see][for details]{agn_cat}. Thus, given this
strong selection effect,
the intrinsic photon index distribution is necessarily different
from the observed one.
An estimate of the intrinsic photon index distribution can be
obtained by studying the sample above
F$_{100}\approx 7\times 10^{-8}$\,ph cm$^{-2}$ s$^{-1}$ and
$|b|\geq10^{\circ}$ (see right panel of Fig.~\ref{fig:idx_f}).
Indeed, above this flux limit the LAT detects all sources irrespective
of their photon index or position in the high-latitude sky.
Above this limit the LAT detects 135 sources.
Their photon index distribution, reported in Fig.~\ref{fig:idx_f},
is compatible with a Gaussian
distribution with mean of 2.40$\pm0.02$ and dispersion of 0.24$\pm0.02$.
These values differ from the mean of 2.23$\pm0.01$ and dispersion
of 0.33$\pm0.01$ derived using the entire $|b|\geq10^{\circ}$ sample.
Similarly the intrinsic photon-index distributions of FSRQs and BL Lacs
are different from the observed distributions. In both cases the {\it observed} average photon index is harder than the intrinsic average value.
The results are summarized in Tab.~\ref{tab:index}.
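The estimate above can be reproduced with a simple maximum-likelihood Gaussian fit to the photon indices of the {\it flux-limited} subsample; the sketch below assumes hypothetical input arrays (\texttt{flux}, \texttt{index}) for the $|b|\geq10^{\circ}$ sources, e.g. read from the 1FGL catalog.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

# hypothetical inputs: F100 fluxes [ph cm^-2 s^-1] and photon indices
flux  = np.loadtxt("f100.txt")     # placeholder file names
index = np.loadtxt("index.txt")

sel = flux >= 7.0e-8               # flux-limited sample: negligible bias
mu, sigma = norm.fit(index[sel])   # ML Gaussian mean and dispersion
# expected to be close to 2.40 and 0.24 for the full sample
\end{verbatim}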
\begin{figure*}[ht!]
\begin{center}
\begin{tabular}{cc}
\includegraphics[scale=0.4]{f1a.eps}
\includegraphics[scale=0.4]{f1b.eps}\\
\end{tabular}
\end{center}
\caption{
{\bf Left Panel:} Flux--photon index plane for all the $|b|\geq10^{\circ}$
sources with TS$\geq25$. The dashed line is the flux limit
as a function of photon index
reported in \cite{agn_cat}, while the solid line represents the limiting
flux above which the spectral selection effects become negligible.
{\bf Right Panel:}
Photon index distribution of all sources for
F$_{100}\geq7\times 10^{-8}$\,ph cm$^{-2}$ s$^{-1}$. Above
this limit the LAT selection effect towards hard sources becomes
negligible.
}
\label{fig:idx_f}
\end{figure*}
\input{tab1}
\subsection{Stacking Analysis}
\label{sec:stacking}
Another way to determine the average spectral properties is by stacking
source spectra together. This is particularly simple since
\cite{cat1} reports the source flux in five different
energy bands. We thus performed a stacking analysis
of those sources with F$_{100}\geq7\times 10^{-8}$\,ph cm$^{-2}$ s$^{-1}$,
TS$\geq25$, and $|b|\geq$10$^{\circ}$. For each energy band the average
flux is computed as the weighted average of all source fluxes in that
band using the inverse of the flux variance as a weight.
The average spectrum is shown in Fig.~\ref{fig:stack}. A
power law model gives a satisfactory fit to the data (i.e.
$\chi^2/dof\approx 1$), yielding
a photon index of 2.41$\pm0.07$ in agreement with the results
of the previous section.
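The band-by-band weighted average used here can be written compactly; a sketch with invented fluxes and errors for a single band is:
\begin{verbatim}
import numpy as np

def stack_band(flux, flux_err):
    # Inverse-variance weighted average flux in one energy band.
    w = 1.0 / flux_err**2
    mean = np.sum(w * flux) / np.sum(w)
    err  = 1.0 / np.sqrt(np.sum(w))
    return mean, err

# hypothetical fluxes of three sources in one band [ph cm^-2 s^-1]
flux = np.array([2.1e-8, 3.4e-8, 1.2e-8])
err  = np.array([0.5e-8, 0.8e-8, 0.4e-8])
mean_flux, mean_err = stack_band(flux, err)
\end{verbatim}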
We repeated the same exercise separately for sources identified as
FSRQs and BL Lacs in the {\it flux-limited} sample.
Both classes have an average spectrum which is compatible
with a single power law over the whole energy band.
FSRQs are characterized by an index of 2.45$\pm0.03$ while BL Lac objects have
an average index of 2.23$\pm0.03$.
\begin{figure}[h!]
\begin{centering}
\includegraphics[scale=0.6]{f2.eps}
\caption{Stacked spectrum of sources in the {\it flux-limited} sample.
The dashed line is the best power law fit with a slope of 2.41$\pm0.07$.
\label{fig:stack}}
\end{centering}
\end{figure}
\begin{figure*}[ht!]
\begin{center}
\begin{tabular}{cc}
\includegraphics[scale=0.4]{f3a.eps}
\includegraphics[scale=0.4]{f3b.eps}\\
\end{tabular}
\end{center}
\caption{
Stacked spectrum of FSRQs (left) and BL Lac objects (right) in the {\it Fermi}-LAT
{\it flux-limited} sample.
}
\label{fig:stack_class}
\end{figure*}
\section{Monte Carlo Simulations}
\label{sec:sim}
In order to estimate the LAT sky coverage robustly
we performed detailed Monte Carlo simulations.
The scheme of the simulation procedure is an improved version
of what has already been applied in \cite{lat_lbas}.
We performed 18 end-to-end simulations of the LAT sky which
resemble as closely as possible the observed one.
The tool {\it gtobssim}\footnote{The list of science tools for the analysis
of {\it Fermi} data is accessible at http://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/overview.html.} has been used for this purpose.
For each simulation we modeled the Galactic and isotropic diffuse backgrounds
using models (e.g. gll\_iem\_v02.fit)
currently recommended by the LAT team.
An isotropic population of point-like sources
was added to each simulated observation.
The coordinates of each source were randomly drawn in order
to produce an isotropic distribution on the sky. Source fluxes were
randomly drawn from a standard log $N$--log $S$ distribution with parameters
similar to the one observed by LAT (see next section). Even though the method
we adopt to derive the survey sensitivity does not depend on the
normalization or the slope of the input log $N$--log $S$, using the real
distribution allows simulated observations to be produced that closely
resemble the sky as observed with the LAT.
The photon index of each source was also drawn from a Gaussian
distribution with mean of 2.40 and 1\,$\sigma$ width of 0.28. As noted in
the previous section, this distribution represents well the intrinsic
(not the observed one) distribution of photon indices. The adopted dispersion
is slightly larger than what was found in the previous section and it
is derived from the analysis of the entire sample (see $\S$~\ref{sec:logn_2d}).
In this framework we neglect any possible dependence of the photon
index distribution on flux.
Also we remark that the approach used here to derive the source count
distribution depends very weakly on the assumptions (e.g.
the log $N$--log $S$ used) made in the simulations.
More than 45000 randomly distributed sources have been generated for each
realization of the simulations. Detection follows (albeit in a simpler
way) the scheme used in \cite{cat1}.
This scheme adopts three energy bands for source detection.
The first band includes all {\it front}-converting\footnotemark{} and
{\it back}-converting photons with
energies larger than 200\,MeV and 400\,MeV, respectively.
\footnotetext{Photons pair-converting in the top 12 layers of the tracker
are classified as {\it front}-converting photons, and as {\it back}-converting otherwise. }
The second band starts at 1\,GeV for {\it front} photons and at 2\,GeV
for {\it back} photons. The high-energy band starts at 5\,GeV
for {\it front} photons and at 10\,GeV for {\it back} photons.
The choice of combining {\it front} and {\it back} events
with different energies
is motivated by the fact that {\it front} events have
a better point spread function (PSF) than {\it back} ones.
The two PSFs are similar when the energy of {\it back}-converting
photons is approximately twice that of {\it front}-converting ones.
The image pixel size changes with the energy band
and is 0.1, 0.2 and 0.3 degrees for the low, medium and high-energy
bands, respectively. The final list of {\it candidate} sources
is obtained by starting the detection in the highest energy band
and adding all those sources which, detected at lower energies,
have positions not consistent with those detected at higher energies.
The detection step uses {\it pgwave} for determining the position
of the excesses and {\it pointfit} for refining the source position.
{\it Pgwave} \citep{ciprini07}
is a tool which uses several approaches (e.g. wavelets,
thresholding, image denoising and a sliding cell algorithm) to
find source candidates while {\it pointfit} \citep[e.g.][]{burnett09}
employs a simplified binned likelihood algorithm
to optimize the source position.
All the source candidates found at this stage are then fed to the
Maximum Likelihood (ML) algorithm {\it gtlike} to determine the
significance and the spectral parameters. In this step all sources'
spectra are modeled as single power laws.
On average, for each simulation only $\sim$1000 sources are detected (out
of the 45000 simulated ones)
above a TS\footnote{The test statistic (TS) is defined as
TS$=-2({\rm ln} L_0 - {\rm ln} L_1)$, where $L_0$ and $L_1$
are the likelihoods of the background (null hypothesis) and
the hypothesis being tested (e.g. source plus background).
According to \cite{wilks38}, the significance of a detection
is approximately $n_{\sigma}=\sqrt{\rm TS}$ \citep[see also][]{ajello08a}.}
of 25 and this is found to be in good agreement with the real data.
\subsection{Performance of the detection algorithm on real data}
\label{sec:cat}
In order to test the reliability of our detection pipeline
we applied it to the real 1-year dataset. Our aim was to cross-check
our results against those reported in \cite{cat1}.
The flux above 100\,MeV, computed from the power-law
fit to the 100\,MeV--100\,GeV data, is not reported in \cite{cat1}, but
it can be readily obtained using the following expression:
\begin{equation}
F_{100}=E_{piv}\times F_{density}\times \left( \frac{100}{E_{piv}}\right)^{1-\Gamma} \times |1-\Gamma|^{-1},
\end{equation}
where $F_{100}$ is the 100\,MeV--100\,GeV photon flux, $\Gamma$ is
the absolute value of the photon index,
$E_{piv}$ is the pivot energy and $F_{density}$ is the
flux density at the pivot energy \citep[see][for details]{cat1}.
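As a worked illustration, this conversion can be evaluated numerically as
in the following minimal sketch (the variable names and example values are
ours, not from \cite{cat1}):
\begin{verbatim}
def f100(f_density, e_piv, gamma):
    # Photon flux above 100 MeV for a power-law source, from the
    # catalog flux density at the pivot energy.
    # f_density: flux density at e_piv [ph cm^-2 s^-1 MeV^-1]
    # e_piv: pivot energy [MeV]; gamma: photon index (absolute value)
    return (e_piv * f_density * (100.0 / e_piv)**(1.0 - gamma)
            / abs(1.0 - gamma))

# e.g. f_density=1e-11 at e_piv=1000 MeV with gamma=2.4
# gives F100 ~ 1.8e-7 ph cm^-2 s^-1
\end{verbatim}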
Fig.~\ref{fig:comparison} shows the comparison of both
the fluxes (above 100\,MeV) and the photon indices for the
sources detected in both pipelines. It is clear that
the fluxes and photon indices derived in this analysis are
reliable; for each source they are consistent with those
in \cite{cat1} within the reported errors.
The number of sources detected in the simplified pipeline is smaller than
found by \cite{cat1}. Above a TS of 50 and $|b|\geq20^{\circ}$
our approach detects 425 sources while the 1FGL catalog has
497. Indeed, our aim is not to produce a detection
algorithm which is as sensitive as the one used in \cite{cat1}, but
a detection algorithm which is proven to be
reliable and can be applied consistently
to both real data and simulations. This allows us to assess properly
all selection effects important for the LAT survey and its analysis.
On this note we remark that all
the 425 sources detected by our pipeline are also detected by \cite{cat1}.
For this reason we limit the studies presented in this work to the
subsample of sources which is detected by our pipeline.
The details of this sample of sources are reported in Tab.~\ref{tab:sample}.
The associations are the ones reported in \cite{agn_cat} and \cite{cat1}.
In our sample 161 sources are classified as FSRQs and 163 as BL Lac objects
while only 4 as blazars of uncertain classification. The number
of sources which are unassociated is 56, thus the identification
incompleteness of this sample is $\sim$13\%.
\begin{figure*}[ht!]
\begin{center}
\begin{tabular}{cc}
\includegraphics[scale=0.4]{f4a.eps}
\includegraphics[scale=0.4]{f4b.eps}\\
\end{tabular}
\end{center}
\caption{Performance of the detection pipeline used
in this work with respect to the detection pipeline used in \cite{cat1}.
The left panel shows the comparison of the reconstructed
$\gamma$-ray fluxes while the right panel shows the comparison
of the photon indices. In both cases the solid line shows the
locus of points for which the quantity reported on the
y-axis equals the one on the x-axis.
}
\label{fig:comparison}
\end{figure*}
\input{tab2}
\subsection{Derivation of the Sky Coverage}
\label{sec:skycov}
In order to derive the sky coverage from simulations,
detected sources (output) need to be associated to the simulated
ones (input). We do this on a statistical basis using an estimator
which is defined for each set of input-output sources as:
\begin{equation}
R^2 = \left( \frac{||\bar{x}-\bar{x_0}||}{\sigma_{pos}} \right)^2 +
\left( \frac{S-S_0}{\sigma_S} \right)^2 +
\left( \frac{\Gamma-\Gamma_0}{\sigma_{\Gamma}} \right)^2
\end{equation}
where $\bar{x}$, $S$ and $\Gamma$ are the source coordinates, fluxes
and photon indices as determined from the ML step while
$\bar{x_0}$, $S_0$ and $\Gamma_0$ are the simulated (input) values.
The 1\,$\sigma$ errors on the position, flux and photon index
are $\sigma_{pos}$, $\sigma_S$ and $\sigma_{\Gamma}$ respectively.
We then flagged as the most likely associations
those pairs with the minimum value of R$^2$.
All pairs with an angular separation which is larger than the 4\,$\sigma$
error radius
are flagged as spurious and excised from
the following analysis. The empirical 5\,$\sigma$ error radius, as derived
from the real data, is shown as a function of source TS in
Fig.~\ref{fig:angsep}.
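A minimal sketch of this matching procedure in Python (our own
illustrative implementation, not the actual pipeline code; array keys
and names are assumptions):
\begin{verbatim}
import numpy as np

def associate(out, sim):
    # Match each detected (output) source to the simulated (input)
    # source minimizing R^2; `out` and `sim` are dicts of arrays with
    # keys 'x' (positions, shape [n,2]), 's' (fluxes), 'g' (indices);
    # `out` also carries the 1-sigma errors 'sx', 'ss', 'sg'.
    pairs = []
    for i in range(len(out['s'])):
        r2 = ((np.linalg.norm(out['x'][i] - sim['x'], axis=1)
               / out['sx'][i])**2
              + ((out['s'][i] - sim['s']) / out['ss'][i])**2
              + ((out['g'][i] - sim['g']) / out['sg'][i])**2)
        j = int(np.argmin(r2))
        # pairs farther apart than the 4-sigma error radius are spurious
        if np.linalg.norm(out['x'][i] - sim['x'][j]) <= 4 * out['sx'][i]:
            pairs.append((i, j))
    return pairs
\end{verbatim}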
As in \cite{hasinger93} and \cite{cappelluti07},
we define as {\it confused} those sources for which the ratio
$S/(S_0+3\sigma_S)$ (where $\sigma_S$ is the error on the output flux)
is larger than 1.5. We found that,
according to this criterion, $\sim$4\,\% of the
sources (detected for $|b|\geq10^{\circ}$) are confused in the first year survey.
The left panel of Fig.~\ref{fig:simulations} shows
the ratio of reconstructed to simulated source flux versus
the simulated source flux.
At medium to bright fluxes the distribution of the ratio is centered on unity
showing that there are no systematic errors in the flux measurement.
At low fluxes (in particular for F$_{100}<10^{-9}$\,ph cm$^{-2}$ s$^{-1}$)
the distribution is
somewhat biased toward values greater than unity.
This is produced by three effects:
1) source confusion, 2) Eddington bias \citep{eddington40} and
3) non converging Maximum Likelihood fits (see $\S$~\ref{sec:mlfit}
for details).
The Eddington bias arises from measurement errors of any
intrinsic source property (e.g. source flux). Given its nature,
it affects only sources close to the detection threshold.
Indeed, at the detection threshold the uncertainty in the reconstructed
fluxes makes sources with a measured flux slightly larger than
the real value more easily detectable in the survey than
those with a measured flux slightly lower than the real one.
This shifts the flux-ratio
distribution of Fig.~\ref{fig:simulations} systematically
to values larger than unity at low fluxes.
In any case, the effect of this bias is not relevant as it affects
less than 1\,\% of the entire population.
This uncertainty will be neglected as only sources
with F$_{100}\geq10^{-9}$\,ph cm$^{-2}$ s$^{-1}$ will be considered for the
analysis presented here.
Moreover, the right panel of Fig.~\ref{fig:simulations} shows that
the measured photon index agrees well with the simulated one.
In addition to assessing the reliability and biases of our source
detection procedure, the main aim of these simulations is to provide
a precise estimate of the completeness function of the {\it Fermi}/LAT
survey (known also as sky coverage). The one-dimensional
sky coverage can be derived for each bin of flux as the ratio
between the number of detected sources and the number of simulated sources.
The detection efficiency for the entire TS$\geq50$ and $|b|\geq20^{\circ}$
sample is reported in Fig.~\ref{fig:skycov}.
This plot shows that the LAT sensitivity
extends all the way to F$_{100}\sim10^{-10}$\,ph cm$^{-2}$ s$^{-1}$
although at those fluxes only the hardest sources can be detected.
The sample becomes complete at
F$_{100}\approx7$--$8\times 10^{-8}$\,ph cm$^{-2}$ s$^{-1}$.
Since the {\it intrinsic} distribution of
photon indices was used for these simulations (see $\S$~\ref{sec:photon}),
this sky coverage properly takes into account the bias towards
the detection of hard sources. This also means that this
sky coverage cannot be applied to other source samples with very
different photon index distributions.
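In its one-dimensional form this estimate reduces to a ratio of
histograms; a hedged sketch follows (the binning is illustrative, not the
one actually adopted):
\begin{verbatim}
import numpy as np

def detection_efficiency(flux_sim, flux_det, bins):
    # Efficiency per flux bin: detected / simulated counts.
    # flux_sim: true fluxes of all simulated sources
    # flux_det: true fluxes of the subset that was detected
    n_sim, _ = np.histogram(flux_sim, bins=bins)
    n_det, _ = np.histogram(flux_det, bins=bins)
    eff = np.where(n_sim > 0, n_det / np.maximum(n_sim, 1), 0.0)
    # binomial error from the Monte Carlo counting statistics
    err = np.sqrt(np.clip(eff * (1 - eff), 0, None)
                  / np.maximum(n_sim, 1))
    return eff, err

bins = np.logspace(-10, -6, 41)  # fluxes in ph cm^-2 s^-1
\end{verbatim}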
\begin{figure}[h!]
\begin{centering}
\includegraphics[scale=0.6]{f5.eps}
\caption{Angular separation of the real LAT sources from the most
probable associated counterpart as a function of TS. All sources
with $|b|\geq$10$^{\circ}$
and a probability of association larger than 0.5 were used
\citep[see ][for a definition of probability of association]{agn_cat}. The
solid line is the best fit for the mean offset of the angular separations
while the dashed line represents the observed 5\,$\sigma$ error radius
as a function of test statistics. Note that the 5\,$\sigma$ error radius
is weakly dependent on the level of probability of association chosen.}
\label{fig:angsep}
\end{centering}
\end{figure}
\begin{figure*}[ht!]
\begin{center}
\begin{tabular}{cc}
\includegraphics[scale=0.4]{f6a.eps}
\includegraphics[scale=0.4]{f6b.eps}\\
\end{tabular}
\end{center}
\caption{Left Panel: Reconstructed versus Simulated fluxes
for all sources with TS$\geq$50 and $|b|\geq$20$^{\circ}$.
For the analysis reported here only sources with
F$_{100}\geq10^{-9}$ ph cm$^{-2}$ s$^{-1}$ are considered.
Right Panel: Reconstructed versus Simulated photon indices for
all sources with TS$\geq$50 and $|b|\geq$20$^{\circ}$.
}
\label{fig:simulations}
\end{figure*}
\begin{figure}[h!]
\begin{centering}
\includegraphics[scale=0.6]{f7.eps}
\caption{Detection efficiency as a function of measured source flux for
$|b|\geq20^{\circ}$, TS$\geq50$ and for a sample of sources
with a mean photon index of 2.40 and dispersion of 0.28. The
error bars represent statistical uncertainties from the
counting statistics of our Monte Carlo simulations.}
\label{fig:skycov}
\end{centering}
\end{figure}
\section{Systematic Uncertainties}
\label{sec:syst}
\subsection{Non converging Maximum Likelihood fits}
\label{sec:mlfit}
A small number of sources detected by our pipeline have
unreliable spectral fits. Most of the time, these sources
have a reconstructed photon index which is very soft (e.g. $\sim$5.0)
and at the limit of the accepted range of values. As a consequence
their reconstructed flux overestimates the true flux by up to a factor of 1000
(see left panel of Fig.~\ref{fig:simulations}).
This is due to the fact
that the ML algorithm does not find the absolute minimum of the fitting
function in these cases.
Inspection of the
regions of interest (ROIs) of these objects shows that this tends
to happen either in regions very dense with sources
or close to the Galactic plane, where the diffuse emission is the brightest.
The best approach in this case would be to adopt an iterative procedure
for deriving the best-fitting parameters which starts by optimizing
the most intense components (e.g. diffuse emissions and bright sources)
and then move to the fainter ones. This procedure is correctly implemented
in \cite{cat1}. Its application to our problem would make the processing
time of our simulations very long; we note, however, that
the resulting systematic uncertainty is small.
Indeed, the fractions of sources with unreliable spectral parameters
for TS$\geq25$ are 2.3\,\% and 2.0\,\% for $|b|\geq15^{\circ}$ and
$|b|\geq20^{\circ}$ respectively.
These fractions decrease to 1.2\,\% and 0.9\,\% adopting TS$\geq50$.
To limit the systematic uncertainties in this analysis,
we will thus select only those sources
which are detected above TS$\geq50$ and $|b|\geq20^{\circ}$.
It will also be shown that results do not change
if the sample is enlarged to include all sources with $|b|\geq15^{\circ}$.
\subsection{Variability}
It is well known that blazars are inherently variable objects with variability
in flux of up to a factor of 10 or more. Throughout this work
only average quantities (i.e. mean flux and mean photon index) are used.
This is appropriate in the context of determining the mean energy
release in the Universe of each source. Adopting the peak flux (i.e.
the brightest flux displayed by each single source) would produce the net
effect of overestimating the true intrinsic source density at any flux
\citep[see the examples in][]{reimer01} with the result of overestimating
the contribution of sources to the diffuse background.
It is not straightforward to determine how blazar variability affects
the analysis presented here.
On sufficiently long timescales (such as the one
spanned by this analysis), the mean flux is a good estimator of
the mean energy release of a source. This is no longer true on
short timescales (e.g. $\sim$1\,month),
since the mean flux then corresponds to the source
flux at the moment of the observation. The continuous scanning
of the $\gamma$-ray sky performed by
{\it Fermi} allows long-term variability to be determined
with unprecedented accuracy. As shown already in \cite{lat_lbas} the
picture arising from {\it Fermi} is rather different from the one derived by
EGRET \citep{hartman99}. Indeed, the peak-to-mean flux ratio
for {\it Fermi} sources is considerably smaller than for EGRET sources.
For most
of the {\it Fermi} sources this is just a factor of 2, as is confirmed
in the 1\,year sample \citep[see Fig.10 in][]{agn_cat}. This excludes the
possibility that most of the sources are detected because of a single
outburst which happened during the 11\,months of observation and
are undetected for the remaining time. Moreover, as shown in
\cite{sed} there is little or no variation of the photon index
with flux. We thus believe that no large systematic uncertainty
arises from the use of average physical quantities, and the
total systematic uncertainty (see next section) will be slightly overestimated
to accommodate possible effects of variability.
\subsection{Non power law spectra}
\label{sec:pow}
It is well known that the spectra of blazars are complex and often show
curvature when analyzed over a large waveband. In this case the
approximation of their spectrum with a simple power law (in the
0.1--100\,GeV band) might provide a poor estimate of their
true flux. To estimate the uncertainty introduced by this assumption
we plotted, for the extragalactic sample used here (i.e. TS$\geq$50
and $|b|\geq$20$^{\circ}$), the source flux as derived
from the power-law fit to the whole band versus the source flux
as derived from the sum of the fluxes over the 5 energy bands
reported in \cite{cat1}. This comparison is reported
in Fig.~\ref{fig:fluxcomp}. From the figure it is apparent that
the flux (F$_{100}$) derived from a power-law fit to the
whole band overestimates slightly the true source flux.
Analysis of the ratio between the power-law flux and flux derived in
5 energy bands, shows that on average the F$_{100}$ flux overestimates
the true source flux by $\sim$8\,\%. At very bright fluxes (e.g.
F$_{100}\geq10^{-7}$\,ph cm$^{-2}$ s$^{-1}$) the overestimate reduces
to $\sim$5\,\%. For the analysis presented here we will thus assume
that the total systematic uncertainty connected to the use of fluxes
computed with a power-law fit over the broad 0.1--100\,GeV band is 8\,\%.
Considering also the uncertainties of the previous sections,
we derive that the total systematic uncertainty for the sample
used here (TS$\geq$50 and $|b|\geq$20$^{\circ}$) is $\sim$10\,\%.
Since this uncertainty affects mostly the determination of the source flux
it will be propagated by shifting in flux
the sky coverage of Fig.~\ref{fig:skycov}
by $\pm10$\,\%.
\begin{figure}[h!]
\begin{centering}
\includegraphics[scale=0.6]{f8.eps}
\caption{Source flux estimated with a power-law fit to the 0.1--100\,GeV
band versus the sum of the source fluxes derived in 5 contiguous energy bands
\cite[see][for details]{cat1}. The solid line is the F$_{bands}$=F$_{100}$
relation. The spread at low fluxes arises from the difficulties of
estimating the source flux in narrow energy bands.
}
\label{fig:fluxcomp}
\end{centering}
\end{figure}
\section{Source Counts Distributions}
\label{sec:logn}
The source counts distribution, commonly referred to as log $N$--log $S$
or size distribution, is the cumulative number of sources $N(>S)$
detected above a given flux $S$. In this section we apply
several methods to derive the source count distribution of
{\it Fermi}/LAT sources. We also remark that the catalog used
for this analysis is the one described in $\S$~\ref{sec:cat} (see also
Tab.~\ref{tab:sample}).
\subsection{Standard Approach}
\label{sec:logn_1d}
A standard way to derive the (differential) log $N$--log $S$ is
through the following expression:
\begin{equation}
\frac{dN}{dS} = \frac{1}{\Delta\ S}\
\sum_{i=1}^{N_{\Delta S}} \frac{1}{\Omega_i}
\end{equation}
where $N_{\Delta S}$ is the total number of detected sources with fluxes
in the $\Delta$S interval, and $\Omega_i$
is the solid angle associated
with the flux of the $i^{\rm th}$ source (i.e.,
the detection efficiency multiplied by the survey solid angle).
We also note that formally $N$ is an areal density and should
be expressed as $dN/d\Omega$. However for simplicity of notation
the areal density will, throughout this paper, be expressed as $N$.
For the $|b|\geq20^{\circ}$ sample the geometric solid angle
of the survey is 27143.6\,deg$^{2}$.
In each flux bin, the final uncertainty is obtained by summing in quadrature
the error on the number of sources and the systematic uncertainties
described in $\S$~\ref{sec:syst}.
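In practice this estimator is a weighted histogram; a minimal sketch
under the definitions above (our own illustrative implementation):
\begin{verbatim}
import numpy as np

def dn_ds(fluxes, omega, bins):
    # Differential counts: sum of 1/Omega_i in each flux bin,
    # divided by the bin width Delta S.
    # omega: solid angle associated with each source's flux
    #        (detection efficiency times geometric solid angle)
    counts, edges = np.histogram(fluxes, bins=bins,
                                 weights=1.0 / omega)
    return counts / np.diff(edges), edges
\end{verbatim}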
Both the differential and the cumulative version of the source
count distributions are reported in Fig.~\ref{fig:logn_1d}.
In order to parametrize the source count distribution
we perform a $\chi^{2}$ fit to the differential data using
a broken power-law model of the type:
\begin{eqnarray}
\label{eq:dblpow}
\frac{dN}{dS} & = & A S^{-\beta_1} \ \ \ \ \ \ \ \ \ \ S \geq S_b \nonumber \\
& = & A S_b^{-\beta_1+\beta_2}S^{-\beta_2} \ \ S < S_b
\end{eqnarray}
where $A$ is the normalization and $S_b$ is the flux break.
The best-fit parameters are reported in Tab.~\ref{tab:logn_1d}.
The log $N$--log $S$ distribution
of GeV sources shows a strong break ($\Delta \beta=\beta_1 -\beta_2\approx1.0$)
at F$_{100} =6.97(\pm0.13)\times10^{-8}$\,ph cm$^{-2}$ s$^{-1}$. At fluxes
brighter than the break flux, the source count distribution is
consistent with Euclidean ($\beta_1 = 2.5$)
while it is not at fainter fluxes.
As Tab.~\ref{tab:logn_1d} shows,
these results do not change if
the sample under study is enlarged to $|b|\geq$15$^{\circ}$.
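A hedged sketch of the corresponding $\chi^{2}$ fit to the differential
counts (the initial guesses and the use of {\tt scipy} are our
illustrative choices, not the paper's tooling):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def broken_pl(s, a, b1, b2, s_b):
    # dN/dS = A S^-beta1 above the break S_b and
    # A S_b^(beta2-beta1) S^-beta2 below it
    return np.where(s >= s_b, a * s**(-b1),
                    a * s_b**(-b1 + b2) * s**(-b2))

# given bin centers s_c, differential counts dnds and errors err:
# popt, pcov = curve_fit(broken_pl, s_c, dnds, sigma=err,
#                        p0=[1e-13, 2.5, 1.5, 7e-8])
\end{verbatim}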
\begin{figure*}[ht!]
\begin{center}
\begin{tabular}{cc}
\includegraphics[scale=0.4]{f9a.eps}
\includegraphics[scale=0.4]{f9b.eps}\\
\end{tabular}
\end{center}
\caption{Differential (left) and cumulative (right)
log $N$--log $S$ for all sources with TS$\geq50$ and $|b|\geq20^{\circ}$.
The dashed line is the best-fit broken power law model as reported in
the text.}
\label{fig:logn_1d}
\end{figure*}
\input{tab3}
\subsection{A Global Fit}
\label{sec:logn_2d}
Because of the spectral selection effect discussed in $\S$~\ref{sec:photon},
the sky coverage derived in $\S$~\ref{sec:skycov} can be used only
with samples which have a distribution of the photon indices similar
to the one used in the simulations (i.e. a Gaussian with mean and dispersion
of 2.40 and 0.28). Here we overcome this limitation by implementing,
for the first time, a novel and
more formal analysis to derive the source count distribution.
We aim at describing the properties of the sample in terms of
a distribution function of the following kind:
\begin{equation}
\frac{dN}{dSd\Gamma} = f(S) \cdot g(\Gamma)
\label{eq:dn2}
\end{equation}
where $f(S)$ is the intrinsic flux distribution of sources
and $g(\Gamma)$ is the intrinsic distribution of the
photon indices.
In this analysis, $f(S)$ is modeled as a double power-law function
as in Eq.~\ref{eq:dblpow}.
The index distribution $g(\Gamma)$ is modeled as a Gaussian function:
\begin{equation}
g(\Gamma) = e^{-\frac{ (\Gamma-\mu)^2}{2\sigma^2}}
\end{equation}
where $\mu$ and $\sigma$ are respectively the mean and the dispersion
of the Gaussian distribution. As is clear from Eq.~\ref{eq:dn2},
we make the hypothesis that the $dN/dSd\Gamma$ function
factorizes into two separate distributions in flux and photon index.
This is the simplest assumption that can be made and, as will be
shown in the next sections, it provides a good description of the data.
Moreover, we emphasize, as already done in $\S$~\ref{sec:sim}, that
this analysis implicitly assumes that the photon index distribution
does not change with flux. This will be discussed in more detail
in the next sections.
This function is then fitted to all datapoints using a Maximum Likelihood
approach as described in Sec.~3.2 of \cite{ajello09b}.
In this method, the Likelihood function can be defined as:
\begin{equation}
L = e^{-N_{\rm exp}} \prod_{i=1}^{N_{\rm obs}}\lambda (S_i,\Gamma_i)
\end{equation}
with $\lambda (S,\Gamma)$ defined as:
\begin{equation}
\lambda (S,\Gamma) = \frac{dN}{dSd\Gamma}\Omega(S,\Gamma)
\end{equation}
where $\Omega(S,\Gamma)$ is the photon index dependent sky coverage
and $N_{\rm obs}$ is the number of observed sources.
This is generated from the same Monte Carlo simulation of $\S$~\ref{sec:sim}
with the difference that this time the detection probability is computed
for each bin of the photon-index--flux plane as the ratio between
detected and simulated sources (in that bin). This produces a
sky coverage which is function of both the source flux and photon index.
The {\it expected} number of sources $N_{exp}$ can be computed
as:
\begin{equation}
N_{exp}=\int d\Gamma \int dS \lambda (S,\Gamma)
\end{equation}
The maximum likelihood parameters are obtained by minimizing the function
$C(=-2 {\rm ln} L)$:
\begin{equation}
C = -2 \sum_{i=1}^{N_{\rm obs}} {\rm ln}\, \lambda(S_i,\Gamma_i)
+ 2N_{\rm exp}
\end{equation}
while their associated 1\,$\sigma$ errors are computed by varying
the parameter of interest, while the others are allowed to float,
until an increment of $\Delta C$=1 is achieved. This gives
an estimate of the 68\,\% confidence region for the parameter of interest
\citep{avni76}.
Once the $dN/dSd\Gamma$ has been determined, the standard differential
source count distribution can be readily derived as:
\begin{equation}
\frac{dN}{dS} = \int_{-\infty}^{\infty} d\Gamma \frac{dN}{dS d\Gamma}
\end{equation}
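A compact sketch of this fit (our illustrative implementation;
{\tt cov\_fn} stands for the photon-index-dependent sky coverage derived
from the simulations, passed as a callable):
\begin{verbatim}
import numpy as np

def cost(params, s_obs, g_obs, cov_fn, s_grid, g_grid):
    # C = -2 ln L for the extended ML fit of dN/dS/dGamma.
    a, b1, b2, sb, mu, sig = params
    def dn(s, g):
        f = np.where(s >= sb, a * s**(-b1),
                     a * sb**(-b1 + b2) * s**(-b2))
        return f * np.exp(-(g - mu)**2 / (2 * sig**2))
    lam = dn(s_obs, g_obs) * cov_fn(s_obs, g_obs)
    ss, gg = np.meshgrid(s_grid, g_grid, indexing='ij')
    # expected number of detected sources (double integral)
    n_exp = np.trapz(np.trapz(dn(ss, gg) * cov_fn(ss, gg),
                              g_grid, axis=1), s_grid)
    return -2.0 * np.sum(np.log(lam)) + 2.0 * n_exp
\end{verbatim}
The best-fit parameters are then found by minimizing this function
(e.g. with {\tt scipy.optimize.minimize}), and the $\Delta C=1$
prescription described above gives the 1\,$\sigma$ errors.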
\subsection{The Total Sample of Point Sources}
The results of the best-fit model for the entire sample of sources
(for TS$\geq$50 and $|b|\geq$20$^{\circ}$) are reported in
Tab.~\ref{tab:logn2D}.
Fig.~\ref{fig:total_distr} shows how well the best-fit model
reproduces the observed index and flux distributions.
The $\chi^2$ test yields that the probabilities that the
real distribution and the model line come from the same
parent population are 0.98 and 0.97 for the photon index and flux
distributions, respectively.
In Fig.~\ref{fig:lognboth} the source count distribution obtained
here is compared to the one derived using the standard approach
of $\S$~\ref{sec:logn_1d}; the good agreement is apparent.
We also derived the source count distributions of all objects
which are classified as blazars (or candidate blazars)
in our sample. This includes 352 out of the 425 objects reported
in Tab.~\ref{tab:sample}. The number of sources
that lack association is 56 and thus the incompleteness
of the blazar sample is 56/425$\approx 13$\,\%. A reasonable and simple
assumption is that the 56 unassociated sources are distributed among the
different source classes in a similar way as the associated portion
of the sample (see Tab.~\ref{tab:sample}).
This means that $\sim$46 out of the 56 unassociated sources are likely
to be blazars.
As can be seen both from the best-fit
parameters of Tab.~\ref{tab:logn2D} and from Fig.~\ref{fig:lognblaz},
there is very little difference between the source count distributions
of the entire sample and the one of blazars. This confirms on a
statistical basis that
most of the 56 sources without association are likely to be blazars.
It is also clear from Fig.~\ref{fig:total_distr}
that the model (i.e. Eq.~\ref{eq:dn2})
represents a satisfactory description of the data.
This also implies that the {\it intrinsic} photon index distribution of blazars
is compatible with a Gaussian distribution that does not change
(at least dramatically) with source flux in the range of fluxes spanned
by this analysis.
A change in the average spectral properties of blazars with flux
(and/or redshift) might be caused
by the different cosmological evolutions of FSRQs and BL Lacs
or by the spectral evolution of the two source classes with redshift.
While this is reasonable to expect, the effect is not observed
in the current dataset. The luminosity function, which is left to
a future paper, will allow us to investigate this effect
in greater detail.
\begin{figure*}[ht!]
\begin{center}
\begin{tabular}{cc}
\includegraphics[scale=0.4]{f10a.eps}
\includegraphics[scale=0.4]{f10b.eps}\\
\end{tabular}
\end{center}
\caption{Distribution of photon indices (left) and fluxes (right)
for the TS$\geq$50 and $|b|\geq$20$^{\circ}$ sources. The dashed
line is the best fit $dN/dSd\Gamma$ model. Using the $\chi^{2}$ test
the probabilities that the data and the model line come from the
same parent population are 0.98 and 0.97 for the photon index
and flux distribution respectively.}
\label{fig:total_distr}
\end{figure*}
\begin{figure}[h!]
\begin{centering}
\includegraphics[scale=0.6]{f11.eps}
\caption{Comparison of log $N$--log $S$ of the whole sample of
(TS$\geq$50 and $|b|\geq$20$^{\circ}$) sources
built with the
standard method (green datapoints, see $\S$~\ref{sec:logn_1d})
and the global fit method (red datapoints,see $\S$~\ref{sec:logn_2d}).
}
\label{fig:lognboth}
\end{centering}
\end{figure}
\begin{figure}[h!]
\begin{centering}
\includegraphics[scale=0.6]{f12.eps}
\caption{Comparison between log $N$--log $S$ distributions
of the whole sample of sources (solid circles) and blazars (open circles).
The solid line are the respective best-fit models as reported in
Tab.~\ref{tab:logn2D}.
}
\label{fig:lognblaz}
\end{centering}
\end{figure}
\input{tab4}
\subsection{FSRQs}
\label{sec:fsrq}
For the classification of blazars as flat spectrum
radio quasars (FSRQs) or BL Lacertae objects (BL Lacs)
we use the same criteria adopted in \cite{lat_lbas}.
This classification relies on the conventional definition of BL Lac objects outlined in \cite{stocke91}, \cite{urry95}, and \cite{marcha96}
in which the equivalent width of the strongest optical emission line is
$<$5\,\AA\, and the optical spectrum shows a Ca II H/K break ratio C$<$0.4.
It is important to determine correctly the incompleteness
of the sample when dealing with a sub-class of objects.
Indeed, in the sample of Tab.~\ref{tab:sample}, 56 objects
have no associations and 28 have either an uncertain or
a tentative association with blazars. Thus the total incompleteness is
84/425 $\approx$ 19\,\% when we refer to either FSRQs or BL Lac objects
separately. For clarity, the incompleteness levels of all the samples
used here are also reported in Tab.~\ref{tab:logn2D}.
Since we did not perform dedicated simulations for
the FSRQ and the BL Lac classes, their source count distributions can
be derived only with the method described in $\S$~\ref{sec:logn_2d}.
The best fit to the source counts (reported in
Tab.~\ref{tab:logn2D}) is a double power-law
model with a bright-end slope of 2.41$\pm0.16$ and faint-end slope
0.70$\pm0.30$. The log $N$--log $S$ relationship shows a break around
F$_{100}=$6.12($\pm1.30)\times10^{-8}$\,ph cm$^{-2}$ s$^{-1}$.
The intrinsic distribution of the photon indices of FSRQs is found
to be compatible with a Gaussian distribution with mean and dispersion
of 2.48$\pm0.02$ and 0.18$\pm0.01$, in agreement with what was found
previously in Tab.~\ref{tab:index}. The faint-end slope is noticeably
flatter and this might be due to the fact that many of the unassociated
sources below the break might be FSRQs.
Fig.~\ref{fig:logn_fsrq} shows how the best-fit model reproduces
the observed photon index and flux distributions.
The $\chi^2$ test indicates that the probability that the
real distribution and the model line come from the same
parent population is $\geq0.99$ for both
the photon index and the flux distributions.
The left panel shows that the photon index distribution is not reproduced
perfectly. This might be due to incompleteness or to the
fact that the intrinsic distribution of photon indices is actually
not Gaussian. However, a Kolmogorov-Smirnov (KS) test between the
predicted and the observed distribution yields that both
distributions have a probability of $\sim96$\,\% of being
drawn from the same parent population. Thus the current dataset
is compatible with the hypothesis that the intrinsic
index distribution is Gaussian.
The log $N$--log $S$ of FSRQs is shown in Fig.~\ref{fig:blazar_all}.
\begin{figure*}[ht!]
\begin{center}
\begin{tabular}{cc}
\includegraphics[scale=0.4]{f13a.eps}
\includegraphics[scale=0.4]{f13b.eps}\\
\end{tabular}
\end{center}
\caption{Distribution of photon indices (left) and fluxes (right)
for the TS$\geq$50 and $|b|\geq$20$^{\circ}$ sources associated
with FSRQs.
}
\label{fig:logn_fsrq}
\end{figure*}
\begin{figure*}[ht!]
\begin{center}
\begin{tabular}{cc}
\includegraphics[scale=0.43]{f14a.eps}
\includegraphics[scale=0.43]{f14b.eps}\\
\end{tabular}
\end{center}
\caption{
Cumulative (left) and differential (right) source count distribution
of {\it Fermi} blazars and the sub-samples reported in Tab.~\ref{tab:logn2D}.
Given the selection effect towards spectrally hard
sources, BL Lac objects are detected to fluxes fainter than FSRQs. The
flattening at low fluxes of the FSRQs log$N$--log$S$ is probably due to
incompleteness (see text for details). The ``All Blazars'' class also includes
all those sources which are classified as blazar candidates
(see Tab.~\ref{tab:sample} for details).
}
\label{fig:blazar_all}
\end{figure*}
\subsection{BL Lacs}
\label{sec:bllac}
The best-fit model of the source count distribution of the
161 BL Lac objects is again a broken power-law model.
The break is found to be at F$_{100}=$6.77$(\pm1.30)\times10^{-8}$\,ph cm$^{-2}$ s$^{-1}$, while the slopes below and above the break are 1.72$\pm0.14$ and
2.74$\pm0.30$ respectively.
The intrinsic photon index distribution is found
to be compatible with a Gaussian distribution with mean
and dispersion of 2.18$\pm0.02$ and 0.23$\pm0.01$ respectively.
These results are in good agreement with the one reported in
Tab.~\ref{tab:index}. The best-fit parameters to the source
counts distribution are reported in Tab.~\ref{tab:logn2D}.
Fig.~\ref{fig:logn_bllac} shows how the best-fit model reproduces
the observed photon index and flux distributions.
The $\chi^2$ test indicates that the probability that the
real distribution and the model line come from the same
parent population is $\geq0.99$ for both
the photon index and the flux distributions.
The log $N$--log $S$ of BL Lacs, compared to the one of FSRQs and blazars,
is shown in Fig.~\ref{fig:blazar_all}.
\begin{figure*}[ht!]
\begin{center}
\begin{tabular}{cc}
\includegraphics[scale=0.4]{f15a.eps}
\includegraphics[scale=0.4]{f15b.eps}\\
\end{tabular}
\end{center}
\caption{Distribution of photon indices (left) and fluxes (right)
for the TS$\geq$50 and $|b|\geq$20$^{\circ}$ sources associated
with BL Lacs. The dashed line is the best fit $dN/dSd\Gamma$ model.
}
\label{fig:logn_bllac}
\end{figure*}
\subsection{Unassociated Sources}
\label{sec:unids}
We also constructed the log $N$--log $S$ of the 56 unassociated
sources; it is reported in Fig.~\ref{fig:blazar_all}.
Their source count distribution
displays a very steep bright-end slope ($\beta_1$=3.16$\pm0.50$),
a break around $\sim$4.5$\times10^{-8}$\,ph cm$^{-2}$ s$^{-1}$ and
faint-end slope of 1.63$\pm0.24$. The intrinsic photon index
distribution is found to be compatible with a Gaussian distribution
with mean and dispersion of 2.29$\pm0.03$ and 0.20$\pm0.01$ respectively
(see Tab.~\ref{tab:logn2D} for details).
The extremely steep bright-end
slope is caused by the fact that most (but not all) of the
brightest sources have an association. Below the break, the log $N$--log $S$
behaves like that of blazars, with the difference that the
index distribution suggests that
most of the sources are probably BL Lac objects.
Indeed as can be seen in Fig.~\ref{fig:blazar_all} all the
sources with F$_{100}\leq4\times10^{-8}$\,ph cm$^{-2}$ s$^{-1}$ are identified
as BL Lac objects in our sample.
\subsection{Unfolding Analysis}
\label{unf}
Finally we employ a different approach to evaluate the log $N$--log $S$ distribution, based on a deconvolution (unfolding) technique. This method allows the distribution of the number of sources to be reconstructed from the data without assuming any model, while also taking into account
the finite resolution (i.e. dispersion) of the sky coverage.
The purpose of the unfolding is to estimate the true distribution (cause) given the observed one (effect), assuming some knowledge of the possible migration effects (smearing matrix) as well as of the efficiencies. The elements of the smearing matrix represent the probabilities that a given effect falling in an observed bin $Effect_j$ is produced by a cause in a given true bin $Cause_i$. In our case the observed distribution represents the number of sources as a function of the observed flux above 100\,MeV, while the true distribution represents the number of true sources as a function of the true flux above 100\,MeV. The unfolding algorithm adopted here is based on Bayes' theorem \citep{dago}.
The smearing matrix is evaluated using the Monte Carlo simulations described in $\S$~\ref{sec:sim}. Its elements, $P(F100_{j,obs} | F100_{i,true})$, represent the probabilities that a source with a true flux above $100$\,MeV, $F100_{i,true}$, is reconstructed with an observed flux above $100$\,MeV, $F100_{j,obs}$. The data are assumed to be binned in histograms. The bin widths and the number of bins can be chosen independently for the distributions of the observed and reconstructed variables.
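For reference, a minimal sketch of the iterative Bayesian unfolding in the
spirit of \cite{dago} (the binning, flat starting prior, and iteration
count are illustrative assumptions):
\begin{verbatim}
import numpy as np

def bayes_unfold(n_obs, smear, n_iter=4):
    # Iterative Bayesian unfolding.
    # n_obs: observed counts per bin of measured flux, shape (m,)
    # smear: P(obs_j | true_i), shape (t, m); 1 - row sum is the
    #        inefficiency of true bin i
    t, m = smear.shape
    eff = smear.sum(axis=1)
    n_true = np.full(t, n_obs.sum() / t)  # flat starting prior
    for _ in range(n_iter):
        # Bayes' theorem: P(true_i | obs_j)
        num = smear * n_true[:, None]
        post = num / np.maximum(num.sum(axis=0), 1e-300)
        n_true = ((post * n_obs[None, :]).sum(axis=1)
                  / np.maximum(eff, 1e-300))
    return n_true
\end{verbatim}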
The log $N$--log $S$ reconstructed with this method is shown
in Fig.~\ref{fig:logn_all} and it is apparent that the source
counts distributions derived with the 3 different methods are
all in good agreement with each other.
\subsection{Comparison with Previous Estimates}
\label{sec:comp}
Fig.~\ref{fig:logn_all} shows that the log $N$--log $S$ distribution
displays a strong break at fluxes F$_{100}\approx6\times10^{-8}$\,ph cm$^{-2}$
s$^{-1}$. This is the first time that such a flattening
has been seen in the log $N$--log $S$ of $\gamma$-ray sources, and of blazars
in particular. This is due to the fact
that {\it Fermi} couples good sensitivity with all-sky coverage, thus
allowing the source counts distribution to be determined over more than
3 decades in flux.
Above fluxes of F$_{100}=10^{-9}$\,ph cm$^{-2}$ s$^{-1}$, the
surface density of sources is 0.12$^{+0.03}_{-0.02}$\, deg$^{-2}$.
At these faint fluxes our comparison can only be done with
predictions from different models.
\cite{dermer07} and \cite{inoue09} predict a blazar surface density
of respectively 0.030\,deg$^{-2}$
and 0.033\,deg$^{-2}$. Both these predictions are a factor
$\sim4$ below the LAT measurement. However, it should be stressed that
these models are based on the EGRET blazar sample which, because of strong
selection effects against high-energy photons, counted
a very limited number of BL Lac objects.
At brighter fluxes (F$_{100}\geq5\times10^{-8}$\,ph cm$^{-2}$ s$^{-1}$)
\cite{dermer07} predicts a density of FSRQs and BL Lacs of
4.1$\times10^{-3}$\,deg$^{-2}$ and 1.1$\times10^{-3}$\,deg$^{-2}$ respectively.
At the same flux, \cite{muecke00} predict a density of
1.21$\times10^{-3}$\,deg$^{-2}$ and 3.04$\times10^{-4}$\,deg$^{-2}$ respectively
for FSRQs and BL Lac objects.
The densities measured by {\it Fermi} are significantly larger,
being 6.0$(\pm0.6)\times10^{-3}$\,deg$^{-2}$ for FSRQs
and 2.0$(\pm 0.3)\times10^{-3}$\,deg$^{-2}$ for BL Lacs.
\begin{figure}[h!]
\begin{centering}
\includegraphics[scale=0.7]{f16.eps}
\caption{Source count distribution of {\it Fermi} point-like sources derived
with three different methods. The distribution has been multiplied
by (F$_{100}/10^{-8}$)$^{1.5}$. The dashed
line is the best fit model described in the text. The grey region
indicates the flux at which a power law connecting the log $N$--log $S$ break
(at $\sim6.6\times10^{-8}$\,ph cm$^{-2}$ s$^{-1}$) and that given flux
exceeds the EGB emission (see text for details).
}
\label{fig:logn_all}
\end{centering}
\end{figure}
\section{Analysis in the Different Energy Bands}
\label{sec:bands}
The aim of the following analysis is to determine the contribution
of point sources to the EGB in different contiguous energy bands.
This is done by creating a log $N$--log $S$ distribution in 3 different
energy bands:
0.1--1.0\,GeV, 1.0--10.0\,GeV and 10.0--100\,GeV.
This will allow us to study
the spectrum of the unresolved emission from point sources
and at the same time explore the properties
of the source population in different bands. With this approach,
the systematic uncertainty related to the flux estimate,
given by the complex spectra of blazars (see $\S$~\ref{sec:pow}),
will be removed.
In addition, use of these bands should allow
us to extend the survey region to $|b|\geq10^{\circ}$
(see $\S$~\ref{sec:mlfit}).
The analysis follows the method outlined in $\S$~\ref{sec:sim} with
the difference that the final ML fit is restricted to the band
under investigation. In the spectral fit, all parameters (including
the photon index) are left free and are optimized by maximizing the likelihood
function.
Only sources that in a given band have TS$\geq$25 are considered
detected in that band. Formally each band and related sample is treated
as independent here and no prior knowledge of the source spectral behaviour
is assumed. In the three bands, the samples comprise respectively
362, 597 and 200 sources detected for $|b|\geq$10$^{\circ}$ and TS$\geq25$.
In both the soft and the medium band (i.e. 0.1--1.0\,GeV and 1.0--10.0\,GeV),
the log $N$--log $S$ is well described by a double power-law model, while
in the hardest band (10--100\,GeV) the log $N$--log $S$
is compatible with a single power-law model with a differential slope
of 2.36$\pm0.07$. The results of the best-fit models are reported in
Tab.~\ref{tab:logn_bands} and are shown in Fig.~\ref{fig:logn_bands}.
The {\it spectral bias} (see $\S$~\ref{sec:term})
is the strongest in the soft band while it is absent in the high-energy band,
being already almost negligible above 1\,GeV.
From the log $N$--log $S$ in the whole band we would expect
(assuming a power law with a photon index of 2.4 and that
the blazar population is not changing dramatically with energy) to
find breaks at: 6.7$\times10^{-8}$, 2.6$\times10^{-9}$, and
1$\times10^{-10}$ ph cm$^{-2}$ s$^{-1}$ for the soft, medium, and hard bands
respectively. Indeed these expectations are confirmed by the ML fits
in the individual bands (e.g. see Tab.~\ref{tab:logn_bands}).
The hard band constitutes the only exception where
the flux distribution barely extends below the flux at which the break
might be observed.
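For reference, these expected break fluxes follow from scaling the
whole-band break flux by the fraction of the photon flux of a
$\Gamma=2.4$ power law that falls in each band,
\begin{equation}
\frac{F(E_1,E_2)}{F_{100}} = \frac{E_1^{1-\Gamma}-E_2^{1-\Gamma}}{(0.1)^{1-\Gamma}-(100)^{1-\Gamma}},
\end{equation}
with the energies expressed in GeV. For $\Gamma=2.4$ this fraction is
$\approx$0.96, 0.038, and 0.0015 for the soft, medium, and hard bands
respectively, which, multiplied by the whole-band break flux of
$\sim$6.7$\times10^{-8}$\,ph cm$^{-2}$ s$^{-1}$, reproduces (to within
rounding) the values quoted above.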
The average spectral properties of the sample change with energy.
We find that the {\it intrinsic} index distribution is compatible
with a Gaussian distribution with means of 2.25$\pm0.02$, 2.43$\pm0.03$,
and 2.17$\pm0.05$. In the three bands the BL Lac-to-FSRQ ratio is
0.61, 1.14, and 3.53, with identification
incompleteness levels of 0.18, 0.25, and 0.25 respectively.
It is apparent that the hardest band is the best one for studying
BL Lac objects since the contamination due to FSRQs is rather small.
\begin{figure*}[ht!]
\begin{center}
\begin{tabular}{ccc}
\includegraphics[scale=0.3]{f17a.eps}
\includegraphics[scale=0.3]{f17b.eps}
\includegraphics[scale=0.3]{f17c.eps}
\end{tabular}
\end{center}
\caption{Source count distributions for the soft (0.1-1.0\,GeV, left),
medium (1.0--10.0\,GeV, center) and high energy (10.0--100.0\,GeV, right)
band reconstructed with the method reported in $\S$~\ref{sec:logn_2d}.
}
\label{fig:logn_bands}
\end{figure*}
\input{tab5}
\section{Contribution of Sources to the Diffuse Background}
\label{sec:edb}
The source count distribution can be used to estimate the contribution
of point-like sources to the EGB emission. This allows us to
determine the fraction of the GeV diffuse emission that
arises from point-like source populations measured by {\it Fermi}.
As specified in $\S$~\ref{sec:term},
this estimate does not include the contribution of sources which
have been directly detected by {\it Fermi} since these are not considered
in the measurement of the diffuse background. It does include all
those sources which, because the detection efficiency changes with flux,
photon index and position in the sky, have not been detected.
The diffuse emission arising from a class of sources can be determined as:
\begin{equation}
F_{\rm diffuse} = \int^{S_{\rm max}}_{S_{\rm min}}dS \int^{\Gamma_{\rm max}}_{\Gamma_{\rm min}} d\Gamma \frac{dN}{dSd\Gamma}
S \left ( 1-\frac{\Omega(\Gamma,S)} {\Omega_{\rm max}} \right )
\label{eq:diff}
\end{equation}
where $\Omega_{\rm max}$ is the geometrical sky area and the
$(1-\Omega(\Gamma,S)/\Omega_{\rm max})$ term takes into account that
the threshold at which LAT detects sources depends on both the
photon index and the source flux. We note that neglecting
the dependence of $\Omega$ on the photon index (i.e. using the
mono-dimensional sky coverage reported in Fig.~\ref{fig:skycov})
would result in an underestimate of the diffuse flux resolved by {\it Fermi}
into point-sources. The limits of integration of Eq.~\ref{eq:diff}
are $\Gamma_{\rm min}=1.0$, $\Gamma_{\rm max}=3.5$, and $S_{\rm max}=10^{-3}$\,ph cm$^{-2}$ s$^{-1}$. We also note that the integrand of
Eq.~\ref{eq:diff} goes to zero for bright fluxes or for photon indices
which are either very small or very large; thus the integration
is almost independent of the parameters reported above.
The integration is not independent of the value of $S_{\rm min}$ which
is set to the flux of the faintest source detected in the sample.
For the analysis of the whole band $S_{\rm min}$=9.36$\times10^{-10}$\,ph cm$^{-2}$ s$^{-1}$ while for the low, medium and hard band S$_{\rm min}$ is set
to:
5.17$\times10^{-9}$\,ph cm$^{-2}$ s$^{-1}$,
3.58$\times10^{-10}$\,ph cm$^{-2}$ s$^{-1}$, and
6.11$\times10^{-11}$\,ph cm$^{-2}$ s$^{-1}$ respectively.
Since in the measurement of \cite{lat_edb} the sources which are subtracted
are those detected in 9\,months of operation, the coverage used
in Eq.~\ref{eq:diff} is the one corresponding to the 9\,months
survey. The uncertainties on the diffuse flux have been computed by
performing a bootstrap analysis.
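A minimal numerical sketch of this integration (our illustrative
implementation; {\tt dn\_dsdg} and {\tt cov\_fn} are assumed callables
for the best-fit $dN/dSd\Gamma$ and for the sky coverage):
\begin{verbatim}
import numpy as np

def diffuse_flux(dn_dsdg, cov_fn, s_grid, g_grid, omega_max):
    # F_diffuse = int dS int dGamma dN/dS/dGamma * S
    #             * (1 - Omega(Gamma,S)/Omega_max)
    # omega_max: geometric sky area [sr]
    ss, gg = np.meshgrid(s_grid, g_grid, indexing='ij')
    integrand = (dn_dsdg(ss, gg) * ss
                 * (1.0 - cov_fn(ss, gg) / omega_max))
    inner = np.trapz(integrand, g_grid, axis=1)  # over Gamma
    return np.trapz(inner, s_grid)               # over S
\end{verbatim}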
Integrating Eq.~\ref{eq:diff}
we find that the point source contribution is
1.63$(\pm0.18)\times10^{-6}$\,ph cm$^{-2}$ s$^{-1}$ sr$^{-1}$
where the systematic uncertainty is 0.6$\times10^{-6}$\,ph cm$^{-2}$ s$^{-1}$ sr$^{-1}$.
This corresponds to 16$(\pm1.8)$\,\% ($\pm$7\,\% systematic uncertainty)
of the Isotropic diffuse emission
measured by LAT \citep{lat_edb} above 100\,MeV. This small fraction is a natural
consequence of the break of the source counts distribution.
However, it is also possible to show that the parameter space for
the faint-end slope $\beta_2$ is rather limited and that a break
must exist in the range of fluxes spanned by this analysis.
Indeed, for a given $\beta_2$ (and all the other parameters
of the log $N$--log $S$ fixed at their best-fit values)
one can solve Eq.~\ref{eq:diff}
to determine the flux at which the integrated emission
of point sources exceeds the one of the EGB. Repeating
this exercise for many different values of the $\beta_2$ parameter yields
an exclusion region which constrains the behavior of the log $N$--log $S$
at low fluxes. The results of this exercise are shown in Fig.~\ref{fig:logn_all}.
From this Figure it is apparent that the log $N$--log $S$ {\it must} break
between F$_{100}\approx2\times10^{-9}$\,ph cm$^{-2}$ s$^{-1}$ and
F$_{100}\approx6.6\times10^{-8}$\,ph cm$^{-2}$ s$^{-1}$.
For a small break (e.g. $\beta_1-\beta_2\approx 0.2-0.3$ and thus
$\beta_2\approx$2.2--2.3), the integrated emission of point sources
would already match the intensity of the diffuse background at
F$_{100}\approx 10^{-9}$\,ph cm$^{-2}$ s$^{-1}$, fluxes which are sampled
by {\it Fermi}. Thus not only does the break have to exist, but this
simple analysis shows that it has to be strong (see also $\S$~\ref{sec:siml})
in order not to exceed the intensity of the diffuse emission.
The log $N$--log $S$ in the whole band goes deeper than the source count
distributions derived in the narrower bands. This is clearly shown
in Fig.~\ref{fig:edb}. Given that most of the source
flux is emitted below 1\,GeV (for reasonable photon indices),
the source count distribution in the soft band (0.1--1.0\,GeV)
is the one which comes closest to the log $N$--log $S$ in the whole band
in terms of resolved diffuse flux.
The log $N$--log $S$ in the whole band
shows a strong break with a faint-end slope (e.g. $\beta_2$) robustly
constrained to be $<$2. In this case the integral reported
in Eq.~\ref{eq:diff} converges for small fluxes and it can be evaluated
at zero flux to assess the
maximum contribution of {\it Fermi}-like sources to the diffuse background.
This turns out to be
2.39$(\pm0.48)\times10^{-6}$\,ph cm$^{-2}$ s$^{-1}$ sr$^{-1}$
(1.26$\times10^{-6}$\,ph cm$^{-2}$ s$^{-1}$ sr$^{-1}$ systematic uncertainty)
which represents 23$(\pm5)$\,\% (12\,\% systematic uncertainty)
of the {\it Fermi} diffuse background \citep{lat_edb}.
This is a correct result as long as the log $N$--log $S$
of point-sources (i.e. blazars) does not become steeper at fluxes below
the ones currently sampled by {\it Fermi}. A given source population normally exhibits
a source count distribution with a single downwards break \citep[e.g. see
the case of radio-quiet AGN in][]{cappelluti07}. This break is
of cosmological origin since it coincides with the change of sign of the
evolution of that population.
As can be clearly seen in the redshift distribution
in \cite{agn_cat} the epoch of maximum
growth of blazars corresponds to redshift 1.5--2.0 which
coincides well with the peak of the star formation in the Universe
\citep[e.g.][]{hopkins06}. Since {\it Fermi} is already sampling this population
it is reasonable to expect no other breaks in the source count distribution
of blazars. Under this assumption, the results of the integration of
Eq.~\ref{eq:diff} are correct. The results of this exercise are
shown in Fig.~\ref{fig:edb2} and summarized in Tab.~\ref{tab:diffuse}.
Since the 10--100\,GeV source counts distribution
does not show a break, its integral diverges for small fluxes.
Thus, in both Fig.~\ref{fig:edb2} and Tab.~\ref{tab:diffuse}
we decided to adopt, as a lower limit to the contribution of sources
to the diffuse emission in this band, the value of the integral
evaluated at the flux of the faintest detected source.
\input{tab6}
The different levels of contribution to the diffuse background as a function
of energy band might be the effect of the mixing of the two blazar populations.
In other words, as shown in $\S$~\ref{sec:bands}, FSRQs are the dominant
population below 1\,GeV while BL Lacs are the dominant one above 10\,GeV.
Given also that FSRQs are softer than BL Lacs
(see also $\S$~\ref{sec:spec}), it is
naturally to expect a modulation in the blazar
contribution to the diffuse emission as a function of energy.
This can clearly be seen in Fig.~\ref{fig:edb3} which shows
the contribution of FSRQs and BL Lacs to the diffuse emission.
This has been computed integrating the source count distribution
of Tab.~\ref{tab:logn2D} down to the minimum detected source flux,
which is 9.36$\times10^{-10}$\,ph cm$^{-2}$ s$^{-1}$
and 1.11$\times10^{-8}$\,ph cm$^{-2}$ s$^{-1}$ for BL Lacs and FSRQs
respectively. It is clear that FSRQs contribute most of the
blazar diffuse emission below 1\,GeV while BL Lacs, given their hard
spectra, dominate above a few GeVs. The spectrum of the diffuse emission
arising from the blazar class is thus curved, being soft at low energy (e.g.
below 1\,GeV) and hard at high energy (above 10\,GeV), in agreement
with the results of the analysis of the source count distributions
in different bands.
\begin{figure*}[ht!]
\begin{center}
\begin{tabular}{c}
\includegraphics[scale=0.7]{f18.eps}
\end{tabular}
\end{center}
\caption{Contribution of point-sources to the diffuse GeV background.
The red solid line was derived from the study of the log $N$--log $S$ in the
whole band while the blue solid lines come from the study of individual
energy bands (see $\S$~\ref{sec:bands}). The bands (grey solid and hatched
blue) show the total (statistical plus systematic) uncertainty.
}
\label{fig:edb}
\end{figure*}
\begin{figure*}[ht!]
\begin{center}
\begin{tabular}{c}
\includegraphics[scale=0.7]{f19.eps}
\end{tabular}
\end{center}
\caption{Contribution of point-sources to the diffuse GeV background
obtained by extrapolating and integrating the log $N$--log $S$
to zero flux.
The red solid line was derived from the study of the log $N$--log $S$ in the
whole band while the blue solid lines come from the study of individual
energy bands (see $\S$~\ref{sec:bands}). The bands (grey solid and hatched
blue) show the total (statistical plus systematic) uncertainty.
The arrow indicates the lower limit on the integration of Eq.~\ref{eq:diff}
for the 10--100\,GeV band.
}
\label{fig:edb2}
\end{figure*}
\begin{figure*}[ht!]
\begin{center}
\begin{tabular}{c}
\includegraphics[scale=0.7]{f20.eps}
\end{tabular}
\end{center}
\caption{Contributions of different classes of blazars to the
diffuse GeV background obtained by integrating the log $N$--log $S$.
The red and the blue solid lines show the contributions of FSRQs and
BL Lacs respectively, while the pink solid line shows the sum of the two.
The bands around each line
show the total (statistical plus systematic) uncertainty.
}
\label{fig:edb3}
\end{figure*}
\section{Additional Tests}
\subsection{Source Count Distribution above 300\,MeV}
The effective area of the LAT decreases quickly below 300\,MeV
while at the same time both the PSF size and the intensity of the
diffuse background increase \citep[e.g. see ][]{atwood09}.
In particular at the lowest energies,
systematic uncertainties in the instrument response might compromise
the result of the maximum likelihood fit to a given source (or
set of sources). In order to overcome this limitation we constructed,
with the method outlined in $\S$~\ref{sec:bands},
the log $N$--log $S$ of point sources in the 300\,MeV--100\,GeV band.
Considering that in the E$>100$\,MeV band the log $N$--log $S$
shows a break around 6--7$\times10^{-8}$\,ph cm$^{-2}$ s$^{-1}$ and
assuming a power law with a photon index of 2.4, we would
expect to detect a break in the (E$\geq$300\,MeV) log $N$--log $S$ around
$\sim$1.5$\times10^{-8}$\,ph cm$^{-2}$ s$^{-1}$. Indeed,
as shown in Fig.~\ref{fig:logn300mev}, the break is detected at
1.68$(\pm0.33)\times10^{-8}$\,ph cm$^{-2}$ s$^{-1}$.
Moreover, as Fig.~\ref{fig:logn300mev} shows, the break of the log $N$--log $S$
and the one of the sky coverage are at different fluxes.
More precisely, the source counts start to bend down before the sky coverage
does. This is an additional confirmation, along with the results of
$\S$~\ref{sec:bands}, that the break of the log $N$--log $S$ is not
caused by the sky coverage. The parameters of this additional
source count distribution are reported for reference in
Tab.~\ref{tab:logn_bands}.
\subsection{Simulating a log $N$--log $S$ without a break}
\label{sec:siml}
In order to rule out the hypothesis that the sources detected by {\it Fermi}
produce most
of the GeV diffuse emission, we performed an additional simulation.
In this exercise the input log $N$--log $S$ is compatible with
a single power law with a differential slope of 2.23.
At bright fluxes this log $N$--log $S$ is compatible with
the one reported in \cite{lat_lbas} and at fluxes
F$_{100}\geq10^{-9}$\,ph cm$^{-2}$ s$^{-1}$
accounts for $\sim$70\,\% of the EGB. In this scenario the surface
density of sources at F$_{100}\geq10^{-9}$\,ph cm$^{-2}$ s$^{-1}$ is
0.8\,deg$^{-2}$ (while the one we derived in $\S$~\ref{sec:comp} is
0.12\,deg$^{-2}$).
To this simulation we applied
the same analysis steps used for both the real data and
the simulations analyzed in $\S$~\ref{sec:sim}.
Fig.~\ref{fig:flux_comp} compares the flux distribution
of the sources detected in this simulation with the distribution
of the real sources
detected by LAT and also with the sources detected in
one of the simulations used in $\S$~\ref{sec:sim}.
It is apparent that the flux distribution of the sources
detected in the simulation under study here
is very different from the other two.
Indeed, if point-like sources produced most of the EGB,
{\it Fermi} should detect many more
medium-bright sources than are actually seen.
A Kolmogorov-Smirnov test yields that the probability that
the flux distribution (arising from the log $N$--log $S$ tested
in this section) comes from the same parent population as the real
data is $\leq10^{-5}$. This probability becomes $5\times10^{-4}$
if the $\chi^2$ test is used.
The KS test between the flux distribution of one of the simulations
used in $\S$~\ref{sec:sim} and the real data yields a probability of
$\sim$87\,\% that both come from the same parent population while
it is $\sim$91\,\% if the $\chi^2$ test is used.
Thus the hypothesis that {\it Fermi} is resolving
(for F$_{100}\geq10^{-9}$\,ph cm$^{-2}$ s$^{-1}$)
the majority
of the diffuse background can be ruled out with high confidence.
\begin{figure*}[ht!]
\begin{center}
\begin{tabular}{c}
\includegraphics[scale=0.7]{f21.eps}
\end{tabular}
\end{center}
\caption{Source count distribution of all (TS$\geq25$, $|b|\geq10^{\circ}$)
sources in the 300\,MeV--100\,GeV band. The distribution has been multiplied
by (F$_{100}/10^{-8}$)$^{1.5}$. The dashed line shows the sky coverage
(scaled by an arbitrary factor) used to derive the source counts.
Note that the break of the log $N$--log $S$ and that of the sky coverage
are at different fluxes.
}
\label{fig:logn300mev}
\end{figure*}
\begin{figure}[h!]
\begin{centering}
\includegraphics[scale=0.6]{f22.eps}
\caption{Flux distributions of detected sources
(TS$\geq$50 and $|b|\geq$20$^{\circ}$) for three different realizations
of the $\gamma$-ray sky. The solid thick line corresponds to a
log $N$--log $S$ distribution which resolves $\sim$70\,\%
of the GeV diffuse background, while the dashed line corresponds to
the log $N$--log $S$ derived in this work, which resolves
$\sim$23\,\% of the diffuse background. For comparison the thin solid
line shows the flux distribution of the real sample of sources detected
by {\it Fermi}.
\label{fig:flux_comp}}
\end{centering}
\end{figure}
\section{Discussion and Conclusions}
\label{sec:discussion}
{\it Fermi} provides a huge leap in sensitivity for the study of the
$\gamma$-ray sky with respect to its predecessor, EGRET. This work
focuses on the global intrinsic properties of the source
population detected by {\it Fermi} at high Galactic latitudes.
We constructed the source count distribution of all sources
detected above $|b|\geq$20$^{\circ}$. This distribution
extends over three decades in flux and is compatible at
bright fluxes (e.g. F$_{100}\geq6\times10^{-8}$\,ph cm$^{-2}$ s$^{-1}$)
with a Euclidean function. Several methods have been employed to show
that at fainter fluxes the log $N$--log $S$ displays a significant
flattening. We believe that this flattening has a cosmological
origin and is due to the fact that {\it Fermi} is already sampling,
with good accuracy,
the part of the luminosity function which shows negative evolution
(i.e. a decrease of the space density of sources with increasing
redshift). This is the first time that such a flattening
has been found in the source count distributions of $\gamma$-ray sources
and blazars. We also showed that the log $N$--log $S$ of
blazars follows closely that of point sources, indicating that most
of the unassociated high-latitude sources in the 1FGL catalog
are likely to be blazars. At the fluxes
currently sampled by {\it Fermi} (e.g. F$_{100}\geq10^{-9}$\,ph cm$^{-2}$ s$^{-1}$)
the surface density of blazars is 0.12$^{+0.03}_{-0.02}$\,deg$^{-2}$
and this is found to be a factor $\sim$4 larger than previous estimates.
The average intrinsic spectrum of blazars is in remarkably good agreement
with the spectrum of the GeV diffuse emission recently measured
by {\it Fermi} \citep{lat_edb}. Nevertheless,
integrating the log $N$--log $S$, to the minimum detected source flux,
shows that at least 16.0$^{+2.4}_{-2.6}$\,\% (the systematic
uncertainty is an additional 7\,\%) of the GeV background
can be accounted for by source populations measured by {\it Fermi}.
This is a small fraction of the total intensity and it is bound not to
increase dramatically
unless the log $N$--log$S$ becomes steeper at fluxes below
$10^{-9}$\,ph cm$^{-2}$ s$^{-1}$. This generally does not happen
unless a different source class starts to be detected in large
numbers at fainter fluxes.
\cite{thompson07} predict the integrated emission of starburst galaxies
to be $10^{-6}$\,ph cm$^{-2}$ s$^{-1}$ sr$^{-1}$
(above 100\,MeV). This would represent $\sim$10\,\% of the LAT diffuse
background and would be comparable (although somewhat smaller) to that
of blazars found here. Indeed, their prediction that M82 and NGC 253 would
be the first two starburst galaxies to be detected has been fulfilled
\citep{lat_starburst}. A similar contribution to the GeV diffuse background
should arise from the integrated emission of normal star forming galaxies
\citep{pavlidou02}. In both cases (normal and starburst galaxies) $\gamma$-rays
are produced from the interaction of cosmic rays with the interstellar gas
\cite[e.g. see][]{lat_cr}. It is natural to expect that
both normal and starburst galaxies produce a fraction of the diffuse emission
since now both classes are certified $\gamma$-ray sources \citep[see e.g.][]{cat1}.
It is also interesting to note that pulsars represent
the second largest population
in our high-latitude sample (see Tab.~\ref{tab:sample}).
According to \cite{faucher09} pulsars and in particular
millisecond pulsars can produce a relevant fraction
of the GeV diffuse emission. However, given the strong break, typically
at a few GeV, in their spectra \citep[e.g. see][]{lat_vela2010},
millisecond pulsars are not expected to contribute much
to the diffuse emission above a few GeV.
Finally radio-quiet AGN might also contribute to the GeV diffuse background.
In these objects the $\gamma$-ray emission is supposedly produced
by non-thermal electrons present in the corona
above the accretion disk \citep[see e.g.][for details]{inoue08}.
\cite{inoue09} predict that, at fluxes of
F$_{100}\leq 10^{-10}$\,ph cm$^{-2}$ s$^{-1}$, radio-quiet AGN outnumber
the blazars. According to their prediction, most of the background could
be explained in terms of AGN (radio-quiet and radio-loud).
It is thus clear that standard astrophysical scenarios can be invoked
to explain the GeV extragalactic diffuse background. However,
the main result of this analysis is that blazars account only for $<$40\,\%
of it\footnote{This includes extrapolating the source counts
distribution to zero flux and taking into account statistical
and systematic uncertainties.}. It remains a mystery why the average spectrum
of blazars is so similar to the EGB spectrum. Taken by itself, this
finding would lead one to believe that blazars might account for the entire
GeV diffuse background. However, we showed (see Fig.~\ref{fig:flux_comp} and
$\S$~\ref{sec:siml} for details) that in this case {\it Fermi} should have
detected a much larger number (up to $\sim$50\,\%) of medium-bright
sources with a typical flux of F$_{100}\geq10^{-8}$\,ph cm$^{-2}$ s$^{-1}$.
This scenario can thus be excluded with confidence.
Thus, the integrated emission from other source classes should still have
a spectrum declining as a power law with an index of $\sim$2.4.
This does not seem to be a difficult problem to overcome.
Indeed, at least in the case of star forming galaxies
we note that in the modeling
of both \cite{fields2010} and \cite{makiya2010} the integrated emission
from these sources displays a spectrum similar to the EGB one (at least
for energies above 200\,MeV).
Moreover, in this work we also found that the
contribution to the diffuse emission of FSRQs and BL Lacs
is different, FSRQs being softer than BL Lacs. Thus, the summed
spectrum of their integrated diffuse emission is curved, being softer
at low energies and harder at high ($>10$\,GeV) energies.
This makes it slightly different from the featureless power-law of the
diffuse background.
All the estimates presented here will be refined
with the derivation of the blazar luminosity
function which is left to a follow-up paper.
\clearpage
\acknowledgments
Helpful comments from the referee are acknowledged.
The \textit{Fermi} LAT Collaboration acknowledges generous ongoing support
from a number of agencies and institutes that have supported both the
development and the operation of the LAT as well as scientific data analysis.
These include the National Aeronautics and Space Administration and the
Department of Energy in the United States, the Commissariat \`a l'Energie Atomique
and the Centre National de la Recherche Scientifique / Institut National de Physique Nucl\'eaire et de Physique des Particules in France, the Agenzia
Spaziale Italiana and the Istituto Nazionale di Fisica Nucleare in Italy,
the Ministry of Education, Culture, Sports, Science and Technology (MEXT),
High Energy Accelerator Research Organization (KEK) and Japan Aerospace
Exploration Agency (JAXA) in Japan, and the K.~A.~Wallenberg Foundation,
the Swedish Research Council and the Swedish National Space Board in Sweden.
Additional support for science analysis during the operations phase
is gratefully acknowledged from the Istituto Nazionale di Astrofisica in
Italy and the Centre National d'\'Etudes Spatiales in France.
{\it Facilities:} \facility{{\it Fermi}/LAT}.
\bibliographystyle{apj}
\section{Introduction}
An old problem in lattice gauge theory is extracting
the gluon condensate from the average plaquette, which in
pure SU(3) Yang-Mills theory
has the formal expansion \begin{eqnarray}
P(\beta)\equiv\langle 1- \frac{1}{3}\text{Tr} \,\text{U}_{\boxempty}
\rangle=\sum_{n=1}^{\infty}\frac{c_n}{\beta^n} + \frac{\pi^2}{36}
Z(\beta)\langle\frac{\alpha_s}{\pi}GG \rangle a^4 +O(a^6) \,,
\label{ope}
\end{eqnarray}
where $\beta$ denotes the lattice coupling and $a$ the lattice
spacing.
The difficulty of extracting the gluon condensate is that the average
plaquette is dominated by the perturbative contribution
and it is necessary to subtract it
to an accuracy better than one part in $10^4$.
The perturbative coefficients $c_n$ were computed to 10-loop order
using stochastic perturbation theory \cite{direnzo2},
but this alone does not achieve
the required accuracy. Therefore, any attempt to extract
the gluon condensate
using the perturbative expansion must involve
extrapolation of the perturbative coefficients to higher orders and,
the perturbative expansion being asymptotic, proper handling of them.
Since the large order behavior of perturbative
expansion is determined by
the renormalon singularity of the Borel transform, a natural
extrapolation scheme
would be based on the renormalon singularity.
A program along this line
was implemented by Burgio et al.,
and the authors obtained a surprising
result of power correction that scales as a dim-2
condensate \cite{direnzo}. This is in contradiction with
the operator product expansion (OPE)~(\ref{ope}), which demands
that the leading power correction scale as a dim-4 condensate.
The claim of the dim-2 condensate has since been
reexamined by several authors.
In obtaining the perturbative contribution,
Horsley et al. employed an extrapolation
scheme based on the power law and truncation of the perturbative
series at the minimal element \cite{horsley},
and Rakow used stochastic
perturbation theory with a boosted coupling to
accelerate convergence \cite{rakow},
and Meurice employed extrapolations based on assumed singularity of
the plaquette in the complex $\beta$-plane as well as the renormalon
singularity, with truncation at the minimal element \cite{meurice}.
All these studies did not see any evidence of a dim-2 condensate but
found the plaquette data was consistent with
a dim-4 condensate.
To help settle these conflicting views on the dim-2 condensate
we present in this paper a critical review of the renormalon-based
approach of \cite{direnzo}, and reveal a serious flaw
in the program of renormalon subtraction, and show that the plaquette
data, when properly handled, is consistent with a dim-4 condensate.
Specifically, we shall show that
the continuum scheme employed for renormalon subtraction in
\cite{direnzo} is not at all a scheme
where the perturbative coefficients follow a renormalon pattern,
and therefore the
claimed dim-2 condensate is severely contaminated by perturbative
contribution and cannot be interpreted as a power correction.
We then introduce a renormalon subtraction scheme based on the bilocal
expansion of Borel transform, and show that the plaquette data can be
fitted well by the sum of a dim-4 condensate and the Borel summed
perturbative contribution.
\section{Renormalon subtraction by matching large order behaviors}
In this section we give a critical review on the renormalon subtraction
procedure of \cite{direnzo}.
The perturbative coefficients $c_n$
of the average plaquette at large orders
are expected to
exhibit the large order behavior of the infrared renormalon
associated with the gluon condensate,
but the computed coefficients using stochastic perturbation
theory turn out
to grow much more rapidly than a renormalon behavior.
This implies that the
coefficients are not yet in the asymptotic regime, which is
expected to set in around
at order $\bar{n}=\beta z_0$ ($z_0$ given below
in (\ref{consts})), which gives $\bar{n}\sim 30$ for $\beta\sim 6$,
far higher than the computed levels.
It therefore appears all but impossible to extract
the gluon condensate directly using
stochastic perturbation theory, since the
perturbative contribution must be subtracted,
at least, to orders in the
asymptotic regime.
In Ref.\cite{direnzo} this problem was
approached by introducing a continuum
scheme in which the renormalon contribution is subtracted by
matching the
large order behavior in the continuum scheme to the computed
coefficients in
the lattice scheme.
Specifically, in order to relate $c_n$ of the lattice scheme
with the renormalon behavior the average
plaquette
is written, essentially, as
\begin{eqnarray}
P(\beta)=P^{\rm ren}(\beta_c)+ \delta P(\beta_c) +P_{\rm NP}(\beta)\,,
\label{decomposition}
\end{eqnarray}
where
\begin{eqnarray}
P^{\rm ren}(\beta_c)= \int_0^{b_{\rm max}} e^{-\beta_c b}
\frac{\cal N}{(1-b/z_0)^{1+\nu}} db \end{eqnarray}
with $\beta_c$ denoting the coupling in the continuum scheme defined by
\begin{eqnarray}
\beta_c=\beta-r_1-\frac{r_2}{\beta}
\label{beta_rel}
\end{eqnarray}
and
\begin{eqnarray}
z_0=\frac{16\pi^2}{33}\,,\quad \nu=\frac{204}{121}\,.
\label{consts}
\end{eqnarray}
In Eq. (\ref{decomposition}) the plaquette is divided into
perturbative
contributions, comprised of the renormalon contribution $P^{\rm
ren}$ and the rest of the perturbative contribution $\delta P$, and
nonperturbative power correction $P_{\rm NP}$.
In this splitting, the asymptotically divergent behavior
of the perturbative contribution is contained in $P^{\rm
ren}$, and $\delta P$ denotes the rest that can be
expressed as a convergent series.
(Here, the renormalons other than that associated
with the gluon condensate and the subleading singularities at $b=z_0$
are ignored, which, if necessary,
can be incorporated in $P^{\rm ren}$.)
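For orientation, the truncated integral $P^{\rm ren}(\beta_c)$ can be
evaluated by elementary quadrature. The sketch below is illustrative
only: the values of ${\cal N}$ and $b_{\rm max}$ are assumed for the
purpose of the example, with $b_{\rm max}<z_0$ so that the integrand is
regular over the integration range.
\begin{verbatim}
! Illustrative sketch: trapezoidal-rule evaluation of the
! truncated renormalon integral P_ren(beta_c); the values of
! the normalization and of b_max are assumptions, not the
! fitted values of this paper.
program pren_sketch
  implicit none
  integer, parameter :: m = 100000
  real(8), parameter :: pi = 3.14159265358979d0
  real(8) :: z0, nu, cnorm, bmax, betac, b, h, f, pren
  integer :: i
  z0 = 16.d0*pi**2/33.d0         ! renormalon location, Eq. (5)
  nu = 204.d0/121.d0             ! singularity exponent, Eq. (5)
  cnorm = 1.d0                   ! normalization N (assumed)
  bmax = 0.9d0*z0                ! cutoff below the singularity (assumed)
  betac = 6.d0 - 3.1d0 - 2.d0/6.d0  ! beta_c at beta=6, Eqs. (4) and (6)
  h = bmax/m
  pren = 0.d0
  do i = 0, m
     b = i*h
     f = exp(-betac*b)*cnorm/(1.d0 - b/z0)**(1.d0 + nu)
     if (i == 0 .or. i == m) f = 0.5d0*f   ! trapezoid end weights
     pren = pren + h*f
  end do
  print *, 'P_ren(beta_c) = ', pren
end program pren_sketch
\end{verbatim}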
We now define $ P_{\rm NP}^{(N)}$ with
\begin{eqnarray}
P_{\rm NP}^{(N)}(\beta)\equiv P(\beta)-
P^{\rm ren}(\beta_c)-\sum_{n=1}^{N}
(c_n-C_n^{\rm ren})\beta^{-n}
\label{powercorrection}
\end{eqnarray}
where $C_n^{\rm ren}$ denotes the perturbative coefficients of
$P^{\rm ren}$ in power expansion in $1/\beta$. Note that
$P_{\rm NP}^{(N)}$ is free of perturbative coefficients to order $N$.
The constants $r_1,r_2$ that define the continuum scheme and the
normalization constant $\cal N$ are determined so that $C_n^{\rm
ren}$ converges to $c_n$ at large orders. In the continuum scheme with
\begin{eqnarray}
r_1=3.1\,, \quad r_2=2.0
\label{scheme}
\end{eqnarray}
and an appropriate value for $\cal N$ it was observed that
$C_n^{\rm ren}$ converge to $c_n$ at the orders computed in
stochastic
perturbation theory. The last term in
(\ref{powercorrection}) being a
converging series $ P_{\rm NP}^{(N)}$
will be well-defined at $N\to\infty$, and this is precisely the
quantity that was assumed
to represent the power correction, and it was $P_{\rm NP}^{(8)}$ that
was shown to scale as a dim-2 condensate.
The essence of this procedure is that the
isolation of the renormalon contribution is obtained
by matching the large order
behaviors in the
lattice and continuum schemes, in which the matching
does not involve the low order
coefficients. Although the
renormalon-caused large order behaviors of any two schemes can be
matched, independently of the low order coefficients, it must
be noted that the matching would work only when the known
coefficients in both schemes exhibit renormalon behavior.
Since, however, the computed coefficients in the lattice scheme are
far from the asymptotic regime
and do not follow the renormalon pattern,
the matching cannot be performed reliably; therefore, the conclusion
of a dim-2
condensate based on it should be reexamined.
That the above matching has a serious flaw can be easily shown
by mapping the perturbative coefficients in the lattice scheme to the
continuum scheme (\ref{scheme}).
If the latter is
indeed a good scheme for renormalon subtraction the mapped coefficients
should exhibit a renormalon behavior.
However, as can be seen in Table~\ref{Table1},
which is obtained by mapping
the central values of $c_n$ from stochastic perturbation theory,
the coefficients are alternating in sign and
far from exhibiting a renormalon behavior. This shows
that when mapping the
perturbative coefficients between the
lattice scheme and (\ref{scheme}) the relatively high
order coefficients (say, 7--10 loop orders) are still very sensitive
to the low order coefficients.
Therefore, the above large order matching cannot be performed reliably
with the computed coefficients, and (\ref{scheme})
cannot be the right scheme where one can isolate and subtract
the renormalon contribution.
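For reference, the mapping used to obtain Table~\ref{Table1} follows
from expanding $1/\beta^n$ in powers of $1/\beta_c$ via
Eq.~(\ref{beta_rel}); to the first few orders,
\begin{eqnarray}
c_1^{\text{cont}}=c_1\,,\quad
c_2^{\text{cont}}=c_2-r_1 c_1\,,\quad
c_3^{\text{cont}}=c_3-2r_1 c_2+(r_1^2-r_2)c_1\,,
\end{eqnarray}
and similarly at higher orders, so that the low order lattice
coefficients feed into every continuum order.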
\begin{table}[t]
\begin{tabular}{cccccccc}\hline\hline
$c_1^{\text{cont}}$&$c_2^{\text{cont}}$&$c_3^{\text{cont}}$&
$c_4^{\text{cont}}$&$c_5^{\text{cont}}$&$c_6^{\text{cont}}$&
$c_7^{\text{cont}}$&$c_8^{\text{cont}}$ \\ \hline
2.0&-4.9792&10.613&-10.200&-44.218&316.34&-1096.&1947.\\ \hline\hline
\end{tabular}
\caption{The perturbative coefficients
of the average plaquette in the
continuum scheme.}
\label{Table1}
\end{table}
Checking the internal consistency of the
subtraction scheme based on the
matching of large order behavior
also shows the underlying problem.
The nonperturbative term in (\ref{decomposition}) can be written
using (\ref{powercorrection}) as
\begin{eqnarray}
P_{\rm NP}(\beta)= P_{\rm NP}^{(N)}(\beta) -\{\delta
P(\beta_c)-\sum_{n=1}^N(c_n-C_n^{\rm
ren}) \beta^{-n}\}\,.
\end{eqnarray}
For $ P_{\rm NP}^{(N)}$ to represent the power correction
it is clear that
\begin{eqnarray}
\left|\delta P(\beta_c)-\sum_{n=1}^N(c_n-C_n^{\rm ren})
\beta^{-n}\right| \ll
P_{\rm NP}^{(N)} (\beta)
\label{criterion}
\end{eqnarray}
must be satisfied. Since $\delta P(\beta_c)$ is by definition
a convergent
quantity it can be written in a series expansion
\begin{eqnarray}
\delta P(\beta_c)\equiv \sum_{n=1}^\infty D_n \beta_c^{-n}\,,
\end{eqnarray}
where $D_n$ can be computed up to the order $c_n$ are
known, and
(\ref{criterion}) can be written approximately as
\begin{eqnarray}
\frac{|\sum_{n=1}^N D_n \beta_c^{-n}-\sum_{n=1}^N(c_n-C_n^{\rm ren})
\beta^{-n}|}{ P_{\rm NP}^{(N)}(\beta)}\ll1 \,.
\label{criterion1}
\end{eqnarray}
Now, in the scheme of (\ref{scheme}), and at $N=8$ and
$\beta=6.0, 6.2$ and 6.4, for example, the ratios are $69, 59$ and
42, respectively: a severe violation of the consistency condition. This
again confirms that (\ref{scheme}) cannot be a scheme suited for
renormalon subtraction.
\section{Renormalon subtraction by Borel summation}
It is now clear that one cannot subtract the perturbative contribution
in the plaquette by mapping the
renormalon-based coefficients in a continuum
scheme to the lattice scheme, and then matching
them with the computed high order coefficients.
On the other hand, the lesson of our review
suggests that
one must map the known coefficients in the lattice scheme to a
continuum one and look for a scheme where
the mapped coefficients follow a
renormalon behavior.
Once such a scheme is found one can perform
Borel summation to subtract
perturbative contribution to isolate the power correction.
Borel summation is especially suited for this purpose, since
it allows a precise definition of the power corrections
in OPE \cite{david1,david2,svz}.
The nature of the renormalon
singularity, hence of the large order
behavior of perturbation, was obtained through the
cancellation of the ambiguities
in Borel summation and power corrections \cite{mueller}. An
extensive review of renormalons can be found in \cite{beneke}.
In this paper we shall assume that such a scheme exists and
perform Borel summation using the scheme of bilocal
expansion of Borel transform \cite{surviving}. To Borel-sum the
divergent perturbation series to a sufficient accuracy for the
extraction of power correction, one must have an
accurate description of the Borel transform in the domain
that contains the origin as well as the first renormalon singularity in
Borel plane. Bilocal expansion is a scheme of reconstructing the
Borel transform in this domain, utilizing the known perturbative
coefficients and properties of the first renormalon singularity.
After Borel-summing the perturbative contribution the sum of the
Borel summation and a dim-4 power correction can be fitted
to the plaquette data. A good fit would suggest then
the power correction be of dim-4 type.
The Borel summation using the first N-loop perturbations
of the plaquette in bilocal expansion in a continuum scheme
is given in the form:
\begin{eqnarray}
{P}_{\rm BR}^{\rm (N)}(\beta)=\int_0^{\infty} e^{-\beta_c b}
\left[\sum_{n=0}^{N-1} \frac{h_n}{n!}
b^n+\frac{\cal N}{(1-b/z_0)^{1+\nu}} \right] db\,,
\label{bilocal}
\end{eqnarray}
where the integration over the renormalon singularity is performed
with principal value
prescription. The essential idea of the
bilocal expansion is to interpolate
the two perturbative expansions
about the origin and about the renormalon singularity to
rebuild the Borel transform. By incorporating the renormalon singularity
explicitly in the expansion it can extend the applicability
of the ordinary weak coupling expansion to
beyond the renormalon singularity,
and this scheme was shown to work well
with static inter-quark potential
or heavy quark pole mass \cite{surviving,heavyquark}.
Here, ${\cal N}$ denotes the normalization constant
of the large order behavior
and the coefficients $h_n$ are determined so that the Borel transform
in (\ref{bilocal}) reproduces the
perturbative coefficients in the continuum scheme when
expanded at $b\!=\!0$;
Thus $h_n$ depends on the continuum perturbative coefficients
as well as ${\cal N}$. By definition, ${P}_{\rm BR}^{(N)}(\beta)$, when
expanded in $1/\beta$, reproduces the
perturbative coefficients of the average
plaquette to N-loop order that were
employed in building the Borel transform.
For details of the bilocal expansion of Borel transform
we refer the reader to
\cite{surviving,heavyquark}.
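In the conventions of Eq.~(\ref{bilocal}) the matching can be made
explicit: using $\int_0^\infty e^{-\beta_c b}\,b^n\,db=n!/\beta_c^{n+1}$
and the expansion
$(1-b/z_0)^{-1-\nu}=\sum_{n\geq0}\frac{\Gamma(n+1+\nu)}{\Gamma(1+\nu)\,n!}\,(b/z_0)^n$,
one finds
\begin{eqnarray}
h_n=c_{n+1}^{\text{cont}}-\frac{{\cal N}\,\Gamma(n+1+\nu)}{\Gamma(1+\nu)\,z_0^{\,n}}\,,
\qquad n=0,\dots,N-1\,,
\end{eqnarray}
where $c_n^{\text{cont}}$ denote the perturbative coefficients mapped to
the continuum scheme.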
The power correction can then be defined by
\begin{eqnarray}
{ P}_{\rm NP}^{\rm (N)}(\beta)\equiv {P}(\beta) -
{P}_{\rm BR}^{\rm (N)}(\beta)\,,
\label{pNP}
\end{eqnarray}
which, by definition, has a vanishing perturbative expansion to order $N$.
Using the perturbation to 10-loop order of the plaquette
we compute ${P}_{\rm BR}^{(10)}(\beta)$ in the continuum scheme
parameterized by
Eq. (\ref{beta_rel}). Although $\cal{N}$ can be
computed perturbatively, using the
perturbations of the average plaquette, it is
still difficult to obtain a reliable
result using the known coefficients, so here
it will be treated as a fitting parameter.
Thus in our scheme, as in \cite{direnzo},
the fitting parameters are $\cal{N}$ and $r_1, r_2$ of
Eq. (\ref{beta_rel}).
Using the plaquette data for
$6.0\leq \beta \leq 6.8$ from \cite{plaquette} and
the relation between the lattice spacing $a$
and $\beta$ from static quark force
simulation \cite{sommer}
\begin{eqnarray}
\log(a/r_0)=-1.6804 - 1.7331(\beta - 6) + 0.7849(\beta - 6)^2 -
0.4428(\beta - 6)^3
\end{eqnarray}
the fit gives ${\cal N}=165$ and
\begin{eqnarray}
r_1=1.611, \quad r_2=0.246\,,
\label{fitted}
\end{eqnarray}
values that are substantially
different from those in (\ref{scheme}).
The result of the fit is shown in
Fig. \ref{fig1}, which shows that the power correction is
consistent with
a dim-4 condensate. The agreement improves as $\beta$
increases, albeit with larger uncertainties;
the deviation at low $\beta$ ($\beta <6$)
may be attributed to a dim-6
condensate, as may be seen, though not presented here,
from the fact that adding a
dim-6 power correction to the fit
improves the agreement over the whole
range of the plot. The error bars are from the
uncertainty in the simulated
perturbative coefficients of the plaquette.
The uncertainty in the normalization constant
does not appear to be large: for example,
a variation of 20\% in ${\cal N}$ causes
shifts of less than a quarter of those caused by the perturbative coefficients.
From the fit we obtain a dim-4 power correction of
$P_{\rm NP}\approx 1.6\,\, (a/r_0)^4$.
Because of the asymptotic nature of the perturbative series
the power correction of the plaquette
is dependent on the subtraction scheme
of the perturbative contribution, and
thus our result may not be
directly compared to those from other
subtraction schemes. Nevertheless, it is
still interesting to observe that
the result is roughly consistent
with $0.4\,\, (a/r_0)^4$ of \cite{rakow}
and $0.7 \,\,(a/r_0)^4$ of \cite{meurice}.
Our result turns out to be
a little larger
than those estimates; this may be partly accounted for
by the fact that the existing
results were from fits in the low-$\beta$ range
$\beta \lesssim 6$, in which
the data are below our fitted curve.
\begin{figure}
\includegraphics[angle=0,width=8cm ]{fig1.eps}
\caption{ $\log { P}_{\rm NP}$ vs. $\beta$.
The solid line is for $4 \log(a/r_0) +0.5$.
The plot shows the power
correction should be of dim-4 type.}
\label{fig1}
\end{figure}
\section{Summary}
We have reexamined the claim of dim-2
condensate in the average plaquette,
and shown that the renormalon subtraction
procedure of \cite{direnzo}
that gave rise to the dim-2 condensate
fails consistency checks and
cannot be reliably
implemented with the known results of
stochastic perturbation theory.
We then introduced a renormalon subtraction scheme based on the
bilocal expansion of Borel transform
and found that the plaquette data is
consistent with a dim-4 condensate.
\begin{acknowledgments}
This work was supported in part by
Korea Research Foundation Grant (KRF-2008-313-C00168).
\end{acknowledgments}
\bibliographystyle{apsrev}
\section{Introduction}
It is no longer feasible to expect performance gains for sequential codes by means of continuously
increasing processor clock speeds. Nowadays, processor vendors have
concentrated on developing systems that group two or more processors onto a
single socket, sharing or not the same memory resources. This technology, called
\textit{multi-core}, has been successfully employed to different application
domains ranging from computer graphics to scientific computing, and in these
times it is commonly seen on high performance clusters, desktop
computers, notebooks, and even mobile devices. The spread of such architecture
has consequently stimulated an increasing number of researches on parallel
algorithms.
To obtain efficient implementations of parallel algorithms, one must consider
the underlying architecture on which the program is supposed to be run. In
fact, even processors belonging to the multi-core family may present different
hardware layouts, which can make an implementation to perform poorly on one
platform, while running fast on another. As an example of such issue,
multi-core processors may have different memory subsystems for each core, therefore
forcing programmers to take care of thread and memory affinity.
The finite element method is usually the first choice for numerically solving
integral and partial differential equations. Matrices arising from finite
element discretizations are usually sparse, i.e., most of its entries are
zeros. Effectively storing sparse matrices requires the use of compressed data
structures. Commonly employed approaches are the \textit{element-based},
\textit{edge-based} and \textit{compressed} data
structures. Since the later provides the best compromise between space
complexity and performance \cite{RC05a}, it was chosen as the primary data
structure of our implementation.
The \textit{compressed sparse row} (CSR) data structure stores contiguously in memory non-zero entries belonging
to the same row of a matrix. While in a dense representation any
element can be randomly accessed through the use of its row and
column indices, the CSR explicitly stores in memory the combinatorial
information for every non-zero entry. Given an $n \times n$ matrix $A$ with $nnz$
non-zero coefficients, the standard version of the CSR \cite{Saa95a} consists
of three arrays: two integer arrays $ia(n+1)$ and $ja(nnz)$ for storing
combinatorial data, and one floating-point array $a(nnz)$ containing the
non-zero coefficients. The value $ia(i)$ points to the first element of row $i$
in the $a$ array, i.e., row $i$ is defined as the subset of $a$ starting and
ending at $ia(i)$ and $ia(i+1)-1$, respectively. The column index of each
non-zero entry is stored in $ja$. There is also a transpose version,
called \textit{compressed sparse column} (CSC) format.
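For concreteness, the matrix-vector loop over this representation can be
written as follows (an illustrative sketch, not the code evaluated in
this paper):
\begin{lstlisting}
! Minimal CSR product y = A*x (illustrative sketch).
subroutine matvec_csr(n, ia, ja, a, x, y)
  implicit none
  integer, intent(in) :: n, ia(n+1), ja(*)
  real(8), intent(in) :: a(*), x(*)
  real(8), intent(out) :: y(n)
  integer :: i, k
  real(8) :: s
  do i = 1, n
     s = 0.d0
     do k = ia(i), ia(i+1)-1   ! non-zeros of row i
        s = s + a(k)*x(ja(k))  ! gather from the source vector
     end do
     y(i) = s
  end do
end subroutine matvec_csr
\end{lstlisting}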
This representation supports matrices of arbitrary shapes and symmetry
properties. In the context of the finite element method, however, the
generality provided by the CSR is underused as most matrices are structurally symmetric.
In this case, it would be sufficient to store, roughly, half
of the matrix connectivity. The \textit{compressed sparse row-column} (CSRC)
format was designed to take benefit from this fact \cite{RF07a}. Basically, it
stores the column indices for only half of the off-diagonal entries. As the
working set size has a great impact on the performance of CSR-like data
structures, the running time of algorithms such as the matrix-vector product is expected
to be improved when using the CSRC. Also, solvers based on oblique projection methods can efficiently
access the transpose matrix, since it is implicitly defined.
The performance of finite element codes using iterative
solvers is dominated by the computations associated with the matrix-vector
multiplication algorithm. In this algorithm, we are given an $n \times n$
sparse matrix $A$ containing $nnz$ non-zeros, and a dense $n$-vector $x$,
called the \textit{source} vector. The output is an $n$-vector $y$, termed the
\textit{destination} vector, which stores the result of the $Ax$ operation.
Performing this operation using the CSR format is trivial, but it was observed
that the maximum performance in Mflop/s sustained by a na\"ive implementation
can reach only a small fraction of the machine peak performance \cite{GKKS99a}. As
a means of transcending this limit, several optimization techniques have been
proposed, such as reordering \cite{Tol97a,PH99a,WS97a,TJ92a}, data compression \cite{MGMM05a,WL06a},
blocking \cite{TJ92a,IYV04a,Tol97a,VM05a,PH99a,AGZ92a,NVDY07a}, vectorization \cite{AFM05a,BHZ93a}, loop unrolling
\cite{WS97a} and jamming \cite{MG04a}, and software prefetching \cite{Tol97a}.
Lately, the dissemination of multi-core computers have promoted
multi-threading as an important tuning
technique, which can be further combined with purely sequential methods.
\subsection{Related work}
Parallel sparse matrix-vector multiplication using CSR-like data structures on
multi-processed machines has been the focus of a number of researchers since
the 1990s. Early attempts include the paper by {\c C}ataly\"urek and Aykanat \cite{CA96a}, on
hypergraph models applied to the matrix partitioning problem, Im and Yelick \cite{IY99a},
who analysed the effect of register/cache blocking and reordering, and
Geus and R{\"{o}}llin \cite{GR01a}, considering prefetching, register blocking and reordering for
symmetric matrices. Kotakemori et al.~\cite{KHKNSN05a} also examined several storage formats on
a ccNUMA machine, which required the ability of dealing with page allocation
mechanisms.
Regarding modern multi-core platforms, the work of Goumas et al.~\cite{GKAKK08a} contains a
thorough analysis of a number of factors that may degrade the performance of
both sequential and multi-thread implementations. Performance tests were
carried out on three different platforms, including SMP, SMT and ccNUMA
systems.
Two partitioning schemes were implemented, one guided by the number of rows
and the other by the number of non-zeros per thread. It was
observed that the latter approach contributes to a better load balancing,
thus improving significantly the running time.
For large matrices, they obtained average speedups of 1.96 and 2.13 using 2 and
4 threads, respectively, on an Intel Core 2 Xeon.
In this platform, their code reached about 1612 Mflop/s for 2 threads,
and 2967 Mflop/s when spawning 4 threads.
This performance changes
considerably when considering matrices whose working set sizes are far from fitting in cache.
In particular, it drops to around 815 Mflop/s and 849 Mflop/s, corresponding to the 2-
and 4-threaded cases.
Memory contention is viewed as the major bottleneck of implementations of the
sparse matrix-vector product. This problem was tackled by Kourtis et al.~\cite{KGK08a} via
compression techniques, reducing both the matrix connectivity and
floating-point numbers to be stored. Although leading to good scalability,
they obtained at most a 2-fold speedup on 8 threads, for matrices out of cache.
The experiments were conducted on two Intel Clovertown with 4MB of L2
cache each. In the same direction, Belgin et al.~\cite{BBR09a} proposed a
pattern-based blocking scheme for reducing the index overhead.
Accompanied by software prefetching and vectorization techniques, they attained
an average sequential speedup of 1.4. Their multi-thread implementation required the
synchronization of the accesses to the $y$ vector. In brief, each thread
maintains a private vector for storing partial values, which are summed up in a
reduction step into the global destination vector. They observed average
speedups around 1.04, 1.11 and 2.3 when spawning 2, 4, and 8 threads,
respectively. These results were obtained on a 2-socket Intel Harpertown 5400
with 8GB of RAM and 12MB L2 cache per socket.
Different row-wise partitioning methods were considered by Liu et al.~\cite{LZSQ09a}.
Besides evenly splitting non-zeros among threads, they evaluated
the effect of the automatic scheduling mechanisms provided by OpenMP, namely,
the \textit{static}, \textit{dynamic} and \textit{guided} schedules. Once
more, the non-zero strategy was the best choice. They also
parallelized the block CSR format. Experiments were run on four AMD Opteron 870
dual-core processors, with 16GB of RAM and $2 \times 1$MB L2 caches. Both
CSR and block CSR schemes resulted in poor scalability for large matrices,
for which the maximum speedup was approximately 2, using 8 threads.
Williams et al.~\cite{WOVSYD09a} evaluated the sparse matrix-vector kernel using the CSR
format on several up-to-date chip multiprocessor systems, such as the
heterogeneous STI Cell. They examined the effect of various optimization techniques
on the performance of a multi-thread CSR, including software
pipelining, branch elimination, SIMDization, explicit prefetching, 16-bit
indices, and register, cache and translation lookaside buffer (TLB) blocking.
A row-wise approach was employed for
thread scheduling. As regarding finite element matrices and in comparison to OSKI~\cite{VDY05a},
speedups for the fully tuned parallel code ranged
from 1.8 to 5.5 using 8 threads on an Intel Xeon E5345.
\begin{figure}[!t]
\centering
\includegraphics{images/csrc_scheme}
\caption{The layout of CSRC for an arbitrary 9$\times$9 non-symmetric matrix.}
\label{fig:csrc_scheme}
\end{figure}
More recently, Bulu{\c{c}} et al.~\cite{BFFGL09a} have presented a block structure that allows
efficient computation of both $Ax$ and $A^{\mathsf{T}}x$ in parallel. It can
be roughly seen as a dense collection of sparse blocks, rather than a sparse
collection of dense blocks, as in the standard block CSR format. In
sequential experiments carried out on an ccNUMA machine featuring AMD Opteron 8214
processors, there were no improvements over the standard CSR.
In fact, their data structure was always slower for band matrices. Concerning
its parallelization, however, it was proved that it yields a
parallelism of $\Theta(nnz/\sqrt{n}\log n)$.
In practice, it scaled up to 4 threads on an Intel Xeon X5460, and
presented linear speedups on an AMD Opteron 8214 and an Intel Core i7 920. On
the later, where the best results were attained, it reached speedups of 1.86,
2.97 and 3.71 using 2, 4 and 8 threads, respectively. However,
it does not seem to directly allow the simultaneous
computation of $y_i \leftarrow y_i + a_{ij} x_j$ and $y_j \leftarrow y_j +
a_{ij} x_i$ in a single loop, as CSRC does.
\subsection{Overview}
The remainder of this paper is organized as follows. Section \ref{sec:csrc}
contains a precise definition of the CSRC format accompanied with a description
of the matrix-vector multiplication algorithm using such structure. Its
parallelization is described in Section \ref{sec:parallel-csrc}, where we
present two strategies for avoiding conflicts during write accesses to
the destination vector. Our results are shown in Section \ref{sec:results},
supplemented with some worthy remarks. We finally draw some conclusions in
Section \ref{sec:conclusion}.
\section{The CSRC storage format}
\label{sec:csrc}
The \textit{compressed sparse row-column} (CSRC) format is a specialization of
CSR for structurally symmetric matrices arising in finite element modelling
\cite{RF07a}, which is the target domain application of this work. Given an
arbitrary $n \times n$ global matrix $A = (a_{ij})$, with $nnz$
non-zeros, the CSRC decomposes $A$ into the sum $A_D + A_L + A_U$, where $A_D$,
$A_L$, and $A_U$ correspond to the diagonal, lower and upper parts of $A$,
respectively. The sub-matrix $A_L$ (resp.~$A_U$) is stored in a row-wise
(resp.~column-wise) manner.
In practice, the CSRC splits the off-diagonal coefficients into two
floating-point arrays, namely, $al(k)$ and $au(k)$, $k = \frac{1}{2}(nnz - n)$,
where the lower and upper entries of $A$ are stored. In other words, if $j <
i$, then $a_{ij}$ is stored in $al$, and $au$ contains its transpose $a_{ji}$.
The diagonal elements are stored in an array $ad(n)$. Other two integer
arrays, $ia(n+1)$ and $ja(k)$, are also maintained. These arrays can be defined
in terms of either the upper or lower coefficients. The $ia$ array
contains pointers to the beginning of each row (resp.~column) in $al$
(resp.~$au$), and $ja$ contains column (resp.~row) indices for those non-zero
coefficients belonging to $A_L$ (resp.~$A_U$). Another
interpretation is that $A_L$ is represented using CSR, while $A_U$ is stored using
CSC. We illustrate the CSRC data structure for an arbitrary 9$\times$9 non-symmetric matrix
consisting of 33 non-zeros in Figure \ref{fig:csrc_scheme}.
Notice that the CSRC could be viewed as the sparse skyline (SSK) format
restricted to structurally symmetric matrices \cite{Saa95a,GR01a}.
However, as shown in Section \ref{sec:rectextension}, we made it capable of
representing rectangular matrices after minor modifications. Furthermore, to
our knowledge, this is the first evaluation of such structure on modern
multi-processed machines.
\subsection{Extension to rectangular matrices}
\label{sec:rectextension}
The way the CSRC is defined would prevent us from handling matrices with
aspect ratios other than square. In the overlapping strategy
implemented in any distributed-memory finite element code using a
subdomain-by-subdomain approach
\cite{RF07a,ARM09a}, rectangular matrices with a
remarkable property commonly occur.
An $n \times m$ matrix $A$,
with $m > n$, can always be written as the sum $A_S + A_R$, where
$A_S$ and $A_R$ are of order $n \times n$ and $n \times k$, respectively, with
$k = m - n$. In addition, the $A_S$ matrix has a symmetric non-zero pattern, and
it is occasionally numerically symmetric. Therefore, it can be represented by the
CSRC definition given before, while $A_R$ can be stored using an auxiliary CSR
data structure.
\begin{figure}[!t]
\centering
\subfloat[]{\label{fig:matvec_csrc}%
\lstinputlisting[boxpos=b]{matvec_csrc.f}}
\hfil
\subfloat[]{\label{fig:matvec_csrcr}%
\lstinputlisting[boxpos=b]{matvec_csrcr.f}}
\caption{Code snippets for the non-symmetric matrix-vector multiplication
algorithm using CSRC for (a) square and
(b) rectangular matrices.}
\label{fig:SpMV}
\end{figure}
\subsection{Sequential matrix-vector product}
The sequential version of the CSRC matrix-vector multiplication algorithm has
the same loop structure as for CSR. The input matrix $A$ is
traversed by rows, and row $i$ is processed from the left to the right up to
its diagonal element $a_{ii}$. Because we assume $A$ is structurally
symmetric, its upper part can be simultaneously traversed. That is, we are
allowed to compute both $y_i \leftarrow y_i + a_{ij} x_j$ and $y_j \leftarrow
y_j + a_{ji} x_i$, in the $i$-th loop. If $A$ is also numerically symmetric, we can further
eliminate one load instruction when retrieving its upper entries. For
rectangular matrices, there is another inner loop to process the coefficients
stored in the auxiliary CSR. Figure \ref{fig:SpMV} contains Fortran implementations of
the sparse matrix-vector product using CSRC for square and rectangular matrices.
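For readers without access to the listings, the loop structure of the
square case in Figure \subref{fig:matvec_csrc} can be sketched as below;
this is a reconstruction consistent with the description above, not
necessarily the exact listing:
\begin{lstlisting}
! Sketch of the square CSRC product: lower and upper parts
! of A are visited in a single sweep over the rows.
subroutine matvec_csrc(n, ia, ja, ad, al, au, x, y)
  implicit none
  integer, intent(in) :: n, ia(n+1), ja(*)
  real(8), intent(in) :: ad(n), al(*), au(*), x(*)
  real(8), intent(out) :: y(n)
  integer :: i, k, j
  do i = 1, n
     y(i) = ad(i)*x(i)            ! diagonal entry
     do k = ia(i), ia(i+1)-1
        j = ja(k)                 ! column index, j < i
        y(i) = y(i) + al(k)*x(j)  ! lower entry a_ij
        y(j) = y(j) + au(k)*x(i)  ! transposed upper entry a_ji
     end do
  end do
end subroutine matvec_csrc
\end{lstlisting}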
\section{Parallel implementation}
\label{sec:parallel-csrc}
To parallelize the sparse matrix-vector product using the CSRC, one can basically spawn threads at either the inner or the outer loop.
This means adding a \texttt{parallel do} directive just above line 1 or 4 of Figure \subref{fig:matvec_csrc} (and 9, for Figure \subref{fig:matvec_csrcr}).
As the amount of computations per row is usually low, the overhead due to the inner parallelization would counteract any parallelism.
On the other hand, recall that the CSRC matrix-vector product has the property that
the lower and upper parts of the input matrix are simultaneously traversed.
Thus spawning threads at line 1 requires the synchronization of writes into the destination vector.
That is, there exists a race condition on the accesses to the vector $y$.
If two threads work on different rows,
for example, rows $i$ and $j$, $j > i$, it is not unlikely that both threads
require writing permission to modify $y(k)$, $k \leq i$.
In short, our data structure is required to support concurrent reading and
writing on the vector $y$. These operations need to be thread-safe, but at
the same time very efficient, given the fine granularity of the operations.
Common strategies to circumvent this problem would employ atomic primitives,
locks, or the emerging transactional memory model. However, the overheads incurred by
these approaches are rather costly, compared to the total cost of accessing
$y$. A more promising solution would be to determine subsets of rows
that can be handled by distinct threads in parallel. In this paper, we have
considered two of such solutions, here termed \textit{local buffers} and
\textit{colorful} methods.
Our algorithms were analyzed using the concepts of \textit{work} and
\textit{span} \cite[Ch.~27]{CLRS09a}. The \textit{work} $T_{1}$ of an
algorithm is the total cost of running it on exactly one processor, and the
\textit{span} $T_{\infty}$ is equal to its cost when running on an infinite
number of processors. The \textit{parallelism} of a given algorithm is then
defined as the ratio $T_{1}/T_{\infty}$. So, the greater the parallelism of
an algorithm, the better the theoretical guarantees on its performance. The
work of the matrix-vector multiply using the CSRC is clearly $\Theta(nnz)$. To
calculate its span, we need to consider our partitioning strategies
separately.
\subsection{Local buffers method}
One way to avoid conflicts at the $y$ vector is to assign different
destination vectors to each thread. That is, thread $t_i$ would compute its
part of the solution, store it in a local buffer $y_i$, and then accumulate
this partial solution into the $y$ vector. This method, here called
\textit{local buffers method}, is illustrated in
Figure \subref{fig:partitioning-simple}, which shows the distribution of rows for an
arbitrary non-symmetric $9 \times 9$ matrix. In the example, the matrix is
split into three regions to be assigned to three different threads. The number of non-zeros per thread is 7, 5 and
21.
The main drawback of this method is the introduction of two additional steps: initialization and accumulation.
The accumulation step computes the final destination vector by merging the partial values stored in the local buffers.
Threads must initialize their own buffers before this accumulation; otherwise, stale values would be merged into the result.
For convenience, we define the \textit{effective range} of a thread as the set of rows in $y$ that it indeed needs to modify.
We consider four ways of implementing both steps:
\begin{enumerate}
\item \textit{All-in-one}: threads initialize and accumulate in parallel the buffers of the whole team.
\item \textit{Per buffer}: for each buffer, threads initialize and accumulate in parallel.
\item \textit{Effective}: threads initialize and accumulate in parallel over the corresponding effective ranges.
\item \textit{Interval}: threads initialize and accumulate in parallel over intervals of $y$ defined by the intersection of their effective ranges.
\end{enumerate}
The spans of the \textit{all-in-one} and \textit{per buffer} methods are $\Theta(p + \log n)$ and $\Theta(p\log n)$, respectively.
If the number of threads is $\Theta(n)$, then the respective parallelisms are $O(nnz / n)$ and $O(nnz/n\log n)$.
The platforms considered herein,
however, feature at most four processors. Our experiments will show that
these methods can still provide reasonable scalability for such systems.
In this case, their parallelism would be better approximated by $O(nnz / \log n)$, as for CSR.
In fact, the problem with the first two methods is that they treat all buffers as dense vectors, which is rarely true in practice as we are dealing with sparse matrices.
The \textit{effective} and \textit{interval} methods try to mitigate this issue by performing computations only on effective ranges.
For narrow band matrices, which is usually the case of finite element matrices, we can assume the effective range is $\Theta(n/p)$.
Hence the span of both methods is $\Theta(p \log (n/p))$.
Since the work per thread strongly depends on the number of non-zeros per row,
a partitioning technique based just on the number of rows may result in load
imbalance. A more efficient way is to consider the number of non-zeros per
thread, because the amount of floating point operations becomes balanced. The
results presented herein were obtained using such a non-zero guided implementation,
in which the deviation from the average number of non-zeros per row
is minimized.
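A hedged sketch of this strategy with OpenMP is shown below. It is
illustrative only: the arrays \texttt{first} and \texttt{last}, holding
the row ranges produced by the non-zero guided partitioner, and the
buffer matrix \texttt{yb} are assumed, and the number of spawned threads
is assumed equal to \texttt{nt}. It shows one possible variant of the
initialization and accumulation steps:
\begin{lstlisting}
! Sketch of the local-buffers CSRC product: each thread writes
! into a private column of yb; the partial results are then
! reduced into y (one possible accumulation variant).
subroutine matvec_csrc_buf(n, nt, first, last, ia, ja, &
                           ad, al, au, x, y, yb)
  use omp_lib
  implicit none
  integer, intent(in) :: n, nt, first(nt), last(nt)
  integer, intent(in) :: ia(n+1), ja(*)
  real(8), intent(in) :: ad(n), al(*), au(*), x(*)
  real(8), intent(out) :: y(n)
  real(8), intent(inout) :: yb(n, nt)
  integer :: t, i, k, j
!$omp parallel private(t, i, k, j)
  t = omp_get_thread_num() + 1
  yb(:, t) = 0.d0                       ! initialization step
  do i = first(t), last(t)
     yb(i, t) = yb(i, t) + ad(i)*x(i)
     do k = ia(i), ia(i+1)-1
        j = ja(k)
        yb(i, t) = yb(i, t) + al(k)*x(j)  ! row i (lower part)
        yb(j, t) = yb(j, t) + au(k)*x(i)  ! row j (upper part)
     end do
  end do
!$omp barrier
!$omp do
  do i = 1, n                            ! accumulation step
     y(i) = sum(yb(i, 1:nt))
  end do
!$omp end do
!$omp end parallel
end subroutine matvec_csrc_buf
\end{lstlisting}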
\begin{figure}[t]
\centering
\subfloat[]{\includegraphics{images/partitioning-simple}
\label{fig:partitioning-simple}}
\hfil
\subfloat[]{\includegraphics{images/partitioning-colorful}
\label{fig:partitioning-colorful}}
\hfil
\subfloat[]{\includegraphics{images/conflicts}
\label{fig:conflicts}}
\caption{Illustration of the (a) local buffers and the (b) colorful
partitioning methods for 3 threads on a $9 \times 9$ matrix along with its (c) conflict graph.}
\label{fig:partitioning}
\end{figure}
\subsection{Colorful method}
The \textit{colorful method} partitions a matrix into sets of pairwise
conflict-free rows.
Here we distinguish between two kinds of conflicts.
If a thread owns row $i$ and a second thread, owning row $j$, $j > i$, needs
to modify $y(k)$, $k < i$, this is called an \textit{indirect} conflict.
If $k = i$, we call such conflict \textit{direct}.
The \textit{conflict graph} of a matrix $A$ is the graph $G[A] = (V,E)$,
where each vertex $v \in V$ corresponds to a row in $A$, and the edges in $E$
represent conflicts between vertices.
Figure \subref{fig:conflicts} shows the conflict graph for the matrix in
Figure \ref{fig:csrc_scheme}.
Direct and indirect conflicts are indicated by solid and dashed lines,
respectively. In the graph, there are 12 direct and 7 indirect conflicts.
The direct
conflicts of row $i$ are exactly the rows corresponding to the column indices of
the non-zero entries at that row, i.e., the indices $ja(k)$, $k \in [ia(i), ia(i+1))$. They can be computed in a
single loop through the CSRC structure. The computation of indirect conflicts
is more demanding. In our implementation, these are determined with the aid of
the induced subgraph $G'[A]$ spanned by the edges in $G[A]$ associated with
direct conflicts. Given two vertices $u, v \in V$, if the intersection of
their neighborhood in $G'[A]$ is non-empty, then they are indirectly in
conflict.
We color the graph $G[A]$ by applying a standard sequential
coloring algorithm \cite{CM83a}.
The color classes correspond to
conflict-free blocks where the matrix-vector product can be safely carried
out in parallel. Observe that coloring rectangular matrices
is the same as coloring only its square part, since
the rectangular part is accessed by rows. The layout of a 5-colored
matrix is depicted in Figure \subref{fig:partitioning-colorful}.
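A first-fit greedy coloring over the conflict graph, stored in
adjacency-list arrays \texttt{xadj}/\texttt{adj}, can be sketched as
below; this is an illustration of the standard sequential algorithm of
\cite{CM83a}, not necessarily our exact implementation:
\begin{lstlisting}
! First-fit greedy coloring of the conflict graph G[A].
subroutine greedy_color(n, xadj, adj, color, ncolor)
  implicit none
  integer, intent(in) :: n, xadj(n+1), adj(*)
  integer, intent(out) :: color(n), ncolor
  integer :: i, k, c
  logical :: used(n)
  color = 0
  ncolor = 0
  do i = 1, n
     used(1:ncolor+1) = .false.
     do k = xadj(i), xadj(i+1)-1     ! colors taken by conflicting rows
        c = color(adj(k))
        if (c > 0) used(c) = .true.
     end do
     c = 1
     do while (used(c))              ! smallest color not used by a conflict
        c = c + 1
     end do
     color(i) = c
     ncolor = max(ncolor, c)
  end do
end subroutine greedy_color
\end{lstlisting}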
Let $k$
denote the number of colors used by the coloring algorithm.
Suppose that the color classes are evenly sized, and
that the loop over the rows is implemented as a divide-and-conquer recursion.
Under these hypotheses, the span of the colorful method can be approximated by $\Theta(k\log(n/k))$.
Thus, the colorful matrix-vector product has a parallelism of $O(nnz / k\log(n/k) )$.
Although $k < p$ would lead to better scalability compared to the local buffers strategy,
the possibility of exploiting cache hierarchies decreases,
which considerably affects the code performance.
Furthermore, the number of processors used in our experiments was always smaller than the number of colors.
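With the color classes at hand, the conflict-free product can be
sketched as below (assumed layout: \texttt{rows(ptr(c):ptr(c+1)-1)}
lists the rows of color \texttt{c}, and $y$ is zeroed by the caller):
\begin{lstlisting}
! Sketch of the colorful CSRC product: rows of one color are
! pairwise conflict-free, so each color class is a plain
! parallel loop; colors are processed sequentially.
subroutine matvec_csrc_colors(n, nc, ptr, rows, ia, ja, &
                              ad, al, au, x, y)
  implicit none
  integer, intent(in) :: n, nc, ptr(nc+1), rows(n)
  integer, intent(in) :: ia(n+1), ja(*)
  real(8), intent(in) :: ad(n), al(*), au(*), x(*)
  real(8), intent(inout) :: y(n)    ! assumed zeroed beforehand
  integer :: c, p, i, k, j
  do c = 1, nc
!$omp parallel do private(p, i, k, j)
     do p = ptr(c), ptr(c+1)-1
        i = rows(p)
        y(i) = y(i) + ad(i)*x(i)
        do k = ia(i), ia(i+1)-1
           j = ja(k)
           y(i) = y(i) + al(k)*x(j)  ! safe: no conflicts within a color
           y(j) = y(j) + au(k)*x(i)
        end do
     end do
!$omp end parallel do
  end do
end subroutine matvec_csrc_colors
\end{lstlisting}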
\section{Experimental results}
\label{sec:results}
Our implementation was evaluated on two Intel processors, including an Intel
Core~2 Duo E8200 (codenamed \textit{Wolfdale}) and an Intel i7 940 (codenamed
\textit{Bloomfield}). The Wolfdale processor runs at 2.66GHz, with L2 cache of
6MB and 8GB of RAM, and the Bloomfield one runs at 2.93GHz with $4\times$256KB
L2 caches, 8MB of L3 cache and 8GB of RAM. Our interest in Intel Core~2 Duo
machines lies in the fact that our finite element simulations are carried out
on a dedicated 32-node cluster of such processors.
The code was parallelized using OpenMP directives, and compiled
with Intel Fortran compiler (\texttt{ifort}) version 11.1 with level 3
optimizations (\texttt{-O3} flag) enabled. Machine counters were accessed
through the PAPI 3.7.1 library API \cite{BDGHM00a}. The measurements of
speedups and Mflop/s were carried out with PAPI instrumentation disabled.
The tests were performed on a data set comprised of 60 matrices, from which 32 are
numerically symmetric. There is one non-symmetric dense matrix of order 1K, 50
matrices selected from the University of Florida sparse matrix collection
\cite{Dav97a}, and 3 groups of 3 matrices each, called angical, tracer, and
cube2m, of our own devise. Inside these groups, matrices correspond to
one global finite element matrix output by our sequential finite element code,
and two global matrices for both of the adopted domain partitioning schemes,
overlapping (suffix ``\_o32'') and non-overlapping (suffix ``\_n32''),
where 32 stands for the number of sub-domains. Our benchmark computes the sparse
matrix-vector product a thousand times for each matrix in Table
\ref{tab:matrices-details}, which is a reasonable value for iterative solvers
like the preconditioned conjugate gradient method and the generalized
minimum residual method. All results correspond to
median values over three such runs.
\begin{table}[!t]
\caption{Details of the matrices used in our experiments.}
\label{tab:matrices-details}
\centering
{\scriptsize
\begin{tabularx}{0.485\textwidth}{@{}ll@{\ \ }rrrr@{}}
\toprule
Matrix & Sym. & $n$ & $nnz$ & $nnz/n$ & $ws$ (KB)\\
\midrule
thermal & no & 3456 & 66528 & 19 & 710 \\
ex37 & no & 3565 & 67591 & 18 & 722 \\
flowmeter5 & no & 9669 & 67391 & 6 & 828 \\
piston & no & 2025 & 100015 & 49 & 1012 \\
SiNa & yes & 5743 & 102265 & 17 & 1288 \\
benzene & yes & 8219 & 125444 & 15 & 1598 \\
cage10 & no & 11397 & 150645 & 13 & 1671 \\
spmsrtls & yes & 29995 & 129971 & 4 & 1991 \\
torsion1 & yes & 40000 & 118804 & 2 & 2017 \\
minsurfo & yes & 40806 & 122214 & 2 & 2069 \\
wang4 & no & 26068 & 177196 & 6 & 2188 \\
chem\_master1 & no & 40401 & 201201 & 4 & 2675 \\
dixmaanl & yes & 60000 & 179999 & 2 & 3046 \\
chipcool1 & no & 20082 & 281150 & 14 & 3098 \\
t3dl & yes & 20360 & 265113 & 13 & 3424 \\
poisson3Da & no & 13514 & 352762 & 26 & 3682 \\
k3plates & no & 11107 & 378927 & 34 & 3895 \\
gridgena & yes & 48962 & 280523 & 5 & 4052 \\
cbuckle & yes & 13681 & 345098 & 25 & 4257 \\
bcircuit & no & 68902 & 375558 & 5 & 4878 \\
angical\_n32 & yes & 20115 & 391473 & 19 & 4901 \\
angical\_o32 & no & 18696 & 732186 & 39 & 4957 \\
tracer\_n32 & yes & 33993 & 443612 & 13 & 5729 \\
tracer\_o32 & no & 31484 & 828360 & 26 & 5889 \\
crystk02 & yes & 13965 & 491274 & 35 & 5975 \\
olafu & yes & 16146 & 515651 & 31 & 6295 \\
gyro & yes & 17361 & 519260 & 29 & 6356 \\
dawson5 & yes & 51537 & 531157 & 10 & 7029 \\
ASIC\_100ks & no & 99190 & 578890 & 5 & 7396 \\
bcsstk35 & yes & 30237 & 740200 & 24 & 9146 \\
\bottomrule
\end{tabularx}
\hfill
\begin{tabularx}{0.485\textwidth}{@{}l@{\ \ }l@{\ \ }r@{\ \ }r@{\ \ }r@{\ \ }r@{}}
\toprule
Matrix & Sym. & $n$ & $nnz$ & $nnz/n$ & $ws$ (KB)\\
\midrule
dense\_1000 & no & 1000 & 1000000 & 1000 & 9783 \\
sparsine & yes & 50000 & 799494 & 15 & 10150 \\
crystk03 & yes & 24696 & 887937 & 35 & 10791 \\
ex11 & no & 16614 & 1096948 & 66 & 11004 \\
2cubes\_sphere & yes & 101492 & 874378 & 8 & 11832 \\
xenon1 & no & 48600 & 1181120 & 24 & 12388 \\
raefsky3 & no & 21200 & 1488768 & 70 & 14911 \\
cube2m\_o32 & no & 60044 & 1567463 & 26 & 16774 \\
nasasrb & yes & 54870 & 1366097 & 24 & 16866 \\
cube2m\_n32 & no & 65350 & 1636210 & 25 & 17127 \\
venkat01 & no & 62424 & 1717792 & 27 & 17872 \\
filter3D & yes & 106437 & 1406808 & 13 & 18149 \\
appu & no & 14000 & 1853104 & 132 & 18342 \\
poisson3Db & no & 85623 & 2374949 & 27 & 24697 \\
thermomech\_dK & no & 204316 & 2846228 & 13 & 31386 \\
Ga3As3H12 & yes & 61349 & 3016148 & 49 & 36304 \\
xenon2 & no & 157464 & 3866688 & 24 & 40528 \\
tmt\_sym & yes & 726713 & 2903837 & 3 & 45384 \\
CO & yes & 221119 & 3943588 & 17 & 49668 \\
tmt\_unsym & no & 917825 & 4584801 & 4 & 60907 \\
crankseg\_1 & yes & 52804 & 5333507 & 101 & 63327 \\
SiO2 & yes & 155331 & 5719417 & 36 & 69451 \\
bmw3\_2 & yes & 227362 & 5757996 & 25 & 71029 \\
af\_0\_k101 & yes & 503625 & 9027150 & 17 & 113656 \\
angical & yes & 546587 & 11218066 & 20 & 140002 \\
F1 & yes & 343791 & 13590452 & 39 & 164634 \\
tracer & yes & 1050374 & 14250293 & 13 & 183407 \\
audikw\_1 & yes & 943695 & 39297771 & 41 & 475265 \\
cube2m & no & 2000000 & 52219136 & 26 & 545108 \\
cage15 & no & 5154859 & 99199551 & 19 & 1059358 \\
\bottomrule
\end{tabularx}}
\end{table}
\subsection{Sequential performance}
We have compared the sequential performance of CSRC to the standard CSR.
For symmetric matrices, we have chosen the OSKI implementation \cite{LVDY04a}
as the representative of the symmetric CSR algorithm, assuming that only the
lower part of $A$ is stored.
In the sparse matrix-vector product, each element of the matrix is accessed
exactly once. Thus, accessing these entries incurs only on compulsory misses.
On the other hand, the elements of $x$ and $y$ are accessed multiple times.
This would enable us to take advantage of cache hierachies by
reusing recently accessed values. In the CSR, the access pattern of the $x$
vector is known to be the major hindrance to the exploitation of data reuse,
because arrays $y$, $ia$, $ja$ and $a$ all have stride-1 accesses. Since the
$y$ vector is not traversed using unit stride anymore in the CSRC, one could argue
that there would be an increase in the number of cache misses. As
presented in Figure \ref{fig:missratio}, experiments on L2 data cache misses
suggest just the converse, while the ratio of TLB misses is roughly constant.
\begin{figure}[t]
\centering
\includegraphics[height=0.22\textheight]{images/wolfdale-matvec_bench-sequential-missratio}
\caption{Percentages of L2 and TLB cache misses
using CSRC and CSR on the
Wolfdale processor.}
\label{fig:missratio}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[height=0.22\textheight]{images/matvec_bench-sequential-mflops}
\caption{Sequential performance in Mflop/s of the matrix-vector
product using CSR and CSRC on both Wolfdale and Bloomfield processors.}
\label{fig:matvec_bench-sequential-mflops}
\end{figure}
The performance of the algorithm considered herein is memory bounded, because the
number of load/store operations is at least as greater as the number of
floating-point multiply-add instructions. In a dense matrix-vector product, we
need to carry out $O(n^2)$ operations on $O(n^2)$ amount of data, while for sparse
matrices, these quantities are both $O(n)$.
In particular, observe that the computation
of the square $Ax$ product using the CSRC requires the execution of $n$ multiply and $nnz - n$
multiply-add operations, whereas the CSR algorithm requires $nnz$ multiply-add
operations. On systems without fused multiply-add operations, the CSR and CSRC
algorithms would perform $2nnz$ and $2nnz - n$ floating-point instructions,
respectively. On the other hand, the number of load instructions for CSR is
$3nnz$, and $\frac{5}{2}nnz - \frac{1}{2}n$ for the CSRC format. Hence the
ratio between loads and flops is approximately 1.26 for CSRC and exactly 1.5
for CSR. This bandwidth mitigation may be the most relevant reason for the
efficiency of CSRC shown in Figure \ref{fig:matvec_bench-sequential-mflops}.
It is also worth noting the advantage of the augmented CSRC on
matrices whose square part is numerically symmetric,
i.e., the matrices angical\_o32 and tracer\_o32.
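To make the operation counts above concrete, the following minimal sketch of a sequential CSRC kernel may help (the array names are assumptions: $ad$ holds the diagonal values, while $al$ and $au$ hold the strictly lower and upper values, sharing the index arrays $ia$ and $ja$ of the lower triangle):
\begin{verbatim}
/* Minimal sketch of the sequential CSRC product y = A*x for a
 * structurally symmetric matrix (assumed array names).          */
void csrc_matvec(int n, const int *ia, const int *ja,
                 const double *ad, const double *al, const double *au,
                 const double *x, double *y)
{
    for (int i = 0; i < n; i++)
        y[i] = ad[i] * x[i];              /* n multiplies          */
    for (int i = 0; i < n; i++) {
        for (int k = ia[i]; k < ia[i+1]; k++) {
            int j = ja[k];                /* column index, j < i   */
            y[i] += al[k] * x[j];         /* lower entry           */
            y[j] += au[k] * x[i];         /* symmetric upper entry:
                                             non-unit stride on y  */
        }
    }
}
\end{verbatim}
Each of the $(nnz-n)/2$ stored off-diagonal entries costs two multiply-adds and five loads ($al[k]$, $au[k]$, $ja[k]$, $x[j]$ and $y[j]$), which, together with a couple of loads per row, reproduces the load count of $\frac{5}{2}nnz - \frac{1}{2}n$ quoted above.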
\subsection{Multi-thread version}
Our parallel implementation was evaluated with up to 4 threads on Bloomfield
with Hyper-Threading technology disabled.
The values of speedup
are relative to the pure sequential CSRC algorithm, and not to the one thread case.
One would expect the colorful method to be best suited to matrices with
few conflicts, e.g., narrow band matrices,
because the lower the maximum degree of the conflict graph, the greater the available parallelism.
As shown in Figure
\ref{fig:matvec_bench-local_buffers_effective_nonzeros_vs_colorful},
it was more efficient only on the matrices torsion1, minsurfo and dixmaanl, which have the smallest bandwidth among all matrices.
Nonetheless, according to Figures \subref{fig:wolfdale-matvec_bench-colorful-speedup} and \subref{fig:bloomfield-matvec_bench-colorful-speedup}, small matrices can still benefit from some parallelism.
An important deficiency of the colorful strategy, which contributes to its lack of
locality, is the variable-size stride access to the source and destination
vectors. Within a color class, no two rows may share $y$ or $x$ positions:
such sharing would constitute a conflict, forcing the rows into different colors.
We conjecture that there is an optimal color size that compensates for such irregular accesses.
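For illustration, a minimal sketch of the colorful loop structure follows, assuming OpenMP and two hypothetical arrays, color\_ptr and rows, that group the row indices by color class:
\begin{verbatim}
/* Sketch of the colorful strategy: rows within one color class touch
 * pairwise disjoint positions of x and y, hence run in parallel;
 * color classes are separated by barriers.                          */
void csrc_matvec_colorful(int n, int ncolors,
                          const int *color_ptr, const int *rows,
                          const int *ia, const int *ja,
                          const double *ad, const double *al,
                          const double *au, const double *x, double *y)
{
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        y[i] = ad[i] * x[i];
    for (int c = 0; c < ncolors; c++) {
        #pragma omp parallel for
        for (int r = color_ptr[c]; r < color_ptr[c+1]; r++) {
            int i = rows[r];              /* variable-stride access  */
            for (int k = ia[i]; k < ia[i+1]; k++) {
                int j = ja[k];
                y[i] += al[k] * x[j];     /* conflict-free within    */
                y[j] += au[k] * x[i];     /* the color class         */
            }
        }                                 /* implicit barrier        */
    }
}
\end{verbatim}
Consecutive iterations process the scattered rows rows[r] and rows[r+1], which is precisely the irregular access pattern discussed above.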
\begin{figure}[!t]
\centering
\includegraphics[height=0.22\textheight]{images/matvec_bench-local_buffers_effective_nonzeros_vs_colorful}
\caption{Performance comparison between the colorful method and the fastest local buffers implementation
on the Wolfdale and Bloomfield systems.}
\label{fig:matvec_bench-local_buffers_effective_nonzeros_vs_colorful}
\end{figure}
\begin{figure}[!t]
\centering
\begin{tabular}{@{}>{\footnotesize}lm{0.9\textwidth}@{}}
(a) & \subfloat{\includegraphics[height=0.21\textheight]{images/wolfdale-matvec_bench-colorful-speedup}\label{fig:wolfdale-matvec_bench-colorful-speedup}}\\
(b) & \subfloat{\includegraphics[height=0.21\textheight]{images/bloomfield-matvec_bench-colorful-speedup}\label{fig:bloomfield-matvec_bench-colorful-speedup}}
\end{tabular}
\caption{Speedups for the colorful method on the (a) Wolfdale and (b) Bloomfield processors.}
\label{fig:matvec_bench-colorful-speedups}
\end{figure}
Figures \ref{fig:wolfdale-matvec_bench-local_buffers-speedups}
and \ref{fig:bloomfield-matvec_bench-local_buffers-speedups} show the outcomes of speedups attained
by all four implementations of the local buffers strategy.
The
overheads due to the initialization and accumulation steps become
noticeable when using just one thread. This can be easily overcome by
checking the number of threads at runtime: if there is only one thread
in the working team, the global destination vector is used instead.
Although all four implementations reached reasonable speedup peaks, the effective method was the most stable over the whole data set.
On average, it is the best choice in 93\% of the cases on Wolfdale, and in 80\% and 78\% of the cases on Bloomfield with 2 and 4 threads, respectively.
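The overall structure, including the runtime single-thread shortcut just mentioned, is sketched below for the simplest all-in-one flavor; the effective variant would restrict the initialization and accumulation loops to the index range each thread actually touches:
\begin{verbatim}
#include <stdlib.h>
#include <omp.h>

/* Sketch of the local buffers strategy (all-in-one flavor);
 * array names as in the sequential sketch above.             */
void csrc_matvec_buffers(int n, const int *ia, const int *ja,
                         const double *ad, const double *al,
                         const double *au, const double *x, double *y)
{
    #pragma omp parallel
    {
        if (omp_get_num_threads() == 1) {   /* shortcut: no buffers */
            for (int i = 0; i < n; i++) {
                y[i] = ad[i] * x[i];
                for (int k = ia[i]; k < ia[i+1]; k++) {
                    y[i] += al[k] * x[ja[k]];
                    y[ja[k]] += au[k] * x[i];
                }
            }
        } else {
            /* initialization: one zeroed private buffer per thread */
            double *buf = calloc(n, sizeof *buf);
            #pragma omp for
            for (int i = 0; i < n; i++) {
                buf[i] += ad[i] * x[i];
                for (int k = ia[i]; k < ia[i+1]; k++) {
                    buf[i] += al[k] * x[ja[k]];
                    buf[ja[k]] += au[k] * x[i];   /* stays local */
                }
            }
            #pragma omp single
            for (int i = 0; i < n; i++) y[i] = 0.0;
            /* accumulation: reduce the private buffers into y */
            #pragma omp critical
            for (int i = 0; i < n; i++) y[i] += buf[i];
            free(buf);
        }
    }
}
\end{verbatim}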
\begin{figure}[p]
\centering
\begin{tabular}{@{}>{\footnotesize}lm{0.9\textwidth}@{}}
(a) & \subfloat{\includegraphics[height=0.21\textheight]{images/wolfdale-matvec_bench-local_buffers_static_nonzeros-speedup}\label{fig:wolfdale-matvec_bench-local_buffers_static_nonzeros-speedup}}\\
(b) & \subfloat{\includegraphics[height=0.21\textheight]{images/wolfdale-matvec_bench-local_buffers_full_nonzeros-speedup}\label{fig:wolfdale-matvec_bench-local_buffers_full_nonzeros-speedup}}\\
(c) & \subfloat{\includegraphics[height=0.21\textheight]{images/wolfdale-matvec_bench-local_buffers_effective_nonzeros-speedup}\label{fig:wolfdale-matvec_bench-local_buffers_effective_nonzeros-speedup}}\\
(d) & \subfloat{\includegraphics[height=0.21\textheight]{images/wolfdale-matvec_bench-local_buffers_strict_nonzeros-speedup}\label{fig:wolfdale-matvec_bench-local_buffers_strict_nonzeros-speedup}}\\
\end{tabular}
\caption{Speedups achieved by the local buffers strategy using the (a) all-in-one, (b) per buffer, (c) effective and (d) interval methods of initialization/accumulation on the Wolfdale processor.}
\label{fig:wolfdale-matvec_bench-local_buffers-speedups}
\end{figure}
\begin{figure}[p]
\centering
\begin{tabular}{@{}>{\footnotesize}lm{0.9\textwidth}@{}}
(a) & \subfloat{\includegraphics[height=0.21\textheight]{images/bloomfield-matvec_bench-local_buffers_static_nonzeros-speedup}\label{fig:bloomfield-matvec_bench-local_buffers_static_nonzeros-speedup}}\\
(b) & \subfloat{\includegraphics[height=0.21\textheight]{images/bloomfield-matvec_bench-local_buffers_full_nonzeros-speedup}\label{fig:bloomfield-matvec_bench-local_buffers_full_nonzeros-speedup}}\\
(c) & \subfloat{\includegraphics[height=0.21\textheight]{images/bloomfield-matvec_bench-local_buffers_effective_nonzeros-speedup}\label{fig:bloomfield-matvec_bench-local_buffers_effective_nonzeros-speedup}}\\
(d) & \subfloat{\includegraphics[height=0.21\textheight]{images/bloomfield-matvec_bench-local_buffers_strict_nonzeros-speedup}\label{fig:bloomfield-matvec_bench-local_buffers_strict_nonzeros-speedup}}\\
\end{tabular}
\caption{Speedups achieved by the local buffers strategy using the (a) all-in-one, (b) per buffer, (c) effective and (d) interval methods of initialization/accumulation on the Bloomfield processor.}
\label{fig:bloomfield-matvec_bench-local_buffers-speedups}
\end{figure}
To better illustrate the performance of the different initialization/accumulation algorithms, Table \ref{tab:matvec_bench-acctime} presents average values of the running time consumed by these algorithms for two classes of matrices: those that fit in cache and those that do not.
As expected, the all-in-one and per buffer strategies have similar performance.
The effective and interval methods have proved feasible for practical use, although the latter may incur a higher overhead because the number of intervals is at least as great as the number of threads.
In general, the running time is influenced by the working set size
and the band structure of the matrix. When the arrays used by the CSRC
fit or nearly fit into cache memory, better speedups were
obtained with almost linear scalability, reaching up to 1.87 on Wolfdale. Poor
performance was observed for some matrices
from the University of Florida collection, e.g., tmt\_sym, tmt\_unsym, cage15 and F1. In the case of cage15 and F1, this may be attributed to the
absence of a band structure. On
the other hand, there seems to be a lower bound on the matrix bandwidth required to preserve performance.
In particular, the quasi-diagonal profile of the matrices tmt\_sym and tmt\_unsym
has contributed to amplifying indirection overheads.
Our code has been 63\% more efficient on Bloomfield using 2 threads than on Wolfdale.
Taking a closer view, however, we see that Wolfdale is faster on 80\% of matrices with working set sizes up to 8MB, while Bloomfield beats the former on 94\% of the remaining matrices.
Notice that Wolfdale requires fewer cycles than Bloomfield to access its outermost cache, which would explain its superiority on small matrices.
Analysing the performance with 4 threads on the Bloomfield processor, shown in Figure \subref{fig:bloomfield-matvec_bench-local_buffers_effective_nonzeros-speedup}, we see that large working sets drastically degrade the efficiency of the implementation, compared to the 2-threaded case.
On smaller matrices, speedups seem to grow linearly, with peaks of 1.83 and 3.40 using 2 and 4 threads, respectively.
\begin{table}[t]
\caption{Average values of the maximum running time among all threads spent during the initialization and accumulation steps using four different approaches.}
\label{tab:matvec_bench-acctime}
\centering
\renewcommand{\arraystretch}{1.1}
{\footnotesize
\begin{tabular}{@{}l@{\ \ }c@{\ \ }cc@{\ \ }cc@{\ \ }c@{}}
\toprule
\multirow{3}{*}{Method} & \multicolumn{2}{c}{Wolfdale} & \multicolumn{4}{c}{Bloomfield}\\\cmidrule(r){2-3}\cmidrule(l){4-7}
& $ws < 6$MB & $ws > 6$MB & \multicolumn{2}{c}{$ws < 8$MB} & \multicolumn{2}{c}{$ws > 8$MB}\\\cmidrule(r){2-2}\cmidrule(r){3-3}\cmidrule(lr){4-5}\cmidrule(l){6-7}
& 2 & 2 & 2 & 4 & 2 & 4\\
\midrule
all-in-one & 0.0455 & 4.3831 & 0.0370 & 0.0475 & 1.3127 & 2.5068\\
per buffer & 0.0455 & 4.3876 & 0.0320 & 0.0393 & 1.8522 & 3.8299\\
effective & \textbf{0.0215} & \textbf{1.8785} & \textbf{0.0176} & \textbf{0.0234} & \textbf{0.8094} & \textbf{1.2575}\\
interval & 0.0858 & 2.9122 & 0.0748 & 0.0456 & 1.3920 & 1.4939\\
\bottomrule
\end{tabular}}
\end{table}
\section{Conclusion}
\label{sec:conclusion}
We have been concerned with the parallelization of the matrix-vector multiplication
algorithm using the CSRC data structure, focusing on multi-core
architectures. It has been advocated that multi-core parallelization alone can
compete with purely sequential optimization techniques. We observed that,
provided sufficient memory bandwidth, our implementation is fairly scalable.
The main deficiency of the colorful method is due to variable size stride
accesses, which can destroy any locality provided by matrix reordering
techniques. We claim that it could be improved by fixing the maximum allowed
stride size inside each color class. This will be the objective of our future
investigations.
Computing the transpose matrix-vector product is
considered costly when using the standard CSR. An easy but still expensive solution
would be to convert the matrix into the CSC format before spawning threads.
Using the CSRC, in contrast, the transpose product is straightforward,
as we just need to swap the addresses of $al$ and $au$.
Clearly, the computational costs remain the same.
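In code, the swap amounts to a few lines, reusing the sequential kernel sketched earlier (the struct is a hypothetical container for the CSRC arrays):
\begin{verbatim}
struct csrc { int n; int *ia, *ja; double *ad, *al, *au; };

/* Sketch: y = A^T * x with CSRC, by swapping the lower/upper value
 * arrays; the index arrays and the diagonal are shared.            */
void csrc_transpose_matvec(struct csrc *A, const double *x, double *y)
{
    double *tmp = A->al; A->al = A->au; A->au = tmp;  /* swap    */
    csrc_matvec(A->n, A->ia, A->ja, A->ad, A->al, A->au, x, y);
    tmp = A->al; A->al = A->au; A->au = tmp;          /* restore */
}
\end{verbatim}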
Our results extend previous work
on the computation of the sparse matrix-vector product for
structurally symmetric matrices to multi-core
architectures.
The algorithms hereby presented are now part of a distributed-memory implementation
of the finite element method \cite{RF07a}.
We are currently conducting
experiments on the effect of coupling coarse- and fine-grained parallelism.
\section*{Acknowledgment}
We would like to thank Prof.~Jos{\'e} A.~F.~Santiago and Cid S.~G.~Monteiro for
granting access to the Intel i7 machines used in our experiments. We are also
grateful to Ayd{\i}n Bulu\c{c} and the anonymous reviewers
for the helpful comments.
\bibliographystyle{abbrv}
\section{Introduction}
At this conference, the Standard Models of Particle Physics and Cosmology
have again been impressively confirmed. In the experimental talks on strong
interactions, electroweak precision tests, flavour and neutrino physics
and searches for `new physics' no significant
deviations from Standard Model predictions
have been reported. Also in Astrophysics, where unexpected results in
high-energy cosmic rays were found, conventional astrophysical explanations
of the new data appear to be sufficient. In Cosmology, we have entered an
era of precision physics with theory lagging far behind.
Given this situation, one faces the question: What are the theoretical and
experimental hints for physics beyond the Standard Models, and what
discoveries can we hope for at the LHC, in non-accelerator experiments,
and in astrophysical and cosmological observations? In the following
I shall summarize some results of this conference using this question as
a guideline. Particular emphasis will therefore be given to the Higgs sector
of the Standard Model, the ``topic number one'' at the LHC, and the recent
results in high-energy cosmic rays, which caused tremendous excitement
during the past year because of the possible connection to dark matter.
The Standard Model of Particle Physics is a relativistic quantum field theory,
a non-Abelian gauge theory with symmetry group
\begin{equation}
G_{\mathrm{SM}} = SU(3)\times SU(2)\times U(1)
\end{equation}
for the strong and electroweak interactions, respectively. Three generations
of quarks and leptons with chiral gauge interactions describe all features
of matter. The current focus is on
\begin{itemize}
\item
Precision measurements and calculations in QCD
\item
Heavy ions and nonperturbative field theory
\item
Electroweak symmetry breaking, with the key elements: top-quark,
W-boson and Higgs bosons
\item
Flavour physics and neutrinos.
\end{itemize}
The cosmological Standard Model is also based on a gauge theory, Einstein's
theory of gravity. Together with the Robertson-Walker metric this leads to
Friedmann's equations. Within current errors, the universe is known to be
spatially flat, and its expansion rate is increasing. Most remarkably,
its energy density is dominated by `dark matter' and `dark energy'. The desire
to disentangle the nature of dark matter and dark energy, and to understand
their possible connection to particle physics is the main driving force in
observational cosmology today.
On the theoretical frontier, string theory is the main theme, despite the fact
that after more than thirty years of research it still has not become a
falsifiable theory. Nevertheless, string theory has inspired many extensions
of the Standard Model, which will be tested at the LHC and it has
stimulated interesting models for the early universe which can be probed by
cosmological observations. String theory goes beyond field theory by replacing
point-interactions of particles by nonlocal interactions of strings. In this
way it has also become a valuable tool to analyze strongly interacting
systems of particles at high energies and high densities.
\section{Strong Interactions}
\subsection{QCD at colliders}
Quantum chromodynamics is the prototype of a non-Abelian gauge theory.
To improve our quantitative understanding of this theory has remained a
theoretical challenge for more than three decades. In recent years important
topics have been the determination of the scale-dependent strong coupling
$\alpha_s(Q^2)$, higher-order calculations of matrix elements, the analysis
of multi-leg final states and soft processes including underlying events and
diffraction \cite{schleper}.
Understanding QCD is also a prerequisite for electroweak
precision tests and physics
beyond the Standard Model. The search for the Higgs boson, for instance,
requires the knowledge of the gluon distribution function, at low Bjorken-$x$
for light Higgs bosons and at large Bjorken-$x$ for large Higgs masses.
Recently, a combined analysis of the deep-inelastic scattering data of the H1
and ZEUS collaborations at HERA has led to significantly more precise quark
and gluon distribution functions in the whole $x$-range. The new HERA-PDF's are
compared with previous determinations of parton distribution functions
by the CTEQ and MSTW collaborations
in Fig.~\ref{fig:PDFs}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=6.5cm]{PDF-1}
\hspace*{1cm}
\includegraphics[width=6.5cm]{PDF-2}
\end{center}
\caption{Quark and gluon distribution functions from a combined analysis of
the H1 and ZEUS collaborations compared with distribution functions obtained
by CTEQ (left) and MSTW (right). From \cite{schleper}.
\label{fig:PDFs}
}
\end{figure}
Impressive progress has been made in the development of new techniques for
multi-leg next-to-leading order (NLO) calculations \cite{anastasiou}. As a
result, the full NLO calculation for the inclusive W+3jet production cross
section in hadron-hadron collisions became possible. In Fig.~\ref{fig:NLO}
the LO and NLO predictions are compared with CDF data; the scale dependence
is significantly reduced. Another important process, especially as
background for Higgs search, is
$pp \rightarrow t\bar{t}b\bar{b} + X$ for which a full NLO calculation
has also been performed. As expected, the scale dependence is reduced
(see Fig.~\ref{fig:NLO}).
One may worry, however, that the `correction' compared to the LO calculation
is $\mathcal{O}(100)\%$! Most remarkable is also the progress in calculating
multi-leg amplitudes. Using conventional as well as string theory techniques
it has become possible to compute scattering amplitudes involving up to 22
gluons \cite{anastasiou}!
\begin{figure}[t]
\begin{center}
\includegraphics[height=6cm]{w3j}
\hspace*{1cm}
\includegraphics[height=6.3cm]{denner}
\end{center}
\caption{Left: The measured inclusive W+3jet production cross section for
$p\bar{p}$ collisions at the Tevatron as function of the Third Jet $E_T$;
from \cite{schleper}. Right: Scale dependence of LO and NLO cross sections
for the process $pp \rightarrow t\bar{t}b\bar{b} + X$ at the LHC.
From \cite{anastasiou}.
\label{fig:NLO}
}
\end{figure}
\subsection{Quark-gluon plasma and AdS/CFT correspondence}
During the past years dense hadronic matter has become another frontier of
QCD due to new results from RHIC and novel theoretical developments
\cite{wiedemann}. An interesting collective phenomenon is the `elliptic
flow' of particles produced in heavy ion collisions. From the size and
$p_T$-dependence of the elliptic flow one can determine the shear viscosity
$\eta$ which appears as parameter in hydrodynamic simulations. The small
measured value of $\eta$ caused considerable excitement among theorists
since it could be understood in the context of a strongly coupled
$N=4$ supersymmetric Yang-Mills (SYM) theory.
Another intriguing phenomenon are monojets, originally conjectured by Bjorken
for proton-proton collisions. In heavy ion collisions their appearance is
expected due to the radiative energy loss in the medium (see
Fig.~\ref{fig:mono}), which can be asymmetric for partons after a hard
scattering. The monojet phenomenon has already been observed at RHIC
and will be studied in detail at LHC.
\begin{figure}[b]
\begin{center}
\includegraphics[width=9cm]{mono}
\end{center}
\caption{Monojet event in a heavy ion collision. From \cite{wiedemann}.}
\label{fig:mono}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=2.5cm]{janik}
\end{center}
\caption{The leading L\"uscher graph contributing to the Konishi operator at
four loops. The dashed line represents all asymptotic states of the theory
while the two vertical lines correspond to the two particles forming the
Konishi state in the two dimensional string worldsheet QFT. From \cite{janik}.}
\label{fig:luscher}
\end{figure}
On the theoretical side, significant progress has been made towards `solving
$N=4$ SYM theory' \cite{janik}. Here `solving' means the determination of
the anomalous dimensions of all operators for any value of the gauge coupling,
so that one can extrapolate the theory from the perturbative weak-coupling
regime to the nonperturbative strong coupling regime. The anomalous dimensions
can be calculated in usual perturbation theory as well as, via the AdS/CFT
correspondence, by means of string theory in the spacetime background
$AdS_5\times S^5$, i.e., by considering a particular two-dimensional field
theory on a finite cylinder (see Fig.~\ref{fig:luscher}).
As an example, consider the Konishi operator $\mathrm{tr}\Phi_i^2$, where
$\Phi_i$
are the adjoint scalars of $N=4$ SYM. String theory yields at order $g^8$,
\begin{equation}
\label{sum}
\Delta^{(4-loop)}_{wrapping} =
\sum_{Q=1}^{\infty} \left\{ -\frac{num(Q)}{\left(9 Q^4-3
Q^2+1\right)^4 \left(27 Q^6-27 Q^4+36 Q^2+16\right)}
+\frac{864}{Q^3}-\frac{1440}{Q^5} \right\}\ ,
\end{equation}
with the numerator
\begin{align}
num(Q) =& 7776 Q (19683 Q^{18}-78732 Q^{16}+150903
Q^{14}-134865 Q^{12}+ \nonumber\\
&+1458 Q^{10}+48357 Q^8-13311
Q^6-1053 Q^4+369 Q^2-10)\ .
\end{align}
The sum (\ref{sum}) can be carried out with the result
\begin{equation}
\Delta^{(4-loop)}_{wrapping} = (324+864\zeta(3)-1440 \zeta(5))g^8\ ,
\end{equation}
which exactly agrees with a direct perturbative computation at four-loop order
(around 131015 Feynman graphs). The recent string calculation to order $g^{10}$
still remains to be checked by a five-loop perturbative calculation.
The string calculations give the impression that there is some structure in
the perturbative expansion of gauge theories which has not been understood
so far. In this way, $N=4$ SYM theory may become the `harmonic oscillator
of four-dimensional gauge theories'.
\section{The Higgs sector}
The central theme of physics at the LHC is the Higgs sector \cite{grojean}
of the Standard Model. The weak and electromagnetic interactions are described
by a spontaneously broken gauge theory. The Goldstone bosons of the symmetry
breaking
\begin{equation}\label{sym1}
SU(2)_L\times U(1)_Y \rightarrow U(1)_\textrm{em}
\end{equation}
give mass to the W- and Z-bosons via the Higgs mechanism. In the Standard Model
the electroweak symmetry is broken in the simplest possible way, by the vacuum
expectation value of a single $SU(2)_L$ doublet, corresponding to the symmetry
breaking $SU(2)_L\times SU(2)_R \rightarrow SU(2)_{L+R}$ which contains the
Goldstone bosons of (\ref{sym1}).
The unequivocal prediction of the Standard Model is the existence of a new
elementary particle, the Higgs boson. During the past two decades this theory
has been impressively confirmed in many ways. Electroweak precision tests
favour a light Higgs boson \cite{conway},
$m_H \simeq 87^{+35}_{-26}\ \textrm{GeV}$.
For Higgs masses in the range from 130~GeV to 180~GeV, the
Standard Model can be consistently extrapolated from the electroweak scale
$\Lambda_\textrm{EW} \sim 100~\textrm{GeV}$ to the grand unification (GUT)
scale $\Lambda_\textrm{GUT} \sim 10^{16}~\textrm{GeV}$, avoiding the potential
problems of vacuum instability and the Landau pole for the Higgs self-coupling.
The unsuccessful search for the Higgs boson
at LEP led to the lower bound on the Higgs mass $114\ \textrm{GeV} < m_H$,
and the search at the Fermilab Tevatron excludes the the mass range
$163 < m_H < 166\ \textrm{GeV}$ at 95\% C.L.
(see Fig.~\ref{fig:higgstevatron}). Together with the assumption of grand
unification, and the implied upper bound of $180\ \textrm{GeV}$ on
the Higgs mass, this further supports the existence of a light Higgs boson.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.6\textwidth,clip,angle=0]{tevcomb_nov6.pdf}
\caption{ \label{fig:higgstevatron}
Observed and expected 95\% C.L. on the ratios to the SM cross sections, as
functions of the Higgs boson mass; combined CDF and D0 analysis.
From \cite{conway}.}
\end{center}
\end{figure}
Supersymmetric extensions of the Standard Model are particularly well
motivated. They stabilize the hierarchy between the electroweak scale and the
GUT scale, and in the minimal case with two $SU(2)_L$ doublets,
the MSSM, the strong and electroweak gauge couplings unify with surprising
accuracy at $\Lambda_\textrm{GUT} \sim 10^{16}~\textrm{GeV}$. In addition,
the lightest supersymmetric particle (LSP), neutralino or gravitino, is a
natural dark matter candidate. As a consequence, the search for superparticles
dominates `New Physics' searches at the Tevatron and the LHC \cite{buescher}.
On the other hand, supersymmetric versions of the Standard Model require some
`fine-tuning' of parameters. In particular in the MSSM, where the Higgs
self-coupling is given by the gauge couplings, one obtains at tree level an
upper bound on the lightest CP-even Higgs scalar,
\begin{equation}
m_h \leq m_Z\ .
\end{equation}
One-loop radiative corrections can lift the Higgs mass above the LEP bound
provided the scalar top is heavier than 1~TeV. Consistency with the
$\rho$-parameter then requires an adjustment of different parameters at the level
of 1\%, which is sometimes considered to be `unnatural'
\footnote{Note that in the non-supersymmetric Standard Model the small
value of the CP-violating parameter $\epsilon'$ is also due to fine-tuned
cancellations between unrelated contributions.}. This fine-tuning can be
avoided in models with more fields such as the next-to-minimal supersymmetric
Standard Model (NMSSM) or `little Higgs' models, where the Higgs fields
appear as pseudo-Goldstone bosons of a global symmetry containing
$SU(2)_L\times SU(2)_R \rightarrow SU(2)_{L+R}$.
So far no Higgs-like boson has been found and we do not know what the
origin of electroweak symmetry breaking is. Theorists have been rather
inventive and the considered possibilities range from weakly coupled
elementary Higgs bosons with or without supersymmetry via composite Higgs
bosons and technicolour to the extreme case of large extra dimensions with
no Higgs boson. The corresponding Higgs scenarios come with colourful
names such as \cite{grojean}:
buried, charming, composite, fat, fermiophobic, gauge, gaugephobic,
holographic, intermediate, invisible, leptophilic, little, littlest, lone,
phantom, portal, private, slim, simplest, strangephilic, twin, un-, unusual,
\ldots .
The various possibilities will hopefully soon be reduced by LHC data.
\subsection{Weak versus strong electroweak symmetry breaking}
To unravel the nature of electroweak symmetry breaking, it is not sufficient
to find a `Higgs-like' resonance and to measure mass and spin. Of crucial
importance is also the study of longitudinally polarized W-bosons at large
center-of-mass energies, $s \gg m_W^2$, a notoriously difficult measurement.
The gauge boson self-interactions lead to a $WW$ scattering amplitude which
rises with energy,
\begin{equation}
\mathcal{A} (W_L^a W_L^b \to W_L^c W_L^d) =
\mathcal{A}(s) \delta^{ab}\delta^{cd}
+ \mathcal{A}(t) \delta^{ac}\delta^{bd}
+\mathcal{A}(u) \delta^{ad}\delta^{bc}\ , \qquad
\mathcal{A}(s)= i\frac{s}{v^2},
\end{equation}
and violates perturbative unitarity at $\sqrt{s} = 1-3\ \mathrm{TeV}$.
A scalar field $h$, which couples to longitudinal $W$'s with strength $\alpha$
relative to the SM Higgs coupling, yields the additional scattering amplitude
\begin{equation}
\mathcal{A}_\textrm{\tiny scalar}(s) =
- i\frac{\alpha^2 s^2}{v^2(s-m_h^2)}\ .
\end{equation}
As expected, the leading term of the total scattering amplitude,
\begin{equation}
\mathcal{A}_\textrm{\tiny tot}(s) =
- i\frac{(\alpha^2-1)s^2 + m_h^2 s}{v^2 (s-m_h^2)}\ ,
\end{equation}
vanishes for $\alpha^2=1$, which corresponds to the SM Higgs, and unitarity
is restored. It is important to realize, however, that the exchange of a
scalar may only partially unitarize the $WW$ scattering amplitude. This
happens in composite Higgs models where the Higgs mass can be light compared
to the compositeness scale $f>v$. Restoration of unitarity is then postponed
to energies $\sqrt{s} \sim 4\pi f > m_h$, where additional degrees of freedom
become visible, which are related to the strong interactions forming the
composite Higgs boson.
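A quick check makes the partial unitarization explicit: from the total amplitude above,
\begin{equation}
\mathcal{A}_\textrm{\tiny tot}(s) \simeq - i\, (\alpha^2-1)\, \frac{s}{v^2}
\qquad \mathrm{for}\ s \gg m_h^2\ ,
\end{equation}
so for $\alpha^2 \neq 1$ the amplitude keeps growing linearly in $s$ even after the scalar exchange; with $\alpha^2 = 1-\xi$, as happens in minimal composite Higgs models, strong coupling is reached at $\sqrt{s} \sim 4\pi v/\sqrt{\xi} = 4\pi f$, consistent with the statement above.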
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.45\textwidth,clip,angle=0]{WWtoWWTotal2.pdf}
\hspace{.2cm}
\includegraphics[width=0.45\textwidth,clip,angle=0]{WWtoWWTotalCutOnt2.pdf}
\caption{\label{fig:WpWpTOWpWp}
$W^+W^+\to W^+W^+$ partonic cross section as a function of the center-of-mass
energy for $m_h = 180$~GeV for the SM ($\xi=0$) and for composite Higgs models
($\xi=v^2/f^2 \not =0$).
On the left, the inclusive cross section is shown with a cut on $t$ and $u$
of order $m_W^2$; the plot on the right displays the hard cross section with
the cut $-0.75 < t/s < -0.25$. From \cite{grojean}.
}\label{fig:unitarity}
\end{center}
\end{figure}
Signatures of composite Higgs models can be systematically studied by adding
higher-dimensional operators to the Standard Model Lagrangian \cite{grojean},
\begin{eqnarray}
&&\mathcal{L}_{\tiny comp} =
\frac{c_H}{2f^2} \left( \partial_\mu \left( H^\dagger H \right) \right)^2
+ \frac{c_T}{2f^2} \left( H^\dagger{\overleftrightarrow D}_\mu H\right)^2
- \frac{c_6\lambda}{f^2}\left( H^\dagger H \right)^3
+ \left( \frac{c_yy_f}{f^2}H^\dagger H {\bar f}_L Hf_R +{\rm h.c.}\right) \nonumber \\
&&
+\frac{ic_Wg}{2m_\rho^2}\left( H^\dagger \sigma^i \overleftrightarrow {D^\mu} H \right )( D^\nu W_{\mu \nu})^i
+\frac{ic_Bg'}{2m_\rho^2}\left( H^\dagger \overleftrightarrow {D^\mu} H \right )( \partial^\nu B_{\mu \nu}) +\ldots \ ;
\label{eq:comp}
\end{eqnarray}
here $g, g', \lambda$ and $f_{L,R}$ are the electroweak gauge couplings, the
quartic Higgs coupling and the Yukawa coupling of the fermions $f_{L,R}$,
respectively; $m_{\rho} \simeq 4\pi f$, and the coefficients,
$c_H, c_T \ldots$ are expected to be of order one. The effective Lagrangian
(\ref{eq:comp}) describes departures from the Standard Model to leading order
in $\xi = v^2/f^2$.
The measurement of a rising cross section for longitudinal W-bosons at the
LHC is a challenging task. In Fig.~\ref{fig:unitarity} the predicted rise with
energy is shown for $m_h = 180$~GeV and two values of $\xi = v^2/f^2$.
The discovery of a `Higgs boson' at the LHC, with no other signs of new
physics, would still allow a rather low scale of compositeness,
$\xi \simeq 1$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=.75\textwidth]{WarpedHiggsless_final}
\caption[]{The symmetry-breaking structure of the warped Higgsless
model of Csaki et al. The model considers a 5D gauge theory in a fixed gravitational anti-de-Sitter (AdS) background.
The UV~brane (sometimes called the Planck brane) is located at $z=R$ and the IR~brane (also called the TeV brane) is located at $z=R'$. $R$ is the AdS curvature scale. In conformal coordinates, the AdS metric is given by
$
ds^2= \left( R/z \right)^2 \Big( \eta_{\mu \nu} dx^\mu dx^\nu - dz^2 \Big).
$ From \cite{grojean}.
}
\label{fig:higgsless}
\end{center}
\end{figure}
\subsection{Higgsless models}
Despite all electroweak precision tests, it still is conceivable that the
electroweak gauge symmetry is broken without Higgs mechanism and that no Higgs
boson exists. However, in this extreme case other new particles are predicted,
which unitarize the $WW$ scattering amplitude.
An interesting example of this kind is provided by higher-dimensional theories
whose extra dimensions have electroweak size, $r_{\tiny higgsless} \sim 1/v =
\mathcal{O}(10^{-16}\ \mathrm{cm})$ (see Fig.~\ref{fig:higgsless}). The $W$- and
$Z$-bosons are now interpreted as Kaluza-Klein modes whose mass is due to
their transverse momentum in the extra dimensions,
\begin{equation}
E^2 = \vec{p}^2_3 + p^2_\perp = \vec{p}^2_3 + m_W^2 \ ,
\end{equation}
where $\vec{p}_3$ is the ordinary 3-momentum. Naively, one expects a strong
rise of the $WW$ scattering amplitude with energy,
\begin{equation}
\mathcal{A} = \mathcal{A}^{(4)} \left( \frac{\sqrt{s}}{v}\right)^4
+ \mathcal{A}^{(2)} \left( \frac{\sqrt{s}}{v}\right)^2 + \mathcal{A}^{(0)} + \ldots
\end{equation}
However, inclusion of all Kaluza-Klein modes leads to
$\mathcal{A}^{(4)}=\mathcal{A}^{(2)}=0$, which is a consequence of the
relations between couplings and masses enforced by the higher-dimensional
gauge theory. Since the extra dimensions have electroweak size, higgsless
models predict $W'$ and $Z'$ vector bosons below 1~TeV with sizable
couplings to the standard $W$ and $Z$ vector bosons.
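The vanishing of $\mathcal{A}^{(4)}$ and $\mathcal{A}^{(2)}$ can be traced back to sum rules obeyed by the Kaluza-Klein couplings and masses, which take the schematic form
\begin{equation}
g_{WWWW} = \sum_{k} g_{WWk}^2\ , \qquad
4\, g_{WWWW}\, m_W^2 = 3 \sum_{k} g_{WWk}^2\, m_k^2\ ,
\end{equation}
where $g_{WWWW}$ denotes the quartic self-coupling of the external mode and $g_{WWk}$ its cubic coupling to the $k$-th Kaluza-Klein state of mass $m_k$.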
\subsection{Top-Higgs system}
\begin{figure}[t]
\begin{center}
\includegraphics[height=7cm]{TevMtopComboMar09}
\hspace*{1cm}
\includegraphics[height=8cm]{weiglein}
\caption{ \label{fig:tophiggs}
Left: Top-quark mass measurements of CDF and D0. Right: Predicted dependence
of the W-mass on the top mass in the SM and the MSSM. From
\cite{schwanenberger}.
}
\end{center}
\end{figure}
In the Standard Model the top-quark \cite{schwanenberger} plays a special role
because of its large
Yukawa coupling. In some supersymmetric extensions, the top Yukawa coupling
even triggers electroweak symmetry breaking. It is very remarkable that
the top-quark mass is now known with an accuracy comparable to its width
(see Fig.~\ref{fig:tophiggs}),
\begin{equation}
m_{\textrm{top}} = 173.1 \pm 1.3\ \mathrm{GeV}\ .
\end{equation}
The meaning of a top-quark mass given with this precision is a subtle
theoretical issue. To further improve this precision would be very interesting
for several reasons. First of all, it is a challenge for the present
theoretical understanding of QCD processes to relate the measured `top-quark
mass' to parameters of the Standard Model Lagrangian. Moreover, since the
top-Higgs system plays a special role in many extensions of the Standard
Model, one may hope to discover some departure from Standard Model predictions.
In the right panel of Fig.~\ref{fig:tophiggs} the predicted dependence of the
$W$-mass \cite{hays} on the top mass is compared for the SM and the MSSM.
It is intriguing
that, at the 68\% C.L., the supersymmetric extension of the Standard Model is
favoured, but clearly increased precision is needed \cite{hays}.
\section{Flavour Physics}
The remarkable success of the CKM description of flavour violation and in
particular CP violation is demonstrated by the so-called Unitarity Triangle
fit shown in Fig.~\ref{fig:CKM}. A large data set on quark mixing angles
and CP-asymmetry parameters is consistent within theoretical and experimental
uncertainties \cite{bevan}. So far no deviation from the Standard Model
has been detected.
Via a naive operator analysis, one obtains from electroweak precision tests
and data on flavour changing neutral currents (FCNC) the lower bounds on
`new physics' \cite{buras}:
\begin{equation}
\Lambda^{\mathrm{EW}}_{\mathrm{NP}} > 5~\mathrm{TeV}\ ,\quad
\Lambda^{\mathrm{FCNC}}_{\mathrm{NP}} > 1000~\mathrm{TeV}\ .
\end{equation}
Hence, it may very well be that no departures from the Standard Model will
be found at the LHC and other currently planned accelerators.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.45\textwidth,clip,angle=0]{CKMfitter}
\caption{ \label{fig:CKM}
Unitarity triangle fit by the CKMfitter collaboration in 2009.
From \cite{buras}.}
\end{center}
\end{figure}
On the other hand, as we have already seen in our discussion of the Higgs
sector, it is also conceivable that dramatic departures from the Standard
Model will be discovered at the LHC. In this case new physics in FCNC
processes is also expected at TeV energies. This is the case in supersymmetric
extensions of the Standard Model, the `Littlest Higgs' model with T-parity
or Randall-Sundrum models, as discussed in detail in \cite{buras}.
\begin{figure}[b]
\begin{center}
\includegraphics[width=0.4\textwidth,clip,angle=0]{buras1}
\hspace{1cm}
\includegraphics[width=0.4\textwidth,clip,angle=0]{buras2}
\caption{\label{fig:buras}
$Br(\mu\to e\gamma)$ vs. $S_{\psi\phi}$ (left)
and $d_e$ vs. $Br(\mu\to e\gamma)$ (right) in the RVV model.
The green points are consistent with the $(g-2)_\mu$
anomaly at $95\%$ C.L., i.e. $\Delta a_\mu\ge 1\times 10^{-9}$.
From \cite{buras}.}
\end{center}
\end{figure}
As an example consider the supersymmetric non-Abelian flavour model of Ross,
Velasco and Vives (RVV), which leads to interesting correlations between
quark- and lepton-flavour changing processes and also between CP-violation
in the quark and the lepton sector \cite{buras}. In the Standard Model
the mixing induced CP asymmetry in the $B_s$ system is predicted to be very
small: $(S_{\psi\phi})_{\rm SM}\approx 0.04$. However, present data from
CDF and D0 could be the first hint for a much larger value
\cite{punzi,buras}
\begin{equation}
S_{\psi\phi}=0.81^{+0.12}_{-0.32}\ .
\end{equation}
More precise measurements by CDF, D0, LHCb, ATLAS and CMS will clarify this
intriguing puzzle in the coming years. In the RVV model, the prediction for
$S_{\psi\phi}$ is correlated with predictions for the branching ratio
$Br(\mu\to e\gamma)$ and the electric dipole moment $d_e$
(see Fig.~\ref{fig:buras}). Consistency with the $(g-2)_\mu$ anomaly favours
smaller superparticle masses, which leads to a larger electric dipole moment
and branching ratios $Br(\mu\to e\gamma)$ within the reach of the MEG
experiment at PSI \cite{bevan}.
An important part of flavour physics is neutrino physics, currently an
experimentally-driven field \cite{wark}. The next goal is the measurement
of the mixing angle $\theta_{13}$ in the PMNS-matrix, with important
implications for the feasibility to observe CP violation in neutrino
oscillations. Even more important is the determination of the absolute
neutrino mass scale. Cosmological observations have the potential to
reach the sensitivity $\sum m_{\nu} < 0.1\ \mathrm{eV}$, which would be
very interesting for the connection to grand unification and also leptogenesis.
A model-independent mass determination is possible by measuring the endpoint
in Tritium $\beta$-decay where the KATRIN experiment is expected to
reach a sensitivity of $0.2\ \mathrm{eV}$.
\begin{figure}[b]
\begin{center}
\includegraphics[height=6.5cm]{wilczek}
\hspace*{1cm}
\includegraphics[width=7.5cm]{vafa}
\caption{ \label{fig:GUT}
Left: The unification group $SO(10)$ incorporates the Standard Model group
$SU(3)\times SU(2)\times U(1)$ as subgroup; the quarks and leptons of one
family, together with a right-handed neutrino, are united in a single
${\bf 16}$-plet of $SO(10)$; from \cite{wilczek}. Right: Geometric
picture of F-theory GUTs; matter and Higgs fields are confined to
six-dimensional submanifolds; they intersect at a four-dimensional `point'
with enhanced $E_8$ symmetry where Yukawa couplings are generated.
From \cite{uranga}.
}
\label{fig:wilczek}
\end{center}
\end{figure}
\section{GUTs and Strings}
The symmetries and the particle content of the Standard Model point towards
grand unified theories (GUTs) of the strong and electroweak interactions.
Assuming that the celebrated unification of gauge couplings in the
supersymmetric Standard Model is not a misleading coincidence, supersymmetric
GUTs \cite{wilczek} have become the most popular extension of the Standard
Model. Remarkably, one generation of matter, including the right-handed
neutrino, forms a single spinor representation of $SO(10)$
(see Fig.~\ref{fig:wilczek}). It therefore appears natural to assume an
underlying $SO(10)$ structure of the theory. The route of unification
continues via exceptional groups, terminating at $E_8$,
\begin{equation}
SU(3)\times SU(2)\times U(1) \subset SU(5) \subset SO(10) \subset
E_6 \subset E_7 \subset E_8\ .
\end{equation}
The right-handed neutrino, whose existence is predicted by $SO(10)$
unification, leads to a successful phenomenology of neutrino masses and
mixings via the seesaw mechanism
and can also account for the cosmological matter-antimatter asymmetry via
leptogenesis.
The exceptional group $E_8$ is beautifully realized in the heterotic string.
Nonetheless, embedding the Standard Model into string theory has turned
out to be extremely difficult, possibly because of the huge number of string
vacua. Searching for the Standard Model vacuum in string theory would then
be like looking for a needle in a haystack. Recently, this situation has
improved, and promising string vacua have been found by incorporating
GUT structures in specific string models \cite{uranga}. The different
constructions are based on Calabi-Yau or orbifold compactifications of
the heterotic string, magnetized brane models and, most recently, on F-theory
(see Fig.~\ref{fig:wilczek}). One obtains an appealing geometric picture
where gauge interactions are eight-dimensional, matter and Higgs fields are
confined to six dimensions, and Yukawa couplings are generated at the
intersection of all these submanifolds, at a four-dimensional `point' with
enhanced $E_8$ symmetry.
The programme to embed the Standard Model into string theory using GUT
structures is promising but a number of severe problems remain to be solved.
They include
the appearance of states with exotic quantum numbers which have to be
removed from the low-energy theory, the treatment of supersymmetry breaking
in string theory and the stabilization of moduli fields. Optimistically,
one can hope to
identify some features which are generic for string compactifications
leading to the Standard Model, so that eventually string theory may lead
to predictions for observable quantities.
\section{Astrophysics and Cosmology}
\begin{figure}[b]
\includegraphics[width=6cm,angle=90]{pamela}
\hspace*{0.6cm}
\includegraphics[width=8.1cm]{fermi_charged}
\caption{
Left: The PAMELA positron fraction compared with the theoretical
model of Moskalenko \& Strong; the error bars correspond to one
standard deviation.
Right: The Fermi-LAT and HESS CR electron spectrum (red filled circles);
systematic errors are shown by the gray band;
other high-energy measurements and a conventional diffusive model are also
shown. From \cite{reimer}.
\label{fig:pamela}
}
\end{figure}
During the past year the cosmic-ray (CR) excesses observed by the PAMELA,
Fermi-LAT and HESS collaborations (see Figs.~\ref{fig:pamela} and
\ref{fig:strumiabest}) have received enormous attention \cite{reimer,strumia}.
This interest is due to the fact that the PAMELA positron fraction
$e^+/(e^-+e^+)$ and the Fermi-LAT CR electron spectrum ($e^-+e^+$ flux) show
an excess above conventional astrophysical predictions at energies close to
the scale of electroweak symmetry breaking. This suggests that the observed
excesses may be related to dark matter consisting of WIMPs, Weakly Interacting
Massive Particles.
\begin{figure}[b]
\begin{center}
\hspace*{-1cm}
\includegraphics[width=1.05\textwidth]{strumiabest}
\caption{
{\bf DM annihilations into $\tau^+\tau^-$}. The predictions are based on MED
diffusion and the isothermal profile. Left: Positron fraction compared with
the PAMELA excess. Middle: $e^++e^-$ flux compared with the Fermi-LAT and
HESS data. Right: The Fermi-LAT diffuse gamma-spectrum compared with
bremsstrahlung (dashed red line) and inverse Compton (IC) radiation (black
full line), with the components CMB (green) and dust (blue).
From \cite{strumia}.
\label{fig:strumiabest}}
\end{center}
\end{figure}
In the meantime various analyses have shown that both CR excesses can be
accounted for by conventional astrophysical sources, in particular nearby
pulsars and/or supernova remnants. On the other hand, it is still conceivable
that the excesses are completely, or at least partially, due to dark matter.
Since this is the main reason for the interest of a large community in
the new CR data, I shall focus on the dark matter interpretation in the
following.
The first puzzle of the rising PAMELA positron fraction was the absence of an
excess in the antiproton flux. This led many theorists to consider `leptophilic'
dark matter candidates, where annihilations into leptons dominate over
annihilations into quarks. The Fermi-LAT excess in the $e^++e^-$ flux extends to
energies up to a cutoff of almost 1~TeV, determined by HESS. Obviously, this
requires leptonic decays of heavy DM particles, with masses beyond the reach
of LHC. A representative example of a successful fit is shown in
Fig.~\ref{fig:strumiabest}. Note that the gamma-ray flux due to
bremsstrahlung and inverse Compton (IC) scattering of the produced leptons
is still consistent with present Fermi-LAT data. However, a remaining
problem of annihilating DM models is the explanation of the magnitude of the
observed fluxes, which is proportional to $\langle \rho^2_{\mathrm{DM}} \rangle$,
the square of the DM density. Typically, a large `boost factor', i.e.,
an enhancement of $\langle \rho^2_{\mathrm{DM}} \rangle$ compared to
values obtained by numerical simulations, has to be assumed to achieve
consistency with observations.
\begin{figure}[t]
\begin{center}
\hspace{-0.5cm}
\includegraphics[width=0.99\textwidth]{strumiadecay}
\caption{{\bf DM decays into leptons}. Left: $4\mu$. Middle: $\mu^+\mu^-$.
Right: $\tau^+\tau^-$. Regions favored by PAMELA (green bands)
and by PAMELA, Fermi-LAT and HESS observations (red ellipses) are
compared with HESS observations of the Galactic Center
(blue continuous line), the Galactic Ridge (blue dot-dashed)
and dwarf spheroidal galaxies (blue dashed). From \cite{strumia}.
\label{fig:strumiadecay}}
\end{center}
\end{figure}
The problems of annihilating DM models caused renewed interest in decaying
DM models. Representative examples of dark matter candidates with different
leptonic decay channels are compared in Fig.~\ref{fig:strumiadecay}. Again
masses in the TeV range are favoured. The typical lifetime of $10^{26}$~s
is naturally obtained for decaying gravitinos, which can also be consistent
with the nonobservation of an antiproton excess, and in models where decays
are induced by GUT-suppressed dimension-6 operators. Decaying DM
models also lead to characteristic signatures at LHC, which are currently
actively investigated.
\begin{figure}[b]
\includegraphics[width=5.5cm]{fermi_diffuse}
\hspace*{1cm}
\includegraphics[width=8.1cm]{agile}
\caption{Left: Galactic diffuse emission; intensity averaged over all
longitudes and latitudes in the range $10^\circ \leq |b| \leq 20^\circ$;
data points and systematic uncertainties: Fermi-LAT (red), EGRET (blue);
from \cite{reimer}. Right: Present and projected bounds on spin-independent
WIMP-nucleon cross sections from different experiments compared with
predictions of Roszkowski et al. for supersymmetric models; from \cite{agile}.
\label{fig:diffuse}
}
\end{figure}
In addition to the $e^-+e^+$ flux, the diffuse gamma-ray spectrum measured
by Fermi-LAT is of great importance for indirect dark matter searches. Data for
the Galactic diffuse emission are shown in Fig.~\ref{fig:diffuse}. The
GeV excess observed more than 10 years ago by EGRET, which stimulated
several dark matter interpretations, has not been confirmed. Soon expected
data on the isotropic diffuse gamma-ray flux will severely constrain
decaying and annihilating dark matter models. In direct search experiments
limits on nucleon-WIMP cross sections have also been significantly improved.
The sensitivity will be further increased by two to four orders of magnitude
in the
coming years (see Fig.~\ref{fig:diffuse}). They now probe a large part
of the parameter space of supersymmetric models, and in the next few years
we can expect stringent tests of WIMP dark matter from combined analyses of
direct and indirect searches, and LHC data.
\begin{figure}[t]
\begin{center}
\includegraphics[width=9cm]{nu_kampert}
\end{center}
\caption{Measured atmospheric neutrino fluxes and compilation of latest
limits on diffuse neutrino fluxes compared to predicted fluxes.
From \cite{kampert}.}
\label{fig:kampert}
\end{figure}
Annihilation of dark matter particles can also lead to high-energy neutrinos
which could be observed by large volume Cerenkov detectors such as AMANDA,
ANTARES and ICECUBE. Searches for diffuse neutrino fluxes have been performed
by a large number of experiments operating at different energy regions
(see Fig.~\ref{fig:kampert} for a compilation of recent data). The current
limits are approaching both the Waxman-Bahcall and the cosmogenic flux
(labelled `GZK') predictions \cite{kampert}.
Rapid advances in observational cosmology have led to a cosmological
Standard Model for which a large number of cosmological parameters have
been determined with remarkable precision. The theoretical framework is a
spatially flat Friedmann Universe with accelerating expansion \cite{mukhanov}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=7.5cm]{mukhanov2}
\hspace*{1cm}
\includegraphics[height=6cm]{mukhanov1}
\caption{ \label{fig:mukhanov}
Left: The angular power spectrum of the CMB temperature anisotropies
from WMAP5; the grey points are the unbinned data and the solid points
are binned data with error estimates; the solid line shows the prediction
from the best fitting $\Lambda\mathrm{CDM}$ model.
Right: Confidence level contours of 68\%, 95\% and 99\% in the
$\Omega_{\Lambda}-\Omega_m$ plane from the Cosmic Microwave Background,
Baryonic Acoustic Oscillations and the Union SNe~Ia set, together with
their combination assuming $w =-1$. From \cite{mukhanov}.
}
\label{fig:mukhanov}
\end{center}
\end{figure}
The measurement of the luminosity distance of Type Ia supernovae (SNe~Ia),
used as `standard candles', and the analysis of the temperature anisotropies
of the cosmic microwave background (CMB) (see Fig.~\ref{fig:mukhanov}) have
provided an accurate knowledge of the composition of the energy density of
the universe. This includes the total energy density $\Omega_{\mathrm{tot}}$,
the total matter density $\Omega_{\mathrm{m}}$, the baryon density
$\Omega_{\mathrm{b}}$, the radiation density $\Omega_{\mathrm{r}}$, the
neutrino density $\Omega_{\nu}$ and the cosmological constant
$\Omega_{\Lambda}$; the total matter density contains the
cold dark matter density,
$\Omega_{\mathrm{m}} = \Omega_{\mathrm{cdm}} + \Omega_{\mathrm{b}}$.
Most remarkably, the universe is spatially flat within errors,
\begin{equation}
\Omega_{\mathrm{tot}} = 1.006 \pm 0.006\ ,
\end{equation}
and dominated by dark matter ($\Omega_{\mathrm{cdm}} \simeq 0.22$) and
dark energy ($\Omega_{\Lambda} \simeq 0.74$) \cite{mukhanov}.
In the future, gravitational waves may become a new window to the present
as well as the early universe. Impressive progress has been made in
improving the sensitivity of current laser interferometers, and detection
of gravitational waves is definitely expected with the next generation of
detectors \cite{danzmann}. If the sensitivity can be increased to higher
frequences, it is conceivable that the equation of state in the very
early universe can be probed with gravitational waves.
\begin{figure}[t]
\begin{center}
\includegraphics[width=6cm]{epicycle2}
\vspace*{1cm}\hspace*{1cm}
\includegraphics[width=6cm]{solar2}
\caption[]{Left: Ptolemy's epicycle model of the planetary system. Right:
Copernicus's heliocentric model of the planetary system.}
\label{fig:epicycle}
\end{center}
\end{figure}
Many dark matter candidates have been suggested in various extensions of
the Standard Model of Particle Physics and we can hope that new data from
the LHC, and direct and indirect search experiments will clarify this
problem in the coming years. On the contrary, research on `dark energy'
is dominated by question marks \cite{mukhanov}. Many explanations have been
suggested, including quintessence, k-essence, modifications of gravity,
extra dimensions etc., but experimental and theoretical breakthroughs still
appear to be ahead of us.
\section{Outlook}
With the start of the LHC \cite{evans} and new data taken by ATLAS
\cite{gianotti}, CMS \cite{virdee}, LHCb \cite{golutvin} and ALICE
\cite{giubellino} we are entering a new era in Particle Physics. We expect
to gain deeper insight into the mechanism of electroweak symmetry breaking,
the origin of quark and lepton mass matrices and the behaviour of matter
at high temperatures and densities,
and many hope that supersymmetry will be discovered.
Important results can also be expected from ongoing and planned
non-accelerator experiments, cosmic-ray experiments and more precise
cosmological observations. These include the determination of the absolute
neutrino mass scale, possible evidence for weakly interacting dark
matter particles, polarization of the cosmic microwave background
and the determination of the equation of state of dark energy for
different redshifts.
On the theoretical side, there appear to be two main avenues beyond the
Standard Model: (A) New strong interactions at TeV energies, like composite
W-bosons, a composite top-quark, technicolour or large extra dimensions,
or (B) the extrapolation of the Standard Model far beyond the electroweak
mass scale, with more and more symmetries becoming manifest: supersymmetry,
grand unified symmetries, higher-dimensional space-time symmetries and
possibly symmetries special to string theory.
In Cracow we are reminded of Nicolaus Copernicus who, about 500 years ago,
invented the heliocentric model of the planetary system, in contrast to
Ptolemy's epicycle model (see Fig.~\ref{fig:epicycle}). Given the high
symmetry and simplicity of the heliocentric model, one may
think that Copernicus would have had a preference for avenue (B) beyond the
Standard Model, but we obviously cannot be sure. It took about seventy years
until, after new astronomical observations, the heliocentric model was
generally accepted. Fortunately, with the successful start of the LHC, we can
hope for crucial information about the Physics beyond the Standard Model
much faster.
\section*{Acknowledgements}
I would like to thank the members of the international and local organizing
committees, especially Antoni Szczurek and Marek Je\.zabek, for their
successful work and for their hospitality in this beautiful city. I am indebted
to many colleagues at this conference and at DESY for their help in the
preparation of this talk, especially Andrzej Buras, Laura Covi, Leszek Motyka,
Peter Schleper and Fabio Zwirner.
\section{Introduction}
Let us consider a first order differential system in the real plane,
\begin{equation}\label{sysPQ}
\dot x = P(x,y), \qquad \dot y = Q(x,y).
\end{equation}
The study of the dynamics of (\ref{sysPQ}) strongly depends on the existence and stability properties of special solutions such as equilibrium points and non-constant periodic solutions. In particular, if an attracting non-constant periodic solution exists, then it dominates the dynamics of (\ref{sysPQ}) in an open, connected subset of the plane, its region of attraction. In some cases such a region of attraction can even extend to cover the whole plane, with the sole exception of an equilibrium point. Uniqueness theorems for non-constant periodic solutions, i.e. limit cycles, have been extensively studied; see \cite{CRV} and \cite{XZ} for recent results and extensive bibliographies. Most of the known results are concerned with the classical Li\'enard system,
\begin{equation}\label{syslie}
\dot x = y - F(x) , \qquad \dot y = - g(x).
\end{equation}
and its generalizations, such as
\begin{equation}\label{sysGG}
\dot x = \beta(x)\big[ \varphi(y) - F(x) \big], \qquad \dot y = -\alpha(y)g(x).
\end{equation}
Such a class of systems also contain Lotka-Volterra systems and systems equivalent to Rayleigh equation
\begin{equation}\label{equaray}
\ddot x + f(\dot x) + g(x) = 0,
\end{equation}
as special cases. A very recent result \cite{CRV} is concerned with systems equivalent to
\begin{equation}\label{equaCRV}
\ddot x + \sum_{k=0}^{N}f_{2k+1}(x){\dot x}^{2k+1} + x = 0,
\end{equation}
with $f_{2k+1}(x)\geq 0$, increasing for $x > 0$, decreasing for $x < 0$, $k=0, \dots, N$.
On the other hand, there exist classes of second order ODE's which are not covered by the above cases. This is the case of a model developed in \cite{ETBA}, which led to the equation
\begin{equation}\label{ETBA}
\ddot x + \epsilon \dot x (x^2 + x \dot x + {\dot x}^2 -1) + x = 0, \qquad \epsilon >0.
\end{equation}
In this paper we prove a uniqueness result for systems equivalent to
\begin{equation}\label{equaphi}
\ddot x + \dot x \phi(x,\dot x) + x = 0,
\end{equation}
under the assumption that $\phi(x,y)$ be a function with star-shaped level sets. As a consequence, we are able to prove existence and uniqueness of the limit cycle for the equation (\ref{ETBA}).
\section{Preliminary results}
Let $\Omega \subset {\rm I\! R}^2$ be a star-shaped set. We say that a function $\phi \in C^1( \Omega, {\rm I\! R}) $ is {\it star-shaped} if $(x,y) \cdot \nabla \phi = x {\partial \phi \over \partial x} + y {\partial \phi \over \partial y}$ does not change sign. We say that $\phi$ is
{\it strictly star-shaped} if $(x,y) \cdot \nabla \phi \neq 0$. We call {\it ray} a half-line having origin at the point $(0,0)$.
Let us consider a system equivalent to the equation (\ref{equaphi})
\begin{equation}\label{sisphi}
\dot x = y, \qquad \dot y = -x - y \phi(x,y).
\end{equation}
We denote by $\gamma(t,x^*,y^*)$ the unique solution to the system (\ref{sisphi}) such that $\gamma(0,x^*,y^*) = (x^*,y^*)$. We first consider a sufficient condition for limit cycles' uniqueness.
\begin{theorem}\label{teorema} Let $\phi: {\rm I\! R}^2 \rightarrow {\rm I\! R} $ be a strictly star-shaped function. Then (\ref{sisphi}) has at most one limit cycle.
\end{theorem}
{\it Proof.}
Let us assume that, for $(x,y) \neq (0,0)$,
$$
x {\partial \phi \over \partial x} + y {\partial \phi \over \partial y} > 0.
$$
The proof can be performed analogously for the opposite inequality.
Applying Corollary 6 in \cite{GS} requires computing the expression
$$
\nu = P\left( x{\partial Q \over \partial x} + y{\partial Q \over \partial y} \right) - Q \left( x{\partial P \over \partial x} + y{\partial P \over \partial y}\right) ,
$$
where $P$ and $Q$ are the components of the considered vector field. For system (\ref{sisphi}), one has
$$
\nu =
y \left(-x - xy {\partial \phi \over \partial x} - y\phi - y ^2 {\partial \phi \over \partial y} \right) - \left( -x - y \phi(x,y) \right) y =
$$
$$
-y^2 \left( x {\partial \phi \over \partial x} + y {\partial \phi \over \partial y} \right) \leq 0.
$$
The function $\nu$ vanishes only for $y=0$. Let us assume, by contradiction, that two distinct limit cycles exist, $\gamma_1$ and $\gamma_2$. Since the system (\ref{sisphi}) has only one critical point, the two cycles have to be nested. Let us assume that $\gamma_2$ encloses $\gamma_1$. For both cycles one has:
$$
\int_{0}^{T_i} \nu(\gamma_i(t) )dt < 0, \qquad i=1,2,
$$
where $T_i$ is the period of $\gamma_i$, $i=1,2$. Hence both cycles, by theorem 1 in \cite{GS}, are attractive. Let $A_1$ be the region of attraction of $\gamma_1$. $A_1$ is bounded, because it is enclosed by $\gamma_2$, which is not attracted to $\gamma_1$. The external component of $A_1$'s boundary is itself a cycle $\gamma_3$, because (\ref{sisphi}) has just one critical point at the origin. Again,
$$
\int_0^{T_3} \nu(\gamma_3(t) )dt < 0,
$$
hence $\gamma_3$ is attractive, too. This contradicts the fact that the solutions of (\ref{sisphi}) starting from its inner side are attracted to $\gamma_1$. Hence the system (\ref{sisphi}) can have at most a single limit cycle.
\hfill $\clubsuit$
In particular, the equation (\ref{ETBA}) considered in \cite{ETBA} has at most one limit cycle. In fact, in this case one has $\phi(x,y) = \epsilon (x^2 + x y + y^2 -1)$, so that
$$
x {\partial \phi \over \partial x} + y {\partial \phi \over \partial y} = 2 \epsilon (x^2 + xy + y^2) > 0 \quad {\rm for } \quad (x,y) \neq (0,0).
$$
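Indeed, a direct computation of the partial derivatives gives
$$
{\partial \phi \over \partial x} = \epsilon (2x + y), \hspace{5mm} {\partial \phi \over \partial y} = \epsilon (x + 2y),
$$
whence
$$
x {\partial \phi \over \partial x} + y {\partial \phi \over \partial y} = \epsilon (2x^2 + xy) + \epsilon (xy + 2y^2) = 2 \epsilon (x^2 + xy + y^2),
$$
and $x^2 + xy + y^2 = \left( x + \frac{y}{2} \right)^2 + \frac{3}{4} y^2$ vanishes only at the origin.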
It should be noted that even though the proof is essentially based on a stability argument, the divergence cannot be used in place of the function $\nu$. In fact, the divergence of system (\ref{sisphi}) is
$$
{\rm div } \big(y , -x - y \phi(x,y) \big)= - \phi - y {\partial \phi \over \partial y},
$$
which does not have constant sign under our assumptions. Moreover, the divergence cannot have constant sign in the presence of a repelling critical point and an attracting cycle.
We now turn to the existence of limit cycles. We say that $\gamma(t)$ is {\it positively bounded} if the semi-orbit $\gamma^+ = \{\gamma(t), \quad t \geq 0\}$ is contained in a bounded set. Let us denote by $D_r$ the disk $ \{ (x,y) : dist((x,y),O) \leq r \} $, and by $B_r$ its boundary $\{ (x,y) : dist((x,y),O) = r \} $.
In the following, we use the function $V(x,y) = \frac {x^2}2 + \frac {y^2}2$ as a Liapunov function.
Its derivative along the solutions of (\ref{sisphi}) is
$$
\dot V(x,y) = - y^2 \phi(x,y).
$$
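This follows from the chain rule along the solutions:
$$
\dot V = x \dot x + y \dot y = xy + y \left( -x - y \phi(x,y) \right) = - y^2 \phi(x,y).
$$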
\begin{lemma}\label{lemma} Let $U$ be a bounded set, with $\sigma := \sup \{ dist((x,y),O), (x,y) \in U \}$.
If $\phi(x,y) \geq 0$ outside $U$, and $\phi(x,y)$ does not vanish identically on any $B_r$, for $r > \sigma$, then every $\gamma(t)$ eventually enters the disk $D_\sigma$ and does not leave it.
\end{lemma}
{\it Proof.}
The level curves of $V(x,y)$ are circles centered at the origin. For every $r \geq \sigma$, the disk $D_r $ contains $U$. Since $\dot V(x,y) = - y^2 \phi(x,y) \leq 0$ on its boundary, such a disk is positively invariant.
Let $\gamma$ be an orbit with a point $\gamma(t^*)$ such that $d^* = dist(\gamma(t^*),O) > \sigma$. Then $\gamma$ does not leave the disk $D_{d^*}$, hence it is positively bounded.
Moreover $\gamma(t)$ cannot eventually remain on $B_r$, for any $r > \sigma$, since $\dot V(x,y)$ does not vanish identically on any $B_r$, for $r > \sigma$. Now, assume by contradiction that $\gamma(t)$ does not intersect $B_\sigma$. Then its positive limit set is a cycle $\overline{\gamma}(t)$, having no points in $D_\sigma$. The cycle $\overline{\gamma}(t)$ cannot cross any $B_r$ outwards, hence it has to be contained in some $B_r$, with $r > \sigma$, contradicting the fact that $\dot V(x,y)$ does not vanish identically on any $B_r$, for $r > \sigma$. Hence there exists $t^+ > t^*$ such that $\gamma(t^+) \in D_\sigma$. Then, for every $t > t^+$, one has $\gamma(t) \in D_\sigma$, because $\dot V(x,y) \leq 0$ on $B_\sigma$.
\hfill$\clubsuit$
Collecting the results of the above statements, we may state a theorem of existence and uniqueness for limit cycles of a class of second order equations. We say that an equilibrium point $O$ is {\it negatively asymptotically stable} if it is asymptotically stable for the system obtained by reversing the time direction.
\begin{theorem} If the hypotheses of theorem \ref{teorema} and lemma \ref{lemma} hold, and $\phi(0,0) < 0$, then the system (\ref{sisphi}) has exactly one limit cycle, which attracts every non-constant solution.
\end{theorem}
{\it Proof.} By the above lemma, all the solutions are eventually contained in $D_\sigma$. The condition $\phi(0,0) < 0$ implies by continuity $\phi(x,y) < 0$ in a neighbourhood $N_O$ of the origin. This gives the negative asymptotic stability of the origin by LaSalle's invariance principle, since $\dot V(x,y) \geq 0$ in $N_O$, and the set $\{\dot V(x,y) = 0\} \cap N_O = \{y = 0\} \cap N_O$ does not contain any positive semi-orbit. The system has just one critical point at the origin, hence by the Poincar\'e-Bendixson theorem there exists a limit cycle. By theorem \ref{teorema}, such a limit cycle is unique.
\hfill$\clubsuit$
This proves that every non-constant solution to the equation (\ref{ETBA}) studied in \cite{ETBA} is attracted to the unique limit cycle.
We can produce more complex systems with such a property. Let us set
$$
\phi(x,y) = -M +\sum_{k=1}^n H_{2k}(x,y),
$$
where $ H_{2k}(x,y)$ is a homogeneous function of degree $2k$, positive except at the origin, and $M$ is a positive constant. Then, by Euler's identity, one has
$$
x {\partial \phi \over \partial x} + y {\partial \phi \over \partial y} = \sum_{k=1}^n \left( x{\partial H_{2k} \over \partial x} + y{\partial H_{2k} \over \partial y} \right) = \sum_{k=1}^n 2kH_{2k}(x,y) > 0 \quad {\rm for } \quad (x,y) \neq (0,0).
$$
If $\phi(x,y)$ does not vanish identically on any $B_r$, for instance if $H_{2k}(x,y) = (x^2 +xy + y^2)^k$, then the corresponding system (\ref{sisphi}) has a unique limit cycle.
In general, it is not necessary to assume the positivity of all the homogeneous functions $H_{2k}(x,y)$, as the following example shows. Let us set $Q(x,y) = x^2+xy+y^2$. Then take
$$
\phi(x,y) = -1 + Q - Q^2 + Q^3 .
$$
One has
$$
x {\partial \phi \over \partial x} + y {\partial \phi \over \partial y} = 2Q - 4Q^2 +6Q^3 = Q(2 - 4Q +6Q^2) .
$$
The discriminant of the quadratic polynomial $2 - 4Q +6Q^2$ is $\Delta = -32 < 0$, hence $x {\partial \phi \over \partial x} + y {\partial \phi \over \partial y} >0$ everywhere but at the origin. Moreover, $\phi(x,y)$ does not vanish identically on any circle $B_r$, hence the corresponding system (\ref{sisphi}) has a unique limit cycle.
\section{Preliminaries}
General Courant algebroids were studied first in a paper by Liu,
Weinstein and Xu \cite{LWX}, which appeared in 1997 and became the
object of an intensive research since then. Courant algebroids
provide the framework for Dirac structures and generalized
Hamiltonian formalisms. In \cite{V1} we have introduced the notions
of transversal-Courant and foliated Courant algebroid, thereby
extending the framework to bases that are a space of leaves of a
foliation rather than a manifold. In the present note we show that a
transversal-Courant algebroid over a foliated manifold can be
extended to a foliated Courant algebroid. A similar construction for
Lie algebroids (which is a simpler case) was given in \cite{V1}. We
assume that the reader has access to the paper \cite{V1}, whose
notation we also adopt and to which we refer for the
various definitions and results used here. In this paper we
assume that all the manifolds, foliations, mappings, bundles, etc.,
are $C^\infty$-differentiable.
A Courant algebroid over the manifold $M$ is a vector bundle
$E\rightarrow M$ endowed with a symmetric, non degenerate, inner
product $g_E\in\Gamma\odot^2E^*$, with a bundle morphism
$\sharp_E:E\rightarrow TM$ called the {\it anchor} and a
skew-symmetric bracket $[\,,\,]_E:\Gamma E\times\Gamma
E\rightarrow\Gamma E$, such that the following conditions (axioms)
are satisfied:
1) $\sharp_E[e_1,e_2]_E=[\sharp_Ee_1,\sharp_Ee_2]$,
2) $im(\sharp_{g_E}\circ\,^t\sharp_E)\subseteq ker\,\sharp_E$,
3)
$\sum_{Cycl}[[e_1,e_2]_E,e_3]_E=(1/3)\partial_E\sum_{Cycl}g_E([e_1,e_2]_E,e_3)$,
$\partial_E=(1/2)\sharp_{g_E}\circ\,^t\sharp_E:T^*M\rightarrow E$,
$\partial_E f=\partial_E(df)$,
4)
$[e_1,fe_2]_E=f[e_1,e_2]_E+(\sharp_Ee_1(f))e_2-g(e_1,e_2)\partial_E
f $,
5) $(\sharp_Ee)(g_E(e_1,e_2))=g_E([e,e_1]_E+\partial_E g(e,e_1)
,e_2)+g_E(e_1,[e,e_2]_E+\partial_E g(e,e_2)).$\\ In these
conditions, $e,e_1,e_2,e_3\in\Gamma E$, $f\in C^\infty(M)$ and $t$
denotes transposition. Notice also that the definition of
$\partial_E$ is equivalent with the formula
\begin{equation}\label{partialcug}
g_E(e,\partial_Ef)=\frac{1}{2}\sharp_Ee(f).\end{equation}
The index $E$ will be omitted if no confusion is possible.
The basic example of a Courant algebroid was studied in \cite{C}
and it consists of the {\it big tangent bundle} $T^{big}M=TM\oplus
T^*M$, with the anchor $\sharp(X\oplus\alpha)=X$ and with
\begin{equation}\label{gC}
g(X_1\oplus\alpha_1,X_2\oplus\alpha_2)=\frac{1}{2}(\alpha_1(X_2)+\alpha_2(X_1)),
\end{equation}
\begin{equation}\label{crosetCou}
[X_1\oplus\alpha_1,X_2\oplus\alpha_2]=[X_1,X_2]\oplus
(L_{X_1}\alpha_2- L_{X_2}\alpha_1+\frac{1}{2}d(\alpha_1(X_2)
-\alpha_2(X_1))).
\end{equation}
(The notation $X\oplus \alpha$ instead of the
accurate $X+\alpha$ or $(X,\alpha)$ has the advantage of showing
the place of the terms while avoiding some of the parentheses. The
unindexed bracket of vector fields is the usual Lie bracket.)
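Let us note that in this example the operator $\partial$ takes the
explicit form
$$\partial\lambda=0\oplus\lambda\;\;(\lambda\in T^*M),\hspace{5mm}
\partial f=0\oplus df,$$
in agreement with (\ref{partialcug}), since (\ref{gC}) gives
$$g(X\oplus\alpha,0\oplus df)=\frac{1}{2}\,df(X)=
\frac{1}{2}\,\sharp(X\oplus\alpha)(f).$$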
Furthermore, let $\mathcal{F}$ be a foliation of the manifold $M$.
We denote the tangent bundle $T\mathcal{F}$ by $F$ and define the
transversal bundle $\nu\mathcal{F}$ by the exact sequence $$
0\rightarrow F
\stackrel{\iota}{\rightarrow}TM\stackrel{\psi}{\rightarrow}\nu\mathcal{F}
\rightarrow0,$$ where $\iota$ is the inclusion and $\psi$ is the
natural projection. We also fix a decomposition
\begin{equation}\label{descTM} TM=F\oplus
Q,\;Q=im(\varphi:\nu\mathcal{F}
\rightarrow TM),\,\psi\circ\varphi=id.,\end{equation} which implies
\begin{equation}\label{descT*M} T^*M=Q^*\oplus
F^*,\;Q^*=ann\,F,\,F^*=ann\,Q\approx T^*M/ann\,F,\end{equation}
where the last isomorphism is induced by the transposed mapping
$^t\iota$. The decompositions (\ref{descTM}), (\ref{descT*M})
produce a bigrading $(p,q)$ of the Grassmann algebra bundles of
multivector fields and exterior forms where $p$ is the $Q$-degree
and $q$ is the $F$-degree \cite{V0}.
The vector bundle $T^{big}\mathcal{F}=F\oplus (T^*M/ann\,F)$ is
the big tangent bundle of the manifold $M^{\mathcal{F}}$, which is
the set $M$ endowed with the differentiable structure of the sum
of the leaves of $\mathcal{F}$. Hence, $T^{big}\mathcal{F}$ has
the corresponding Courant structure (\ref{gC}), (\ref{crosetCou}).
A cross section of $T^{big}\mathcal{F}$ may be represented as
$Y\oplus\bar\alpha$ $(Y\in\chi(M^{\mathcal{F}}),\alpha\in
\Omega^1(M^{\mathcal{F}}))$, where the bar denotes the equivalence class
of $\alpha$ modulo $ann\,F$ (this bar-notation is always used
hereafter); generally, these cross sections are differentiable on
the sum of leaves. If we consider $Y_{l}\oplus\bar\alpha_{l}$
($l=1,2$) such that $Y_{l}\in\chi(M)$ and
$\alpha_{l}\in\Omega^1(M)$ are differentiable with respect to the
initial differentiable structure of $M$ we get the inner product
and Courant bracket
\begin{equation}\label{gF} g_F(Y_1\oplus\bar\alpha_1,
Y_2\oplus\bar\alpha_2)=\frac{1}{2}(\alpha_1(Y_2)+\alpha_2(Y_1)),\end{equation}
\begin{equation}\label{crosetF}
[Y_1\oplus\bar\alpha_1,Y_2\oplus\bar\alpha_2]
=([Y_1,Y_2]\oplus\overline{(L_{Y_1}\alpha_2-
L_{Y_2}\alpha_1+\frac{1}{2}d(\alpha_1(Y_2)-\alpha_2(Y_1))}),\end{equation}
where the results remain unchanged if $\alpha_{l}\mapsto
\alpha_{l}+\gamma_{l}$ with $\gamma_{l}\in ann\,F$. Formulas
(\ref{gF}), (\ref{crosetF}) show that $T^{big}\mathcal{F}\rightarrow
M$, where $M$ has its initial differentiable structure, is a Courant
algebroid with the anchor given by projection on the first term.
Alternatively, we can prove the same result by starting with
(\ref{gF}), (\ref{crosetF}) as definition formulas and by checking
the axioms of a Courant algebroid by computation.
We will transfer the Courant structure of $T^{big}\mathcal{F}$ by
the isomorphism $$ \Phi=id\oplus\hspace{1pt}^t\hspace{-1pt}\iota:
F\oplus (T^*M/ann\,F)\rightarrow F\oplus ann\,Q,$$ i.e.,
$$\Phi(Y\oplus\bar\alpha)=Y\oplus\alpha_{0,1},\;\;(Y\in F,\alpha=
\alpha_{1,0}+\alpha_{0,1}\in T^*M).$$ This makes $F\oplus ann\,Q$
into a Courant algebroid, which we shall denote by
$\mathcal{Q}=T^{big}_Q\mathcal{F}$, with the anchor equal to the
projection on $F$, the metric given by (\ref{gF}) and the bracket $$
[Y_1\oplus\bar\alpha_1, Y_2\oplus\bar\alpha_2]_{\mathcal{Q}}
=[Y_1,Y_2]\oplus pr_{ann\,Q}(L_{Y_1}\alpha_2-
L_{Y_2}\alpha_1+\frac{1}{2}d(\alpha_1(Y_2)-\alpha_2(Y_1)))$$
$\alpha_1,\alpha_2\in ann\,Q$. Using the formula $L_Y=i(Y)d+di(Y)$
and the well known decomposition
$d=d'_{1,0}+d''_{0,1}+\partial_{2,-1}$ \cite{V0}, the expression of
the previous bracket becomes $$[Y_1\oplus\bar\alpha_1,
Y_2\oplus\bar\alpha_2]_{\mathcal{Q}}
=([Y_1,Y_2]\oplus(i(Y_1)d''\alpha_2-i(Y_2)d''\alpha_1$$
$$+\frac{1}{2}d''(i(Y_1)\alpha_2-i(Y_2)\alpha_1))\;\;(\alpha_1,\alpha_2\in
T^*_{0,1}M).$$
\section{The extension theorem}
Let $(M,\mathcal{F})$ be a foliated manifold. If the definition of a
Courant algebroid is modified by asking the anchor to be a morphism
$E\rightarrow\nu\mathcal{F}$, by asking $E,g,\sharp_E$ to be
foliated, by asking only for a bracket
$[\,,\,]_E:\Gamma_{fol}E\times\Gamma_{fol}E\rightarrow\Gamma_{fol}E$
and by asking the axioms to hold for foliated cross sections and
functions, then, we get the notion of a {\it transversal-Courant
algebroid} $(E,g_E,\sharp_E,[\,,\,]_E)$ over $(M,\mathcal{F})$
\cite{V1}. (The index $fol$ denotes foliated objects, i.e., objects
that either project to or are a lift of a corresponding object of
the space of leaves.)
On the other hand, a subbundle $B$ of a Courant algebroid $A$
over $(M,\mathcal{F})$ is a {\it foliation} of $A$ if: i) $B$ is
$g_A$-isotropic and $\Gamma B$ is closed by $A$-brackets, ii)
$\sharp_A(B)=T\mathcal{F}$, iii) if $C=B^{\perp_{g_A}}$, then the
$A$-Courant structure induces the structure of a
transversal-Courant algebroid on the vector bundle $C/B$; then,
the pair $(A,B)$ is called a {\it foliated Courant algebroid} (see
\cite{V1} for details).
In this section we prove the announced result:\\
{\bf Theorem.} {\it Let $E$ be a transversal-Courant algebroid over
the foliated manifold $(M,\mathcal{F})$ and let $Q$ be a
complementary bundle of $F$ in $TM$. Then $E$ has a natural
extension to a foliated Courant algebroid $A$ with a foliation $B$
isomorphic to $F$.}
\begin{proof} The proof of this theorem requires a lot of technical
calculations. We will only sketch the path to be followed, leaving
the actual calculations to the interested reader. We shall denote
the natural extension that we wish to construct, and its operations,
by the index $0$. Take $A_0=T^{big}_Q\mathcal{F}\oplus
E=\mathcal{Q}\oplus E$ with the metric $g_0=g_F\oplus g_E$ and the
anchor $\sharp_0=pr_F\oplus\rho$, where $\rho=\varphi\circ\sharp_E$
with $\varphi$ defined by (\ref{descTM}), therefore,
$\psi\circ\rho=\sharp_E$. Notice that this implies
\begin{equation}\label{partial0}\partial_0\lambda=(0,\lambda|_F)+\frac{1}{2}
\sharp_{g_E}(\lambda\circ\rho)=(0,\lambda|_F)+\partial_E(\hspace{1pt}
^t\hspace{-1pt}\varphi\lambda)\;\;(\lambda\in T^*M)\end{equation}
and, in particular, $$\partial_0f =\partial_{\mathcal{Q}}(d''f)
\oplus\partial_E(d'f)=(0,d''f) \oplus\partial_E(d'f)\;\;(f\in
C^\infty(M)).$$
Then, inspired by the case $T^{big}M=
\mathcal{Q}\oplus\nu\mathcal{F}$ where the formulas below hold,
we define the bracket of
generating cross sections $Y\oplus\alpha\in\Gamma \mathcal{Q}$,
$e\in\Gamma_{fol}E$ by
\begin{equation}\label{cr0} \begin{array}{l}
[Y_1\oplus\alpha_1,Y_2\oplus\alpha_2]_0=
[Y_1\oplus\alpha_1,Y_2\oplus\alpha_2]_{\mathcal{Q}}\vspace{2mm}\\
\hspace*{1cm}\oplus\frac{1}{2}\sharp_{g_E}((L_{Y_1}\alpha_2-
L_{Y_2}\alpha_1+\frac{1}{2}d(\alpha_1(Y_2)-\alpha_2(Y_1)))\circ\rho)\vspace{2mm}\\
=([Y_1,Y_2]\oplus0)+\partial_0(L_{Y_1}\alpha_2-
L_{Y_2}\alpha_1+\frac{1}{2}d(\alpha_1(Y_2)-\alpha_2(Y_1))),\vspace{2mm}\\
[e,Y\oplus\alpha]_0= ([\rho e,Y]\oplus (L_{\rho e}\alpha)|_F)
\oplus\frac{1}{2}\sharp_{g_E}((L_{\rho e}\alpha)\circ\rho)\vspace{2mm}\\
\hspace*{1cm}=([\rho e,Y]\oplus0)+\partial_0(L_{\rho e}\alpha),\vspace{2mm}\\
[e_1,e_2]_0=(([\rho e_1,\rho e_2]-\rho [e_1,e_2]_E)\oplus0) \oplus
[e_1,e_2]_E.\end{array}\end{equation} The first term of the right
hand side of the second formula belongs to $\Gamma \mathcal{Q}$
since $e\in\Gamma_{fol}E$ implies $[\rho e,Y]\in\Gamma F$. The
first term of the right hand side of the third formula belongs to
$\Gamma
\mathcal{Q}$ since we have
$$\psi([\rho e_1,\rho e_2]-\rho [e_1,e_2]_E)=\psi([\rho e_1,\rho e_2])-
\sharp_E [e_1,e_2]_E$$ $$=\psi([\rho e_1,\rho e_2])-
[\sharp_Ee_1,\sharp_Ee_2]_{\nu\mathcal{F}}=0.$$
Furthermore, we extend the bracket (\ref{cr0}) to arbitrary cross
sections in agreement with the axiom 4) of Courant algebroids,
i.e., for any functions $f,f_1,f_2\in C^\infty(M)$, we define
\begin{equation}\label{f} \begin{array}{l} [Y\oplus\alpha,fe]_0=
f[Y\oplus\alpha,e]_0+(Yf)e,\vspace{2mm}\\
[f_1e_1,f_2e_2]_0=f_1f_2[e_1,e_2]_0+ f_1(\rho e_1(f_2))e_2
-f_2(\rho e_2(f_1))e_1\vspace{2mm}\\
\hspace*{2cm}-g_E(e_1,e_2)(f_1\partial_0f_2-f_2\partial_0f_1)\end{array}
\end{equation}
($Y\in\Gamma F,\alpha\in ann\,Q, e,e_1,e_2\in\Gamma_{fol}E$). It
follows easily that formulas (\ref{cr0}) and (\ref{f}) give the
same result if $f\in C^\infty_{fol}(M,\mathcal{F})$.
We have to check that the bracket defined by (\ref{cr0}),
(\ref{f}) satisfies the axioms of a Courant algebroid and it is
enough to do that for every possible combination of arguments of
the form $Y\oplus\alpha\in
\mathcal{Q}$ and $fe$, $e\in\Gamma_{fol}E$, $f\in
C^\infty(M)$.
To check axiom 1), apply the anchor $\sharp_0=pr_F+\rho$ to each
of the five formulas (\ref{cr0}), (\ref{f}) and use the
transversal-Courant algebroid axioms satisfied by $E$. To check
axiom 2), use formula (\ref{partial0}). The required results
follow straightforwardly. It is also easy to check axiom 4) from
(\ref{f}) and from axiom 4) for $\mathcal{Q}$ and $E$.
Furthermore, technical (lengthy) calculations show that if we have
a bracket such that axioms 1), 2), 4) hold, then, if 5) holds for
a triple of arguments, 5) also holds if the same arguments are
multiplied by arbitrary functions. Therefore, in our case it
suffices to check axiom 5) for the following six triples: (i)
$(e,e_1,e_2)$, (ii) $(Y\oplus\alpha,e_1,e_2)$, (iii)
$(e,Y\oplus\alpha,e')$, (iv) $(Y\oplus\alpha,Y'\oplus\alpha',e)$,
(v) $(e,Y_1\oplus\alpha_1,Y_2\oplus\alpha_2)$, (vi)
$(Y\oplus\alpha,Y_1\oplus\alpha_1,Y_2\oplus\alpha_2)$, where all
$Y\oplus\alpha\in\Gamma\mathcal{Q}$ and all $e\in\Gamma_{fol}E$.
In cases (i), (vi) the result follows from axiom 5) satisfied by
$E,\mathcal{Q}$, respectively. In the other cases computations
involving evaluations of Lie derivatives will do the job.
Finally, we have to check axiom 3). If we consider any vector
bundle $E$ with an anchor and a bracket that satisfy axioms 1),
2), 4), 5), then, by applying axiom 5) to the triple $(e,
e_1=\partial f,e_2)$ $(f\in C^\infty(M))$ we get
\begin{equation}\label{crosetpt5} [e,\partial
f]_E=\frac{1}{2}\partial(\sharp_Ee(f)),\end{equation} whence (using
local coordinates, for instance) the following general formula
follows
\begin{equation}\label{gen-partial} [e,\partial_E\alpha]_E=
\partial_E(L_{\sharp_Ee}\alpha-\frac{1}{2}d(\alpha(\sharp_Ee))).
\end{equation}
Furthermore, assuming again that axioms 1), 2), 4), 5) hold and
using (\ref{partialcug}) and (\ref{crosetpt5}) a lengthy but
technical calculation shows that, if axiom 3) holds for a triple
$(e_1,e_2,e_3)$, it also holds for $(e_1,e_2,fe_3)$ $(f\in
C^\infty(M))$ provided that
\begin{equation}\label{Erond} \begin{array}{c} \mathcal{E}:=
g([e_1,e_2],e_3)+\frac{1}{2}\sharp e_2(g(e_1,e_3))-
\frac{1}{2}\sharp
e_1(g(e_2,e_3))\vspace{2mm}\\=\frac{1}{3}\sum_{Cycl}g([e_1,e_2],e_3)\end{array}
\end{equation} ($:=$ denotes a definition). But, if the last two
terms in $\mathcal{E}$ are expressed by axiom 5) for $E$ followed
by (\ref{crosetpt5}), and after we repeat the same procedure one
more time, we get
$$ \mathcal{E}=\frac{1}{4}\sum_{Cycl}g([e_1,e_2],e_3)
+\frac{1}{4}\mathcal{E},$$ whence we see that (\ref{Erond}) holds
for any triple $(e_1,e_2,e_3)$.
Hence, it suffices to check axiom 3) for the following cases: (i)
$(e_1,e_2,e_3)$, (ii) $(e_1,e_2,Y\oplus\alpha)$, (iii)
$(e,Y_1\oplus\alpha_1,Y_2\oplus\alpha_2)$, (iv)
$(Y_1\oplus\alpha_1,Y_2\oplus\alpha_2,Y_3\oplus\alpha_3)$, where all
$Y\oplus\alpha\in\Gamma\mathcal{Q}$ and all $e\in\Gamma_{fol}E$. In
case (i), using the second and third formula (\ref{cr0}), we get
$$[[e_1,e_2]_0,e_3]_0=([[\rho e_1,\rho e_2],\rho e_3]-
\rho[[e_1,e_2]_E,e_3]_E,0)\oplus[[e_1,e_2]_E,e_3]_E$$ and the required
result follows in view of the Jacobi identity for vector fields and
of axiom 3) for $E$ (in this case, the right hand side of axiom 3)
for $A_0$ reduces to the one for $E$).
To make checking the remaining cases simpler, we decompose
$$Y\oplus\alpha=(Y\oplus0)+(0\oplus\alpha)$$ and check axiom 3) for
each case induced by this decomposition.
For a triple $(e_1,e_2,Y\oplus0)$ the right hand side of axiom 3)
is zero and the left-hand side is $$([[\rho e_1,\rho
e_2],Y]+[[\rho e_2,Y],\rho e_1]+[[Y,\rho e_1],\rho
e_2])\oplus0=0$$ by the Jacobi identity for vector fields.
For a triple $(e_1,e_2,0\oplus\alpha)$, after cancellations, the
right hand side of axiom 3) becomes $(1/2)\alpha([\rho e_1,\rho
e_2])$. The same result is obtained for the left hand side if we use
the second form of the first two brackets defined by (\ref{cr0}) and
formula (\ref{gen-partial}).
For a triple $(e,Y_1\oplus0,Y_2\oplus0)$ the two sides of axiom 3)
vanish (the left hand side reduces to the Jacobi identity for the
vector fields $(\rho e,Y_1,Y_2)$), hence the axiom holds.
For a triple $(e,0\oplus\alpha_1,0\oplus\alpha_2)$, using the second
form of the second bracket (\ref{cr0}) and formula
(\ref{gen-partial}), axiom 3) reduces to $0=0$, i.e., the axiom
holds.
For a triple $(e,Y_1\oplus0,0\oplus\alpha)$, if we notice that
$0\oplus\alpha=\partial_0\alpha$ (see (\ref{partial0})) and use
(\ref{gen-partial}), we see that the two sides of the equality
required by axiom 3) are equal to $(1/2)\partial_0(\alpha([\rho
e,Y])-(1/2)\rho e(\alpha(Y)))$, hence the axiom holds.
The case $(Y_1\oplus0,Y_2\oplus0,Y_3\oplus0)$ is trivial. In the
case $(Y_1\oplus0,Y_2\oplus0,0\oplus\alpha=\partial_0\alpha)$
similar computations give the value
$(1/4)\partial_0(\alpha([Y_1,Y_2])-d\alpha(Y_1,Y_2))$ for the two
sides of the corresponding expression of axiom 3). Finally, in the
cases $(Y\oplus0,\partial_0\alpha_1,\partial_0\alpha_2)$ and
$(\partial_0\alpha_1,\partial_0\alpha_2,\partial_0\alpha_3)$ the two
sides of the required equality are $0$ since the image of
$\partial_0$ is isotropic and the restriction of the bracket to this
image is zero (use axiom 2) and formula (\ref{gen-partial})).
\end{proof}
\section{Slices}
Given a category \(\Cat C\) and an object \(X\),
let's consider all the arrows into \(X\). This forms a
collection of arrows \[\bigcup_{Y\in \Cat C}\Hom(Y, X).\]
We shall take this collection of arrows as the \emph{objects}
of a new category, named \(\Cat C/X\).
What should the morphisms be? Consider any commutative
diagram of the form
\[\begin{tikzcd}
A && B \\
\\
& X
\arrow["f"', from=1-1, to=3-2]
\arrow["g", from=1-3, to=3-2]
\arrow["u", curve={height=-6pt}, from=1-1, to=1-3]
\end{tikzcd}\]
By saying that the diagram ``commutes'', I mean
\(g \circ u = f\). It would be natural to take \(u\)
as a morphism from \(f\) to \(g\). This defines the
\textbf{slice} category over \(X\).
\begin{example}
Here are some simple examples of slice categories.
\begin{itemize}
\item If \(\Cat C\) has a terminal object, then \(\Cat C/1 \cong \Cat C\).
\item Take \(2\) to be the set \(\{\mathsf{blue}, \mathsf{red}\}\).
\(\mathsf{Set}/2\) is the category of \emph{two-colored sets}.
In other words, its objects are sets where each element is assigned
either the color \(\mathsf{blue}\) or \(\mathsf{red}\).
Morphisms are set-theoretic functions that maps blue elements
to blue ones, and vice versa.
\item \(\mathsf{Set}/\varnothing\) contains only one object and one morphism.
\item \textsc{Exercise}: Come up with one more example. Make it as interesting as you can.
\end{itemize}
\end{example}
Notice that given \emph{any} object in a category, we can
make a slice category out of it. So suppose we have two
objects and a morphism \(X \xrightarrow{f}Y\). What can we say
of the two slice categories?
\[\begin{tikzcd}
& A \\
\\
X && Y
\arrow["u"', from=1-2, to=3-1]
\arrow["f\circ u", from=1-2, to=3-3]
\arrow["f"', from=3-1, to=3-3]
\end{tikzcd}\]
Here, \(u \in \Cat C/X\) and \(f\circ u \in \Cat C/Y\).
Therefore, there is a map from the objects of \(\Cat C/X\)
to the objects of \(\Cat C/Y\). The next question to ask
is whether the map is \emph{functorial}.
Here's the relevant diagram. The verification is left as an exercise.
\[\begin{tikzcd}
A && B \\
\\
X && Y
\arrow["u"', from=1-1, to=3-1]
\arrow["{f\circ u}"{description, pos=0.4}, from=1-1, to=3-3]
\arrow["f"', from=3-1, to=3-3]
\arrow["p", from=1-3, to=1-1]
\arrow["{f\circ u\circ p}", from=1-3, to=3-3]
\arrow[shift right=1, curve={height=-12pt}, from=1-3, to=3-1]
\end{tikzcd}\]
We give this functor a name: \(f_! : \Cat C/X \to \Cat C/Y\).
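Before moving on, here is a minimal Haskell sketch of the above
(the names \texttt{Slice} and \texttt{pushforward} are ours, and only
the objects of the slice are modelled, not its morphisms):
\begin{verbatim}
{-# LANGUAGE ExistentialQuantification #-}

-- An object of the slice category Hask/x: some type a
-- together with an arrow a -> x.
data Slice x = forall a. Slice (a -> x)

-- The functor f_! induced by f : x -> y acts on objects
-- by post-composition with f.
pushforward :: (x -> y) -> Slice x -> Slice y
pushforward f (Slice u) = Slice (f . u)
\end{verbatim}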
\section{Pullbacks}
The next thing we do requires more structure in the category \(\Cat C\).
Let's take three objects \(B \xrightarrow{f} A \xleftarrow{g} C\).
If there happens to exist \(X\) together with arrows
\(B \xleftarrow{p} X \xrightarrow{q} C\) such that the square commutes,
and additionally...
\[\begin{tikzcd}
Y \\
& X && C \\
\\
& B && A
\arrow["p", from=2-2, to=4-2]
\arrow["f", from=4-2, to=4-4]
\arrow["q"', from=2-2, to=2-4]
\arrow["g"', from=2-4, to=4-4]
\arrow[dashed, from=1-1, to=2-2]
\arrow["{p'}"', curve={height=6pt}, from=1-1, to=4-2]
\arrow["{q'}", curve={height=-6pt}, from=1-1, to=2-4]
\end{tikzcd}\]
... For every given \(Y\) with morphisms \(p',q'\), there is a
unique arrow \(Y \to X\) such that the diagram commutes. In
this case, we call \(X\) a \textbf{pullback}.
What are pullbacks like? We need to find arrows
\(p, q\) that ``reconcile'' \(f\) and \(g\). In \(\mathsf{Set}\),
the pullback is given by the set
\[\{(b, c) \mid f(b) = g(c)\},\]
equipped with the obvious projections \(p, q\).
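For finite sets, this recipe can be written out directly in Haskell
(an illustrative sketch; the name \texttt{pullback} is ours, and the
projections \(p, q\) become \texttt{fst} and \texttt{snd}):
\begin{verbatim}
-- The pullback of f : B -> A and g : C -> A, realized as
-- the subset {(b, c) | f b == g c} of the product.
pullback :: Eq a => (b -> a) -> (c -> a)
         -> [b] -> [c] -> [(b, c)]
pullback f g bs cs = [ (b, c) | b <- bs, c <- cs, f b == g c ]
\end{verbatim}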
But there's another way to look at it. Each point \(a \in A\)
determines a set \(f^{-1}(a) = \{ b \mid f(b) = a\}\), and similarly
\(g^{-1}(a)\). This is called the \textbf{preimage}.
In this way, \(B\) can be rewritten as a union of preimages:
\[ B = \bigcup_{a \in A} f^{-1}(a). \]
\textsc{Exercise}: In this union, each set is disjoint from each other. Can
you see why? Since they are disjoint, we can use \(\coprod\) instead
of \(\bigcup\) to emphasize this (these two symbols have the same
meaning except \(\coprod\) implies disjointness).
Therefore, we may regard \(B\) as a space composed of ``fibers''
\(f^{-1}(a)\). For example, if \(B = \mathbb R^2\), and \(A = \mathbb R\),
take \[f(x, y) = x^2 + y^2.\] Then \(B\) is divided into concentric
circles \(f^{-1}(r^2)\) of radius \(r\) about the origin.
Note that \(f^{-1}(-1)\) is empty, meaning that the fiber that
lies over \(-1\) is \(\varnothing\).
What does this have to do with pullbacks? Well, we can rewrite
\(X\) in this way:
\[X = \coprod_{a\in A} f^{-1}(a) \times g^{-1}(a).\]
It is another fibered space, where each fiber is the \emph{product}
of the corresponding fibers in \(B\) and \(C\).
From this perspective, we may refer to the pullback as the \textbf{fibered product},
denoted \(B \times_A C\). \textsc{Exercise}: Prove that \(B \times_1 C \cong B \times C\)
holds in any category with a terminal object.
The reader should be familiar with the fact that
\(A \times (-)\) is a functor. This is in accordance with
the Haskell typeclass instance \texttt{Functor ((,) a)}.
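For reference, the instance shipped with GHC's \texttt{base} library reads:
\begin{verbatim}
-- already provided by base; shown here for reference only
instance Functor ((,) a) where
  fmap f (x, y) = (x, f y)
\end{verbatim}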
In fact, the pullback, being called the fibered product, is also
a functor. To verify this, we need a diagram:
\[\begin{tikzcd}
Y && X && C \\
\\
D && B && A
\arrow["p", from=1-3, to=3-3]
\arrow["f", from=3-3, to=3-5]
\arrow["q"', from=1-3, to=1-5]
\arrow["g"', from=1-5, to=3-5]
\arrow["h", from=3-1, to=3-3]
\arrow["s", curve={height=-12pt}, from=1-1, to=1-5]
\arrow["r"{description}, from=1-1, to=3-1]
\arrow["\lrcorner"{anchor=center, pos=0.125}, draw=none, from=1-3, to=3-5]
\arrow["\lrcorner"{anchor=center, pos=0.125}, draw=none, from=1-1, to=3-5]
\end{tikzcd}\]
Here the little right-angle marks say that there are two
pullback squares. We need to prove that there is
an arrow \((\mathsf{fmap}\, h) : Y \to X\). This follows
directly from the universal property of pullbacks. Next,
we need the functor law.
\[\begin{tikzcd}
Z && Y && X && C \\
\\
E && D && B && A
\arrow["p", from=1-5, to=3-5]
\arrow["f", from=3-5, to=3-7]
\arrow["q"', from=1-5, to=1-7]
\arrow["g"', from=1-7, to=3-7]
\arrow["h", from=3-3, to=3-5]
\arrow["s", curve={height=-12pt}, from=1-3, to=1-7]
\arrow["r"{description}, from=1-3, to=3-3]
\arrow["\lrcorner"{anchor=center, pos=0.125}, draw=none, from=1-5, to=3-7]
\arrow["\lrcorner"{anchor=center, pos=0.125}, draw=none, from=1-3, to=3-7]
\arrow["k", from=3-1, to=3-3]
\arrow["u"{description}, from=1-1, to=3-1]
\arrow["v"{description}, shift left=1, curve={height=-30pt}, from=1-1, to=1-7]
\arrow["\lrcorner"{anchor=center, pos=0.125, rotate=45}, draw=none, from=1-1, to=3-7]
\end{tikzcd}\]
The reader shall complete the argument using the given diagram.
Before we move on, let's pause for a moment and ponder what
we just proved. Note that to use \((\mathsf{fmap}\,h)\),
the large square \(Y,C,D,A\) cannot be arbitrary:
The lower edge has to be \(D\xrightarrow{f\circ h}A\).
So what is the ``functor'' that we've just found?
What is its source and target categories? It turns out that
\((-) \times_A C\) is actually a functor \(\Cat C/A \to \Cat C/C\)!
The choice of these categories are important.
Note that although in the notation \(B \times_A C\), the two
arrows \(f, g\) doesn't appear, they are the essential
ingredients. \textsc{Exercise}: Give an example
of two pullbacks \(B \times_A C\) with different \(g\), such
that the results are not isomorphic.
Saying that the functor is in \(\Cat C/A \to \Cat C/C\)
instead of \(\Cat C \to \Cat C\) adds the important information
of the respective arrows into \(A\). And this ensures that
a morphism in \(\Cat C/A\) always commutes with these arrows.
To emphasize the importance of the morphisms, we write \(g^* :\Cat C/A \to \Cat C/C\)
for the functor. Note that the functor goes in the opposite
direction of \(g : C \to A\). But this does \emph{not} make
\(g^*\) a contravariant functor. As you have proved in the previous
section, \(g^*\) turns \(h : M \to N\)
into \((\mathsf{fmap}\, h) : M \times_A C \to N \times_A C\),
which means it is covariant.
\section{Adjoint Yoga}
Anyway, given \(f : C \to A\), we now have a functor \(f_! : \Cat C/C \to \Cat C/A\)
from the first section, and \(f^* : \Cat C/A \to \Cat C/C\)
from the second section. In category theory, whenever you encounter
this, make a bet that they are adjoint.
What is adjunction? There are two equivalent definitions
that I find the most natural. The first one describes an
adjoint pair as an \emph{almost} inverse pair of functors.
\begin{definition}
Two functors \(F : \Cat C \to \Cat D\) and
\(G : \Cat D \to \Cat C\) are called \textbf{adjoint} if the following holds.
For each object \(X \in \Cat D\),
there is a morphism \(\epsilon_X : FGX \to X\), and similarly
for each \(Y \in \Cat C\) a \(\eta_Y : Y \to GFY\), satisfying
the following conditions:
\begin{itemize}
\item The assignment of morphisms \(\epsilon_X\) is natural.
In other words, for a morphism \(f : X_1 \to X_2\), we have
\(\mathsf{fmap}_{FG}f : FGX_1 \to FGX_2\), this forms a square
\[\begin{tikzcd}
{FGX_1} && {FGX_2} \\
\\
{X_1} && {X_2}
\arrow["{\epsilon_{X_1}}"{description}, from=1-1, to=3-1]
\arrow["{\epsilon_{X_2}}"{description}, from=1-3, to=3-3]
\arrow["{\mathsf{fmap}\, f}"{description}, from=1-1, to=1-3]
\arrow["f"{description}, from=3-1, to=3-3]
\end{tikzcd}\]
The naturality condition states that all these squares commute.
Similar conditions hold for \(\eta\).
\item \(\epsilon,\eta\) settle the situation for composing \emph{two}
functors. In the case of three functors, we have two maps
\[FGFX \xleftrightharpoons[\epsilon_{FX}]{\mathsf{fmap}\, \eta_{X}} FX.\]
These should compose to get the identity on \(FX\). Similar conditions
hold for \(GY\).
\end{itemize}
In this case, \(F\) is called the left adjoint, and \(G\) the right adjoint,
denoted as \(F \dashv G\).
\end{definition}
I won't linger too much on the concept of adjunction. But here are
two quick examples.
\begin{itemize}
\item \(U : \mathsf{Mon} \to \mathsf{Set}\) is
a functor that maps a monoid to its underlying set. And
\(F : \mathsf{Set} \to \mathsf{Mon}\) maps a set \(X\)
to the collection of lists \(\mathsf{List}(X)\), with list
concatenation as monoid multiplication, and the
empty list \([]\) as the neutral element. \(F\) is left adjoint
to \(U\); a Haskell sketch of this adjunction follows the list.
\item Let \(\Delta : \mathsf{Set} \to \mathsf{Set}\times\mathsf{Set}\)
be the diagonal functor, sending \(X\) to \((X,X)\).
The product functor \((-)\times(-) : \mathsf{Set}\times\mathsf{Set} \to \mathsf{Set}\)
is the right adjoint of \(\Delta\).
\end{itemize}
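As promised, here is a Haskell sketch of the first example (the names
\texttt{toSet} and \texttt{toMon} are ours; the types cannot express
that the maps out of \texttt{[x]} on the left are monoid homomorphisms):
\begin{verbatim}
-- hom(F X, M) is isomorphic to hom(X, U M): a monoid
-- homomorphism out of the free monoid [x] is the same
-- data as a plain function out of x.
toSet :: Monoid m => ([x] -> m) -> (x -> m)
toSet h = h . (: [])   -- precompose with the unit x -> [x]

toMon :: Monoid m => (x -> m) -> ([x] -> m)
toMon = foldMap        -- i.e. mconcat . map
\end{verbatim}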
The second definition is more catchy:
\begin{definition}
Two functors \(F : \Cat C \to \Cat D\) and
\(G : \Cat D \to \Cat C\) are adjoint iff
\[\Hom(FX, Y) \cong \Hom(X,GY)\]
such that the isomorphism is natural in \(X\) and \(Y\).
\end{definition}
The reader shall verify that these two definitions are equivalent,
and that the two examples given are indeed adjoints (using both definitions).
Now let's turn back to our two functors \(f_!, f^*\).
We draw a diagram to compose them and see what happens.
First look at \(f_!f^*x\).
\[\begin{tikzcd}
\bullet && \bullet && C \\
\\
N && M && A
\arrow["f"{description}, from=1-5, to=3-5]
\arrow["x"{description}, from=3-3, to=3-5]
\arrow["{g\circ x}"{description}, curve={height=12pt}, from=3-1, to=3-5]
\arrow["g"{description}, from=3-1, to=3-3]
\arrow["{f^*x}"{description}, from=1-3, to=1-5]
\arrow[from=1-1, to=1-3]
\arrow["{f^*(g\circ x)}"{description}, curve={height=-12pt}, from=1-1, to=1-5]
\arrow[dashed, from=1-1, to=3-5]
\arrow["{f_!f^*x}"{description}, dashed, from=1-3, to=3-5]
\arrow[color={rgb,255:red,214;green,92;blue,92}, from=1-1, to=3-1]
\arrow[color={rgb,255:red,214;green,92;blue,92}, from=1-3, to=3-3]
\end{tikzcd}\]
The lower half is in \(\Cat C/A\), and the upper half in
\(\Cat C/C\). The two dashed arrows are \(x\) and \(g\circ x\)
under the functor \(f_!f^*\). They lie in \(\Cat C/A\).
Now notice the red arrows generated from the pullback.
Composing each of them with the corresponding object of \(\Cat C/A\)
yields the dashed arrow, so each red arrow is a morphism
from \(f_!f^*x\) to \(x\) in \(\Cat C/A\).
This gives \(\epsilon_x : f_!f^*x \to x\).
What about the naturality condition? \textsc{Exercise}: Argue that
the square \(\bullet,\bullet,N,M\) commutes, and explain
why this proves the naturality condition for \(\epsilon\).
Next, the reverse composition \(f^*f_!\). It is slightly
trickier:
\[\begin{tikzcd}
{M\times_AB} \\
&& B \\
M \\
&& A
\arrow["f"{description}, from=2-3, to=4-3]
\arrow["x"{description}, from=3-1, to=2-3]
\arrow["{f_!x=f\circ x}"{description}, from=3-1, to=4-3]
\arrow["p"{description}, from=1-1, to=3-1]
\arrow["q"{description}, from=1-1, to=2-3]
\arrow["{!}"{description}, curve={height=-12pt}, dashed, from=3-1, to=1-1]
\end{tikzcd}\]
Here we have \(x \in \Cat C/B\). Therefore, there is
a well-hidden commutative square:
\[\begin{tikzcd}
M \\
& {M\times_AB} && B \\
\\
& M && A
\arrow["f"{description}, from=2-4, to=4-4]
\arrow["{f_!x=f\circ x}"{description}, from=4-2, to=4-4]
\arrow["p"{description}, from=2-2, to=4-2]
\arrow["q"{description}, from=2-2, to=2-4]
\arrow["{\mathrm{id}}"{description}, curve={height=6pt}, from=1-1, to=4-2]
\arrow["x"{description}, curve={height=-6pt}, from=1-1, to=2-4]
\arrow["{!}"{description}, dashed, from=1-1, to=2-2]
\end{tikzcd}\]
... Which creates the unique morphism \(!\), such that
\(p \circ {!} = \mathrm{id}\) and \(q \circ {!} = x\).
Now recall that \(q = f^*f_!x\). The equation
\(q \circ {!} = x\) says precisely that \(!\) is a morphism from \(x\)
to \(f^*f_!x\) in \(\Cat C/B\), giving a natural transformation
\(\eta_x : x \to f^*f_!x\).
\[\begin{tikzcd}
{N\times_A B} && {M\times_AB} \\
&&&& B \\
N && M \\
&&&& A
\arrow["f"{description}, from=2-5, to=4-5]
\arrow["x"{description}, from=3-3, to=2-5]
\arrow["{f_!x=f\circ x}"{description}, from=3-3, to=4-5]
\arrow["{p_1}"{description}, from=1-3, to=3-3]
\arrow["{f^*f_!x}"{description}, from=1-3, to=2-5]
\arrow[curve={height=-12pt}, dashed, from=3-3, to=1-3]
\arrow["g"{description}, from=3-1, to=3-3]
\arrow["{g\circ x}"{description, pos=0.3}, from=3-1, to=2-5]
\arrow["{f_!(g\circ x)}"{description}, curve={height=6pt}, from=3-1, to=4-5]
\arrow[from=1-1, to=2-5]
\arrow["{p_2}"{description}, from=1-1, to=3-1]
\arrow[from=1-1, to=1-3]
\arrow[curve={height=-12pt}, dashed, from=3-1, to=1-1]
\end{tikzcd}\]
The naturality condition amounts
to proving that the two dashed arrows form a commutative
square. This follows immediately from the universal
property of pullbacks.
If you find this dizzying, why not try the other definition?
\[\begin{tikzcd}
& N && B \\
X \\
& M && A
\arrow["x"{description}, from=3-2, to=3-4]
\arrow["y"{description}, from=1-2, to=1-4]
\arrow["f"{description}, from=1-4, to=3-4]
\arrow[from=2-1, to=3-2]
\arrow["{f^*x}"{description, pos=0.3}, from=2-1, to=1-4]
\arrow["{f_!y}"{description}, from=1-2, to=3-4]
\end{tikzcd}\]
You need to find a natural isomorphism between
\(\{g \mid y = f^* x \circ g\}\) and \(\{g \mid x \circ g = f\circ y\}\).
One direction is given by composition, and the other is given
by the universal property of pullbacks.
\section{Dependent Sum}
It's time to reveal the meaning of these constructions.
Recall how we can regard a morphism \(p : E \to B\)
as a fibered space \[E = \coprod_{x : B} p^{-1}(x).\]
So in the slice category \(\Cat C/B\), everything is fibered
along \(B\). If we take the map \({!}:B\to1\), then it
induces the functor \(\Cat C/B \to \Cat C/1\),
which takes a fibered space \(p : E \to B\) to \(E \to 1\).
Although this looks trivial, looking from the perspective
of fibered spaces, we get something different: \(p : E \to B\)
describes \(E\) with fibers over \(B\). And the functor
turns it into \(! : E \to 1\), where all the fibers are
merged into one big component. This corresponds to the
\textbf{dependent sum}:
\[ \sum_{x:B}p^{-1}(x). \]
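For finite fibered sets this is easy to make concrete; here is a small
Haskell sketch (the encoding and the names are ours):
\begin{verbatim}
-- A finite fibered set p : E -> B, listed fiber by fiber.
type Fibered b e = [(b, [e])]

-- Dependent sum: the disjoint union of all fibers, each
-- element tagged with the base point it lies over.
depSum :: Fibered b e -> [(b, e)]
depSum fibers = [ (b, e) | (b, es) <- fibers, e <- es ]
\end{verbatim}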
We can generalize this by replacing the
terminal object with an arbitrary object \(A\),
and the morphism \({!} : B \to 1\) with
an arbitrary morphism
\(f : B \to A\), whose induced functor \(f_!\)
takes a ``fiberwise dependent sum'', i.e. for each
\(a \in A\), the fiber over \(a\) is
\[\sum_{x:B_a} p^{-1}(x),\]
where \(B_a\) is the fiber of \(B\) over \(a\).
What, then, is the functor \(f^*\)? Similarly we first take
\(A = 1\), and let \(f\) be
the unique morphism \({!} : B \to 1\). The pullback functor takes
\(p' : E \to 1\)
to \(\pi_1 : B \times E \to B\) projecting
to the first component.\footnote{Note that now
\(p' \in \Cat C/A\) (and we are studying the special
case \(A = 1\)), where in the last paragraph
\(p \in \Cat C/B\). This is because the functor
\(f^*\) goes in the opposite direction of \(f_!\),
and we need \(p'\) to be in the \emph{source} category
of the functor we are discussing.}
In the fibered space language,
it creates a trivial fibered space where each fiber looks
identical to \(E\).
Now generalizing to arbitrary \(f : B \to A\),
the pullback functor takes \(p' : E \to A\)
to a morphism \(E \times_A B \to B\). In the category
\(\mathsf{Set}\), the fibers of the new space look
like \[{p'}^{-1}(f(b))\] for each \(b \in B\). In effect,
it changes the \emph{base space} from \(A\) to \(B\).
And thus it is named the \textbf{base change} functor.
\section{Towards Dependent Product}
The next goal is to characterize dependent products.
Following our previous experiences, it should
be a functor \(f_* : \Cat C/B \to \Cat C/A\) for \(f : B \to A\).
Similar to the dependent sum functor, it should
take a ``fiberwise dependent product'':
\[\prod_{x:B_a}p^{-1}(x),\]
where \(p : E \to B\) is regarded as a fibered space over \(B\).
As usual, we should consider the easy case where \(A = 1\),
and we only need to construct
\[\prod_{x:B}p^{-1}(x).\]
How should it be defined? \(\prod_{x:B}M\), where
\(M\) does not depend on \(x\),
is exactly the function space \(M^B\).
This suggests that we can define the dependent
product set \(\prod_{x:B}p^{-1}(x)\) as a subset of
the functions \(B \to \coprod_{x:B}p^{-1}(x)\). Of course, to
be type-correct, it needs to map \(b\in B\) to an
element of \(p^{-1}(b)\).
This can be expressed as it being a right inverse of
\(p\). So to sum up,
our quest is now to find right inverses \(p \circ {?} = \mathrm{id}\)
of \(p\).
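In the finite encoding used in the dependent sum sketch above, a right
inverse is exactly a choice of one element in every fiber, and the
dependent product collects all such choices (again an illustrative
sketch with invented names; an empty fiber yields no sections, matching
the empty product):
\begin{verbatim}
type Fibered b e = [(b, [e])]   -- as in the earlier sketch

-- Dependent product: all sections of p, i.e. every way of
-- choosing one element in each fiber.  mapM in the list
-- monad takes the Cartesian product of the choices.
depProd :: Fibered b e -> [[(b, e)]]
depProd = mapM (\(b, es) -> [ (b, e) | e <- es ])
\end{verbatim}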
\subsection*{Interlude: Exponentials}
Actually, we not only need to find the right inverses.
In \(\mathsf{Set}\), we need a \emph{set} of right inverses,
which means instead of a collection of morphisms we need a
\emph{single object} that stands for the set of right inverses.
Before we tackle that, we shall look at how we can create a
single object that stands for the set of functions
--- the \emph{exponential object}.
How should a set of functions behave? Given sets \(X, Y\),
if we have a set of functions \(E = Y^X\), then we should
be able to \emph{evaluate} the functions at a given point
\(x\in X\). This is called the \emph{evaluation functional}%
\footnote{The ``-al'' part of the word ``functional''
is just something that stuck with mathematicians. It doesn't
really mean anything special.}
\[\mathrm{ev}(-,-) : E \times X \to Y.\]
So we already have the first parts of the definition:
\begin{definition}
Given objects \(X, Y\), an \textbf{exponential object} is defined
as an object \(E\) equipped with a morphism \(\mathrm{ev}: E \times X \to Y\),
such that ...
\end{definition}
Then, as is customary in category theory, we need some
universal property. Since \(\mathrm{ev}\) already describes
how to form morphisms \emph{out of} \(E\), our universal
property describes how to create morphisms \emph{into} \(E\):
\begin{definition*}[Continued]
... if there is an object \(S\) with a morphism \(u : S \times X \to Y\),
then there is a unique morphism \(v : S \to E\)
\[\begin{tikzcd}
S & {S \times X} \\
\\
E & {E\times X} && Y
\arrow["{\mathrm{ev}}", from=3-2, to=3-4]
\arrow["u", from=1-2, to=3-4]
\arrow[dashed, from=1-2, to=3-2]
\arrow["{v}"{description}, dashed, from=1-1, to=3-1]
\end{tikzcd}\]
such that, if the dashed arrow in the triangle is filled
with \(v \times \mathrm{id}\) (which is the Haskell
\texttt{first v = v *** id}), then the diagram commutes.
\end{definition*}
This is basically describing \emph{lambda abstraction}.
Given a function \(u\), we have \(u(s, x) \in Y\), so
we can form the function \(v(s) = \lambda x. u(s, x)\).
\footnote{Note how we use ``pointful'' notation
--- notation involving elements \(x \in X\) etc. ---
to give intuition of ``point-free'' definitions.
In this article it is only a convenient device to describe
\emph{rough feelings} of certain definitions. But in fact,
it can be made rigorous as the \textbf{internal language}
of a topos, where we can freely write expressions like this,
and be confident that they can be traslated back into the
category language.}
The exponential construction creates a functor \((-)^X\). Also,
in Haskell, \(\mathsf{fmap}\, f\) for the functor
\((-)^X\) is exactly \texttt{(f .)}, left composition with \texttt{f}.
A brilliant insight of exponentials is that they are completely
characterized by \emph{currying}:
\begin{theorem}
There is a natural isomorphism \[\Hom(X \times Y , Z) \cong \Hom(X, Z^Y).\]
In other words, \[(-) \times Y \dashv (-)^{Y}.\]
\end{theorem}
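In Haskell, this natural isomorphism is literally the Prelude's
\texttt{curry} and \texttt{uncurry}:
\begin{verbatim}
-- as defined in the Prelude; shown here for reference
curry   :: ((a, b) -> c) -> a -> b -> c
curry f a b = f (a, b)

uncurry :: (a -> b -> c) -> (a, b) -> c
uncurry g (a, b) = g a b
\end{verbatim}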
The interested reader shall complete the proof of the theorem. Next, we continue
on our quest of right inverses. We of course want to
express the \emph{identity morphism} first:
\[\begin{tikzcd}
{1\times X} \\
\\
{X^X \times X} && X
\arrow["{\mathrm{ev}}"{description}, from=3-1, to=3-3]
\arrow["{\pi_2}"{description}, from=1-1, to=3-3]
\arrow[dashed, from=1-1, to=3-1]
\end{tikzcd}\]
Here the dashed line is the unique morphism
\(\mathfrak{id} \times \mathrm{id}\), where
\(\mathfrak{id} : 1 \to X^X\) picks out the identity function
in the object \(X^X\).
Now that we have \(\mathfrak{id}\) as our equipment, consider
this pullback, where \(f : Y \to X\):
\[\begin{tikzcd}
Z && {Y^X} \\
\\
1 && {X^X}
\arrow["{\mathfrak{id}}"{description}, from=3-1, to=3-3]
\arrow["{\mathsf{fmap}\,f}"{description}, from=1-3, to=3-3]
\arrow[from=1-1, to=1-3]
\arrow[dashed, from=1-1, to=3-1]
\arrow["\lrcorner"{anchor=center, pos=0.125}, draw=none, from=1-1, to=3-3]
\end{tikzcd}\]
Returning from this digression, the pullback
\(Z\) is, in the category \(\mathsf{Set}\),
the set \(\{g \in Y^X \mid f\circ g = \mathrm{id}\}\).
(Recall that \(g \in Y^X\) means \(g\) is a function \(X \to Y\).)
This captures exactly the right inverses of \(f\).
\subsection*{Fiberwise juggling}
Putting the solution in use, since a fibered space
\(E = \sum_{x:B} p^{-1}(x)\) is defined by a morphism
\(p : E \to B\), we need to find the space of right
inverses of \(p\), which should give the space of dependent products.
\[\begin{tikzcd}
Z && {E^B} & E \\
\\
1 && {B^B} & B
\arrow["{\mathsf{fmap}\,p}"{description}, from=1-3, to=3-3]
\arrow["{\mathfrak{id}}"{description}, from=3-1, to=3-3]
\arrow[dashed, from=1-1, to=3-1]
\arrow[from=1-1, to=1-3]
\arrow["p"{description}, from=1-4, to=3-4]
\end{tikzcd}\]
This \(Z\) (considered as a fibered space
\(Z \to 1\)) is then what we sought.
Now we can generalize from \(\Cat C/1\) to arbitrary
slice categories \(\Cat C/A\).
We are now given a morphism \(f : B \to A\), and we
are supposed to construct a functor \(f_* : \Cat C/B \to \Cat C/A\).
As before, let \(p : E \to B\) be an object of
\(\Cat C/B\). Thinking in \(\mathsf{Set}\)-language,
we should have a ``fiberwise right inverse'' \(p_a^{-1}(x)\),
whose domain is the fiber \(B_a = f^{-1}(a)\) of \(B\) over \(a \in A\).
Its codomain would naturally be \(E_a\), which is a fiber
of \(E\) when considered as a fibered space \((f\circ p) : E \to A\).
Each fiber of the dependent product object \(f_* p\) should look like
\[\prod_{x:B_a} p_a^{-1}(x).\]
The fiberwise right inverse is easy enough to construct (note
that we are still working in \(\mathsf{Set}\)). We just
replace everything in the previous construction.
\[\begin{tikzcd}
Z_a && {{E_a}^{B_a}} & E_a \\
\\
1 && {{B_a}^{B_a}} & B_a
\arrow["{\mathsf{fmap}\,p_a}"{description}, from=1-3, to=3-3]
\arrow["{\mathfrak{id}}"{description}, from=3-1, to=3-3]
\arrow[dashed, from=1-1, to=3-1]
\arrow[from=1-1, to=1-3]
\arrow["{p_a}"{description}, from=1-4, to=3-4]
\end{tikzcd}\]
We have the fiberwise constructions ready. How can we ``collect the
fibers'' to create a definition that does not refer to the ``points''
\(a \in A\)? It looks like we are stuck.
Maybe it's time to look back at what we've achieved.
\section{The True Nature of Slice Categories}
Concepts in category theory are like elephants. You may,
through analogies, theorems, or practical applications,
grasp a feeling of what those concepts are like. But in truth,
these feelings are only describing a part of the elephant.
So let me reveal yet another part, yet another blind man's
description of elephants:
\begin{center}
\emph{Slice categories describe local, fiberwise constructs.}
\end{center}
Let's return again to the definition of a fibered product.
\[\begin{tikzcd}
X \\
& Z && C \\
\\
& B && A
\arrow["g"{description}, from=2-4, to=4-4]
\arrow["f"{description}, from=4-2, to=4-4]
\arrow["h"{description}, from=2-2, to=4-4]
\arrow[from=2-2, to=4-2]
\arrow[from=2-2, to=2-4]
\arrow[dashed, from=1-1, to=2-2]
\arrow[curve={height=6pt}, from=1-1, to=4-2]
\arrow[curve={height=-6pt}, from=1-1, to=2-4]
\end{tikzcd}\]
I have added another morphism \(h\), which does not change
the definition since everything commutes in this diagram.
But it brings an interesting change of perspective:
\(h : Z \to A\), regarded as an object in \(\Cat C/A\), is
exactly the usual product of the objects \(f\) and \(g\)!
On second thought this is very natural: Everything in slice
categories needs to respect fibers, i.e. given two fibered spaces
\(B \to A\) and \(C \to A\), any morphisms between them
must map anything in the fiber \(B_a\) over \(a\) to
the fiber \(C_a\). Therefore, the categorical product
of two fibered spaces should also be the fiberwise product.
This immediately generalizes to any construction.
\textsc{Exercise}: Define the notion of fibered coproducts, and
explain why it is the coproduct in the slice category.
Also, explain why the ``fiberwise terminal object'' is exactly
\(\mathrm{id} : A \to A\).
One thing to keep in mind: When we are talking about
the category \(\Cat C/A\), the fibers are considered
to be over \(A\). So when we switch to a different category
\(\Cat C/B\), the spaces are now considered fibered over \(B\).
That's essentially the content of the base change functor:
it changes the base space of the fiber spaces.
Armed with new weapons, we can finally write down the definition
of dependent products:
\[\begin{tikzcd}
{(Z \to A)} && {(E\stackrel {f\circ p} \to A)^{(B\stackrel f \to A)}} & E \\
\\
{(A\stackrel {\mathrm{id}} \to A)} && {(B\stackrel f \to A)^{(B\stackrel f \to A)}} & B
\arrow["{\mathsf{fmap}\,p}"{description}, from=1-3, to=3-3]
\arrow["{\mathfrak{id}}"{description}, from=3-1, to=3-3]
\arrow[dashed, from=1-1, to=3-1]
\arrow[from=1-1, to=1-3]
\arrow["p"{description}, from=1-4, to=3-4]
\end{tikzcd}\]
Note that this commutative diagram is entirely
in the slice category \(\Cat C/A\), where each object
is an \emph{arrow} in \(\Cat C\). The exponential objects
are also inside the slice category.
This pullback gives a space \(Z \to A\).
According to our guess at the beginning of this section,
we should denote \(Z \to A\) as \(f_* p\). But of course
we need to verify the functoriality of this construction.
But it should be clear, since everything used (exponentials,
products and pullbacks) is functorial.
But there is an even more succinct description of
all these: the dependent product functor
is exactly the \textbf{right adjoint} of
the base change functor \(f^*\). The proof is not
hard, although the diagram involved is a bit messy
if you insist on drawing everything in \(\Cat C\)
instead of the slice categories.
\section{Locally Cartesian Closed}
A cartesian closed category is a category where the terminal
object, all binary products and all exponentials exist.
A locally cartesian closed category is a category whose
\emph{slice} categories are all cartesian closed. Let's
unpack the definition and see what this means.
The terminal object in a slice category \(\Cat C/A\) is
exactly \(\mathrm{id} : A \to A\). So it always exists
in slice categories.
A binary product in a slice category, as we have discussed,
is exactly the fibered product, or pullback.
Therefore, a locally cartesian closed category
should have all pullbacks.
What about local exponentials? If there are two
objects \(p : Y \to A\) and \(q : X \to A\), then the local
exponential object \(p^q : E \to A\) should be defined by
the following diagram:
\[\begin{tikzcd}
S & {S\times_A X} \\
\\
E & {E \times_A X} && Y \\
&&& A
\arrow["{\mathrm{ev}}"{description}, from=3-2, to=3-4]
\arrow["p", from=3-4, to=4-4]
\arrow[from=3-2, to=4-4]
\arrow["u", from=1-2, to=3-4]
\arrow["{p^q}"{description}, curve={height=6pt}, from=3-1, to=4-4]
\arrow[dashed, from=1-2, to=3-2]
\arrow["{!}"{description}, dashed, from=1-1, to=3-1]
\end{tikzcd}\]
... Well, this looks messy. Let's try the adjoint functor
definition of exponentials: The exponential
functor \((-)^{Y}\) is the right adjoint of the
product functor \((-) \times Y\). So in other words
we should find a right adjoint to the pullback functor
\((-)\times_A Y\). But hey! That looks like the
dependent product functor in the last section.
However, the astute reader may have noticed a discrepancy:
Our dependent product functor is defined as a pullback
of an exponential object. It can't exactly be
the exponential functor, can it?
In fact they have different codomains: Given \(f : C \to A\),
the dependent product functor \(f_* : \Cat C/C \to \Cat C/A\)
is the adjoint of the base change functor
\(f^* : \Cat C/A \to \Cat C/C\). But when we are looking for
the exponential functor, the pullback functor we
want is \((-)\times_A C : \Cat C/A \to \Cat C/A\).
Looking at the diagram for pullbacks we see why:
\[\begin{tikzcd}
X \\
& Z && C \\
\\
& B && A
\arrow["f"{description}, from=2-4, to=4-4]
\arrow["g"{description}, from=4-2, to=4-4]
\arrow["h"{description}, from=2-2, to=4-4]
\arrow[from=2-2, to=4-2]
\arrow["u"{description}, from=2-2, to=2-4]
\arrow[dashed, from=1-1, to=2-2]
\arrow[curve={height=6pt}, from=1-1, to=4-2]
\arrow[curve={height=-6pt}, from=1-1, to=2-4]
\end{tikzcd}\]
The functor \((-)\times_A C : \Cat C/A \to \Cat C/A\)
sends \(g\) to \(h\), while the functor \(f^*\) sends
\(g\) to \(u\). Since \(h = f\circ u\), you can see
that the functor \((-)\times_A C\) is the composition
of two functors \(f_! f^*\).
Now we can save a tremendous amount of work
with this theorem:
\begin{theorem}
Given two adjoint pairs:
\[\begin{tikzcd}
{\Cat C} && {\Cat D} && {\Cat E}
\arrow[""{name=0, anchor=center, inner sep=0}, "{F_1}"{description}, curve={height=-12pt}, rightarrow, from=1-1, to=1-3]
\arrow[""{name=1, anchor=center, inner sep=0}, "{F_2}"{description}, curve={height=-12pt}, rightarrow, from=1-3, to=1-5]
\arrow[""{name=2, anchor=center, inner sep=0}, "{G_1}"{description}, curve={height=-12pt}, rightarrow, from=1-3, to=1-1]
\arrow[""{name=3, anchor=center, inner sep=0}, "{G_2}"{description}, curve={height=-12pt}, rightarrow, from=1-5, to=1-3]
\arrow["\dashv"{anchor=center, rotate=-90}, draw=none, from=0, to=2]
\arrow["\dashv"{anchor=center, rotate=-90}, draw=none, from=1, to=3]
\end{tikzcd}\]
The composition also forms an adjunction
\[F_2F_1 \dashv G_1G_2.\]
\end{theorem}
\begin{proof}%
\[\Hom_{\Cat E}(F_2F_1X, Y) \cong
\Hom_{\Cat D}(F_1X, G_2Y) \cong
\Hom_{\Cat C}(X, G_1G_2 Y).\qedhere\]
\end{proof}
\[\begin{tikzcd}
{\Cat C / A} &&& {\Cat C/C} &&& {\Cat C/A}
\arrow["{(-)\times_A Y}"{description}, curve={height=30pt}, from=1-1, to=1-7]
\arrow[""{name=0, anchor=center, inner sep=0}, "{f_!}"{description}, curve={height=12pt}, from=1-4, to=1-7]
\arrow["{?}"{description}, curve={height=30pt}, from=1-7, to=1-1]
\arrow[""{name=1, anchor=center, inner sep=0}, "{f^*}"{description}, curve={height=12pt}, from=1-7, to=1-4]
\arrow[""{name=2, anchor=center, inner sep=0}, "{f^*}"{description}, curve={height=12pt}, from=1-1, to=1-4]
\arrow[""{name=3, anchor=center, inner sep=0}, "{f_*}"{description}, curve={height=12pt}, from=1-4, to=1-1]
\arrow["\dashv"{anchor=center, rotate=90}, draw=none, from=0, to=1]
\arrow["\dashv"{anchor=center, rotate=90}, draw=none, from=2, to=3]
\end{tikzcd}\]
With this diagram it is crystal clear that the
fibered exponential fits exactly in the position of the
question mark.
In fact, the condition that the
dependent sum functor \(f_!\) (which exists in
every category) has a chain of three adjoints
\[f_! \dashv f^* \dashv f_*\]
is equivalent to
the condition that the category is locally cartesian closed.
The backward implication is precisely what we proved
in the last section. As for the forward implication,
it is proved by our discussion in the previous few paragraphs.
\section{Prospects}
This introduction has gotten way too lengthy. But I shall
point out several directions to proceed before I end.
Cartesian closed categories, as can be seen from the definition,
serve as the semantics of the simply typed lambda calculus.
You might not be able to figure out the details at once,
but you should see that there is a plausible connection here.
On the other hand, \emph{locally} cartesian closed categories
are central to the semantic interpretation of \emph{dependent
types}. Type dependency is, fundamentally, a way of expressing
fiber spaces; working with dependent types amounts to
making fiberwise constructions. The classical reference for this
is \cite{seely}.
Although I did not mention any topology in the text, fiber
spaces ultimately came from topology. And it is the fact that
there is a notion of ``neighbourhoodness'' between fibers
that makes them important --- otherwise they are just random
sets.
Going further in this direction,
the adjunction \(f^* \dashv f_*\) (with \(f^*\) preserving finite
limits) is called a \textbf{geometric morphism} in the language of
topos theory. If it has further adjoints, it becomes ``smoother'' in
the geometric sense. Such morphisms play a central role in topos
theory. More can be read in \cite{sketches}.
\section{Introduction}
Quantum photonic experiments can generally be described as preparing quantum states of light, evolving them through linear optical interferometers, and detecting the output photons.
While the most common types of photonic states, Fock states and Gaussian states, can be routinely prepared via spontaneous processes in optical non-linearities or (artificial) atomic systems and processed with high-fidelity linear optical components~\cite{flamini2018}, photon number detection is typically approximated via the use of threshold photon detectors.
Threshold detection, i.e. a measurement distinguishing only between vacuum and the presence of one or more photons, is widely available, e.g. via high-efficiency superconducting nanowires~\cite{reddy2020superconducting} or room-temperature avalanche photodiodes~\cite{warburton2009free}, making it the standard measurement apparatus in quantum photonics.
Its use in experiments, represented in Fig.~\ref{concept figure}, encompasses many areas of quantum research, including demonstrations of quantum advantages, e.g. in computation~\cite{zhong2021phase}, measurement sensitivity~\cite{slussarenko2017unconditional}, and loophole-free tests of non-locality~\cite{shalm2015strong}.
However, threshold detection only provides a meaningful approximation of the desired Fock basis measurement projectors in the regime of low mean photon numbers per mode.
On the other hand, as technology progresses, mean photon numbers increase and higher fidelities are demanded~\cite{zhong2021phase}, making this approximation less appropriate.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{click_fig.pdf}
\caption{Types of typical quantum photonic experiments which can be modelled using the results of this work.
Threshold detection statistics of Gaussian states (highlighted in red) formed by squeezing, displacement and linear optics are captured by the loop Torontonian matrix function.
Displacement can be generated using coherent states from lasers,
squeezing can come from nonlinear processes such as Spontaneous Parametric Down Conversion (SPDC) or Spontaneous Four-Wave Mixing (SFWM).
Linear optics can be implemented in a variety of platforms, including bulk and integrated optics.
States created by the linear optical interference of Fock states (highlighted in blue), as generated by (artificial) atoms, lead to threshold detection probabilities given by the Bristolian matrix function.}
\label{concept figure}
\end{figure}
To circumvent this issue, experiments can be described directly using the output statistics of threshold detection instead of its photon number resolving approximation.
However, despite the wide adoption of such systems, there are in general no closed form expressions in the literature for computing measurement probabilities of threshold detectors.
In fact, while an expression for the threshold detection of zero-displaced Gaussian states is known, given by the Torontonian matrix function~\cite{quesada2018gaussian}, no analogous expressions exist for other commonly used states, e.g. Fock or displaced Gaussian states.
For example, for Fock states with fixed photon number, threshold probabilities could be exactly calculated by summing over all possible output states which lead to the given threshold detector outcome.
However, this method requires calculating a number of probabilities scaling combinatorially with the number of clicked detectors, rendering it impractical even for smaller-scale experiments~\citep{wang2018toward, wang2019boson, thekkadath2022experimental}.
New methods are required to describe quantum photonic technologies which use threshold detection.
Here, we provide such methods by developing a unified picture to compute threshold statistics for most quantum photonic states of experimental interest.
As described in Table~\ref{table}, this is achieved by introducing two new matrix functions, the \textit{Bristolian} and the \textit{loop Torontonian}, for threshold statistics with Fock and displaced Gaussian states, respectively, and demonstrating close connections between them and to other existing matrix functions.
The developed tools provide exact simulation, design, and analysis methods for current~\cite{bentivegna2015, wang2018toward, wang2019boson, paesani2019, zhong2021phase, thekkadath2022experimental} and
future quantum photonic systems that use threshold detection.
We wish to highlight the distinction between computing probabilities, known as \textit{strong simulation}, which we focus on in this work, and drawing samples from a probability distribution, known as \textit{weak simulation}~\citep{van2010classical}.
These tasks often have very different complexity.
For example, using methods from Ref.~\cite{bulmer2021boundary}, we can sample threshold detector outcomes without ever calculating a threshold detection probability.
\begin{table}[t]
\begin{tabular}{c|cc}
\hline \hline
\multirow{2}{*}{\textbf{State}} & \multicolumn{2}{c}{\textbf{Detector}} \\
& number resolving & threshold \\ \hline
Fock & permanent & Bristolian* \\
zero-mean Gaussian & Hafnian & Torontonian \\
displaced Gaussian & loop Hafnian & loop Torontonian* \\ \hline \hline
\end{tabular}
\caption{Matrix functions for the calculation of measurement probabilities in quantum photonics. (*)-symbol is used to indicate functions which are introduced in this work.}
\label{table}
\end{table}
\section{Threshold detection statistics from vacuum statistics}
Threshold detectors are described by the measurement operators
\begin{subequations}
\begin{align}
\hat{\Pi}_j^{(0)} &= \ketbra{0_j}{0_j}, \\
\hat{\Pi}_j^{(1)} &= \sum_{k=1}^\infty \ketbra{k_j}{k_j} = \mathbb{I} - \ketbra{0_j}{0_j} \label{click_op},
\end{align}
\end{subequations}
for vacuum (0) and click (1) outcomes on a mode described by label $j$. We use $\ket{0}$ to denote the vacuum state of an optical mode, $\ket{k} = (\hat{a}^\dagger)^k \ket{0}/ \sqrt{k!}$ for Fock states of the optical mode, and $\mathbb{I}$ is the identity operator (we will always assume its dimension to be the same as the other operators appearing in the equation).
We write the outcome of $M$ threshold detectors, labelled with $ j\in[M] = \{1,2,\dots,M\}$, using a length-$M$ bit-string $\d$, where the $j$th element gives the measurement outcome of the $j$th mode.
Defining a set of modes which clicked, $C = \{j\in [M] \ |\ d_j=1\}$, and a set for modes with the vacuum outcome, $V = \{j\in [M]\ |\ d_j =0\}$, we can write the multimode measurement operator as
\begin{align}
\hat{\Pi}^{(\d)} & = \bigotimes_{j=1}^M \hat{\Pi}^{(d_j)}_j = \bigotimes_{j \in C} \left(\mathbb{I} - \ketbra{0_j}{0_j}\right)
\bigotimes_{k \in V} \ketbra{0_k}{0_k},
\end{align}
which can be rearranged to give
\begin{align}
\hat{\Pi}^{(\d)} = \ketbra{\vec{0}_V}{\vec{0}_V}
\sum_{Z \in P(C)} (-1)^{|Z|} \ketbra{\vec{0}_{Z}}{\vec{0}_{Z}}.
\label{incexc}
\end{align}
Here, we use $P(C)$ to denote the powerset of $C$ and $|Z|$ for the number of elements in a set $Z$.
$\ketbra{\vec{0}_V}{\vec{0}_V}$ describes the vacuum projector in all the vacuum outcome modes and $\ketbra{\vec{0}_{Z}}{\vec{0}_{Z}}$ describes the vacuum projector in all modes in a subset $Z\subseteq C$.
Eq.~\eqref{incexc} indicates that to calculate the threshold detection probabilities for any state it is sufficient to calculate marginal vacuum probabilities, which are then combined in the inclusion/exclusion sum.
Using this measurement operator and the Born rule $p(\d) = \tr(\hat{\Pi}^{(\d)} \rho)$ on some state $\rho$, we find:
\begin{align}
p(\vec{d}) =
\sum_{Z \in P(C)} (-1)^{|Z|} p(\d_V=\vec{0}, \d_Z=\vec{0})
\label{thresh_vac},
\end{align}
where $p(\d_V=\vec{0}, \d_Z=\vec{0})= \tr(\ketbra{\vec{0}_V}{\vec{0}_V}\otimes\ketbra{\vec{0}_{Z}}{\vec{0}_{Z}} \rho)$.
This formula provides our starting point for deriving general expressions for threshold detection statistics.
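For illustration, Eq.~\eqref{thresh_vac} can be transcribed directly into code once a routine for marginal vacuum probabilities is available; in the following minimal Python sketch, the callback \texttt{vacuum\_prob} is a placeholder for any of the state-specific expressions derived below, and the $2^{|C|}$ cost in the number of clicked detectors is explicit:
\begin{verbatim}
from itertools import chain, combinations

def powerset(s):
    # all subsets of s, from the empty set up to s itself
    s = list(s)
    return chain.from_iterable(combinations(s, k)
                               for k in range(len(s) + 1))

def threshold_prob(d, vacuum_prob):
    # Inclusion/exclusion sum of Eq. (thresh_vac).
    # vacuum_prob(W) must return the marginal probability of
    # vacuum in every mode listed in W (others marginalised).
    C = [j for j, dj in enumerate(d) if dj == 1]  # clicked modes
    V = [j for j, dj in enumerate(d) if dj == 0]  # vacuum modes
    return sum((-1) ** len(Z) * vacuum_prob(tuple(V) + Z)
               for Z in powerset(C))
\end{verbatim}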
\section{Marginal vacuum probabilities from the photon number probability generating function}
For an $M$-mode linear optical interferometer, described by an $M \times M$ matrix $U$ and the operator $\hat{\mathcal{U}}$, the creation operators are transformed as
\begin{align}
\hat{\mathcal{U}} \hat{a}^\dagger_j \hat{\mathcal{U}}^\dagger = \sum_{k=1}^M U_{kj} \hat{a}^\dagger_k.
\end{align}
Considering an input state $\ket{\Phi_0}$, the output photon number probability distribution is then
\begin{align}
p(\vec{m}) = \left| \bra{\vec{m}}\hat{\mathcal{U}}\ket{\Phi_0} \right|^2
\label{fock_probs}
\end{align}
where $\vec{m}$ is a length-$M$ list describing the photon number in each mode at the output of the interferometer, and
$\ket{\vec{m}} = \bigotimes_{j=1}^M \left((\hat{a}^\dagger_j)^{m_j}/\sqrt{m_j!}\right)\ket{0}$.
Following Ref.~\cite{ivanov2020complexity}, considering the Fourier transform of the probability distribution of photon number basis measurements we define the characteristic function
\begin{align}
\chi(\vec{\phi}) = \sum_{\vec{m}} \exp \left(\mathrm{i} \sum_{j=1}^M \phi_j m_j \right) p(\vec{m})
\label{char_def}
\end{align}
which, with some manipulation (see Appendix~\ref{deriv}), can be expressed as
\begin{align}
\chi(\vec{\phi}) = \bra{\Phi_0} \hat{\mathcal{U}}^\dagger \ \hat{\mathcal{U}}_{\vec{\phi}}\ \hat{\mathcal{U}} \ket{\Phi_0}
\label{char}
\end{align}
where $\hat{\mathcal{U}}_{\vec{\phi}}$ is the operator given by the evolution due to the linear optical transformation
\begin{equation}
U_{\vec{\phi}}=\bigoplus_{j=1}^M \exp (\mathrm{i} \phi_j).
\label{eq:uphi}
\end{equation}
We can transform this into a probability generating function, $G$, using the substitution $x_j = \exp(\mathrm{i} \phi_j)$:
\begin{align}
G(\vec{x}) = \sum_{\vec{m}} \left( \prod_{j=1}^M x_j^{m_j} \right) p(\vec{m}).
\label{prob_gen}
\end{align}
The function $G(\vec{x})$ has the following useful properties.
To marginalise the $j$th mode, we simply set $x_j=1$.
If we set $x_j=0$, this gives us the probability for $m_j=0$.
Therefore, if we want to calculate the probability that some subset of the modes, $V$, measure vacuum and we marginalise over all other modes, $B$, we can evaluate
\begin{align}
p(\vec{m}_V = \vec{0}) = G(\vec{x}_V = \vec{0}, \vec{x}_B = \vec{1}),
\label{vac_gen}
\end{align}
where $\vec{0}$ ($\vec{1}$) is a vector with 0 (1) in all entries.
$G(\vec{x})$ is the probability distribution generating function, so by taking derivatives of $G(\vec{x})$, we can find information about the photon number basis probability distribution~\cite{ivanov2020complexity}.
By using the expression for the characteristic function in Eq.~\eqref{char}, we can see that this amounts to calculating the scattering amplitude of $\ket{\Phi_0}$ to itself, through a linear optical interferometer described by the transformation $U^\dagger U_{\vec{\phi}} U$.
From again using the substitution $x_j = \exp(\mathrm{i} \phi_j)$ in Eq.~\eqref{eq:uphi}, we see that $U_{\vec{\phi}}$ physically corresponds to either zero transmission for modes in $V$, or unit transmission for modes in $B$.
As we show in the next sections, we can use this, in conjunction with Eq.~\eqref{thresh_vac} to calculate threshold detection probabilities for all the experimental scenarios outlined in Fig.~\ref{concept figure}.
\section{Fock state inputs}
Recall that the scattering amplitudes of Fock states evolved through a lossless interferometer are given by the permanent matrix function~\cite{scheel2004permanents}
\begin{align}
\bra{\vec{m}} \hat{\mathcal{U}} \ket{\vec{n}} = \frac{\per(U_{\vec{m}, \vec{n}})}{\sqrt{\prod_{j=1}^M n_j! m_j!}}
\label{per}
\end{align}
where $U_{\vec{m}, \vec{n}}$ is constructed from $U$ by repeating its $j$th row $m_j$ times and its $j$th column $n_j$ times for all $j \in [M]$.
Therefore if we have an $N$-photon input Fock state, $\ket{\Phi_0}=\ket{\vec{n}}$, we can use Eq.~\eqref{char}, Eq.~\eqref{prob_gen} and Eq.~\eqref{per} to write
\begin{align}
G(\vec{x}) = \frac{\per\left([U^\dagger U_{\vec{x}} U]_{\vec{n},\vec{n}} \right)}{\prod_{j=1}^M n_j !}
\label{fock_gen}
\end{align}
where $U_{\vec{x}}$ is formed like $U_{\vec{\phi}}$, but with diagonal matrix elements: $[U_{\vec{x}}]_{jj} = x_j$.
For this equation to be valid, we must have a lossless unitary transformation.
However, we are free to marginalise over modes by allowing elements of $\vec{x}$ to be set to 1 for any mode we wish to marginalise over, including any loss modes, as shown in Eq.~\eqref{vac_gen}.
Because Eq.~\eqref{fock_gen} provides us with a closed form expression for marginal vacuum probabilities, and because
Eq.~\eqref{thresh_vac} shows us that marginal vacuum probabilities are sufficient to calculate threshold detection probabilities, we can use this to derive a matrix function for calculating threshold detection probabilities of Fock states.
For more generality, we first consider a linear optical transformation with losses, described by an $M_\text{out} \times M_\text{in}$ matrix $T$, with singular values upper bounded by $1$~\citep{garcia2019simulating}.
In Appendix~\ref{der_brs}
we show that if the input state is an $M_\text{in}$-mode Fock state, $\vec{n}$, then by combining Eq.~\eqref{thresh_vac} with Eq.~\eqref{fock_gen},
we can calculate the threshold detection probability of the outcome described by an $M_\text{out}$-length bit-string $\d$ as:
\begin{align}
p(\d) = \frac{\brs\left(T_{\d,\vec{n}}, E(T)_{\vec{n},\vec{n}} \right)}{\prod_{j=1}^{M_\text{in}} n_j !}.
\end{align}
Here we have introduced a matrix function, the \textit{Bristolian}, defined as
\begin{multline}
\brs\left(A, E \right) = \\
\sum_{Y \in P([m])} (-1)^{m - |Y|} \per \left(
[A_Y]^\dagger A_Y + E
\right),
\label{brs_def}
\end{multline}
where $A$ is an $m \times n$ matrix and $E$ is an $n \times n$ matrix.
$A_Y$ denotes selecting the rows of $A$ according to the elements of $Y$, and $[m] = \{1,2,\dots,m\}$.
We have also defined a matrix which accounts for the mixing with vacuum in the environment modes
\begin{align}
E(T) &= \mathbb{I} -T^\dagger T .
\end{align}
Our naming of the Bristolian is inspired by the convention established by the Hafnian and Torontonian matrix functions, which are named after the cities of their discovery.
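For validation on small instances, Eq.~\eqref{brs_def} can also be transcribed directly; the Python sketch below uses a brute-force permanent (factorial cost, so only suitable for a few photons) and is meant as a reference implementation rather than an efficient one:
\begin{verbatim}
import numpy as np
from itertools import chain, combinations, permutations

def per(M):
    # brute-force permanent; per of a 0 x 0 matrix is 1
    n = M.shape[0]
    return sum(np.prod([M[i, s[i]] for i in range(n)])
               for s in permutations(range(n)))

def brs(A, E):
    # Bristolian of Eq. (brs_def): A is m x n, E is n x n
    m = A.shape[0]
    subsets = chain.from_iterable(combinations(range(m), k)
                                  for k in range(m + 1))
    total = 0
    for Y in subsets:
        AY = A[list(Y), :]  # rows of A indexed by Y
        total += (-1) ** (m - len(Y)) * per(AY.conj().T @ AY + E)
    return total
\end{verbatim}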
By noticing that $\mathbb{I} - T^\dagger T$ gives a zero matrix when $T$ is unitary, the $\brs$ function can be simplified when $T$ is a unitary matrix $U$, only requiring the rows of $U$ which correspond to modes with a detector click, providing
\begin{align}
p(\d) = \frac{\ubrs\left( U_{\d,\vec{n}} \right)}{\prod_{j=1}^M n_j!}.
\end{align}
Here we defined the \textit{Unitary Bristolian} acting on an $m \times m$ matrix, $A$, as
\begin{align}
\ubrs(A) = \sum_{Y \in P([m])} (-1)^{m - |Y|}
\per\left([A_Y]^\dagger A_Y \right).
\label{ubrs}
\end{align}
\section{Displaced Gaussian state inputs}
Gaussian states are the set of states that have a Gaussian characteristic function. A Gaussian state $\rho$ is uniquely characterized by its vector of means with entries
\begin{align}
\vec{\alpha}_i = \text{tr}\left[ \rho \hat{\vec{\zeta}} _i \right],
\end{align}
and its Husimi covariance matrix with entries
\begin{align}
\Sigma_{i,j} = \tfrac12 \text{tr}\left( \left[\hat{\vec{\zeta}} _i \hat{\vec{\zeta}} _j^\dagger + \hat{\vec{\zeta}} _j^\dagger \hat{\vec{\zeta}} _i \right] \rho \right) - \vec{\alpha}_i \vec{\alpha}_j^* + \tfrac12 \delta_{i,j},
\end{align}
where we have used a vector of creation and annihilation operators
\begin{align}\label{mode_ordering}
\hat{\vec{\zeta}} = \left(\hat{a}_1,\ldots,\hat{a}_{M}, \hat{a}_1^\dagger,\ldots,\hat{a}_{M}^\dagger \right).
\end{align}
The Husimi function $Q(\vec{r}) = \bra{\vec{r}} \rho \ket{\vec{r}}$
maps displacement vectors, $\vec{r}$, to probabilities, so to calculate vacuum probabilities we can evaluate the Husimi function at the origin.
Noting that we can marginalise over modes by deleting all the corresponding elements of $\Sigma$ and $\vec{\alpha}$, we obtain~\cite{Serafini_2017}
\begin{align}
p(\vec{m}_V = \vec{0}) &= Q(\vec{r}_V = \vec{0}) \\
&=\frac{\exp\left(-\frac{1}{2}\vec{\alpha}_V^\dagger[\Sigma_{V V}]^{-1}\vec{\alpha}_V \right)}{\sqrt{\det(\Sigma_{V V})}}.
\label{eq:gaussvac}
\end{align}
The notation $\Sigma_{VV}$ and $\vec{\alpha}_V$ differs slightly here from the previous section, as now there are two basis vectors for each mode of our system, corresponding to each mode's $\hat{a}$ and $\hat{a}^\dagger$ operator.
We form $\Sigma_{VV}$ by selecting both rows/columns of $\Sigma$ which correspond to each element of $V$ and we form $\vec{\alpha}_V$ by selecting both elements of $\vec{\alpha}$ corresponding to each element of $V$.
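For concreteness, Eq.~\eqref{eq:gaussvac} amounts to a few lines of Python, where the only subtlety is the index bookkeeping (selecting both the $\hat{a}$ and $\hat{a}^\dagger$ entries of each mode, following the ordering of Eq.~\eqref{mode_ordering}); function and variable names here are illustrative:
\begin{verbatim}
import numpy as np

def vacuum_prob_gaussian(Sigma, alpha, V, M):
    # Marginal vacuum probability of Eq. (eq:gaussvac).
    # Sigma: (2M, 2M) Husimi covariance, alpha: length-2M means,
    # ordered as (a_1..a_M, a^dag_1..a^dag_M); V lists the
    # (0-indexed) modes required to give the vacuum outcome.
    idx = list(V) + [j + M for j in V]  # both entries per mode
    S = Sigma[np.ix_(idx, idx)]
    a = alpha[idx]
    num = np.exp(-0.5 * a.conj() @ np.linalg.solve(S, a))
    return (num / np.sqrt(np.linalg.det(S))).real
\end{verbatim}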
We can use this to immediately arrive at a threshold detection probability for displaced Gaussian states using Eq.~\eqref{thresh_vac}.
However, here we must invert and compute determinants for square matrices of size $2(|V|+|Z|)$.
It would be preferable if we could reduce these to matrices of size $2|Z|$.
It would also be helpful conceptually to have a formula which can be connected to other relevant matrix functions, the Torontonian~\cite{quesada2018gaussian} and the loop Hafnian~\cite{quesada2019franck}.
Therefore, it is of interest to write this probability in terms of
\begin{align}
O = \mathbb{I} - \Sigma^{-1} \text{ and } \vec{\gamma} = (\Sigma^{-1}\vec{\alpha})^*.
\end{align}
In Appendix~\ref{ltor_derivation}, we show how Eq.~\eqref{thresh_vac} can be rearranged into the following:
\begin{align}\label{eq:prob_ltor}
p(\d) = p(\vec{0}) \ltor\left( O_{CC}, \vec{\gamma}_{C} \right),
\end{align}
where $C$ is given by the index of the elements of $\d$ where $d_j=1$, so $O_{CC}$ and $\vec{\gamma}_{C}$ are the matrix and vector formed by selecting the rows/columns of $O$ and elements of $\vec{\gamma}$ which correspond to modes which see a detector click.
$p(\vec{0})$ is the probability of detecting vacuum in all modes, and can be calculated using Eq.~\eqref{eq:gaussvac}.
We introduce the \textit{loop Torontonian}, which is defined as
\begin{multline}
\ltor\left( O, \vec{\gamma} \right) = \\
\sum_{Y \in P([m])} (-1)^{m-|Y|} \frac{\exp\left[ \tfrac12 \vec{\gamma}_Y^t [\mathbb{I} - O_{YY}]^{-1} \vec{\gamma}_Y^* \right]}{\sqrt{\det(\mathbb{I} - O_{YY})}},
\label{eq:ltor_def}
\end{multline}
where $O$ is a $2m \times 2m$ matrix and $\vec{\gamma}$ is a $2m$-length vector.
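As with the Bristolian, a direct (unoptimized) transcription of Eq.~\eqref{eq:ltor_def} makes the subset structure explicit; for a physical state the determinants below are positive, but we cast to complex for numerical safety:
\begin{verbatim}
import numpy as np
from itertools import chain, combinations

def ltor(O, gamma):
    # loop Torontonian, Eq. (eq:ltor_def); O: 2m x 2m, gamma: 2m
    m = O.shape[0] // 2
    subsets = chain.from_iterable(combinations(range(m), k)
                                  for k in range(m + 1))
    total = 0
    for Y in subsets:
        if not Y:            # empty subset: exp(0)/sqrt(1)
            total += (-1) ** m
            continue
        idx = list(Y) + [j + m for j in Y]  # both halves per mode
        I_O = np.eye(2 * len(Y)) - O[np.ix_(idx, idx)]
        g = gamma[idx]
        num = np.exp(0.5 * g @ np.linalg.solve(I_O, g.conj()))
        det = np.linalg.det(I_O) + 0j       # cast for safety
        total += (-1) ** (m - len(Y)) * num / np.sqrt(det)
    return total
\end{verbatim}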
\section{Connections between matrix functions}
In the limit of no displacement $\vec{\alpha} = \vec{\gamma} = \vec{0}$, the exponential factors in the numerator of Eq.~\eqref{eq:ltor_def} become 1 and thus $\ltor\left( O, \vec{0} \right) = \tor\left( O \right) $, where $\tor$ is the Torontonian function from Ref.~\cite{quesada2018gaussian}. One can show, using the scattershot construction~\citep{lund2014boson}, that the Torontonian and Bristolian are related via the following limit
\begin{multline}
\brs\left(T_{\vec{d},\vec{n}}, E(T)_{\vec{n},\vec{n}} \right) = \\ \lim_{\varepsilon \to 0 } (\varepsilon^{-2} - 1)^{N} \tor\left( O(\varepsilon)_{CC} \right),
\end{multline}
\begin{equation}
O(\varepsilon) = \varepsilon \begin{pmatrix}
0 & 0 & 0 & T \\
0 & \varepsilon E(T)^* & T^t & 0 \\
0 & T^* & 0 & 0\\
T^\dagger & 0 & 0 & \varepsilon E(T)
\end{pmatrix}.
\end{equation}
where $\vec{n}$ is a bitstring (implying that this identity is only valid for single-photon or vacuum inputs), $N = \sum_{i} \vec{n}_i $ and $C$ is the union of the labels of the modes in which single photons were input into the interferometer and the labels of the modes in which clicks are registered.
This relation is proven in Appendix~\ref{app:bris_to_tor}.
As we show in
Appendix~\ref{lhaf}, the loop Torontonian can also be used as a generating function for the loop Hafnian,
\begin{equation}
\lhaf(X O_{C C}, \vec{\gamma}_{C}) = \left.\frac{1}{\ell!} \frac{d^\ell}{d \eta ^{\ell}} \ltor\left( \eta O_{C C}, \sqrt{\eta} \vec{\gamma}_{C} \right) \right|_{\eta = 0}
\end{equation}
where $X = \left[\begin{smallmatrix} 0 & \mathbb{I} \\ \mathbb{I} & 0 \end{smallmatrix}\right] $ and $\ell = |C|$.
We use this to derive the trace formula for the loop Hafnian, the fastest known method for computing photon number resolved measurement probabilities on displaced Gaussian states~\cite{bjorklund2019faster, quesada2019franck, quesada2019simulating}.
Because the loop Hafnian of a bipartite graph is given by the matrix permanent~\cite{bjorklund2019faster}, all the matrix functions in Table~\ref{table} can be derived from the loop Torontonian.
We also see a connection between the Bristolian and the permanent when an $N$ photon Fock state results in $N$ threshold detector clicks.
In this case, each threshold detector must have seen exactly 1 photon, so we can describe the measurement operator of each threshold detector click as a single photon projector, which leads to describing the event with permanents, as given by Eq.~\eqref{per}.
In Appendix~\ref{brs_per},
we show this link directly by first describing the Unitary Bristolian for $N$ photon, $N$ click events as the permanent of an $N \times N \times N$ 3-tensor~\cite{tichy2015sampling}.
\section{Time complexities}
In Appendix \ref{app:time_comp}, we discuss the time complexities for the Bristolian and the loop Torontonian.
We find that, using the formulae presented in this work, the Bristolian, $\brs(A,E)$, has a time complexity of $\mathcal{O}(n2^{n+m})$ for an $m \times n$ matrix $A$ and $n \times n$ matrix $E$ and the loop Torontonian, $\ltor(O, \vec{\gamma})$, has time complexity of $\mathcal{O}(m^3 2^m)$ for a $2m \times 2m$ matrix $O$ and $2m$-length vector $\vec{\gamma}$.
For the loop Torontonian, this complexity can be reduced using a recursive strategy which exploits Cholesky decomposition~\cite{kaposi2021polynomial}.
We also believe that the Bristolian's time complexity can likely be reduced, and we leave this as an open problem.
\section{Improved accuracy of a threshold detection model}
To assess the improvements offered by using the correct description of threshold detection over the common approximation of single photon projective measurement, we present two representative examples.
By simulating the probability distribution for 100 different Haar random unitaries in lossy Boson Sampling experiments with 4-photon Fock state inputs on mode numbers from 4 to 12, we evaluate the total variation distance (TVD) between probability distributions from the exact model, which uses the Bristolian, and an approximate model, which uses a sum over matrix permanents, as discussed in
Appendix~\ref{app:brs_using_pers}.
Although the TVD is reduced for higher numbers of modes, the approximate distribution always deviates from the correct one by between 5\% and 12\%.
To test the loop Torontonian, we use experimental data from Ref.~\cite{thekkadath2022experimental}. We see that for the 2-photon distribution at different levels of displacement, the loop Torontonian improves the match to the experiment by up to 16\%.
See Appendix \ref{app:accuracy}
for more detail.
\section{Conclusion}
The new methods we have derived, in particular the Bristolian and the loop Torontonian functions, are useful tools to model and analyse a wide variety of quantum photonic experiments and applications.
For example, the Bristolian is relevant to applications including linear-optical quantum computing~\cite{knill2001scheme, kok2007linear, rudolph2017optimistic}, Boson Sampling~\cite{aaronson2011computational, wang2019boson} and quantum communications~\cite{you2021quantum}, commonly based on threshold detection.
The loop Torontonian can be applied to applications including
Gaussian state reconstruction~\cite{thekkadath2022experimental},
measuring graph similarity~\cite{schuld2020measuring},
calculations of vibronic spectra of molecules~\cite{huh2015boson}, and quantum metrology~\cite{afek2010high}, and has already been applied for evaluating proposed quantum communication protocols~\footnote{The initial inspiration for us to derive the loop Torontonian came from the need to calculate threshold detection statistics for the quantum communication protocols proposed in Ref.~\cite{bacco2021proposal}}. %
To facilitate their use, we provide example calculations of common experimental scenarios in Appendix~\ref{sec:examples}
using the Bristolian and loop Torontonian, and have made available implementations in the open-source Python package \texttt{The Walrus}~\cite{gupt2019walrus}.
Details for the software implementation are provided in Appendix~\ref{app:code}.
The connections that we have shown between the Bristolian and the permanent (Appendix ~\ref{brs_per}),
the loop Torontonian and the loop Hafnian (Appendix~\ref{lhaf}),
and the Bristolian and the Torontonian (Appendix~\ref{app:bris_to_tor})
indicate that these functions can provide a useful mathematical and conceptual tool for a deeper understanding of bosonic statistics in photonic experiments.
\section*{Acknowledgements}
JFFB and RSC acknowledge support from EPSRC (EP/N509711/1, EP/LO15730/1).
NQ acknowledges support from the Minist\`ere de l'\'Economie et de l'Innovation du Qu\'ebec and the Natural Sciences and Engineering Research Council of Canada.
SP acknowledges funding from the Cisco University Research Program Fund nr. 2021-234494.
We thank G. S. Thekkadath for useful discussions and sharing experimental data from Ref.~\cite{thekkadath2022experimental}.
NQ thanks S. Duque Mesa, B. Lanthier, D. Leclerc, B. Turcotte, and J. Zhao for valuable discussions.
We thank G. Morse for implementing the generalisation of the recursive Torontonian formula~\cite{kaposi2021polynomial} to the loop Torontonian, see \href{https://github.com/XanaduAI/thewalrus/pull/332}{pull request (332)} to \texttt{The Walrus}~\cite{gupt2019walrus}.
\section{Introduction}
Functional ultrasound (fUS) is a neuroimaging technique that indirectly measures brain activity by detecting changes in cerebral blood flow (CBF) and volume (CBV) \citep{fusrbc}. The fUS signal is related to brain activity through a process known as neurovascular coupling (NVC). When a brain region becomes active, it calls for an additional supply of oxygen-rich blood, which creates a hemodynamic response (HR), i.e., an increase of blood flow to that region. NVC describes this interaction between local neural activity and blood flow \citep{b2}. Functional ultrasound is able to measure the HR because of its sensitivity to fluctuations in blood flow and volume \citep{b1}. In the past decade, fUS has been successfully applied in a variety of animal and clinical studies, showing the technique's potential for detection of sensory stimuli, as well as complex brain states and behavior \citep{fus_npixels}. These include studies on small rodents \citep{param1,setup,cube1}, birds \citep{rau} and humans \citep{sadaf,humanfus,humanfus2}.
Understanding the HR has been an important challenge not only for fUS \citep{b3}, but also for several other established functional neuroimaging modalities, including functional magnetic resonance imaging (fMRI) \citep{b4} and functional near-infrared spectroscopy (fNIRS) \citep{b5}. The HR can be characterized by a function representing the impulse response of the neurovascular system, known as the hemodynamic response function (HRF) \citep{b6}. To form a model for the HR, the HRF gets convolved with an input signal representing the experimental paradigm (EP), which is expressed as a binary vector that shows the on- and off- times of a given stimulus. However, not all brain activity can be explained via such predefined and external stimuli \citep{b13}. Indeed, even when no stimulus is presented, there can still be spontaneous, non-random activity in the brain, reported to be as large as the activity evoked by intentional stimulation \citep{Gilbert}. Therefore, the input signals that trigger brain activity should be generalized beyond merely the EP. This issue has been addressed by \citep{b13,actinduc2,actinduc}, where the authors have defined the term \emph{activity-inducing} signal, which, as the name suggests, comprises any input signal that induces hemodynamic activity. We will refer to activity-inducing signals as \emph{source signals} in the rest of this paper, which steers the reader to broader terminology not only used in biomedical signal processing, but also in acoustics and telecommunications \citep{sources}, and emphasizes that recorded output data are \emph{sourced} by such signals.
An accurate estimation of the HRF is crucial to correctly interpret both the hemodynamic activity itself and the underlying source signals. Furthermore, the HRF has shown potential as a biomarker for pathological brain functioning, examples of which include obsessive-compulsive disorder \citep{hrfocd}, mild traumatic brain injury \citep{hrfinjury}, Alzheimer's disease \citep{hrfdementia}, epilepsy \citep{eegfmri} and severe psychosocial stress \citep{hrfstress}. While HRFs can as well be defined in nonlinear and dynamic frameworks with the help of Volterra kernels \citep{volterra}, linear models have particularly gained popularity due to the combination of their remarkable performance and simplicity. Several approaches have been proposed in the literature which employ linear modelling for estimating the HRF. The strictest approach assumes a constant a priori shape of the HRF, i.e. a mathematical function with fixed parameters, and is only concerned with finding its scaling (the activation level). The shape used in this approach is usually given by the canonical HRF model \citep{b7}. As such, this approach does not incorporate HRF variability, yet the HRF is known to change significantly across subjects, brain regions and triggering events \citep{b8, hrfchange1, hrfchange2}. A second approach is to estimate the parameters of the chosen shape function, which provides a more flexible and unbiased solution \citep{b3}. Alternatively, HRF estimation can be reformulated as a regression problem by expressing the HRF as a linear combination of several basis functions (which are often chosen to be the canonical HRF and its derivatives). This approach is known as the general linear model (GLM) \citep{b9}. Finally, it is also possible to apply no shape constraints on the HRF, and predict the value of the HRF distinctly at each time point. This approach suffers from high computational complexity and can result in arbitrary or physiologically meaningless forms \citep{b10}.
Note that the majority of studies which tackle HRF estimation presume that the source signal is known and equal to the EP, leaving only one unknown in the convolution: the HRF \citep{neuralknown}. However, as mentioned earlier, a functional brain response can be triggered by more sources than the EP alone. These sources can be extrinsic, i.e., related to environmental events, such as unintended background stimulation or noise artefacts. They might also be intrinsic sources, which can emerge spontaneously during rest \citep{rest}. Under such complex and multi-causal circumstances, recovering the rather 'hidden' source signal(s) can be of interest. Moreover, even the EP itself can be much more complex than what a simple binary pattern allows for. Indeed, the hemodynamic response to, for instance, a visual stimulus, can vary greatly depending on its parameters, such as its contrast \citep{param1} or frequency \citep{param2}.
In contrast to the aforementioned methods, where the goal was to estimate HRFs from a known source signal, there have also been attempts to predict the sources by assuming a known and fixed HRF \citep{b12} \citep{b13}. However, these methods fall short of depicting the HRF variability.
To sum up, neither the sources nor the HRF are straightforward to model, and as such, when either is assumed to be fixed, it can easily lead to misspecification of the other. Therefore, we consider the problem of jointly estimating the source signals and HRFs from multivariate fUS time-series. This problem has been addressed by \citep{b14}, \citep{b15} and \citep{b16}. In \citep{b14}, it is assumed that the source signal (here considered as the neural activity) lies in a high frequency band compared to the HRF, and can thus be recovered using homomorphic filtering. On the other hand, \citep{b15} first estimates a spike-like source signal by thresholding the fMRI data and selecting the time points where the response begins, and subsequently fits a GLM using the estimated source signal to determine the HRF. Both of the mentioned techniques share the limitation of being univariate methods: although they analyze multiple regions and/or subjects, the analysis is performed separately on each time series, thereby ignoring any mutual information shared amongst biologically relevant ROIs.
Recently, a multivariate deconvolution of fMRI time series has been proposed in \citep{b16}. The authors proposed an fMRI signal model, where neural activation is represented as a low-rank matrix - constructed by a certain (low) number of temporal activation patterns and corresponding spatial maps encoding functional networks - and the neural activation is linked with the observed fMRI signals via region-specific HRFs. The main advantage of this approach is that it allows whole-brain estimation of HRF and neural activation. However, all HRFs are defined via the dilation of a presumed shape, which may not be enough to capture all possible variations of the HRF, as the width and peak latency of the HRF are coupled into a single parameter. Moreover, the estimated HRFs are region-specific, but not activation-specific. Therefore, the model cannot account for variations in the HRF due to varying stimulus properties. Yet, the length and intensity of stimuli appear to have a significant effect on HRF shape even within the same region, as observed in recent fast fMRI studies \citep{stimhrf}.
In order to account for the possible variations of the HRF for both different sources and regions, we model the fUS signal in the framework of convolutive mixtures, where multiple input signals (sources) are related to multiple observations (measurements from a brain region) via convolutive mixing filters. In the context of fUS, the convolutive mixing filters stand for the HRFs, which are unique for each possible combination of sources and regions, allowing variability across different brain areas and triggering events. In order to improve identifiability, we make certain assumptions, namely that the shape of the HRFs can be parametrized and that the source signals are uncorrelated. Considering the flexibility of tensor-based formulations for the purpose of representing such structures and constraints that exist in different modes or factors of data \citep{b19}, we solve the deconvolution by applying block-term decomposition (BTD) on the tensor of lagged measurement autocorrelation matrices.
While in our previous work \citep{b20} we had considered a similar BTD-based deconvolution, this paper presents several novel contributions. First, we improve the robustness of the algorithm via additional constraints and a more sophisticated selection procedure for the final solution from multiple optimization runs. We also present a more detailed simulation study considering a large range of possible HRF shapes. Finally, instead of applying deconvolution on a few single pixel time-series, we now focus on fUS responses of entire ROIs, as determined by spatial independent component analysis (ICA). The selected ROIs represent three crucial anatomical structures within the mouse brain's colliculo-cortical, image-forming visual pathway: the lateral geniculate nucleus (LGN), the superior colliculus (SC) and the primary visual cortex (V1). These regions are vision-involved anatomical structures of importance \citep{huberman, seabrook}, which can be captured together well in a minimal number of coronal and sagittal slices \citep{bregma}, and have proven to consistently yield clear responses using fUS imaging \citep{param1,mace_visual}.
The vast majority of information about visual stimuli is conveyed via the retinal ganglion cells (RGCs) to the downstream subcortical targets LGN and SC, before being relayed to V1.
The LGN and SC are known to receive both similar and distinct visual input information from RGCs \citep{sclgn}. The asymmetry in information projected by the mouse retina to these two downstream targets is reflected in the output of these areas \citep{Ellis}.
Our goal is to compare the hemodynamic activity in these regions by deconvolving the CBF/CBV changes recorded with fUS in response to visual stimulus.
The rest of this paper is organized as follows. First, we describe our data model and the proposed tensor-based solution for deconvolution. Next, we describe the experimental setup and data acquisition steps used for fUS imaging of a mouse subject. This is followed by the deconvolution results, which are presented in two parts: \emph{(i)} numerical simulations, and \emph{(ii)} results on real fUS data. Next, in the discussion, we review the highlights of our modelling and results, and elaborate on the neuroscientific relevance of our findings. Finally, we state several future extensions and conclude our paper.
\section{Signal Model}
Naturally, fUS images contain far more pixels than the number of anatomical or functional regions. We therefore expect
certain groups of pixels to show similar signal fluctuations, and we consider the fUS images as parcellated in space into several regions. Consequently, we represent the overall fUS data as an $M \times N$ matrix, where each of the $M$ rows contain the average pixel time-series within a region-of-interest (ROI), and $N$ is the number of time samples.
Assuming a single source signal, an individual ROI time-series $y(t)$ can be written as the convolution between the HRF $h(t)$ and the input source signal $s(t)$ as:
\begin{equation}
\label{eq:singleconv}
y(t) = \sum_{l=0}^L h(l)s(t-l)
\end{equation}
where $L+1$ gives the HRF filter length.
However, a single ROI time-series may be affected by a number of ($R$) different source signals. Each source signal $s_r(t)$ may elicit a different HRF, $h_r(t)$. Therefore, the observed time-series is the summation of the effect of all underlying sources:
\begin{equation}
\label{eq:ins_ica}
y(t) = \sum_{r=1}^R \sum_{l=0}^L h_r(l)s_r(t-l).
\end{equation}
Finally, extending our model to multiple ROIs, where each ROI may have a different HRF, we arrive to the following multivariate convolutive mixture formulation:
\begin{equation}
\label{eq:convolutive}
y_m(t) = \sum_{r=1}^R \sum_{l=0}^L h_{mr}(l)s_r(t-l)
\end{equation}
where $h_{mr}(l)$ is the convolutive mixing filter, belonging to the ROI $m$ and source $r$ \citep{b21}.
In the context of fUS, the sources that lead to the time-series can be task-related ($T$), such as the EP, or artifact-related ($A$). The task-related sources are convolved with an HRF, whereas the artifact-related sources are directly additive on the measured time-series \citep{b22}. Yet, the strength of the effect that an artifact source exerts on a region should still depend on the artifact type and the brain region. To incorporate this in Eq. \ref{eq:convolutive}, each $h_{mr}(l)$ with $r \in A$ should correspond to a scaled (by $a_{mr}$) unit impulse function. Thus, we rewrite Eq. \ref{eq:convolutive} as:
\begin{align}
\label{eq:convolutive2}
y_m(t) &= \sum_{r\in T} \sum_{l=0}^L h_{mr}(l)s_r(t-l)+\sum_{r\in A} \sum_{l=0}^{L} a_{mr} \delta(l)s_r(t-l) \nonumber \\
&= \sum_{r\in T} \sum_{l=0}^L h_{mr}(l)s_r(t-l)+\sum_{r\in A} a_{mr} s_r(t).
\end{align}
We aim at solving this deconvolution problem to recover the sources ($s_r,r\in T$) and HRFs ($h_{mr}, r\in T$) of interest separately at each ROI $m$.
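For illustration, the forward model of Eq.~\eqref{eq:convolutive2} can be simulated in a few lines of Python (the analysis scripts released with this paper are in MATLAB; this NumPy sketch, with illustrative variable names, is only meant to make the model concrete):
\begin{verbatim}
import numpy as np

def roi_timeseries(task_sources, hrfs, artifact_sources, a):
    # Forward model of Eq. (convolutive2) for M regions.
    # task_sources: list of length-N arrays s_r, r in T
    # hrfs[m][r]: (L+1)-tap HRF from task source r to region m
    # artifact_sources: list of length-N arrays s_r, r in A
    # a[m][r]: scaling of artifact source r in region m
    M, N = len(hrfs), len(task_sources[0])
    Y = np.zeros((M, N))
    for m in range(M):
        for r, s in enumerate(task_sources):
            Y[m] += np.convolve(s, hrfs[m][r])[:N]  # causal part
        for r, s in enumerate(artifact_sources):
            Y[m] += a[m][r] * s
    return Y
\end{verbatim}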
\section{Proposed Method}
In this section, we will present the steps of the proposed tensor-based deconvolution method. We will first introduce how deconvolution of the observations modeled as in Eq. \ref{eq:convolutive2} can be expressed as a BTD. Due to the fact that this problem is highly non-convex, we will subsequently explain our approach to identifying a final solution for the decomposition. Finally, we will describe source signal estimation using the HRFs predicted by BTD.
\subsection{Formulating the Block-Term Decomposition}
We start by expressing the convolutive mixtures formulation in Eq. \ref{eq:convolutive} in matrix form as $\mathbf{Y}=\mathbf{H}\mathbf{S}$. The columns of $\mathbf{Y}$ and $\mathbf{S}$ are given by $\mathbf{y}(n)$, $n=1,\dots,N-L'$ and $\mathbf{s}(n)$, $n=1,\dots,N-(L+L')$, respectively. These column vectors are constructed as follows \citep{b23}:
\begin{align}
\begin{aligned}
\label{eq:matrix_y_s}
\mathbf{y}(n) &= [y_1(n),...,y_1(n-L'+1), \\
& ...,y_M(n),...,y_M(n-L'+1)]^T\; \; \text{and}\\
\mathbf{s}(n) &= [s_1(n),...,s_1(n-(L+L')+1), \\
& ...,s_R(n),...,s_R(n-(L+L')+1)]^T
\end{aligned}
\end{align}
\noindent where $L'$ is chosen such that $ML'\geq R(L+L')$. Notice that $M$ has to be greater than $R$, and both matrices $\mathbf{Y}$ and $\mathbf{S}$ consists of Hankel blocks.
The mixing matrix $\mathbf{H}$ is equal to
\begin{equation}
\label{eq:H}
\mathbf{H}=[\mathbf{H}_1 \quad \dots \quad \mathbf{H}_R]=
\begin{bmatrix}
\mathbf{H_{11}} & \dots & \mathbf{H_{1R}}\\
\vdots & \ddots & \vdots \\
\mathbf{H_{M1}} & \dots & \mathbf{H_{MR}}
\end{bmatrix}
\end{equation}
\noindent whose any block-entry $\mathbf{H}_{mr}$ is the Toeplitz matrix of $h_{mr}(l)$:
\begin{equation}
\label{eq:H_ij}
\mathbf{H}_{mr}=
\begin{bmatrix}
h_{mr}(0) & \dots & h_{mr}(L) & \dots & 0\\
& \ddots & \ddots & \ddots & \\
0 & \dots & h_{mr}(0) & \dots & h_{mr}(L)
\end{bmatrix}
.
\end{equation}
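In code, the Toeplitz blocks of Eq.~\eqref{eq:H_ij} and the full mixing matrix of Eq.~\eqref{eq:H} can be assembled as in the following sketch, where \texttt{scipy.linalg.toeplitz} takes the first column and first row of each block:
\begin{verbatim}
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_block(h, Lp):
    # L' x (L + L') block of Eq. (H_ij), filter h of length L + 1
    col = np.r_[h[0], np.zeros(Lp - 1)]  # first column
    row = np.r_[h, np.zeros(Lp - 1)]     # first row, length L + L'
    return toeplitz(col, row)

def mixing_matrix(filters, Lp):
    # filters[m][r]: filter from source r to region m, cf. Eq. (H)
    M, R = len(filters), len(filters[0])
    return np.block([[toeplitz_block(filters[m][r], Lp)
                      for r in range(R)] for m in range(M)])
\end{verbatim}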
Next, the autocorrelation $\mathbf{R}_{\mathbf{y}}(\tau)$ for a time lag $\tau$ is expressed as:
\begin{align}
\label{eq:cov}
\mathbf{R}_{\mathbf{y}}(\tau)&= \mathrm{E}\{\mathbf{y}(n)\mathbf{y}(n+\tau)^T\} = \mathrm{E}\{\mathbf{H}\mathbf{s}(n)\mathbf{s}(n+\tau)^T\mathbf{H}^T\} \nonumber \\
&=\mathbf{H} \mathbf{R}_{\mathbf{s}}(\tau)\mathbf{H}^T, \; \; \; \; \forall\tau.
\end{align}
Assuming that the sources are uncorrelated, the matrices $\mathbf{R}_\mathbf{s}(\tau)$ are block-diagonal, i.e. non-block-diagonal terms representing the correlations between different sources are 0. Therefore, the output autocorrelation matrix $\mathbf{R}_\mathbf{y}(\tau)$ is written as the block-diagonal matrix $\mathbf{R}_\mathbf{s}(\tau)$ multiplied by the mixing matrix $\mathbf{H}$ from the left and by $\mathbf{H}^\text{T}$ from the right. Then, stacking the set of output autocorrelation matrices $\mathbf{R}_\mathbf{y}(\tau)$ for various $\tau$ values will give rise to a tensor $\boldsymbol{\mathcal{T}}$ that admits a so-called block-term decomposition (BTD). More specifically, $\boldsymbol{\mathcal{T}}$ can be written as a sum of low-multilinear rank tensors, in this specific case a rank of $(L+L',L+L',\cdot)$ \citep{b24}. Due to the Hankel-block structure of $\mathbf{Y}$ and $\mathbf{S}$, $\mathbf{R}_{\mathbf{y}}(\tau)$ and $ \mathbf{R}_{\mathbf{s}}(\tau)$ are Toeplitz-block matrices. Note that the number of time-lags to be included is a hyperparameter of the algorithm, and we take it as equal to the filter length in this work.
The decomposition for $R=2$ is illustrated in Fig. \ref{fig:btd_im}. Considering our signal model, where we have defined two types of sources, we can rewrite the block-columns of $\mathbf{H}=[\mathbf{H}_1 \; \mathbf{H}_2]$ (Eq. \ref{eq:H}) simply as $\mathbf{H}=[\mathbf{H}_T \; \mathbf{H}_A]$ instead. Here, $\mathbf{H}_T$ relates to the task-source, i.e. includes the region-specific HRFs, whereas $\mathbf{H}_A$ includes the region-specific scalings of the artifact source.
\begin{figure}[H]
\centering
\includegraphics[width=.7\textwidth]{btd.pdf}
\caption{A demonstration of BTD for $R=2$. The tensor $\boldsymbol{\mathcal{T}}$ of stacked measurement autocorrelations $\mathbf{R}_\mathbf{y}(\tau)$, $\forall \tau$ is first expressed in terms of the convolutive mixing matrix $\mathbf{H}$ and a core tensor $\boldsymbol{\mathcal{C}}$ which shows the stacked source autocorrelations $\mathbf{R}_\mathbf{s}(\tau)$, $\forall \tau$. Each $\mathbf{R}_\mathbf{s}(\tau)$ corresponds to a frontal slice of $\boldsymbol{\mathcal{C}}$ and exhibits a block-diagonal structure with inner Toeplitz-blocks. Note that each slice comes as a lagged version of the preceding slice. $\boldsymbol{\mathcal{T}}$ is decomposed into $R=2$ terms, each of which contains a core tensor ($\boldsymbol{\mathcal{C}}_T$ or $\boldsymbol{\mathcal{C}}_A$, which represents the autocorrelation of the corresponding source) and a block column of $\mathbf{H}$ ($\mathbf{H}_T$ or $\mathbf{H}_A$).}
\label{fig:btd_im}
\end{figure}
In addition, we impose a shape constraint to the HRFs such that they are physiologically interpretable. For this purpose, we employed the model described in \citep{b3}, which is a fUS-based adaptation of the well-known canonical model used predominantly in fMRI studies \citep{b7} for depicting CBF or CBV changes, where the second gamma function leading to the undershoot response is removed from the canonical model, resulting in a reduced number of parameters. This model expresses the HRF in terms of a single gamma function defined on a parameter set $\boldsymbol{\theta}$ as below:
\begin{equation}
\label{eq:gamma}
f(t,\boldsymbol{\theta}) = \theta_1(\Gamma(\theta_2)^{-1} \theta_3^{\theta_2}t^{\theta_2-1}\rm{e}^{-\theta_3t})
\end{equation}
\noindent where $\theta_1$ is the scaling parameter to account for the strength of an HRF and the rest of the parameters define the shape of the HRF.
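For reference, Eq.~\eqref{eq:gamma} reads as follows in Python, with \texttt{scipy.special.gamma} providing $\Gamma$; the parameter values in the usage example are arbitrary and only for illustration:
\begin{verbatim}
import numpy as np
from scipy.special import gamma as gamma_fn

def hrf(t, theta):
    # single-gamma HRF of Eq. (gamma); theta = (th1, th2, th3)
    th1, th2, th3 = theta
    return (th1 * th3 ** th2 * t ** (th2 - 1)
            * np.exp(-th3 * t) / gamma_fn(th2))

t = np.arange(0, 12, 0.25)     # time axis in seconds
h = hrf(t, (1.0, 3.0, 1.0))    # illustrative parameter values
\end{verbatim}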
Finally, the BTD is computed by minimizing the cost function:
\begin{align}
\label{eq:cost}
J(\boldsymbol{\mathcal{C}},\boldsymbol{\theta},\mathbf{a}) = \lVert \boldsymbol{\mathcal{T}} &- \sum_{r \in T} \boldsymbol{\mathcal{C}}_r \times_1 \mathbf{H}_r(\boldsymbol{\theta}_r) \times_2 \mathbf{H}_r(\boldsymbol{\theta}_r) \nonumber \\
&-\sum_{r \in A} \boldsymbol{\mathcal{C}}_r \times_1 \mathbf{H}_r(\mathbf{a}_r) \times_2 \mathbf{H}_r(\mathbf{a}_r)
\rVert^2_F
\end{align}
\noindent while all $\mathbf{H}_r$'s and $\boldsymbol{\mathcal{C}}_r$'s are structured to have Toeplitz blocks. The operator $||\cdot||_F$ is the Frobenius norm. The BTD is implemented using the structured data fusion (SDF) framework, more specifically using the quasi-Newton algorithm \texttt{sdf\_minf}, offered by Tensorlab \citep{b25}.
\subsection{Identifying a Stable Solution for BTD}
For many matrix and tensor-based factorizations, such as the BTD described above, the objective functions are non-convex. As such, the algorithm selected for solving the non-convex optimization might converge to local optima of the problem \citep{tdunique}.
For our problem, each BTD repetition produces $M$ HRFs, characterized by their parameters $\boldsymbol{\theta}_m, m=1,2,\dots,M$. We follow a similar approach as described in \citep{simonstable}, i.e. we cluster the solutions using the peak latencies of the estimated HRFs as features, and aim at finding the most coherent cluster. The steps of our clustering approach are as follows:
\begin{enumerate}
\item Run BTD $20$ times with random initializations, and from each run, store the following:
\begin{itemize}
\item Final value of the cost (i.e., objective) function
\item $M$ HRFs
\end{itemize}
\item Eliminate the $P$ outlier BTD repetitions having significantly higher cost values (we use Matlab's \texttt{imbinarize} for the elimination, which chooses an optimal threshold value based on Otsu's method \citep{otsu}, as we expect the best solution to be amongst the low-cost solutions)
\item Form a matrix with $M$ columns (standing for the peak latencies of $M$ HRFs, these are the features) and $20-P$ rows (standing for the retained BTD repetitions, these are the observations)
\item Apply agglomerative hierarchical clustering to the columns of the matrix in Step 3
\item Compute the following intracluster distance metric for each cluster as:
\begin{equation}
\label{dist_cluster}
d_\text{C} = \frac{\max_{c_1,c_2 \in \text{C}} d(c_1,c_2)}{n_\text{C}}
\end{equation}
where the numerator gives the Euclidean distance between the two most remote observations inside the cluster $\text{C}$ (known as the complete diameter distance \citep{diameterdist}), and the denominator, $n_\text{C}$, is the number of observations included in $\text{C}$
\item Determine the most stable cluster as the one having the minimum intracluster distance
\item Calculate the mean of the estimated HRFs belonging to the cluster identified in Step 6
\end{enumerate}
To sum up, the clustering approach described above assumes that the best possible solution will be low-cost (Step 2), have low intracluster distance (numerator of Eq. \ref{dist_cluster}) and frequently-occurring (denominator of Eq. \ref{dist_cluster}). After we have the final HRF predictions, the last step is to estimate the sources.
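Steps 3 to 6 can be sketched with standard SciPy routines; note that the linkage criterion and the number of clusters are not fixed by the procedure above, so the average linkage and user-chosen cluster count below are assumptions of this sketch:
\begin{verbatim}
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform

def select_stable_cluster(latencies, n_clusters):
    # latencies: (n_runs, M) HRF peak latencies of retained runs
    Z = linkage(latencies, method='average')  # agglomerative
    labels = fcluster(Z, t=n_clusters, criterion='maxclust')
    D = squareform(pdist(latencies))          # pairwise distances
    best, best_d = None, np.inf
    for c in np.unique(labels):
        members = np.where(labels == c)[0]
        diam = D[np.ix_(members, members)].max()  # complete diameter
        d_C = diam / len(members)                 # Eq. (dist_cluster)
        if d_C < best_d:
            best, best_d = members, d_C
    return best  # indices of the most stable repetitions
\end{verbatim}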
\subsection{Estimation of the Source Signals}
The final HRF estimates are reorganized in a Toeplitz-block matrix as shown in Equations \ref{eq:H} and \ref{eq:H_ij}. This gives rise to $\hat{\mathbf{H}}_T$, i. e., the block columns of $\mathbf{H}$ that are of interest. Going back to our initial formulation $\mathbf{Y}=\mathbf{H}\mathbf{S}$, we can estimate the task-related source signals $\mathbf{S}_T$ by:
\begin{equation}
\hat{\mathbf{S}}_T=\hat{\mathbf{H}}_T^\dagger \mathbf{Y}
\label{s_ls}
\end{equation}
where $(.)^\dagger$ shows the Moore-Penrose pseudo-inverse.
In order to obtain the pseudo-inverse of $\hat{\mathbf{H}}_T$, we used truncated singular value decomposition (SVD). Truncated SVD is a method for calculating the pseudo-inverse of a rank-deficient matrix, which is the case for many signal processing applications on real data, such as for extraction of signals from noisy environments \citep{b26}. We applied this approach by heuristically setting the singular values of $\hat{\mathbf{H}}_T$ to be truncated as the lowest $90\%$ following our simulation study.
\section{Experimental Setup and Data Acquisition}
During our fUS experiment, we displayed visual stimuli to a mouse ($7$-months old, male, C57BL/6J; The Jackson laboratory) while recording the fUS-based HR of its brain via the setup depicted in Fig. \ref{fig:fussetup}. The mouse was housed with food and water \textit{ad libitum}, and was maintained under standard conditions (12/12 h light-darkness cycle, 22℃). Preparation of the mouse involved surgical pedestal placement and craniotomy.
First, an in-house developed titanium pedestal (8 mm in width) was placed on the exposed skull using an initial layer of bonding agent (OptiBond™) and dental cement (Charisma\textregistered). Subsequently, a surgical craniotomy was performed to expose the cortex from Bregma -1 mm to -7 mm.
After skull bone removal and subsequent habituation, the surgically prepared, awake mouse was head-fixed and placed on a movable wheel in front of two stimulation screens (Dell 23,8” S2417DG, 1280 x 720 pixels, 60 Hz) in landscape orientation, positioned at a 45° angle with respect to the antero-posterior axis of the mouse, as well as 20 cm away from the mouse’s eye, similar to \citep{setup}. All experimental procedures were approved \textit{a priori} by an independent animal ethical committee (DEC-Consult, Soest, the Netherlands), and were performed in accordance with the ethical guidelines as required by Dutch law and legislation on animal experimentation, as well as the relevant institutional regulations of Erasmus University Medical Center.
The visual stimulus consisted of a rectangular patch of randomly generated, high-contrast images - white ``speckles'' against a black background - which succeeded each other with $25$ frames per second, inspired by \citep{param2,param1,speckles}. The rectangular patch spanned across both stimulation screens such that it was centralized in front of the mouse, whereas the screens were kept entirely black during the rest (i.e., non-stimulus) periods. The visual stimulus was presented to the mouse in $20$ blocks of $4$ seconds in duration. Each repetition of the stimulus was followed by a random rest period between $10$ to $15$ seconds.
Before experimental acquisition, a high-resolution anatomical registration scan was made of the exposed brain's microvasculature so as to locate the best-suited imaging location for capturing the ROIs, aided by the Allen Mouse Brain Atlas \citep{allen}, and to ensure optimal relative alignment of data across separately performed experiments. Ultimately, during the experiment, functional scans were performed on two slices of the mouse brain; one coronal at Bregma $-3.80$ mm, and one sagittal at Bregma $-2.15$ mm \citep{bregma}.
For data acquisition, $14$ tilted plane waves were transmitted from an ultrasonic transducer (Vermon L$22-14$v, $15$ MHz), which was coupled to the mouse's cranial window with ultrasound transmission gel (Aquasonic). A compound image was obtained by Fourier-domain beamforming and angular compounding, and non-overlapping ensembles were formed by concatenating $200$ consecutive compound images. We applied SVD-based clutter filtering to separate the blood signal from stationary and slow-changing ultrasound signals arising from other brain tissue \citep{svdfilter}.
Finally, a Power-Doppler Image (PDI) was obtained by computing the power of the SVD-filtered signal for each pixel over the ensemble dimension. Hence, the time-series of a pixel (Eq. \ref{eq:convolutive2}) corresponds to the variation of its power across the PDI stream.
A total of 3 ROIs (SC, LGN and V1) were selected from the captured slices. For this purpose, the data was first parcellated using spatial ICA with $10$ components in both slices \citep{spatial_ica}. A spatial mask was defined based on the spatial signature of the component corresponding to the SC from the coronal slice, and to the LGN and V1 from the sagittal slice. To obtain a representative time-series for each ROI, we averaged the time-series of pixels which are captured within the boundaries of each mask. Finally, the ROI time-series were normalized to zero-mean and unit-variance before proceeding with the BTD.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{fussetup.jpg}
\caption{The setup and flowchart for fUS imaging of the ROIs. In A, the experimental setup is shown, with the awake, head-fixed mouse walking on a movable wheel. During an experiment, either a rectangular patch of speckles (stimulus) or an entirely black screen (rest) is displayed across both monitors. In B, the process of forming a PDI is demonstrated for a coronal brain slice. First, back-scattered ultrasonic waves obtained at different imaging angles are beamformed, resulting in compound images. Next, the compound images are processed with SVD-based clutter filtering in batches in order to remove the tissue motion from the vascular signal. From each filtered batch, a PDI is constructed by computing the power per pixel. In C, the ROIs that we will focus on in the rest of this paper are shown. The pointed arrows represent the signal flow for processing of visual information.}
\label{fig:fussetup}
\end{figure}
\section*{Data and Code Availability Statement}
The data and MATLAB scripts that support the findings of this study are publicly available at \href{https://github.com/ayerol/btd_deconv}{https://github.com/ayerol/btd\_deconv}.
\section{Results}
To demonstrate the power of our BTD-based deconvolution approach, the following sections discuss a simulation study and the results of the \textit{in vivo} mouse experiment respectively. In both cases, we consider an EP with repeated stimulation. While we consider a task-related source (expected to be similar to the EP) to affect multiple brain regions through unique HRFs, we also take into account the influence of artifacts and possible hemodynamic changes which are unrelated to the EP on the region HRs. Note that we will use a single additive component (the second term in Eq. \ref{eq:convolutive}) to describe the sources of no interest.
\subsection{Numerical Simulations} \label{simulation_sec}
We simulated three ROI time-series, each with a unique HRF that is characterized using Eq. \ref{eq:gamma} on a different parameter set $\boldsymbol{\theta}$. We assumed that there are two underlying common sources that build up the ROI time-series. The first source signal is a binary vector representing the EP. The EP involves $20$ repetitions of a $4$-second stimulus (where the vector takes the value $1$) interleaved with $10-15$ seconds of random non-stimulus intervals (where the vector takes the value $0$). This is the same paradigm that will be used later for deconvolution of \textit{in vivo}, mouse-based fUS data (Section \ref{deconv_results}). The EP is assumed to drive the hemodynamic activity in all ROIs, but the measured fUS signals are linked to the EP through possibly different HRFs. The second source signal stands for the artifact component and is generated as a Gaussian process with changing mean, in accordance with the system noise and artifacts modeled in \citep{noisesource}.
Each ROI time-series is obtained by convolving the corresponding HRF and the common EP, and subsequently adding on the noise source. Note that the variance of the noise source is dependent on the region. In addition, the noise variance values are adjusted in order to assess the performance of the proposed method under various signal-to-noise ratios (SNRs). The data generation steps are illustrated in Fig. \ref{fig:sim}.
We normalized the time-series to zero-mean and unit-variance before proceeding with the BTD. Since the true source, in this case the EP, was generated as a binary vector, we also binarized the source signal estimated by the BTD to allow for a fair comparison. More specifically, we binarized the estimated source signal by applying a global threshold, and evaluated the performance of our source estimation by comparing the true onsets and duration of the EP with the predicted ones.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{simulations3.pdf}
\caption{Illustration of the simulator. Both of the simulated sources are shown in the left section, one being task-related (the EP) and one being artifact-related. In the middle section, the convolutive mixing filters are depicted. The filters which are convolved with the EP are the HRFs, whereas the filters which are convolved with the artifact source only differ in their scaling and are modeled as impulses, such that their convolution with the artifact source leads to a direct summation onto the measured time-series. In the last section, the convolved results are added together to deliver the time-series at each ROI.}
\label{fig:sim}
\end{figure}
We performed a Monte Carlo simulation of $100$ iterations for different SNR values. In each iteration, the HRF parameters were generated randomly in such a way that the peak latency (PL) and width (measured as full-width at half-maximum; FWHM) of the simulated HRFs varied between $[0.25,4.5]$ and $[0.5,4.5]$ seconds respectively. These ranges generously cover the CBV/CBF-based HRF peak latencies (reported as $2.1 \pm 0.3$ s in \citep{fus_npixels}, and between $0.9$ and $2$ seconds in \citep{b3,hrf_rng1,hrf_rng2}) and FWHMs (reported as $2.9 \pm 0.6$ s in \citep{fus_npixels}) observed in previous mouse studies.
We defined the following metrics at each Monte Carlo iteration to validate the performance of the algorithm:
\begin{itemize}
\item For quantifying the match between the estimated and true EP, we calculated the Intersection-over-Union (IoU) between them at each repetition of the EP. For example, if the true EP takes place between $[3,7]$ seconds but this is estimated as $[3.4,7.5]$ seconds, the IoU value will be: $\sfrac{(7-3.4)}{(7.5-3)}=0.8.$ For an easier interpretation, we converted the unit of the IoU ratio to seconds as follows: since the ideal estimation should give an exact match of $4$ seconds (which corresponds to an IoU of $1$), we multiplied the IoU ratio by $4$. The IoU of $0.8$ in the example above corresponds to a match of $3.2$ seconds. Finally, we averaged the IoU values of the $20$ repetitions of the EP to get one final value (a computational sketch of this metric is given after this list).
\item We computed the absolute PL difference (in terms of seconds) between the true and estimated HRFs, averaged for $M=3$ ROIs.
\end{itemize}
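As referenced in the first item above, the IoU-in-seconds computation can be sketched in a few lines of Python (the function name is ours):
\begin{verbatim}
def iou_seconds(true_on, true_off, est_on, est_off, stim_len=4.0):
    # IoU between a true and an estimated stimulus interval,
    # multiplied by the 4 s stimulus length as in the text.
    inter = max(0.0, min(true_off, est_off) - max(true_on, est_on))
    union = max(true_off, est_off) - min(true_on, est_on)
    return stim_len * inter / union

# Worked example from the text: true [3, 7] s, estimated [3.4, 7.5] s
print(iou_seconds(3, 7, 3.4, 7.5))   # 3.2, i.e., an IoU of 0.8
\end{verbatim}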
Simulation results are provided in Fig. \ref{simresults}. Under $0$ dB SNR, the estimated HRFs have an error of $0.3 \pm 0.4$ (median $\pm$ standard deviation) seconds in the peak latencies across the Monte-Carlo iterations. In order to emphasize the importance of incorporating HRF variability in the signal model, we also compared the EP estimation results when a fixed HRF is assumed (the canonical HRF). The results (Fig. \ref{simresults}(d)) show that using a fixed HRF causes a significant decrease in EP estimation performance. In the context of real neuroimaging data, this difference could cause a misinterpretation of the underlying source signals and neurovascular dynamics.
\begin{figure}[H]
\centering
\begin{subfigure}[t]{.49\textwidth}
\centering
\includegraphics[width=.87\textwidth]{hrf_range.pdf}
\caption{Range of simulated HRFs. The first HRF (blue) has a peak latency and width of $0.25$ and $0.5$ seconds respectively, whereas the second HRF (orange) has a peak latency and width of $4.5$ seconds each. The peak latency and width of the second HRF are also displayed on its plot.}
\end{subfigure}
\hspace*{\fill}
\begin{subfigure}[t]{.47\textwidth}
\centering
\includegraphics[width=\textwidth]{example_hrf_est.pdf}
\caption{Visualization of the simulated HRFs and their corresponding estimates under $0$ dB SNR (from one Monte-Carlo iteration).}
\end{subfigure}
\par\medskip
\centering
\begin{subfigure}[t]{.49\textwidth}
\centering
\includegraphics[width=\textwidth]{example_source_est.pdf}
\caption{Visualization of the estimated source signal versus the true EP under $0$ dB SNR (from one Monte-Carlo iteration). For a more precise comparison, we further binarize the estimated source signal by thresholding it as shown.}
\end{subfigure}
\hspace*{\fill}
\begin{subfigure}[t]{.49\textwidth}
\centering
\includegraphics[width=\textwidth]{ep_iou_plot.pdf}
\captionsetup{width=.91\linewidth}
\caption{EP estimation performance with respect to SNR. The markers and errorbars denote the median and standard deviation of the IoU of EP estimation across the Monte-Carlo iterations respectively. When a fixed HRF is assumed, we see that the EP estimation is much less accurate.}
\end{subfigure}
\caption{Simulation results.}
\label{simresults}
\end{figure}
\subsection{Experimental Data} \label{deconv_results}
The selected ROIs are displayed in Fig. \ref{hrf_exp}(a) and Fig. \ref{hrf_exp}(b), showing SC in the former; LGN and V1 in the latter plot. The raw, normalized fUS time-series belonging to each region are displayed in Fig. \ref{hrf_exp}(c). By deconvolving this multivariate time-series data, we estimated the region-specific HRFs and the underlying source signal of interest. In Fig. \ref{hrf_exp}(d), the estimated HRFs are provided. Our results point to a peak latency of $1$ s in SC, $1.75$ s in LGN and $2$ s in V1. Similarly, the FWHMs are found as $1.25$ s in SC, $1.75$ s in LGN and $1.75$ s in V1. These results reveal that SC gives the fastest reaction to the visual stimulus amongst the ROIs, followed by the LGN. In addition, the HRF in SC is observed to be steeper than in LGN and V1.
Fig. \ref{hrf_exp}(e) shows the estimated source signal of interest. Unlike in the simulations, we see that the source signal exhibits a substantial variation in amplitude. In order to interpret this behavior of the estimated source signal, we further investigated the raw fUS signals shown in Fig. \ref{hrf_exp}(c). When the responses given to consecutive repetitions of the stimulus are compared within each region, it can be observed that SC reacts most consistently to the stimulus, while the reproducibility of the evoked responses in LGN and V1 (particularly in V1) is much lower, especially in the second half of the repetitions. To better quantify and compare the region-specific differences in response variability, we computed the Fano factor (FF) as the ratio of the variance to the mean peak amplitude of each region's post-stimulus response \citep{ffmaxamp}, defined in a window $[0,10]$ seconds after a stimulus has been shown. We found FF values of $0.23$, $0.42$ and $0.80$ for SC, LGN and V1 respectively. These findings indicate that the consistency of the HR strength is roughly halved from SC to LGN, and again from LGN to V1.
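A minimal sketch of this FF computation, assuming a one-dimensional region time-series ts sampled at fs Hz and known stimulus onset times (all names are ours):
\begin{verbatim}
import numpy as np

def fano_factor(ts, stim_onsets, fs, win=(0.0, 10.0)):
    # Variance over mean of the per-repetition peak amplitude in a
    # [0, 10] s window after each stimulus onset.
    peaks = []
    for t0 in stim_onsets:
        i0 = int((t0 + win[0]) * fs)
        i1 = int((t0 + win[1]) * fs)
        peaks.append(ts[i0:i1].max())
    peaks = np.asarray(peaks)
    return peaks.var() / peaks.mean()
\end{verbatim}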
We can even see cases where there is almost no reaction (as detected by fUS) to the stimulus in V1, such as in repetitions $10, 12, 15, 16$ and $20$. These repetitions coincide with the points in Fig. \ref{hrf_exp}(e) wherein the most considerable drops in the estimated source signal were observed. As such, the variability of responses can explain the unexpected amplitude shifts of the estimated source signal.
Due to its changing amplitude, comparing the estimated source signal to the EP becomes a more challenging task than in the simulations, as binarization using a single global threshold would not work well (Fig. \ref{hrf_exp}(e)). However, it is still possible to observe local peaks of the estimated source signal occurring around the times that the stimulus was shown. While applying a global threshold can uncover $13$ out of $20$ repetitions, with a detection of local peaks this number increases to $19$ out of $20$ repetitions. After detecting the peaks, we located the time points where a significant rise (and drop) was first observed before (and after) the peak, yielding the starting (and ending) times of the estimated repetitions. Hence, we obtained an estimation of the EP by constructing a binary vector of all $0$'s with the exception of the time periods between the predicted starting and ending points. In Fig. \ref{hrf_exp}(f), we compare our EP estimation (averaged across repetitions) with the true EP. It can be seen that our EP estimation is a slightly shifted ($<0.5$ seconds) version of the true EP. Here, we also display the responses in SC, LGN and V1 (averaged across repetitions), from which it can be observed that the estimated HRFs follow the same order as the responses during the \textit{in vivo} experiment, as expected.
Note that the observed trial-by-trial variability in temporal profile across the measured HRs underlines the importance of estimating the source signal. The conventional definition of the EP strictly assumes that the input of the convolution leading to the neuroimaging data (Eq. \ref{eq:singleconv}) is the same ($=1$) at each repetition of the stimulus. This would mean that the exact same input, shown at different times, outputs different responses, which would evidence a dynamic system \citep{balloon,dcm}. However, estimating the source signal allows for a non-binary and flexible characterization of the input, and thus LTI modelling \emph{can} remain plausible. Although extensive analysis of the repetition-dependent behavior of the vascular signal is beyond the scope of this work, we will discuss its possible foundations in the next section.
\begin{figure}[H]
\centering
\begin{subfigure}[t]{.49\textwidth}
\centering
\includegraphics[width=\textwidth,trim={2cm 2cm 2cm 1cm},clip]{sc.pdf}
\caption{ICA spatial map showing SC (blue).}
\end{subfigure}
\hspace*{\fill}
\begin{subfigure}[t]{.49\textwidth}
\centering
\includegraphics[width=\textwidth,trim={2cm 2cm 2cm 1cm},clip]{v1_lgn.pdf}
\caption{ICA spatial maps showing LGN (orange) and V1 (yellow).}
\end{subfigure}
\par\medskip
\centering
\begin{subfigure}[t]{.485\textwidth}
\centering
\includegraphics[width=\textwidth]{raw_data.pdf}
\caption{The normalized fUS responses in SC, LGN and V1 (the experimental paradigm is displayed in the background of the plots).}
\end{subfigure}
\hspace*{\fill}
\begin{subfigure}[t]{.495\textwidth}
\centering
\includegraphics[width=\textwidth]{hrfs_real.pdf}
\caption{Estimated HRFs.}
\end{subfigure}
\par\medskip
\centering
\begin{subfigure}[t]{.49\textwidth}
\centering
\includegraphics[width=\textwidth]{ep_real.pdf}
\caption{Estimated source signal.}
\end{subfigure}
\hspace*{\fill}
\begin{subfigure}[t]{.49\textwidth}
\centering
\includegraphics[width=\textwidth]{ep_real_mean_ep.pdf}
\captionsetup{width = .9\textwidth}
\caption{True EP, estimated EP and normalized responses in SC, LGN and V1, all averaged across stimulus repetitions.}
\label{reps_regions}
\end{subfigure}
\par\medskip
\caption{Deconvolution results on fUS data. Figures (a) and (b) show the ROIs determined with ICA. The regional fUS responses are displayed in (c). HRF and source signal estimation results are given in (d) and (e-f) respectively.}
\label{hrf_exp}
\end{figure}
\newpage
\section{Discussion} \label{sec_disc}
In this study, we considered the problem of deconvolving multivariate fUS time-series by assuming that the HRFs are parametric and source signals are uncorrelated. We formulated this problem as a block-term decomposition, which delivers estimates for the source signals and region-specific HRFs. We investigated the fUS response in three ROIs of a mouse subject, namely the SC, LGN and V1, which together compose significant pathways between the eye and the brain.
The proposed method for deconvolution of the hemodynamic response has the advantage of not requiring the source signal(s) to be specified. As such, it can potentially take into account numerous sources besides the EP, that are unrelated to the intended task and/or outside of the experimenters' control. As mentioned in the Introduction, imaged responses might encompass both stimulus-driven and stimulus-independent signals \citep{Xi2020DiverseCN,stringer}. The experimental subject may experience a state of so-called ``quiet wakefulness'' \citep{quitewakeful}, or ``wakeful rest'' \citep{wakefulrest}; a default brain state, during which the unstimulated and resting brain exhibits spontaneous and non-random activity \citep{Gilbert}. This exemplifies how a large fraction of the brain's response is not only triggered by the EP, but also highly influenced by the brain's top-down modulation. This assumption is further supported by recent fUS-based research on functional connectivity \citep{Osmanski} and the default mode network in mice \citep{Dizeux}. Other types of unintentional triggers could be spontaneous epileptic discharges \citep{bori_ica_ep}.
Despite the fact that the proposed solution holds the promise of identifying multiple sources, the number of sources is limited by the selected number of ROIs. As we chose to focus on three ROIs, we were bound to assume fewer, i.e. two, underlying sources. Accordingly, we took one of the sources to be task-related (related to the visual paradigm), whereas all other noise and artifact terms were combined to represent the second source. Our signal model assumes that the task-related source gets convolved with region-specific HRFs, whereas the artifact-related source is additive on the measured fUS data. As such, the signal model intrinsically assumes that the HRF in each studied region is driven by one common source signal. In fact, incorporating more ROIs and thus more sources could achieve a more realistic approximation of the vascular signal due to the aforementioned reasons. However, it should be noted that the addition of more sources would introduce additional uncertainties: what should the number of sources be, and how do we match a source with a certain activity? For instance, several sources can represent the spontaneous (or resting-state) brain activity, several sources can represent the external stimuli (depending on the number of different types of stimuli used during the experiment), and the remaining sources can represent the noise and artifacts. To model such cases, the simulations should be extended to include more sources. In addition, thorough studies are needed to explore accurate matching of estimated sources to the activity they symbolize. The assignment of sources can indeed require a priori knowledge of the activities, such as expecting a certain activity to be prominent over the others \citep{cpd_sources}, or defining frequency bands for dividing the signal subspace \citep{reswater}.
When we applied our method to \textit{in vivo} mouse-based fUS data, we observed unforeseen amplitude variations in the estimated source signal. To examine this further, we investigated the hemodynamic responses in the selected ROIs across repetitions. We noticed that the response variability in the visual system increases from the subcortical to the cortical level. Consistent with our findings, electrophysiological studies such as \citep{catlgn} report an increase in trial-by-trial variability from subcortex to cortex, doubling from retina to LGN and again from LGN to visual cortex. Variability in responses could be related to external stimulation other than the EP, such as unintended auditory stimulation from experimental surroundings \citep{ito}.
In addition, the literature points to eye movements as a source of high response variability in V1, a behavior which can be found in head-fixed but awake mice following attempted head rotation \citep{mouseeye}, and which can substantially alter stimulus-evoked responses \citep{eyemov}.
We noted that the SC has the fastest reaction to stimuli, followed respectively by the LGN and V1. As V1 does not receive direct input from the retina, but via the LGN and SC, its delayed HRF is consistent with the underlying subcortical-cortical connections of the visual processing pathway, as has also been reported in \citep{brunner,rats,lewis}. Moreover, the SC's particular aptness for swift responses to visual stimuli aligns with its biological function of indicating potential threats (such as flashing, moving or looming spots \citep{Gale, Wang, Inayat}).
Compared to our previous BTD-based deconvolution, we have made several improvements in this work. To start with, the current method exploits all the structures in the decomposition scheme. For example, previously the core tensor representing the lagged source autocorrelations was structured to have Toeplitz slices; yet, these slices were not enforced to be shifted versions of each other. Incorporating such theoretically-supported structures significantly reduced the computation time of the BTD by lowering the number of parameters to be estimated. In addition, we increased the robustness of our algorithm by applying a clustering-based selection of the final HRF estimates amongst multiple randomly-initialized BTD runs. Nevertheless, the formulated optimization problem is highly non-convex with many local minima, and the simulations show that there is still room for improvement. For instance, the selection of hyperparameters - namely the HRF filter length, the number of time lags in the autocorrelation tensor, and the number of BTD repetitions - affects the performance of the algorithm. In addition, the selection of the ``best'' solution amongst several repetitions of such a non-convex factorization can be made in various ways, such as with different clustering criteria \citep{icasso}. Although many methods have been proposed to estimate the HRF so far, it is challenging to completely rely on one. First and foremost, we do not know the ground-truth HRFs within the brain. As such, it is a difficult research problem on its own to assess the accuracy of a real HRF estimate. Furthermore, all methods make different assumptions on the data to perform deconvolution, such as uncorrelatedness of source signals (this study), the spectral characteristics of neural activity \citep{b14}, and Gaussian-distributed noise terms \citep{b16}. While making certain assumptions about the data model might be inevitable, it is important to keep an open mind about which assumptions would remain valid in practice, and under which conditions. Hence, further experiments can be performed in the future to explore the limits of our assumptions.
\section{Conclusion}
In this paper, we deconvolved the fUS-based hemodynamic response in several regions of interest along the mouse visual pathway. We started with a multivariate model of fUS time-series using convolutive mixtures, which allowed us to define region-specific HRFs and multiple underlying source signals. By assuming that the source signals are uncorrelated, we formulated the blind deconvolution problem into a block-term decomposition of the lagged autocorrelation tensor of fUS measurements. The HRFs estimated in SC, LGN and V1 are consistent with the literature and align with the commonly accepted neuroanatomical, biological, neuroscientific functions and interconnections of said areas, whereas the predicted source signal matches well with the experimental paradigm. Overall, our results show that convolutive mixtures with the accompanying tensor-based solution provides a flexible framework for deconvolution, while revealing a detailed and reliable characterization of hemodynamic responses in functional neuroimaging data.
\section*{Acknowledgements}
This study was funded by the Synergy Grant of Department of Microelectronics of Delft University of Technology and the Delft Technology Fellowship.
\newpage
Deep recurrent neural networks have seen tremendous success in the last decade across domains like NLP, speech and audio processing \cite{Aaron16}, computer vision \cite{Wang16}, time series
classification, forecasting and so on. In particular, they have achieved state-of-the-art performance (and beyond) in tasks like handwriting recognition
\cite{Graves09}, speech recognition~\cite{Graves13,Graves14}, machine translation~\cite{Cho14,Sutskever14} and image captioning~\cite{Kiros14,Xu15}, to name
a few. A salient feature in all these applications that RNNs exploit during learning is the sequential aspect of the data.
In this paper, our focus is on classical time-series, specifically the forecasting problem. The earliest attempts at employing RNNs for time series forecasting happened
more than two decades ago \cite{Connor91}, where RNNs were viewed as candidate nonlinear ARMA models. The main advantage of RNNs over a feed-forward
structure in building non-linear AR models is the weight sharing across time-steps, which keeps the number of weight parameters independent of the order of
auto-regression. The early plain RNN unit was further enhanced with LSTM units \cite{Hochreiter97,Gers00}, which increased its ability to capture long-range
dependencies and mitigated the vanishing gradient problem to an extent. With the ever-increasing sequential data available across domains
like energy, transportation, retail, finance etc., accurate time-series forecasting continues to be an active research area. In particular, RNNs continue to be one of the widely preferred predictive modelling tools for time-series forecasting \cite{Fillippo17}.
\comment{
Prediction in the presence of missing data is an old research problem. Researchers have provided a variety of techniques for data imputation over the years, which can be followed by the predictive modelling step. In particular, RNNs have also been employed for prediction under missing sequential data over the years. This paper addresses the missing data issue using RNNs in a novel way.
}
Encoder-Decoder (ED) or Seq2Seq architectures, used to map a variable-length sequence to another variable-length sequence, were first successfully applied to machine translation tasks \cite{Cho14,Sutskever14}. Since then, the ED framework has been successfully applied in many other tasks like speech recognition~\cite{Liang15},
image captioning etc. Given this variable-length Seq2Seq mapping ability, the ED framework can be naturally utilized for multi-step (target) learning and
prediction, where the target vector length can be independent of the input vector length.
\subsection{\bf Contributions}
Seasonal ARMAX models \cite{Box90} are generalizations of ARMAX models which additionally capture stochastic seasonal correlations in the
presence of exogenous inputs. While there are many non-linear variants of linear ARX or ARMAX models, to the best of our knowledge there is no explicit non-linear variant of the SARMAX linear statistical model that is capable of accurate multi-step (target) learning \& prediction.
Existing related DL works either (1) consider some form of the ED approach for multi-step time-series prediction with exogenous inputs,
without incorporating stochastic seasonal correlations~\cite{Wen17}, (2) use a many-to-many RNN architecture with poor multi-step predictive ability \cite{Flunkert17,DeepStateSpace18}, (3) incorporate stochastic seasonal correlations using additional skip connections via
a non-ED predictive approach \cite{Lai18} to emphasize correlations from one cycle (period) behind, or (4) capture deterministic seasonality (additive
periodic components) in time-series using a Fourier basis in an overall feed-forward DL framework \cite{Oreshkin20}.
Our main contribution here
involves a novel ED architecture for seasonal modelling using more than one encoder and incorporating multi-step (target) learning.
The overall contribution summary is as follows:
\begin{itemize}
\item We propose a novel Encoder-Decoder based nonlinear SARX model which explicitly incorporates (stochastic) seasonal correlations. The framework can elegantly address
multi-step (target) learning with exogenous inputs. It allows for multiple encoders depending on the order of seasonality. {\em The seasonal
lag inputs are intelligently split between encoder and decoder inputs based on idea of the multiplicative seasonal AR model.}
\comment{
\item We propose a novel ED based learning scheme for time-series prediction in presence of {\em missing data}. The missingness pattern in the
input window is encoded as it is (without imputation) using two encoders with variable length inputs. The resulting encoding is lossless
while the compression can be substantial depending on the structure of the missingness pattern. }
\item To utilize the above novel scheme for multiple sequence data, where per-sequence data or variation in the exogenous variables is limited, we propose a novel greedy recursive procedure based on grouping normalized sequences. The idea is to build one or a few background models which can be used to predict for all sequences.
\item We demonstrate the effectiveness of the proposed architecture on real data sets involving both single and multiple sequences.
Our experiments illustrate that the proposed method's
performance is competitive with the state-of-the-art and outperforms some of the existing methods.
\end{itemize}
The paper overall is organized as follows. Sec.~\ref{sec:RW} discusses related work and puts the proposed work in perspective of current literature. Sec.~\ref{sec:Methodology}
describes the proposed seasonal architecture for a single time-series. We next explain a novel recursive grouping algorithm in Sec.~\ref{sec:GreedyRec}, which allows the proposed
seasonal architecture to handle multiple
sequence data. By bench-marking against various state-of-art baselines, we demonstrate the efficacy of our proposed architecture on both single and multiple sequence scenarios in
Sec.~\ref{sec:Results}. We provide concluding remarks in Sec.~\ref{sec:Conc}.
\section{\bf Related Work}
\label{sec:RW}
Times series forecasting has a long literature spanning more than five decades. Classical approaches include AR, MA, ARMA\cite{Box90}, exponential smoothing, linear state space models etc. A wide spectrum of non-linear approaches have also been explored over the years ranging from feed-forward networks, RNNs \cite{Yagmur17,Connor91}, SVMs\cite{SVM09}, random forests \cite{RF14} and so on to the recent temporal convolutional networks (TCN) \cite{TCN18}, with applications spanning across domains. These non-linear techniques have been shown to outperform the traditional techniques. The renewed surge in ANN research over the past decade has seen DL in particular being significantly explored in time series forecasting as well. For a review on deep networks for time series modelling, please refer to \cite{Langkvist14}.
While traditionally time-series forecasting has focused on single time-series or a bunch of independent sequences, the problem of simultaneously forecasting a huge set of correlated time series is gaining traction given the increased availability of such data.
Examples include demand forecasting of items in retail, price prediction of stocks, traffic state/congestion across signals, weather patterns across locations etc. There exist many recent sophisticated approaches tackling this high-dimensional problem \cite{Sen19,TRMF16,Flunkert17,DeepStateSpace18,Wen17,Lai18}.
Of these, \cite{Sen19,TRMF16} adopt some variant of a matrix/tensor factorization (MF) approach on the multi-variate data. \cite{Lai18} employs CNN layers first (on the 2-D data) followed by RNN layers.
\cite{Flunkert17,DeepStateSpace18,Wen17} consider an RNN architecture at a single time-series level and extend it to multiple time series by essentially scaling sequences.
Our proposed architecture is also at a single time-series level but different due to the seasonality feature.
Our approach to handle multi-sequence learning also employs sequence-specific scaling but goes much beyond this, as described in Sec.~\ref{sec:GreedyRec}.
\subsection{\bf Proposed architecture in perspective}
\label{sec:SeasEDSurvey}
Two uni-variate ED-based attention variants, and simple multivariate extensions of these (which do not look scalable to a large number of
sequences), are considered in \cite{Yagmur17}.
\cite{Yagmur17} does not consider exogenous inputs in their architecture.
To incorporate seasonal correlations, especially when the period or cycle length is large, it
would need a proportionately large input window width depending on the order of seasonality. This would require a proportionately large
set of additional parameters to be learnt to capture the position-based attention feature of \cite{Yagmur17}.
This can also lead to (i) processing irrelevant inputs which may not influence prediction and (ii) a vanishing gradient issue in spite of attention. In our method, by contrast, we consider multiple encoders where the $k^{th}$ encoder captures the correlations from exactly $k$ periods behind the current instant.
{\em By picking the state from the last time-step of each of these encoders and concatenating them, there is equal emphasis/representation from each of the cycles towards the decoder input.}
This is also unlike \cite{Yagmur17}, where a convex combination of all
states (of the single encoder) is the context vector, which may not retain information distinctly from each seasonal cycle.
Also, in our approach, since we split the inputs across cycles into
parallel encoders, each of much shorter length than a single encoder, the vanishing gradient issue is potentially better mitigated.
\comment{
The context vector from the encoder that is fed at every step of the decoder is different in the presence of an attention layer.
The context vector is a convex combination of the state vectors at each time-step of the decoder. They introduce and learn an additional weight or emphasis vector to s
}
\cite{Wen17} propose an ED architecture where targets are fed at decoder output, while exogenous inputs at prediction instants are fed as
decoder inputs.
{\em Our seasonal ED architecture can be viewed as a non-trivial generalization of this Seq2Seq architecture for multi-step prediction.}
They also suggest certain improvements to the basic ED architecture for quicker learning etc. They further consider probabilistic (or interval) forecasts using quantile loss, which can be readily incorporated in our proposed architecture as well.
DeepAR \cite{Flunkert17} and DeepSS \cite{DeepStateSpace18} use a many-to-many architecture
with exogenous inputs (NARX model) for sales prediction of multiple Amazon products. They do not consider
stochastic seasonal correlations.
During multi-step
prediction, the prediction of the previous step is recursively fed as input to the next time-step, which means this strategy can lead to recursive error accumulation.
Both methods also consider probabilistic forecasts by modelling network outputs as parameters of a negative binomial OR a linear dynamical system.
\cite{Lai18} propose LSTNet, another multiple time-series approach where a combination of CNN and RNN approaches is employed. The convolutional filters filter in only one dimension (namely time) across the 2-D input time-window to learn cross-correlations across sequences. The subsequent RNN part attempts to capture
stochastic seasonal correlations via skip connections from a cycle (or period) behind. Applications where the period is large result in unusually long input windows.
In our approach, by contrast, we avoid skip connections and feed the seasonal lags from each cycle into a separate encoder (one or more depending on the order of seasonality).
N-Beats \cite{Oreshkin20} considers a DL approach using feed-forward structures where deterministic periodicities (also referred to as seasonality in the literature) are captured using a Fourier basis. A signal in general can exhibit both kinds of seasonality: (i) deterministic periodic components and (ii) stochastic seasonal correlations. A linear seasonal ARMA model for instance only captures stochastic seasonal correlations, which is different from additive
deterministic periodicity. Our seasonal ED architecture precisely captures such stochastic correlations, making it distinct from N-Beats.
The MF approaches described earlier, though they capture global features in the multi time-series setting, either (1) do not capture seasonality, (2) do not allow for predictions with exogenous inputs, or (3) do not learn with multi-step targets as in our approach. For details on MF methods, refer to the appendix.
\section{\bf Proposed Seasonal ED Architecture}
\label{sec:Methodology}
Sec.~\ref{sec:Seas} and \ref{sec:EDMult} respectively describe the motivation (from the linear multiplicative seasonal model) and actual proposed RNN architecture
capturing a seasonal NARX
model capable of multi-step prediction.
Sec.~\ref{sec:GreedyRec} describes a heuristic algorithm to adapt the proposed architecture for multiple sequence prediction.
Amongst the three standard recurrent structure choices of plain RNN (without gating), LSTM~\cite{Hochreiter97} and GRU~\cite{Chung14}, we choose the GRU
in this paper. Like the LSTM unit, the GRU also has a gating mechanism to mitigate vanishing gradients and maintain more persistent memory, but
its lower gate count keeps the number of weight parameters much smaller. The GRU unit as the building block for RNNs is currently
ubiquitous across sequence prediction
applications \cite{Gupta17,Ravanelli18,Che16,Nicole20}.
A single hidden layer plain RNN unit's hidden state would be
\begin{equation}
h_t = \sigma(W^{h} h_{t-1} + W^{u} u_t)
\end{equation}
where $W^{h}$, $W^{u}$ are the weight matrices associated with the state at the previous time-instant, $h_{t-1}$, and the current input, $u_t$, respectively,
and $\sigma(.)$ denotes the sigmoid function.
A GRU-based cell computes its hidden state (for one layer) as follows:
\begin{eqnarray}
z_t & = &\sigma(W^z u_t + U^z h_{t-1}) \\
r_t & = & \sigma(W^r u_t + U^r h_{t-1}) \\
\tilde{h}_t & = &tanh(r_t \circ U h_{t-1} + W u_t) \\
h_{t} & = & z_t \circ h_{t-1} + (1 - z_t)\circ \tilde{h}_t
\end{eqnarray}
where $z_t$ is the update gate vector and $r_t$ is the reset gate vector. If the two gates were absent, we would essentially have the plain RNN. $\tilde{h}_t$ is
the new memory (summary of all inputs so far) which is a function of $u_t$ and $h_{t-1}$, the previous hidden state. The reset signal controls the
influence of the previous state on the new memory. The final current hidden state is a convex combination (controlled by $z_t$) of the new memory and the memory at the previous
step, $h_{t-1}$. All associated weights $W^z$, $W^r$, $W$, $U^z$, $U^r$, $U$ are trained using back-propagation through time (BPTT).
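For illustration only, one GRU step matching the update equations above (bias terms omitted) can be sketched in numpy as follows; this is not the implementation used in this paper, and all names are ours:
\begin{verbatim}
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(u_t, h_prev, W_z, U_z, W_r, U_r, W, U):
    z = sigmoid(W_z @ u_t + U_z @ h_prev)            # update gate
    r = sigmoid(W_r @ u_t + U_r @ h_prev)            # reset gate
    h_tilde = np.tanh(W @ u_t + r * (U @ h_prev))    # new memory
    return z * h_prev + (1.0 - z) * h_tilde          # convex combination

# Example: input dim 3, hidden dim 4, unrolled over 10 time-steps
rng = np.random.default_rng(0)
d, n = 3, 4
Ws = [rng.standard_normal((n, d)) for _ in range(3)]
Us = [rng.standard_normal((n, n)) for _ in range(3)]
h = np.zeros(n)
for t in range(10):
    h = gru_step(rng.standard_normal(d), h,
                 Ws[0], Us[0], Ws[1], Us[1], Ws[2], Us[2])
print(h.shape)   # (4,)
\end{verbatim}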
\begin{figure*}[!thbp]
\center
\includegraphics[width=7.0in, height=3.95in]{./Figures/SequenceSeas.pdf}
\caption{Single sequence which starts from top left and ends on bottom right. Illustration of the Multi-step Seasonal NARX model input-outputs.}
\label{fig:SeasMS}
\end{figure*}
\subsection{\bf Motivation from multiplicative SAR model}
\label{sec:Seas}
The proposed {\em ED-based} multi-step, seasonal-NARX architecture can be motivated from the classical linear multiplicative SAR model as
described in this section. A
multiplicative
seasonal AR model \cite{brock:11} is a stochastic process which satisfies the following equation.
\begin{equation}
(1-\psi_1L - \dots -\psi_pL^p)(1-\Psi_1L^{\scr{S}} - \dots -\Psi_PL^{P\scr{S}})y(t) = e(t).
\label{eq:MSAR}
\end{equation}
where, $e(t)$ is a zero mean, white noise process with unknown variance, $\sigma_e^2$. $L^p$ is the one-step delay operator applied $p$ times i.e. $L^p{y(t)} =
y(t-p)$. In a multiplicative SAR model (\ref{eq:MSAR}), the auto-regressive term is a product of two lag polynomials: (a) first capturing standard lags of
order up to $p$, (b) second capturing influence of seasonal lags at multiples of the period $\scr{S}$ and order up to $P$. Let us expand the associated product in eqn.~(\ref{eq:MSAR}) to obtain
\begin{align}
y(t)= & { a_1y(t-1)+a_2y(t-2)+\dots+ a_py(t-p)} + { b_0^{1}y(t-\scr{S})}\nonumber \\
& {+b_1^{1}y(t-\scr{S}-1) +\dots+b_{p}^{1}y(t-\scr{S}-p)} + \cdots\,\cdots+ \nonumber \\ & {
b_0^{P}y(t-P\scr{S})+b_1^{P}y(t-P\scr{S}-1) +\dots+} \nonumber \\ &
{ b_{p}^{P}y(t-P\scr{S}-p)} + e(t)
\label{eq:ASAR}
\end{align}
Note that in the above equations $p$ is assumed to be significantly less than $\scr{S}$. Expanding out the product of the two polynomials in
eqn.~(\ref{eq:MSAR}) and comparing it with eqn.~(\ref{eq:ASAR}), yields the following relations between the respective coefficients: $a_i = \psi_i$, $b_0^1 = \Psi_1$, $b_i^1 = -\psi_i \Psi_1$, $b_0^{k} =
\Psi_k$,
$b_i^{k} = -\psi_i \Psi_k$.
{\em Observe from eqn.~\ref{eq:ASAR} that $y(t)$ is linearly auto-regressed w.r.t three types (categories) of inputs: (a) its $p$ previous values ($y(t-1),y(t-2) \dots y(t-p)$) (b) values exactly a
period $\scr{S}$ (or an integral multiple of $\scr{S}$ lags) behind, up to $P$ cycles $\left(y(t-S), y(t-2S)\dots y(t-PS)\right)$ (c) $P$
groups of $p$ consecutive values, where $i^{th}$ such group is immediately
previous to $y(t-iS)$, where $i=1,2,\dots P$. For instance, $y(t-iS-1), y(t-iS-2),\dots y(t-iS-p)$ is the $i^{th}$ group of these $P$ groups}.
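For concreteness, consider the smallest instance $p=1$, $P=1$ with a weekly period $\scr{S}=7$. Expanding the product in eqn.~(\ref{eq:MSAR}) gives
\begin{equation*}
y(t) = \psi_1 y(t-1) + \Psi_1 y(t-7) - \psi_1\Psi_1 y(t-8) + e(t),
\end{equation*}
so that category (a) is $y(t-1)$, category (b) is $y(t-7)$, and category (c) is the single group $\{y(t-8)\}$ of size $Q_1 = p = 1$.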
{\em We generalize the above structure of the expanded SAR model as follows.} We first allow all coefficients in
eqn.~(\ref{eq:ASAR}) to be unconstrained. We further assume that the $P$ groups of consecutive values
(indicated in (c) above) need not be of the {\em same} size $p$. The equation below demonstrates this.
\begin{align}
y(t)= & { a_1y(t-1)+\dots+ a_py(t-p)} + { b_0^1y(t-\scr{S})}\nonumber \\
& {+b_1^1y(t-\scr{S}-1) +\dots+b_{Q_1}^1y(t-\scr{S}-Q_1)} + \cdots\,\cdots+ \nonumber \\ & {
b_0^{P}y(t-P\scr{S})+b_1^{P}y(t-P\scr{S}-1) +\dots+} \nonumber \\ &
{ b_{Q_{P}}^{P}y(t-P\scr{S}-Q_{P})} + e(t)
\label{eq:ASARGen}
\end{align}
Please note that $Q_{i}$ denotes the size of the $i^{th}$ such group. The RNN structure we propose here adopts a nonlinear auto-regressive version
of eqn.~(\ref{eq:ASARGen}) given as follows.
\begin{align}
y(t)= & { F\left(\dashuline{y(t-1),y(t-2),\dots\dots\dots\dots\dots, y(t-p)},\right.} \nonumber \\
& { \underbrace{y(t-S), y(t-2S)\dots\dots\dots\dots\dots y(t-PS)}, } \nonumber \\
& {\underline{y(t-\scr{S}-1),y(t-\scr{S}-2),\dots\dots\dots, y(t-\scr{S}-Q_1)}, } \nonumber \\
& {\underline{y(t-2\scr{S}-1),y(t-2\scr{S}-2),\dots, y(t-2\scr{S}-Q_2)},\dots } \nonumber \\
& {\left. \underline{y(t-P\scr{S}-1),y(t-P\scr{S}-2),\dots,y(t-P\scr{S}-Q_{P})}\right) }
\label{eq:ASNARGen}
\end{align}
Note that the $3$ categories above indicated by the $3$ different styles of underlining correspond to the three categories ((a),(b),(c)) described
earlier.
{\em In the
presence of an additional exogenous variable (process) $x(t)$, we additionally regress the endogenous variable
$y(t)$ w.r.t. $x(t)$ at the current time $t$ and all those previous time instants where $y(t)$ is exactly auto-regressed as per eqn.~(\ref{eq:ASARGen})
as follows.}
\begin{align}
y(t)=&{F\left(x(t),\dashuline{x(t-1),y(t-1),\dots\dots\dots, x(t-p),y(t-p)},\right.} \nonumber \\
& { \underbrace{x(t-S),y(t-S),\dots\dots\dots,x(t-PS),y(t-PS)}, } \nonumber \\
& {\underline{x(t-\scr{S}-1),y(t-\scr{S}-1),\dots,y(t-\scr{S}-Q_1)}, } \nonumber \\
& {\underline{x(t-2\scr{S}-1),y(t-2\scr{S}-1),\dots,y(t-2\scr{S}-Q_2)},\dots } \nonumber \\
& {\left. \underline{x(t-P\scr{S}-1),y(t-P\scr{S}-1),\dots,y(t-P\scr{S}-Q_{P})}\right) }
\label{eq:ASNARXGen}
\vspace{-0.10in}
\end{align}
The above model could be used for multi-step prediction by recursively computing single-step predictions sequentially. However, this
can accumulate errors and lead to poor performance.
To predict $y(t)$ and $y(t+1)$ simultaneously instead of $y(t)$ alone, the structure of the multiplicative SARX model (given
data till $t-1$) implies that we
potentially need {\em additional inputs} for prediction. Being a model with exogenous inputs, it certainly needs $x(t+1)$ additionally. Being
a seasonal model, it would need
additional inputs from lags which are one seasonal lag (or its multiples) behind $y(t+1)$. Specifically, it would need
$x(t+1-S),y(t+1-S),x(t+1-2S),y(t+1-2S),\dots\dots\dots,
x(t+1-PS),y(t+1-PS)$. Generalizing this to a $K+1$-step ahead one-shot predictive model, we obtain
a model or map $F(.)$ which predicts multi-step vector targets in presence of exogenous variables incorporating
stochastic seasonal
correlations.
\begin{align}
&\Big(y(t),y(t+1),y(t+2),\dots,y(t+K) \Big) = \nonumber \\
&{F\left(\dotuline{x(t),x(t+1),x(t+2),\dots\dots\dots\dots\dots,x(t+K),} \right.} \nonumber \\
&{\dashuline{x(t-1),y(t-1),x(t-2),\dots\dots\dots, x(t-p),y(t-p),}} \nonumber \\
& { \underbrace{x(t-S),y(t-S),\dots\dots\dots,x(t-PS),y(t-PS),} } \nonumber \\
& { \underbrace{x(t+1-S),y(t+1-S),\dots,y(t+1-PS),}\dots\dots, } \nonumber \\
& { \underbrace{x(t+K-S),y(t+K-S),\dots\dots,y(t+K-PS),} } \nonumber \\
& {\underline{x(t-\scr{S}-1),y(t-\scr{S}-1),\dots\dots,y(t-\scr{S}-Q_1),} } \nonumber \\
& {\underline{x(t-2\scr{S}-1),y(t-2\scr{S}-1),\dots\dots y(t-2\scr{S}-Q_2),}\dots\dots, } \nonumber \\
& {\left. \underline{x(t-P\scr{S}-1),y(t-P\scr{S}-1),\dots\dots,y(t-P\scr{S}-Q_{P})}\right) }
\label{eq:ASNARXMulti-StepGen}
\end{align}
For each additional component $y(t+i)$ in the target, there is an additional group of $2P$ inputs $x(t+i-S),y(t+i-S),\dots\dots\dots,
x(t+i-PS),y(t+i-PS)$ that needs to be added. Since $i$ varies from $1$ to $K$, we have $K$ such additional groups, each of size
$2P$. These additional groups of inputs can all be viewed as belonging to a generalized category (b), introduced
earlier. This is indicated by the additional groups of inputs introduced in
eqn.~(\ref{eq:ASNARXMulti-StepGen}) over eqn.~(\ref{eq:ASNARXGen}). All these additional $K$ groups are indicated using horizontal curly braces
used earlier for inputs of category (b). Note that an additional category of future exogenous inputs
$x(t),x(t+1),x(t+2),\dots\dots,x(t+K)$, grouped with a dotted underline, appears in eqn.~(\ref{eq:ASNARXMulti-StepGen}).
\subsection{\bf Encoder-Decoder RNN Architecture for Multi-step Seasonal NARX Model}
\label{sec:EDMult}
\begin{figure*}[htbp]
\center
\includegraphics[width=7.0in, height=6.0in]{./Figures/PaperSeasonalED.pdf}
\caption{Encoder-Decoder RNN based Seasonal Multi-step NARX Architecture}
\label{fig:SeasArch}
\vspace{-0.10in}
\end{figure*}
To predict accurately during multi-step prediction, we train with vector-valued targets, the vector size being equal to the prediction horizon. The classical Seq2Seq (ED) framework can be neatly adapted to the multi-step context, where the decoder is unfolded for as many steps as the prediction horizon length ($K+1$ steps).
Fig.~\ref{fig:SeasMS} gives an example sequence shown in four rows, where the sequence starts from the top left and ends on the bottom
right. It gives a pictorial view of the inputs utilized (up to $P$ cycles behind the current
time $t$) for prediction from time $t$ on-wards. Fig.~\ref{fig:SeasArch} describes
the associated proposed architecture with color coding of inputs and blocks matched with that of Fig.~\ref{fig:SeasMS}.
We implement eqn.~(\ref{eq:ASNARXMulti-StepGen}) in an intelligent and non-redundant fashion using a novel
ED architecture.
We rewrite eqn.~\ref{eq:ASNARXMulti-StepGen} by reorganizing its inputs as follows which aids us in clearly associating the inputs and outputs of the above multi-step
seasonal NARX model to the proposed ED architecture.
\begin{align}
&\Big(y(t),y(t+1),y(t+2),\dots,y(t+k) \Big) = \nonumber \\
&{F\left(\dashuline{x(t-1),y(t-1),x(t-2),\dots\dots\dots, x(t-p),y(t-p),} \right.} \nonumber \\
& {\underline{x(t-\scr{S}-1),y(t-\scr{S}-1),\dots,x(t-\scr{S}-Q_1),y(t-\scr{S}-Q_1),} } \nonumber \\
& {\underline{x(t-2\scr{S}-1),y(t-2\scr{S}-1),\dots\dots y(t-2\scr{S}-Q_2),}\dots\dots\dots } \nonumber \\
& {\underline{x(t-P\scr{S}-1),y(t-P\scr{S}-1),\dots,y(t-P\scr{S}-Q_{P})},} \nonumber \\
& { \underbrace{x(t),x(t-S),y(t-S),\dots\dots\dots,x(t-PS),y(t-PS),} } \nonumber \\
& {\underbrace{x(t+1),x(t+1-S),y(t+1-S),\dots,y(t+1-PS)},\dots,} \nonumber \\
& {\left. \underbrace{x(t+K),x(t+K-S),y(t+K-S),\dots,y(t+K-PS)} \right)}
\label{eq:ASNARXMulti-StepGen-ED} \vspace{-0mm}
\end{align}
The finer splits in inputs in Fig.~\ref{fig:SeasMS} are in sync with the input
rearrangement of eqn.~(\ref{eq:ASNARXMulti-StepGen-ED}).
The first group in the rearrangement of eqn.~(\ref{eq:ASNARXMulti-StepGen-ED}) comes from standard immediate consecutive lags of order $p$.
Note the lags shaded in blue and grouped as ``standard auto-regressive'' lags in Fig.~\ref{fig:SeasMS}. These inputs of category (a) are fed
as input to Encoder $0$ (GRU units shaded in blue). This is followed by $P$ groups of
seasonal lags, where the $i^{th}$ such group is a bunch of $Q_i$ consecutive lags starting exactly $iS$ lags behind current time $t$.
All time points in Fig.~\ref{fig:SeasMS} above the ``standard auto-regressive'' lags represent these $P$ groups. Each group
here is colored differently and is fed into a separate encoder (Fig.~\ref{fig:SeasArch}) with GRU units colored in sync with the color of the associated time points in
Fig.~\ref{fig:SeasMS}.
In essence, we propose to have multiple encoders depending on the order $P$ of the model. {\em This ensures that there is equal emphasis
from seasonal lags of all orders during model building.}
As observed, we also feed all the associated past exogenous inputs/observations at these various encoder time steps.
{\em Context vectors obtained at
the last time-step of each of these $P$ encoders are appended before feeding further as
initial state at the decoder's first time-step.} To ensure better learning, the appended context vector can also be additionally fed as an
input at each time step of the decoder.
{\em In addition, the ED framework can admit exogenous
inputs during the $(K+1)$-step forecast horizon
as additional inputs at the respective time-steps of the decoder.} Let us return to the rearrangement of inputs in
eqn.~(\ref{eq:ASNARXMulti-StepGen-ED}) from eqn.~(\ref{eq:ASNARXMulti-StepGen}). We observe that the future exogenous inputs
$x(t),x(t+1),\dots,x(t+K)$ were distinctly present as the first group of inputs in eqn.~(\ref{eq:ASNARXMulti-StepGen}). For each of these
future exogenous inputs in eqn.~(\ref{eq:ASNARXMulti-StepGen}), there exists a unique group of $2P$ past inputs/observations (endogenous
+ exogenous)
$x(t+k-S),y(t+k-S),\dots,x(t+k-PS),y(t+k-PS)$, underlined with horizontal curly braces in eqn.~(\ref{eq:ASNARXMulti-StepGen}). Eqn.~(\ref{eq:ASNARXMulti-StepGen-ED}) merges each
$x(t+k), k=0,1,\dots,K$ with its uniquely associated group as just described above. Therefore we have $K+1$ such groups of $2P+1$ inputs.
Each of these groups is indicated in Fig.~\ref{fig:SeasMS} with a yellow shaded vertical grouping enclosed in blue dotted rectangles. The associated inputs of each group is
color coded differently and the same color is carried forward in the decoder inputs of Fig.~\ref{fig:SeasArch}. Specifically, the $k^{th}$ such group of $2P+1$
inputs is fed as input to the $k^{th}$ time-step of the decoder (Fig.~\ref{fig:SeasArch}). The $K+1$ outputs in
eqn.~(\ref{eq:ASNARXMulti-StepGen}) are the targets at the $K+1$ time-steps of the decoder output.
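To make this input bookkeeping concrete, the following minimal Python sketch (our own illustration, not part of any training pipeline) assembles the encoder and decoder input groups of eqn.~(\ref{eq:ASNARXMulti-StepGen-ED}) for one training window:
\begin{verbatim}
import numpy as np

def seasonal_narx_inputs(y, x, t, K, p, S, P, Q):
    # enc0: category (a); enc_seasonal: P groups of category (c);
    # dec: K+1 per-step groups [x(t+k), (x,y) pairs i*S behind];
    # assumes t - P*S - max(Q) >= 0 and t + K < len(y).
    enc0 = [(x[t - j], y[t - j]) for j in range(1, p + 1)]
    enc_seasonal = [[(x[t - i*S - j], y[t - i*S - j])
                     for j in range(1, Q[i - 1] + 1)]
                    for i in range(1, P + 1)]
    dec = [[x[t + k]] + [v for i in range(1, P + 1)
                         for v in (x[t + k - i*S], y[t + k - i*S])]
           for k in range(K + 1)]
    return enc0, enc_seasonal, dec, y[t : t + K + 1]

# Example: daily data, weekly period S=7, p=3, P=2, Q=[2,2], 4 targets
y = np.arange(100.0); x = 0.1 * np.arange(100.0)
enc0, encs, dec, tgt = seasonal_narx_inputs(y, x, 50, 3, 3, 7, 2, [2, 2])
print(len(enc0), [len(g) for g in encs], len(dec), len(dec[0]))
# 3 [2, 2] 4 5
\end{verbatim}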
\comment{
{\em Overall the seasonal lag inputs
are intelligently split
between encoder and decoder inputs avoiding any redundant inputs.} Lags that influence all instants in the prediction horizon are fed via the encoders, while lags which are exactly a cycle (or its integral multiple) behind a point in the predictive horizon are fed at the decoder in a synchronous fashion. For more details around this, refer to appendix.
}
\subsubsection{\bf Seasonal lag distribution between encoder \& decoder}
Let us start with the $P=1$ case. To predict $y(t)$, recall that the one-step
endogenous model eqn.~(\ref{eq:ASARGen}) needs as input $y(t-S)$ from exactly one cycle behind and
the $Q_1$ lags preceding it. A $(K+1)$-step ahead predictive model (eqn.~\ref{eq:ASNARXMulti-StepGen-ED} without exogenous inputs) from $y(t)$ to $y(t+K)$ would depend on data from $y(t-S)$ to $y(t-S+K)$ (exactly one cycle behind the prediction instants)
and
the $Q_1$ lags preceding $y(t-S)$. {\em Hence, note
that this block of $Q_1$ lags is invariant to the multi-step prediction horizon length. This is why this block of lags is fed to a separate encoder. Its immediately succeeding
lags are fed at the decoder end as they can be exactly matched (OR synchronized) with one of the multi-step targets exactly one period ahead.} This is why we
see data from time points $y(t-S)$ to $y(t-S+K)$ fed as decoder inputs from the first to the $(K+1)^{th}$ time-step respectively.
For $P>1$, this kind of synchronized inputs at the
decoder can additionally come from lags exactly $2\scr{S},\dots P\scr{S}$ time points behind. This means depending on $P$, each time step of the decoder receives
input from $P$ time points each of which are exactly $i\scr{S}$ steps behind, where $i=1,\dots,P$.
\begin{remark}
{\em Overall the seasonal lag inputs
are intelligently split
between the encoder and decoder inputs avoiding any redundant inputs. Lags that influence all instants in the prediction horizon are fed via the encoders, while lags which are exactly a cycle (or its integral multiple) behind a point in the predictive horizon are fed at the decoder in a synchronous fashion.}
{\em Even in the absence of exogenous
inputs, the ED architecture proposed still holds with multiple encoders and the decoder inputs coming from the synchronized past
observations of the endogenous variable.}
\end{remark}
\begin{remark}
{\em We adopt a moving window approach in this paper and form a
training example from every possible window in the time series.} An example input-output window is best illustrated in Fig.~\ref{fig:SeasMS}.
\end{remark}
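As an illustration of the architecture, a minimal PyTorch sketch is given below; it follows the multi-encoder scheme of Fig.~\ref{fig:SeasArch} under our own naming conventions and is not the exact implementation used for the experiments:
\begin{verbatim}
import torch
import torch.nn as nn

class SeasonalED(nn.Module):
    # One encoder for the standard lags plus P seasonal-lag encoders;
    # their final GRU states are concatenated into the decoder's
    # initial state. Each decoder step consumes a (1 + 2P)-dim input
    # [x(t+k), (x,y) pairs from exactly i*S behind, i = 1..P].
    def __init__(self, P, hidden=32):
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.GRU(2, hidden, batch_first=True) for _ in range(P + 1)])
        self.decoder = nn.GRU(1 + 2*P, (P + 1)*hidden, batch_first=True)
        self.out = nn.Linear((P + 1)*hidden, 1)

    def forward(self, enc_inputs, dec_inputs):
        # enc_inputs: list of P+1 tensors (B, L_i, 2)
        # dec_inputs: (B, K+1, 1 + 2P)
        states = [enc(seq)[1][-1] for enc, seq in
                  zip(self.encoders, enc_inputs)]
        h0 = torch.cat(states, dim=-1).unsqueeze(0)
        dec_out, _ = self.decoder(dec_inputs, h0)
        return self.out(dec_out).squeeze(-1)        # (B, K+1)

# Example: P = 2 seasonal encoders, p = 6 standard lags, K+1 = 4 targets
B, P, K = 8, 2, 3
model = SeasonalED(P)
enc_in = [torch.randn(B, 6, 2), torch.randn(B, 4, 2), torch.randn(B, 4, 2)]
dec_in = torch.randn(B, K + 1, 1 + 2*P)
print(model(enc_in, dec_in).shape)                  # torch.Size([8, 4])
\end{verbatim}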
\section{\bf Training and prediction on multiple sequences}
\label{sec:GreedyRec}
\begin{algorithm}[!t]
\caption{Build one or more background models to cover all sequences.}
\label{algo:BM}
\linesnumbered
\KwIn{Set of sequences $\scr{T}$.}
\KwOut{a set of models $\scr{M}$ which cover all sequences.}
Perform a sequence-specific scaling (min-max normalization) of each sequence (time-series) $T$ (both the $Y$ and $X$ components of $T$) in $\scr{T}$. \\
Initialize $\scr{G}\leftarrow\scr{T}$. \\
{\em Model-Recursive-fn($\scr{G}$)}\;
\end{algorithm}
\begin{algorithm}[!t]
\caption{{\em Model-Recursive-fn($\scr{G}$)}}
\label{algo:RecFunc}
\linesnumbered
\KwIn{$\scr{G}$ (subset of sequences from $\scr{T}$)}
Form training examples from all the sequences in $\scr{G}$\;
Build one RNN model $M$ using the above examples, which can predict for all $T \in \scr{G}$. ($M$ predicts for all sequences in the
normalized domain). \;
Apply inverse sequence specific scaling for each sequence to evaluate sequence specific prediction errors\;
Form $\scr{G}_1 := \{T \in \scr{G} : e(T)\leq E_{th}\}$\;
\If{$\scr{G}_1 \neq \phi$}{
Add $(M,\scr{G}_1)$ to $\scr{M}$\;
Form $\scr{G}_2 \ensuremath{\leftarrow} \scr{G}\setminus\scr{G}_1$\;
\If{$\scr{G}_2 == \phi$}{
return\;
}
\Else{
{\em Model-Recursive-fn($\scr{G}_2$)}\;
}
}
\Else{
We build sequence-specific models for all sequences in $\scr{G}$\;
Add all these models to $\scr{M}$\;
return\;
}
\end{algorithm}
While the proposed architecture can be used for the single time-series case, how to use it for multiple time-series prediction is not immediately clear. To be
able to adapt our architecture for prediction across multiple {\em short-length} time-series,
we present a novel
greedy recursive algorithm. The overall idea of this algorithm is that there exist one or at most a handful of models which explain all the sequences
in a scaled (or normalized) domain. This can be employed in situations where building RNN models per sequence is infeasible due to data scarcity at a sequence
level. It could be very useful in situations where the variation in the exogenous variable is also small for every sequence.
Alg.~\ref{algo:BM} together with Alg.~\ref{algo:RecFunc} explains the overall procedure. Line $1$ performs a sequence-specific
scaling of both the $Y$ (endogenous) and $X$ (exogenous) components as an attempt to build common models in the normalized domain. The set of scaled sequences is then fed
to the recursive function {\em Model-Recursive-fn(.)} (given in Alg.~\ref{algo:RecFunc}). This function builds a model using all sequences in $\scr{G}$ which can predict for
all sequences $T$ in $\scr{G}$ (line $2$). We evaluate the model performance sequence-wise for every $T\in \scr{G}$ (based on a separate validation set). The error metrics could be MASE or MAPE for
instance (refer to the next section for details). We keep aside into $\scr{G}_1$ all sequences $T\in \scr{G}$ on which the model $M$'s (validation) error $e(T)$ is below a user-defined threshold $E_{th}$ (line $4$).
If $\scr{G}_1$ is empty (line $12$), then we build sequence-specific models using linear time series OR shallow RNN models (lines $13 - 15$) and
return. If $\scr{G}_1$ is non-empty (line $5$), we first
add the current model $M$ and the current set of sequences $\scr{G}_1$ to $\scr{M}$ (line $6$). If its complement in $\scr{G}$, namely $\scr{G}_2$ (line $7$), is non-empty (line $10$), we attempt to build an additional model greedily on $\scr{G}_2$ (the set of sequences on which the
current model $M$ performs poorly), as per line $11$. If $\scr{G}_2$ is empty, we return (line $9$).
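A compact Python sketch of {\em Model-Recursive-fn(.)} follows; it is our own illustration, where build\_model, eval\_error and build\_seq\_model stand for the RNN training, validation-error evaluation and per-sequence fallback routines respectively:
\begin{verbatim}
def model_recursive_fn(G, build_model, eval_error,
                       build_seq_model, E_th, M):
    # Fit one model on all sequences in G, retain it for the
    # sequences it predicts well (validation error <= E_th), and
    # recurse on the remainder; if no sequence is covered, fall
    # back to per-sequence models.
    model = build_model(G)
    G1 = [T for T in G if eval_error(model, T) <= E_th]
    if not G1:
        M.extend((build_seq_model(T), [T]) for T in G)
        return
    M.append((model, G1))
    kept = {id(T) for T in G1}
    G2 = [T for T in G if id(T) not in kept]
    if G2:
        model_recursive_fn(G2, build_model, eval_error,
                           build_seq_model, E_th, M)
\end{verbatim}
As in Alg.~\ref{algo:BM}, the sequences are min-max normalized before the first call, and predictions are inverse-scaled per sequence inside eval\_error.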
\section{\bf Results}
\label{sec:Results}
We first describe the data sets used for testing, followed by the error metrics and hyper-parameters used for evaluation, and performance results in comparison to some strong baselines.
\subsection{\bf Data Sets}
\begin{itemize}
\item {\bf D1:} This is demand data from National Electricity Market of Australia. Australian geography is split into $5$ disjoint regions, which means
we have five power demand time-series, along with an aggregate temperature time-series (exogenous input) for each of these regions. This is a single-sequence data-set consisting of $5$ independent time-series. D1 is $3$ months of summer data (Dec to Feb) from these $5$ regions. The granularity of the time-series here is half-hourly. The last $2$ weeks were used for testing.
\item{\bf D2:} M5 data-set\footnote{https://www.kaggle.com/c/m5-forecasting-accuracy/data} is a publicly available
data-set from Walmart and contains the unit sales of different products on a daily basis spanning 5.4 years. The data is distributed across $12$ different aggregation levels. Amongst these levels, aggregation level $8$ contains unit sales of all products, aggregated for each store and category. Price is used as the exogenous input here. {\em This level contains $30$ sequences. Based on the PACF value at the $365^{th}$ lag (S=365), we choose the top $3$ sequences, which we refer to as D2.} We focus on sequences which have significant PACF-based
evidence of stochastic seasonal correlations.
\item {\bf D3:} This is publicly available from Walmart\footnote{https://www.kaggle.com/c/walmart-recruiting-store-sales-forecasting/data}. The measurements are weekly sales at the department level for multiple departments across $45$ Walmart
stores. In addition to sales, there are other related measurements like CPI (consumer price index), mark-down price, etc., which we use as
exogenous variables for weekly sales prediction. The data is collected across $3$ years, and it is a multiple time-series data-set. The whole data set consists of $2628$
sequences. For the purposes of this paper, {\em we ranked sequences based on the total variation of the sales and considered the top $20\%$ of
the sequences (denoted as D3) for testing} (see the sketch after this list). We essentially pick the hardest sequences, which exhibit sufficient variation. The total
variation of a length-$T$ sequence $x$ is
defined as
\vspace{-0.1in}
\begin{equation}
TV = \sum_{i=2}^{T} |x(i) - x(i-1)|
\end{equation}
\item{\bf D4:} Data at level $10$ of the M5 data-set (described above) contains unit sales product-wise, aggregated over all stores/states. This level contains a total of 3049 sequences, as there are that many products. In order to simulate a situation where per-sequence data is relatively scarce, we consider only the last $3$ years of data at a weekly resolution (instead of daily), aggregating sale units across each week. A single price is the exogenous variable here.
Further, we ranked the 3049 sequences based on total variation and considered the top 20\% of these sequences (similar to D3), which we refer to as D4 (containing 609 sequences in total). D4, the product-level data (level 10 of M5), has characteristics
different from D3 (Walmart), which contains department-level sales (aggregated across products).
\end{itemize}
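The sequence-selection steps used for D2--D4 above are simple to reproduce. The following sketch (hypothetical helper code, not from our pipeline) computes the PACF score at the seasonal lag used for D2 and the total-variation ranking used for D3 and D4:
\begin{verbatim}
import numpy as np
from statsmodels.tsa.stattools import pacf

def seasonal_pacf_score(x, lag=365):
    # PACF value at the seasonal lag (lag=365 for daily M5 data)
    return pacf(x, nlags=lag)[lag]

def total_variation(x):
    # TV = sum_i |x(i) - x(i-1)|
    return np.abs(np.diff(x)).sum()

def top_fraction_by_tv(sequences, frac=0.2):
    # rank sequences by TV and keep the hardest top 20%
    ranked = sorted(sequences, key=total_variation, reverse=True)
    return ranked[:max(1, int(frac * len(ranked)))]
\end{verbatim}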
\subsection{\bf Error metrics and Hyper-parameter choices}
We consider the following two error metrics: (1) {\bf MAPE} (Mean Absolute Percentage Error) and (2) {\bf MASE} (Mean Absolute Scaled Error \cite{Hyndman06}).
\comment{
\begin{itemize}
\item {\bf MAPE} (Mean Absolute Percentage Error)
\item {\bf MASE} (Mean Absolute Scale Error\cite{Hyndman06})
\end{itemize}
}
The APE
is essentially the absolute value of the relative error (RE) expressed as a percentage. If $\widehat{X}$ is the predicted value and $X$ is the true value, then RE = $(\widehat{X} - X)/X$. In
the current multi-step setting, the APE is computed for each step and averaged over all steps to obtain the MAPE for one window of the prediction horizon.
While the APE has the advantage of being a scale-independent metric, it can assume abnormally high values and can be misleading when the true value is very low. The
MASE is a complementary error metric which is also scale-free.
The MASE is computed with reference to a baseline predictor, typically
the copy-previous predictor, which replicates the previously observed value as the prediction for the next step.
For a given window of one prediction horizon of $K$ steps ahead,
let us denote the $i^{th}$ step error by $|\widehat{X}_i - X_i|$. The $i^{th}$ scaled error is
defined as
\vspace{-0.1in}
\begin{equation}
e_s^i = \frac{|\widehat{X}_i - X_i|} { \frac{1}{n-K}\sum_{j=K+1}^{n}|X_j - X_{j-K}|}
\end{equation}
where $n$ is the number of points in the training set. The normalizing factor is essentially the average $K$-step-ahead error of the copy-previous baseline
on the training set. Hence the MASE on a multi-step prediction window $w$ of size $K$ is
\vspace{-0.1in}
\begin{equation}
MASE(w,K) = \frac{1}{K}\sum_{j=1}^{K} e_s^j
\end{equation}
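Both metrics are straightforward to compute. The sketch below (illustrative only) evaluates MAPE and MASE for one prediction window, with the MASE denominator taken as the average $K$-step-ahead copy-previous error on the training series, as in the equations above:
\begin{verbatim}
import numpy as np

def mape(y_true, y_pred):
    return 100.0 * np.mean(np.abs((y_pred - y_true) / y_true))

def mase(y_true, y_pred, y_train, K):
    # denominator: mean K-step-ahead error of the copy-previous
    # baseline on the training set, (1/(n-K)) * sum |X_j - X_{j-K}|
    denom = np.mean(np.abs(y_train[K:] - y_train[:-K]))
    return np.mean(np.abs(y_pred - y_true)) / denom
\end{verbatim}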
\subsubsection{\bf Hyper-parameters}
Tab.~\ref{tab:HP} describes the broad choice of hyper-parameters during training in our experiments.
\begin{table}[!htbp]
\caption{Model hyper-parameters used during training.}
\label{tab:HP}
\centering
\begin{tabular}{|c |c| }
\hline
{\bf Parameters} & {\bf Description} \\ \hline
Batch size & 256/64 \\
Learning Rate & 0.002 \\
No. of Epochs & 40/70 \\
Number of Hidden layers & 1/2 \\
Hidden vector dimensionality & 7/17 \\
Optimizer & RMSProp \\ \hline
\end{tabular}
\end{table}
\subsection{\bf Baselines}
\label{sec:SA}
We denote our seasonal approach compactly by SEDX. We focus on benchmarking against traditional time-series and state-of-the-art RNN approaches.
We did not consider non-RNN deep-learning approaches like N-BEATS or Temporal Convolutional Networks, whose underlying
architectures are quite different.
The baselines we benchmark our method against are as follows:
\begin{enumerate}
\item SARX - Seasonal AR with exogenous inputs (a strong, standard linear time-series baseline). We stick to an AR model here, as a sufficiently long AR model can approximate
any ARMA model in general. For D1 and D2, the AR orders are also determined from the PACF. Given the
half-hourly granularity of the data, we choose $S = 48$ to capture daily seasonality in D1. In D2, which is daily
data, we choose $S=365$
to capture yearly seasonality. For D3 and D4, we read off the sequence-specific orders $p$ from the respective PACFs, with
$S=52$ to capture yearly seasonality. For D3 and D4, we choose
$P=1$ (eqn. \ref{eq:MSAR}), not only for SARX but for all the other non-linear baselines as well, as we have only about 3 years of data per sequence.
\item BEDX - Basic Encoder Decoder (with only one encoder capturing the immediate lags), while the exogenous inputs of the prediction instants are fed as
inputs to the decoder as considered in \cite{Wen17}. It is a simplified SEDX with all structures and inputs from the seasonal lags excluded.
\item DeepAR \cite{Flunkert17} - explained earlier in Sec. \ref{sec:SeasEDSurvey}. We use a squared error loss for training which is equivalent to a Gaussian conditional log-likelihood of the targets
(with a constant variance). The input window length is chosen consistent with that of SEDX.
\item LSTNet \cite{Lai18} - explained earlier in Sec. \ref{sec:SeasEDSurvey}. Other parameters include history length = 60, number of filters = 16, AR window size = 14, skip length = 52, and dropout rate = 0.3.
\item PSATT \cite{Yagmur17} - explained earlier in Sec. \ref{sec:SeasEDSurvey}. We only consider the first variant of position-based attention, where each state component is weighted
the same. The second variant, involving state-component-dependent weights, can lead to too many extra parameters. Its multivariate extension does not scale well to large
dimensions due to the increased number of parameters to be learnt. In particular, data-sets D3 and D4 have a few hundred sequences (hence a high input dimension), while
the per-sequence length is limited. Hence we do not consider PSATT for benchmarking on D3 and D4. For D1 $\&$ D2, the input window length is chosen to accommodate up to $PS$ lags.
\end{enumerate}
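As an aside, a SARX-type baseline of this form can be fit with standard tooling. The paper's own implementation is not shown here; the following is a minimal statsmodels sketch on synthetic weekly data, with all numbers purely illustrative:
\begin{verbatim}
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
n, S, p = 156, 52, 4                     # ~3 years of weekly data
X = rng.normal(size=(n, 1))              # one exogenous regressor
y = (10 + 2 * np.sin(2 * np.pi * np.arange(n) / S)
     + 0.5 * X[:, 0] + rng.normal(0, 0.3, n))

# AR order p from the PACF, seasonal order P=1 at period S
model = SARIMAX(y, exog=X, order=(p, 0, 0),
                seasonal_order=(1, 0, 0, S))
res = model.fit(disp=False)
X_future = rng.normal(size=(10, 1))
forecast = res.forecast(steps=10, exog=X_future)  # K=10 steps ahead
\end{verbatim}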
In both D1 and D2, we did not observe any evidence of integrating-type non-stationarity, for instance in the ACF plots. Typically, a very slow decay in the ACF is indicative of
possible integrating-type non-stationarity in the data. Similarly, a slow decay at the seasonal lags indicates a need for seasonal differencing to cancel random-walk
non-stationarity.
\subsubsection{\bf Assessing significance of mean error differences statistically}
We assess the significance of mean differences in both metrics (MASE, MAPE)
between SEDX and each baseline, across all relevant experiments,
using a Welch t-test (unequal variances) with a significance level of 0.05 for
null-hypothesis rejection. In all subsequent tables, the best-performing method is indicated by highlighting in bold
the best (lowest) MASE/MAPE. Our significance test strengthens this visualization by additionally
highlighting any other methods whose mean error difference from the best method (which need not always be SEDX)
is statistically insignificant. Significance of mean differences is tested at the (finest) test-example level.
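Concretely, with per-test-example error arrays for two methods, the test reduces to a single SciPy call (synthetic numbers shown for illustration):
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
errs_sedx = rng.normal(0.55, 0.15, size=200)   # illustrative errors
errs_base = rng.normal(0.62, 0.20, size=200)

# Welch's two-sided t-test (unequal variances)
t, p = stats.ttest_ind(errs_sedx, errs_base, equal_var=False)
reject_h0 = p < 0.05   # means differ significantly at the 5% level
\end{verbatim}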
\subsection{\bf Results on Single-Sequence Data-sets (D1,D2)}
\subsubsection{\bf Results on D1}
For D1, the multi-step horizon was chosen to be $48$ so as to predict a day ahead (each time point is a half-hourly demand). {\em There was
evidence of seasonal correlations in the ACF in terms of local maxima at lags $48$ and $96$, which prompted us to choose $S=48$ and seasonal order
$P=2$.} To choose the length of the associated encoders, we look at the significant PACF values just behind the $48^{th}$ and $96^{th}$
lags. Tab.~\ref{tab:NEMSeas} reports the error metrics in comparison to the $4$ feasible baselines (LSTNet is not applicable in the
single-sequence case). Our results demonstrate superior performance of SEDX, with improvements of up to $1.94$ in MASE and $34\%$ in MAPE.
In particular, SEDX outperforms SARX.
Our significance analysis reveals that SEDX's MASE/MAPE reduction
compared to BEDX (in spite of visually close MAPEs) is statistically
significant for R1 to R4.
\begin{table}[!htbp]
\vspace{-0.15in}
\caption{(MASE, MAPE) across five regions in Australia}
\label{tab:NEMSeas}
\centering
\begin{tabular}{|c ||c|c|c|c|c|c| }
\hline
Region & SEDX & SARX & BEDX &DeepAR &PSATT \\ \hline
R1 &\bf{(0.38,6)} &(0.86,15) &(0.58,8) &(1.32,16) &(0.98,14) \\
R2 &\bf{(0.37,4)} &(0.41,5) &(0.46,5) &(1.18,18) & (1.09,11)\\
R3 &\bf{(0.64,4)} &(1.00,8) &(0.69,5) &(1.32,9) & (0.87,5.85) \\
R4 &\bf{(0.71,10)} &(1.58,24) &(0.73,11) &(1.48,18) &(1.11,15.22) \\
R5 &(0.58,11) &(1.10,21) &\bf{(0.55,10)} &(2.52,45) &(0.91,15.97) \\ \hline
\end{tabular}
\vspace{-0.00in}
\end{table}
\subsubsection{\bf Results on D2}
\label{resultsd3}
For D2, the prediction horizon was set to be $28$ days ($K=28$). A test size of $33$ days (time-points) was set aside for each sequence in D2. This means we tested for
$6$ windows of width $28$ on the $33$ day test set per sequence.
{\em Here we choose the seasonal order $P = 1$ with $S=365$ (yearly seasonality is exploited) by analyzing ACF values. }
Tab.~\ref{tab:NEMSeas1} gives a detailed comparison with all $4$ (single-sequence) strong baselines in terms of both error metrics. Our results demonstrate that
SEDX outperforms all $4$ baselines on all $3$ sequences, except for BEDX doing equally well (in a statistical sense) on seq 3. In particular,
we observe improvements of up to $1.09$ in MASE and $17.4\%$ in MAPE in favor of our method.
\begin{table}[!htbp]
\caption{(MASE,MAPE) across $3$ sequences in D2}
\label{tab:NEMSeas1}
\centering
\begin{tabular}{|c ||c|c|c|c|c| }
\hline
Seq & SEDX & BEDX & DeepAR & SARX & PSATT \\ \hline
\comment{
1 & \bf{(0.38,5 )} & (0.56,7 ) &(0.49,8 ) &(1.90,45 ) \\
2 &(0.90,10 ) & \bf{(0.87,9 )} &(1.03,13 ) &(1.92,34 ) \\
3 &(0.90,14 ) & (0.98,14 ) &\bf{(0.85,13 )} &(1.40,24) \\
4 &\bf{(0.70,12 )} & (1.23,19 ) &(0.73,13 ) &(1.03,20 ) \\
5 &\bf{(0.52,9 )} & (0.64,10 ) &(0.53,9 ) &(1.18,23 ) \\
6 &\bf{(0.64,8 )} & (1.80,23 ) &(0.74,10 ) &(0.86,14 ) \\
7 &\bf{(0.41,8 )} & (0.81,15 ) &(0.43,8 ) &(0.76,15 ) \\
8 &(0.48,7 ) & (0.55,7 ) &\bf{(0.43,6 )} &(2.41,42 ) \\
9 &\bf{(0.43,7 )} & (1.18,15 ) &(0.49,8 ) &(1.61,40 ) \\
10 &\bf{(0.52,8 )} & (1.17,17 ) &(0.53,9 ) &(1.11,16 ) \\ \hline
4 &\bf{(0.60,8.7 )} & (0.77,11.22 ) &(0.99,15.6 ) &\bf{(0.64,7.8)} &(0.78,11.93)\\
5 &\bf{(0.39,8.6 )} & \bf{(0.4,8.3 )} &(0.93,19.7) &(0.70,13.76) &(1.24,24.4) \\ \hline
}
1 & \bf{(0.62,10)} & (0.77,11.8) &(1.39,22.2) &(1.15,17.8) &(1.71,27.4) \\
2 &\bf{(0.69,9.9)} & (0.82,11.8) &(1.11,16.2) &(0.88,12.2) &(0.99,13.9) \\
3 &\bf{(0.69,11.1)} & \bf{(0.76,11.7)} &(1.52,23.2) &(0.91,14) &(0.95,14.1)\\ \hline
\end{tabular}
\end{table}
\subsection{\bf Results on Multiple-Sequence Data-sets (D3,D4)}
\subsubsection{\bf Results on D3}
We first demonstrate the effectiveness of SEDX on D3. A test size of $15$ weeks (time-points) was set aside for each sequence in D3. We choose $K=10$ time-steps in the decoder for
training, which means we tested on $6$ windows of width $10$ on the $15$-week test set per sequence. For both D3 and D4, we append all sequences (normalized using
sequence-specific normalization) into one long sequence and look at its partial auto-correlation function (PACF). This enables us to fix the number of time-steps in the encoders
corresponding to the seasonal correlations, which is typically much smaller than the number of time-steps in encoder $1$ (which captures
standard lags). On D3, with a MASE-based threshold ($E_{th}$) of 0.3, {\em Model-Recursive-fn()} ran for $2$ rounds.
\comment{
\begin{table}[!htbp]
\caption{Percentage of sequences where SEDX does better.}
\label{tab:FracSl}
\centering
\begin{tabular}{|c ||c|c| }
\hline
Baseline &MASE based & MAPE based \\ \hline
BEDX &79 & 79 \\
BED & 78 & 79 \\
MTO & 80 & 80 \\
SARMAX & 62 & 61 \\ \hline
\end{tabular}
\end{table}
}
\begin{figure}[!htbp]
\center
\includegraphics[width=3.2in,height=1.5in]{./Figures/PercentSeqSl.pdf}
\caption{Percentage of Sequences where SEDX does better}
\label{fig:FracSl}
\vspace{-0.1in}
\end{figure}
\begin{table}[!htbp]
\vspace{-0.00in}
\caption{Max, Avg \& Min of MASE \& MAPE across all sequences}
\label{tab:MaxAvgMinSeas}
\footnotesize
\centering
\begin{tabular}{|c ||c|c|c|c|c|c| }
\hline
& \multicolumn{3}{|c|}{MASE based} & \multicolumn{3}{|c|}{MAPE based in \%}\\ \cline{2-7}
Method & Max & Avg & Min & Max & Avg & Min \\ \hline
SEDX &2.75 & {\bf0.54} &0.16 & 76 &{\bf13} & 2 \\
SARX &2.38 &0.56 &0.11 &69 &14 &2 \\
BEDX &2.45 &0.65 &0.13 &265 &19 &2 \\
DeepAR &7.58 &1.32 &0.27 &278 &28 &3 \\
LSTNet &5.48 & 1.39 &0.24 &156 &32 & 3 \\
\hline
\end{tabular}
\vspace{-0.000in}
\end{table}
Fig.~\ref{fig:FracSl} gives a detailed breakup of the percentage of sequences on which SEDX did better compared to the $4$ baselines. It demonstrates that {\em SEDX does better on at least
$50\%$ and up to $80\%$ of the sequences compared to all considered baselines.}
Tab.~\ref{tab:MaxAvgMinSeas} gives the average, max and min across sequences (of MASE and MAPE) for all methods. It demonstrates that on average SEDX does better than all
baselines based on both complementary metrics. {\em MASE improvements are up to $0.85$ while the MAPE improvements are up to $19\%$.}
Min and max are
provided in Tab.~\ref{tab:MaxAvgMinSeas} (and Tab.~\ref{tab:MaxAvgMinSeasm5mase3new}) to honestly gauge the limits of the error spread
across sequences in D3 and D4; viewing them as metrics in isolation can be
misleading.
On D3, SEDX's performance might appear similar to
SARX. But note from Fig.~\ref{fig:FracSl} and Tab.~\ref{tab:MaxAvgMinSeas} that SEDX
actually complements SARX on the $526$ sequences of D3:
SEDX does better than SARX on $50\%$ of the sequences,
giving a statistically significant improvement of $0.2$ (MASE)
and $6\%$ (MAPE) on the sequences where SEDX does better (Tabs.~\ref{tab:CondAvgMASESeas}, \ref{tab:CondAvgMAPESeas}).
Tab.~\ref{tab:CondAvgMASESeas} looks at the (conditional) average MASE under two conditions with respect to each baseline: (i) the average over those sequences on
which SEDX fares better, and (ii) the average over those sequences on which the baseline does better. At this level, MASE improvements of at least $0.20$ and up to $0.93$ are observed.
Tab.~\ref{tab:CondAvgMAPESeas} considers a similar (conditional) average MAPE, with improvements of at least $6\%$ and up to $20\%$.
\begin{table}[!htbp]
\vspace{-0.0in}
\caption{Avg MASE when (i)SEDX is better (ii)Baseline is better.}
\label{tab:CondAvgMASESeas}
\footnotesize
\centering
\begin{tabular}{|c ||c|c|c|c|c|c| }
\hline
& \multicolumn{3}{|c|}{SEDX better} & \multicolumn{3}{|c|}{Baseline better}\\ \cline{2-7}
Method & SEDX & BLine & Diff & SEDX & BLine & Diff \\ \hline
SARX &0.47 &0.67 &{\bf0.20} &0.61 &0.46 &0.15 \\
BEDX &0.49 &0.79 &{\bf0.30} &0.62 &0.44 &0.18 \\
DeepAR &0.50 &1.40 &{\bf0.90} &0.86 &0.65 &0.21 \\
LSTNet &0.51 &1.44 &{\bf0.93} &0.98 &0.70 &0.28 \\
\hline
\end{tabular}
\vspace{-0.00in}
\end{table}
\begin{table}[!htbp]
\caption{Avg MAPE when (i)SEDX is better (ii)Baseline is better.}
\label{tab:CondAvgMAPESeas}
\footnotesize
\centering
\begin{tabular}{|c ||c|c|c|c|c|c| }
\hline
& \multicolumn{3}{|c|}{SEDX better} & \multicolumn{3}{|c|}{Baseline better}\\ \cline{2-7}
Method & SEDX & BLine & Diff & SEDX & BLine & Diff \\ \hline
SARX &10 &16 &{\bf6} &18 &13 &5 \\
BEDX &11 &24 &{\bf13} &18 &12 &6 \\
DeepAR &13 &32 &{\bf19} &17 &12 &5 \\
LSTNet &14 &34 &{\bf20} &18 &14 &4 \\
\hline
\end{tabular}
\vspace{-0.00in}
\end{table}
\subsubsection{\bf Results on D4}
\label{resultsd4}
For D4, where the data is at a weekly granularity, a prediction horizon of $8$ weeks ($K = 8$) was chosen for training, while the chosen test size was $11$ weeks for each sequence. This means that for each sequence there are $4$ (input-output) windows of output-window width $8$ on which we test.
We chose a yearly seasonality here (with $S=52$), similar to D3. On D4, with a MASE-based threshold ($E_{th}$) of 0.3, {\em
Model-Recursive-fn()} ran for $1$ round.
In this experiment, we observe a considerable fraction of sequences on which each of the $5$ methods (proposed + $4$ baselines) exhibits either a high MASE or a high MAPE. To quantify what is unacceptably high, we consider a MASE threshold of $1$ (beyond which a naive copy-previous predictor would be better) and a MAPE threshold of $30\%$. For these thresholds, we find $143$ sequences on which every method has either MASE $>1$ or MAPE $>30\%$. We have excluded such sequences because neither our method nor any of the considered baselines performs within acceptable limits on them. For such sequences, one could explore simpler baselines like the copy-previous predictor or an ARX model (without seasonality). We report results on the remaining $466$ sequences where at least one of the $5$ methods (models) has both MASE $<1$ and MAPE $<30\%$.
\comment{
\begin{table}[!htbp]
\caption{Percentage of sequences where SEDX does better}
\label{tab:m5mape3new}
\centering
\begin{tabular}{|c |c| c|}
\hline
{\bf Models} & {\bf MASE} &{\bf MAPE} \\ \hline
SARX &59 &56 \\
BEDX &62&63 \\
DeepAR &69 &72 \\
LSTNet &97 &97\\ \hline
\end{tabular}
\end{table}
}
\begin{figure}[!htbp]
\center
\includegraphics[width=3.2in,height=1.5in]{./Figures/PercentSeqSlD4.pdf}
\caption{Percentage of Sequences when SEDX does better(pictorially)}
\label{fig:FracSld4}
\vspace{-0.1in}
\end{figure}
Fig.~\ref{fig:FracSld4} shows the percentage of (the 466) sequences on which SEDX is better than each of the four baselines in terms of both metrics. It shows that SEDX does better on at least 59\% (56\%) of the sequences and up to 97\% (97\%) in terms of MASE (MAPE), compared to all baselines.
\begin{table}[!htbp]
\vspace{-0.00in}
\caption{Max, Avg and Min of MASE and MAPE across all sequences}
\label{tab:MaxAvgMinSeasm5mase3new}
\footnotesize
\centering
\begin{tabular}{|c ||c|c|c|c|c|c| }
\hline
& \multicolumn{3}{|c|}{MASE based} & \multicolumn{3}{|c|}{MAPE based in \%}\\ \cline{2-7}
Method & Max & Avg & Min & Max & Avg & Min \\ \hline
SEDX &6.45 &0.75 &0.01 &81.35 &\bf{15.42}&3.57 \\
SARX &3.12 &\bf{0.68} &0.16 &76.15 &16.48 &4.44 \\
BEDX &6.78 &0.83 &0.01 &99.36 &17.29 &3.51 \\
DeepAR &6.62 &0.91 &0.02 &88.22 &19.74 &2.97 \\
LSTNet &17.29 &1.78 &0.02 &307.54 &35.08 &6.20 \\
\hline
\end{tabular}
\end{table}
Tab.~\ref{tab:MaxAvgMinSeasm5mase3new} gives the average, max and min of MASE and MAPE across the ($466$) sequences for all the models. It shows that on average SEDX is better than all considered baselines in terms of MAPE, while SARX does better in terms of MASE.
\begin{table}[!htbp]
\vspace{-0.0in}
\caption{Average MASE when (i)SEDX is better (ii)Baseline is better.}
\label{tab:CondAvgMASESeasnew}
\footnotesize
\centering
\begin{tabular}{|c ||c|c|c|c|c|c| }
\hline
& \multicolumn{3}{|c|}{SEDX better} & \multicolumn{3}{|c|}{Baseline better}\\ \cline{2-7}
Method & SEDX & BLine & Diff & SEDX & BLine & Diff \\ \hline
SARX &0.37 &0.78 &0.41 &1.31 &0.54 &\bf{0.77} \\
BEDX &0.73 &0.92 &\bf{0.19} &0.79 &0.68 &0.11 \\
DeepAR &0.69 &1.00 &\bf{0.31} &0.89 &0.71 &0.18 \\
LSTNet & 0.75 &1.81 &\bf{1.06} &1.01 &0.89 &0.12 \\
\hline
\end{tabular}
\vspace{-0.00in}
\end{table}
Tab.~\ref{tab:CondAvgMASESeasnew} reports the average MASE under two conditions: (i) sequences where SEDX does better, and (ii) sequences where the baseline does better. It shows that the MASE improvement of our proposed method is at least $0.19$ and up to $1.06$.
\begin{table}[!htbp]
\vspace{-0.00in}
\caption{Average MAPE when (i)SEDX is better (ii)Baseline is better.}
\label{tab:CondAvgMAPESeasnew}
\footnotesize
\centering
\begin{tabular}{|c ||c|c|c|c|c|c| }
\hline
& \multicolumn{3}{|c|}{SEDX better} & \multicolumn{3}{|c|}{Baseline better}\\ \cline{2-7}
Method & SEDX & BLine & Diff & SEDX & BLine & Diff \\ \hline
SARX &12.83 &18.40 &\bf{5.57} &18.69 &14.06 &4.63 \\
BEDX &15.04 &19.22 &\bf{4.18} &16.06 &14.05 &2.01 \\
DeepAR &15.15 &22.43 &\bf{7.28} &16.11 &12.77 &3.34 \\
LSTNet &15.09 &35.50 &\bf{20.41} &25.98 &21.51 &4.47 \\
\hline
\end{tabular}
\vspace{-0.0in}
\end{table}
Tab.~\ref{tab:CondAvgMAPESeasnew} reports the same conditional breakdown based on MAPE. There are MAPE improvements of at least $4.18\%$ and up to $20.41\%$. Note that, based on MAPE, the average conditional improvement (indicated as Diff) of SEDX when it does better (left half of the table) is uniformly larger than the average conditional improvements of all other baselines (Diff column on the right half of the table).
On D4, SEDX's performance
(over SARX) is actually better than on D3. Even though
the overall MASE average of 0.68 (not MAPE, please note) across all
sequences looks better for SARX (Tab.~\ref{tab:MaxAvgMinSeasm5mase3new}), SARX is only better on $41\%$
of the sequences in D4 (Fig.~\ref{fig:FracSld4}). Tab.~\ref{tab:CondAvgMAPESeasnew} indicates that SEDX achieves a statistically significant
improvement of $5.57\%$ (over SARX) on $56\%$ of the sequences.
\section{\bf Conclusions}
\label{sec:Conc}
We proposed a novel ED architecture for forecasting with multi-step (target) learning and prediction ability.
The architecture generalizes a linear multiplicative seasonal ARX model using multiple encoders, each of which captures correlations from one or more cycles
behind the prediction instant. The seasonal inputs are split between encoder and decoder without redundancy. We also proposed a greedy recursive grouping algorithm
to build common predictive models (one or at most a few) for the multiple time-series
problem.
We tested the proposed architecture and grouping algorithm on multiple real data sets, where our proposed architecture mostly did better than all strong baselines,
and significantly outperformed many of them.
As future work, we would like to investigate how the proposed architecture could be utilized to better capture cross-sequence effects in multi-time-series prediction. We believe
that the proposed architecture can be useful in many more real-world scenarios.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{intro}
\subsection{Measurements of Rapidity Gaps in Double Diffraction Dissociation}
The recent measurement of rapidity gaps in ATLAS \cite{atlas} has shown that their distribution behaves differently in different gap ranges. The histogram at large gap values indicates some discrete gap states. In this region, only the process of double diffraction dissociation (DD) contributes. Of course, the discrete pattern could be an artifact of poor statistics. Nevertheless, it motivates a study of the diagrams that lead to discrete levels of the DD gap.
\subsection{Topological Expansion and Pomeron Exchange}
Pomeron exchange is represented by the topological QCD diagram that is responsible for multi-particle production in p-p collisions at LHC energies. The quark-gluon content was drawn in the topological expansion \cite{topologyexp} as a cylindrical net of gluon exchanges with a random number of quark-antiquark loops inserted. The topological expansion makes it possible to classify the contributions from general diagrams of multi-particle production in hadron interactions. In practice, this expansion allowed us to develop the Quark-Gluon String Model (QGSM) \cite{kaidalov,qgsm,baryon}. The first few orders of the topological expansion are presented graphically in figure~\ref{myphd}, taken from my PhD thesis, where the third order is named the pomeron with a handle. In this representation, double diffraction dissociation looks like the one-pomeron-exchange cylinder with a toroidal handle that is left uncut, so that no particles are produced in the central rapidity region.
\begin{figure}[htpb]
\centering
\includegraphics[width=7.0cm, angle=0]{myPhD.eps}
\caption{The fragment of graphical presentation of pomeron exchange in the topological expansion, where b is the number of boundaries and h is the number of handles.}
\label{myphd}
\end{figure}
\section{Double Diffractive Dissociation as an Exchange with a Pomeron Torus}
Double diffraction dissociation (DD) is the next order in the topological expansion after one-pomeron exchange and should be represented as a one-pomeron diagram with a two-pomeron loop in the center, see figure~\ref{DD}. The DD configuration is similar to a cylinder with a handle and takes $(1/9)^2$ of the single-pomeron-exchange cross section (1.2 percent of $\sigma_{prod}$) at high energies.
\begin{figure}[htpb]
\centering
\includegraphics[width=2.0cm, angle=0]{pomeronloop.eps}
\caption{Pomeron loop in the center of one-pomeron exchange.}
\label{DD}
\end{figure}
If the central pomeron loop is not cut, we obtain the DD spectrum of produced hadrons: two intervals at the ends of the rapidity range that are populated with multi-particle production, and a sizeable gap in the center of rapidity.
Looking at the two-pomeron loop in this diagram, we realize that it is a torus in 3D topology. This interesting object should be considered separately in order to reveal remarkable features for experimental detection.
\section{Baryon Junction-Antijunction Hexagon Net and Discrete Dynamics of DD Gaps}
Recalling that the pomeron cylinder is built from a net of gluon exchanges, let us consider only three-gluon connections on the surface of the torus (the pomeron loop). This string-junction type of gluon vertex has been studied in our earlier work \cite{baryon,baryonasymmetry} and plays an important role in the multiple production of baryons. Since this object carries baryon charge, the anti-SJ also exists and carries the charge of an antibaryon. The only charge-neutral way to construct a net from string junctions and anti-string-junctions is the hexagon, where an antibaryon charge follows each baryon one, as shown in figure~\ref{onecell}.
\begin{figure}[htpb]
\centering
\includegraphics[width=2.0cm, angle=0]{onecell.eps}
\caption{One cell of hexagon net with the SJ and antiSJ.}
\label{onecell}
\end{figure}
The closed net of six hexagons on the torus is shown in the figure~\ref{torus}.
\begin{figure}[htpb]
\centering
\includegraphics[width=5.0cm, angle=0]{SixbeeTorus.eps}
\caption{Closed net on the surface of torus: a) six hexagon construction and b) torus covered with six hexagon net.}
\label{torus}
\end{figure}
If one tries to match the admissible numbers of hexagons, it becomes clear that they form a discrete series: Hexnumb = 4, 6, 8, 12, 16, 24, 32, 48, 64, etc., see figure~\ref{torus}. This means that the pomeron torus has certain levels of energy, which leads to discrete gap states in DD \cite{myatlastalk} and to other signatures in multi-particle production spectra \cite{torusasDD}. A similar construction was presented in \cite{kharzeev} as a complicated fullerene sphere built of SJ's.
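A quick consistency check, added here for illustration, is that a pure hexagonal tiling with three edges meeting at every vertex always has the Euler characteristic of a torus, $\chi = V - E + F = 0$; topology alone permits any number of cells, so the specific series above presumably follows from additional regularity requirements on the embedding:
\begin{verbatim}
def euler_characteristic_hex(n_hex):
    # pure hexagonal tiling, three edges meeting at every vertex
    F = n_hex
    E = 6 * n_hex // 2   # each edge shared by two hexagons
    V = 6 * n_hex // 3   # each vertex shared by three hexagons
    return V - E + F     # = 2N - 3N + N = 0 for all N

for n in (4, 6, 8, 12, 16, 24, 32, 48, 64):
    assert euler_characteristic_hex(n) == 0   # chi(torus) = 0
\end{verbatim}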
\section{More Suggestions on the Pomeron Torus}
It is worth considering where the pomeron torus could contribute. What happens if our gluon-junction construction, which looks like a ``compactified'' pomeron string, is released as a metastable particle? It is a charge-neutral QCD cluster with a definite potential energy, determined by the number of hexagons, Hexnumb. If such a cluster were stable, it would be an appropriate candidate for dark matter (DM) \cite{ICPPA}. The masses of these states are suspected to be similar to the mass sequence of heavy neutral hadron states proposed in \cite{ICHEP}. The reason why this object is hardly dissipated in collisions with matter is the following: the atomic numbers of elements in space are too small in comparison with the number of SJ--antiSJ vertices in our toroidal constructions (let us name them ``baryoniums''), so the compact torus remains intact after collisions with the less dense light atoms. Since every high-energy proton collision in space, wherever it takes place, contributes 1.2 percent of its energy to DM, a considerable DM mass has been accumulated in the Universe, even though some amount of low-mass baryoniums dissipates back into baryons and mesons in collisions with interstellar light atoms. It seems \cite{ICPPA}, nevertheless, that stable baryonium DM should be mostly concentrated near supermassive black holes (SMBHs) due to the huge gravitational pressure. In this way, DM particles mostly appear in space as a result of giant-jet radiation and the partial destruction of BHs. This idea has to be verified with further observations of SMBHs.
\section{Conclusion}
The topological representation of pomeron exchange in a high-energy proton-proton collision is a cylinder covered with a quark-gluon net \cite{topologyexp}. I suggest that the process of double diffraction dissociation (DD) can be represented as a one-pomeron exchange with a central loop of two uncut pomeron cylinders, i.e. a torus \cite{qgsm,myPhD}. Taking into account that the junction of three gluons (SJ) carries positive baryon number, while the antijunction carries negative baryon charge, our neutral pomeron construction can be covered only by a certain number of hexagons, each with 3 string-junction and 3 antijunction vertices \cite{torusasDD}. It is reasonable to expect that the dynamics of rapidity gaps in DD is determined by the number of hexagons on the surface of the pomeron torus. Therefore, the gap distribution in DD events has a discrete structure in the region of large gaps \cite{atlas}. The positive baryon-production asymmetries that have been measured at the LHC demonstrate the participation of string junctions in high-energy proton-proton interactions \cite{baryonasymmetry}. Moreover, the string-junction torus can be released in the course of a pp interaction as a metastable particle (baryonium) and is a suspected ``baryonium'' dark-matter candidate \cite{ICPPA}. The possibility of producing states with many string junctions has been discussed recently by G.C. Rossi and G. Veneziano \cite{newveneziano}.
\section{Acknowledgments}
The author would like to express her gratitude to Oleg Kancheli for numerous discussions and to Vladimir Tskhay for designing the figure with the torus.
\section{Introduction} \label{sec:intro}
The active growth of supermassive black holes occurs via accretion of dust and gas in the nuclei of galaxies. The standard model of unification of active galactic nuclei (AGN) posits that the parsec-scale dusty molecular structure of the accretion process forms a geometrically thick entity -- dubbed the ``torus'' -- around the central X-ray/UV/optical emission source, providing the angle-dependent obscuration to explain the difference between type 1 (unobscured) and type 2 (obscured) AGN \citep[e.g.][]{Ant85,Ant93,Urr95,Ram17}. While the ``torus'' was originally introduced for its obscuring nature to optical emission \citep{Ant85}, it has since become a catch phrase for dense, dusty molecular gas present in the tens of parsec-scale environment of the AGN \citep[e.g.][and references therein]{Ram17,Com19}.
In recent years, high-angular resolution observations of local Seyfert galaxies in the infrared (IR) with the Very Large Telescope Interferometer (VLTI) showed that the dust on these scales is not distributed in one single, toroidal structure \citep{Hon12,Hon13,Tri14,Lop14,Lop16,Lef18}. Instead, simultaneous radiative transfer modelling of IR interferometry and the IR spectral energy distribution (SED) implies a two-component structure with an equatorial, thin disk and a polar-extended feature, which may originate from a dusty wind forming a hollow cone and defining the edges of the narrow-line region \citep{Hon17,Sta17}. This is contrary to classical thick torus models that assume that the obscuring and emitting mass is distributed either smoothly or in clumps in a single toroidal structure extending from sub-parsec to tens of parsec scales.
In parallel, the Atacama Large sub-Millimeter Array (ALMA) probed the molecular phase of the dusty gas on similar spatial scales in several local AGN \citep[e.g.][]{Gal16,Ima16,Gar16,Alo18,Alo19,Com19}. These observations found nuclear rotational structures in several molecular lines (e.g. CO, HCN, HCO$^+$) in most of these galaxies, with potential outflow signatures present in some of them. Those ``molecular tori'' are about a factor of 10 larger than the IR emission sizes (several 10\,pc in the sub-mm) and almost exclusively located in the plane of the accretion disk. Taken at face value, those observations are qualitatively more consistent with a single ``geometric torus''.
Fundamentally, both IR and sub-mm observations trace the same gas flow. Hence, both sets of data, while sometimes considered contradictory, should be governed by the same physics. This paper takes a look at the structures that can be inferred empirically from the observations in the IR and sub-mm and applies some fundamental physical principles to unify the observations at both wavelength regimes. This work will focus on observations of radio-quiet AGN in the Seyfert-luminosity regime, but relations to higher luminosity sources will be discussed where applicable.
\section{The infrared continuum view: Dusty disk and polar elongation}\label{sec:ir_cont_view}
\subsection{Basic emission characteristics}\label{subsec:basics}
The near- and mid-IR emission from AGN usually presents itself as a point source in single-dish 8m-class telescope data \citep[for a comprehensive summary see][]{Asm14}, with notable exceptions that will be discussed below \citep[e.g.][]{Boc00,Pac05,Reu10,Hon10a,Asm16,Asm19}. For nearby Seyfert galaxies ($D_A < 100\,$Mpc), this corresponds to scales of $<$30\,pc at 2.2\,$\micron$ and $<$180\,pc at 12\,$\micron$. Several authors have compiled SEDs for Seyfert AGN at those resolutions, which will be summarised in the following \citep[e.g.][]{Ram09,Hon10a,Alo11,Esq14}.
In general, the IR SED of Seyferts is characterised by a well-documented rise from about 1\,$\micron$ towards longer wavelengths. The sharp and universal turn at $\sim$1\,$\micron$ has its origin in the fact that dust will sublimate when heated above a temperature of $1200-1900\,$K, depending on dust species \citep[e.g.][]{Bar87,Net15}. Dust sublimation introduces a characteristic sublimation radius $r_\mathrm{sub}$ marking the inner boundary of the dust distribution (see Sect.~\ref{subsec:ir_intf} for a more detailed description). As predicted by the local thermal equilibrium equation and as observationally confirmed \citep[e.g.][]{Sug06,Kis11a,Kos14}, $r_\mathrm{sub}$ scales with the square-root of the AGN luminosity, $r_\mathrm{sub} \propto L^{1/2}$.
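For orientation, a frequently used Barvainis-type scaling (an illustrative assumption adopted here, not a result of this paper; the prefactor depends on grain properties) is $r_\mathrm{sub} \approx 1.3\,\mathrm{pc}\,(L_\mathrm{UV}/10^{46}\,\mathrm{erg\,s^{-1}})^{1/2}\,(T_\mathrm{sub}/1500\,\mathrm{K})^{-2.8}$, which is trivial to evaluate:
\begin{verbatim}
def r_sub_pc(L_uv_46, T_sub=1500.0):
    # Barvainis-type sublimation radius in pc (illustrative scaling)
    return 1.3 * L_uv_46**0.5 * (T_sub / 1500.0)**(-2.8)

r = r_sub_pc(0.01)   # Seyfert with L_UV ~ 1e44 erg/s -> ~0.13 pc
\end{verbatim}
As discussed below, the interferometric and reverberation sizes fall a factor of $2-3$ below such estimates.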
The difference between obscured type 2 sources and unobscured type 1 AGN is most prominent in the $3-5\,\micron$ range, with the hot dust emission significantly suppressed in type 2s due to self-absorption of the obscuring structure. In addition, several authors point out an excess of hot dust emission in unobscured AGN above what single-structure radiative transfer models predict for full IR SED fits \citep[e.g][]{Mor09,Alo11}. This $3-5\,\micron$ excess, or ``bump'' as it may display a local or global maximum in the SED in some sources, has been interpreted as a separate emission component with its strength varying from source to source. In contrast, the mid-IR luminosity is of similar magnitude in both obscured and unobscured AGN when compared to the intrinsic AGN luminosity, with the anisotropy as low as a factor of $\sim$1.5 at 12\,$\micron$ \citep[e.g.,][]{Hor08,Gan09,Ram11,Hon11,Asm11,Asm15}. This still holds when accounting for anisotropy of the primary accretion disk radiation \citep[e.g.][]{Sta16}.
For dusty, molecular gas accreted from the host galaxy, it is expected that the silicates produce an absorption or emission feature at $\sim$10\,$\micron$, depending on the line-of-sight to the hottest dust. While some obscured AGN show strong silicate absorption features as expected, the corresponding silicate emission features in unobscured AGN from the hot dust are very shallow \citep{Hao07,Hon10a,Alo11,Gar17}. Indeed, the strongest absorption features may be due to dust in the host galaxy and not related to the AGN environment \citep{Gou12}, meaning that the silicate absorption originating from the nuclear environment is rather weak.
\subsection{IR interferometry}\label{subsec:ir_intf}
About 40 nearby AGN have been observed with IR interferometry in the near-IR or mid-IR. In the near-IR, observations are in general agreement with the hot dust emission emerging from close to the dust sublimation region, showing the expected $r_\mathrm{sub} \propto L_\mathrm{AGN}^{1/2}$ scaling \citep[e.g.][]{Swa03,Kis07,Kis09b,Pot10,Kis11a,Wei12}. However, both near-IR interferometry and reverberation mapping \citep[e.g.][]{Sug06,Kos14} find that the absolute scaling of the size-luminosity relation is offset towards smaller sizes by a factor of 2$-$3 with respect to predictions based on mixed-dust opacities in line with standard ISM dust for an average size of 0.07\,$\micron$. The observations can be reconciled with theory when assuming that the hot dust is primarily comprised of large graphite grains \citep[e.g.][]{Kis07,Hon17}. This is supported by the observed near-unity dust surface emissivities determined from near-IR interferometry \citep{Kis11a,Kis11b} and follows naturally from differential dust sublimation and the fact that the 2.2\,$\micron$ emission traces temperatures that can only be survived by the largest grains \citep{Gar17,Hon17}.
In the mid-IR, the picture is more complex: Most sources show a drop in visibility from unity to between 0.1$-$0.8 on shorter baselines of 40--60\,m. On longer baselines up to 130\,m, the visibility remains essentially unchanged with respect to the short baselines, indicating that some of the emission stays unresolved \citep[e.g.][]{Bur13,Lop16,Lef18,Lef19}. Without further information, the emission is commonly interpreted by means of a two-component model, consisting of a resolved and unresolved emission source.
In a few sources, it has been found that the resolved mid-IR component is elongated in the rough direction of the polar direction of the AGN \citep{Hon12,Hon13,Tri14,Lop14}. More precisely, the mid-IR emission is extended towards the edge of the ionisation cones \citep[e.g][]{Lop16,Sta17,Lef18,Sta19}. The extended emission accounts for a significant part of the point-like source seen in most single-telescope mid-IR imaging data, reaching 60-80\% in the interferometrically best-covered AGN in NGC1068, the Circinus galaxy, and NGC3783. \citet{Asm16} report that in 21 nearby AGN such ``polar'' features can be traced out to 100s of parsecs, and the detection frequency is presumably limited not by occurance of these features but by surface brightness sensitivity \citep[see][]{Asm19}. They speculate that these very extended features are the continuation of the 0.1$-$1\,pc polar emission structures seen by IR interferometry.
Circinus and NGC1068 are close and bright enough that the mid-IR emission source can be resolved below visibilities of 0.1 \citep{Tri14,Lop14}. At these levels, the otherwise unresolved component is partially resolved as a geometrically thin disk approximately parallel to the maser disk seen in both objects (with a potential third emission source at low flux contribution present in both sources). The maximum size of the disks are smaller than the size of the polar-extended emission (factor 1.6 in Circinus, 9 in NGC1068) and their axis ratios indicate a high inclination (major:minor axis ratio of $>$2:1 in Circinus and 6:1 in NGC1068). By extrapolation, the interferometrically unresolved mid-IR emission seen in more distant AGN may well be the disk component that is partially resolved in NGC1068 and Circinus.
It is worth noting that silicate emission or absorption features are mostly absent from the interferometric data, or, at least, not more pronounced than in unresolved, single-telescope data. The notable exception to this is NGC1068, where the spectrally-resolved visibilities show a deep silicate absorption feature at about 9.7$\,\micron$. Such a behaviour would be expected for a clumpy torus seen under type-2-like inclination, where self-absorption due to higher opacities in the feature as compared to the continuum causes the source to appear larger \citep{Hon10b}. The fact that no other AGN shows a similar behaviour contradicts this explanation as a genuine feature in AGN.
\subsection{Modelling the resolved and unresolved IR emission}\label{subsec:model_ir}
The size and magnitude of the polar emission components in both type 1 and type 2 AGN are difficult to reconcile with torus models using a single toroidal mass distribution, unless very specific geometries and line-of-sights across all sources are assumed. Hence, new radiative transfer models consisting of a disk and a polar component have been developed \citep[e.g][]{Gal15,Hon17,Sta17}. In line with observations, these models assume that the polar component originates from a hollow cone at the edges of the ionisation region. They successfully and simultaneously reproduced the total and resolved emission in NGC3783 \citep{Hon17} and the Circinus galaxy \citep{Sta17,Sta19}. In addition, \citet{Hon17} noted that such models are able to qualitatively and quantitatively reproduce the distinct $3-5\,\micron$ bump in the SED of unobscured AGN, associating it primarily with the inner hot part of the disk component \citep[see also][]{Gon19}.
\subsection{The phenomenological picture in the IR}\label{subsec:pheno_ir}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{fig1.pdf}
\end{center}
\caption{\label{fig:uv_ir}
Schematic view of the pc-scale AGN infrared emission consisting of a geometrically-thin disk in the equatorial plane (light brown) and a hollow dusty cone towards the polar region (dark brown). The inner part of the disk (pink) emits the near-IR emission dominating the $3-5\,\micron$ bump.}
\end{figure}
Empirically, this leads to the conclusion that the infrared emission originates from two components: a disk and a hollow cone (see Fig.~\ref{fig:uv_ir}). While the disk may dominate the near-IR emission in unobscured AGN, the bulk of the mid-IR emission emerges from a hollow dusty cone towards the edge of the high ionisation region. As pointed out in the literature, in many AGN with confirmed polar mid-IR emission features, only one rim of the dusty cone seems to be detected \citep[e.g][]{Tri14,Lop14,Lop16}. \citet{Sta17} use detailed radiative transfer modelling to argue that such one-sided emission requires both anisotropic emission of the big blue bump as expected for an accretion disk \citep{Net87} as well as a misalignment between the axis of the accretion disk and the axis of the hollow cone. Alternatively, partial obscuration of one rim by the host galaxy due to a misalignment of AGN and galaxy rotation axis may contribute. A physical interpretation of the two-component structure will be discussed in Sect.~\ref{sec:phys_desc}.
\section{The molecular view: disks}\label{sec:mol_view}
\subsection{CO lines observed with ALMA}\label{subsec:co_alma}
While sub-mm observations of AGN have been previously performed \citep[e.g.][]{San89,Bar89,Pla91,Jac93,Tac94,Tac99,Sch99,Sch00b,Dav04,Kri05,Cas08}, ALMA revolutionised spatial resolution, $uv$-plane, and frequency coverage. Several authors have presented results for nearby ($D<50$\,Mpc) Seyfert and low-luminosity AGN with spatial resolution of the order of a few to 10\,pc, i.e. only slightly lower than the VLT interferometer in the IR \citep[e.g.][]{Com13,Com14,Gar14,Ima16,Gal16,Gar16,Aud17,Ima18,Alo18,Com19,Alo19}. Most of these observations specifically target various rotational bands of CO, HCN, and HCO$^+$, tracing gas densities of a few $\times 10^4$\,cm$^{-3}$ to $10^6$\,cm$^{-3}$.
\begin{table*}
\begin{center}
\caption{\label{tab:scaleheight} \textup{
Kinematic properties and derived scale heights for sub-mm CO and infrared H$_2$ emission lines.
}
}
\begin{tabular}{lccccl}
\hline\hline
Object & line & $v_\mathrm{rot}\,^a$ & $\sigma\,^b$ & $H/R\,^c$ & reference \\
& & (km/s) & (km/s) & & \\ \hline
\multicolumn{6}{c}{sub-mm CO lines} \\ \hline
NGC3227\,$^d$ & CO(2--1), CO(3--2) & 150--200 & 20--30 & 0.1--0.2 & \citet{Alo19} \\
NGC5643 & CO(2--1) & 115 & 60 & 0.52 & \citet{Alo18} \\
NGC1365 & CO(3--2) & 95--187 & 35 & 0.19--0.37 & \citet{Com19} \\
NGC1566 & CO(3--2) & 134--480 & 40 & 0.08--0.30 & \citet{Com19} \\
IC5063 & CO(2--1), CO(4--3) & 175--250 & 50 & 0.20--0.28 & \citet{Das16} \\
Circinus & CO(3--2) & 70 & 35 & 0.5 & \citet{Kaw19} \\
NGC1068 & CO(6--5) & 70 & 10 & 0.14 & \citet{Gal16} \\ \hline
\multicolumn{6}{c}{infrared H$_2$ lines} \\ \hline
NGC3227 & H$_2$(1--0)S(1) & 65 & 90 & 1.38 & \citet{Hic09} \\
NGC3783 & H$_2$(1--0)S(1) & 26 & 33 & 1.27 & \citet{Hic09} \\
NGC4051 & H$_2$(1--0)S(1) & 37 & 44 & 1.19 & \citet{Hic09} \\
NGC4151 & H$_2$(1--0)S(1) & 82 & 67 & 0.82 & \citet{Hic09} \\
NGC6814 & H$_2$(1--0)S(1) & 23 & 43 & 1.87 & \citet{Hic09} \\
NGC7469 & H$_2$(1--0)S(1) & 38 & 63 & 1.66 & \citet{Hic09} \\
Circinus & H$_2$(1--0)S(1) & 36 & 51 & 1.42 & \citet{Hic09} \\
NGC1068 & H$_2$(1--0)S(1) & 21 & 102 & 4.85 & \citet{Hic09} \\
\hline
\end{tabular}
\end{center}
\textit{--- Notes:}$^a$\,deprojected rotational velocity except for the CO data of IC5063, Circinus, and NGC1068 where no inclination was given; $^b$\,velocity dispersion of the molecular gas; $^c$\, scale height estimate for the rotational disks , $H/R \approx \sigma/v_\mathrm{rot}$; $^d$\,model-inferred values after accounting for outflow and bar motion, as given in Sect.~5.3 in \citet{Alo19}.
\end{table*}
The kinematics observed in the molecular emission lines can be complex, with influences from the host galaxy on larger scales ($\ga$100\,pc, e.g. via bars or other dynamic resonances) and rotation and outflows dominating smaller scales \citep[e.g.][]{Ima16,Gal16,Gar16,Aal16,Aud17,Aal17,Alo18,Com19,Alo19}. The central 30$-$50\,pc in Seyfert nuclei recently observed with ALMA do have in common that they show a clear rotational pattern for the bulk of the CO gas\footnote{At the time this was written, the various rotational levels of CO had the best observational coverage across local Seyfert galaxies. When comparing CO to other molecular lines, e.g HCN, in some of the publications, the overall kinematic properties are similar, though the spatial distribution may vary (see Sects.~\ref{subsubsec:thinmoldisk_rad} \& \ref{subsec:uni_colddisk}). Hence CO is used as a proxy here.}. An overview of extracted nuclear disk properties of Seyfert-type AGN are listed in Table~\ref{tab:scaleheight}. The rotational velocities, $v_\mathrm{rot}$, and gas velocity dispersion, $\sigma$, have either been directly given in the referenced papers or extracted from the moment maps and pv-diagrams. Excluded are objects where the ALMA data has resolution $\ga50$\,pc, where the publication does not convey sufficient kinematic information to determine the scale height, that do not unequivocally show AGN activity in optical \citep{Ver10} and X-ray observations \citep{Bau13}, or that have an implied level of activity that would not typically identify them as ``normal'' Seyfert AGN.
Assuming a disk in hydrostatic equilibrium (see Sect.~\ref{subsubsec:thinmoldisk_vert}), it is possible to estimate the scale height as $H/R \approx \sigma/v_\mathrm{rot}$. The corresponding values for the CO emission in local Seyfert galaxies are also shown in Table~\ref{tab:scaleheight}. Interestingly, none of these molecular disks are geometrically thick. The typical scale heights of the CO gas in those ALMA observations is $H/R\sim0.15-0.3$.
The required scale height of a single obscurer in terms of the torus picture can be inferred from number ratios of X-ray obscured (=type 2) and unobscured (=type 1) sources assuming that the X-ray obscuring gas is located within the ``torus'' region. Using the X-ray background analysis from \citet{Ued14}, the ratio between type 1 ($N_H < 10^{22}$\,cm$^{-2}$) and type 2 ($N_H \ge 10^{22}$\,cm$^{-2}$) sources can be inferred in the range of 1:1-2.3 for AGN with Seyfert luminosities of $\sim10^{42}-10^{45}$\,erg/s. This corresponds to an X-ray covering factor of $\sim0.3-0.7$\footnote{\citet{Ric17} showed that the X-ray covering factor in hard X-ray-selected AGN is Eddington-ratio dependent. For Eddington ratios consistent with Seyfert-type AGN in the range of $\ell_\mathrm{Edd}\sim0.01-0.2$\, the observed covering factor is $\sim$0.3$-$0.8, consistent with the \citet{Ued14} result once the Eddington ratio distribution is factored in.}. From simple geometric considerations, $C = (H/R)/\sqrt{1+(H/R)^2}$, which implies that the observed CO gas has a covering factor of only $C_\mathrm{CO} \sim 0.2-0.3$.
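The geometric conversions used in this paragraph and in Sect.~\ref{subsec:h2_H2o} below are easy to verify numerically (a minimal sketch; the numbers in the comments follow directly from the formula above):
\begin{verbatim}
import numpy as np

def covering_factor(h_over_r):
    x = np.asarray(h_over_r, float)
    return x / np.sqrt(1.0 + x**2)     # C = (H/R)/sqrt(1+(H/R)^2)

def h_over_r(C):
    C = np.asarray(C, float)
    return C / np.sqrt(1.0 - C**2)     # inverse relation

covering_factor([0.15, 0.3])   # CO disks       -> ~0.15-0.29
covering_factor([1.2, 1.4])    # H2 disks       -> ~0.77-0.81
h_over_r([0.3, 0.7])           # X-ray obscurer -> H/R ~ 0.31-0.98
\end{verbatim}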
\subsection{H$_2$ in the near-IR and H$_2$O masers}\label{subsec:h2_H2o}
Similar spatial scales as with ALMA are reached in the near-IR for the H$_2$ molecule. \citet{Hic09} report kinematic modelling of the near-IR H$_2$(1--0)S(1) ro-vibrational transition at $2.12\,\micron$, which will be referred to simply as H$_2$ throughout the rest of the paper. This emission line traces hotter gas than the sub-mm CO rotational lines --- 1000--2000\,K as compared to 20--50\,K. Deprojected rotational velocities and velocity dispersions at 30\,pc from the nucleus from \citet{Hic09} and the derived scale heights for a sample of local Seyfert galaxies are listed in Table~\ref{tab:scaleheight}. The H$_2$ emission seems vertically more extended than the CO lines due to its higher observed velocity dispersion, with typical $H/R \sim 1.2-1.4$. \citet{Hic09} note that due to the observed co-planar rotation, warps cannot be responsible for the observed high $\sigma$ values. The authors further point out that the velocity dispersion of H$_2$ is significantly larger than the stellar velocity dispersion in the same region, implying that the stellar cluster cannot account for the observed turbulence. As a result, \citet{Hic09} conclude that the H$_2$ must form a geometrically thick disk, potentially inflated by star formation near the nucleus. Converting $H/R$ into an H$_2$ covering factor leads to $C_{\mathrm{H}_2} \sim 0.77-0.81$, i.e. almost a spherical distribution.
It is important to note that separating outflows from disk components is not straight-forward. Due to conservation of angular momentum, outflows can have a rotational component that can be mistaken for disk rotation. Hence, part of the comparably large $\sigma(\mathrm{H}_2)$ may be due to a combination of a disk and an outflow. The kinematically estimated $H/R$ for H$_2$ and its corresponding $C_{\mathrm{H}_2}$ should, therefore, be considered an upper limit. Nevertheless, it is likely that the H$_2$ disks have a larger $H/R$ than disks seen in sub-mm molecules.
Finally, NGC 1068 and Circinus are well-known for their nuclear maser disks \citep[e.g.][]{Gre97,McC09}. These features are usually seen on parsec scales, i.e. at about the same scale as the VLTI observations and slightly smaller than ALMA. H$_2$O masers trace the densest gas with $n_\mathrm{H}\ga10^9$\,cm$^{-3}$. After accounting for warps, the geometric thickness of those maser disks is very small.
\subsection{The phenomenological picture in the sub-mm}\label{subsec:pheno_submm}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{fig2.pdf}
\end{center}
\caption{\label{fig:uv_submm}
Schematic view of the AGN environment from molecular emission lines. The high-density masers (green) are located in the equatorial plane with CO/HCN (red) forming a thin disk and H$_2$ (yellow) a thick disk.}
\end{figure}
Fig.~\ref{fig:uv_submm} puts together the empirical view of the circumnuclear environment of radio-quiet Seyfert AGN from the perspective of the observed molecular emission lines. The vertical distribution of gas shows a clear density and temperature stratification. Qualitatively, such a profile is expected for a disk in hydrostatic equilibrium where the densest material sinks to the plane of the disk. The schematic view also accounts for the nuclear molecular outflows seen in at least some AGN \citep[e.g.][]{Gal16,Alo19}. As discussed in Sect.~\ref{subsec:mdotw}, some AGN show more jet-like outflow features. These are usually seen in more radio-loud/intermediate objects and may be brought into the presented schematic by adding collimated molecular emission close to rotation axis.
\section{A physical description of the mass distribution}\label{sec:phys_desc}
\subsection{Physical conditions of the dusty molecular gas}\label{subsec:phys_cond}
The previous sections demonstrated that the observed emission in the IR dust continuum and molecular lines show very different structures. To form a unifying view of the mass distribution, it is necessary to take into account some general physical principles, which are well established in literature based on theory or observations.
\paragraph{1. The IR dust continuum delineates the $\tau_\mathrm{IR} \sim 1$ surface of the dust distribution and may not be representative of the entire mass distribution.} This is a result of the radiative transfer equation as, in a simplified version, the emitted intensity $S_\lambda$ can be approximated by $S_\lambda \propto \tau_\lambda B_\lambda$, where $\tau_\lambda$ is the optical depth at wavelength $\lambda$ and $B_\lambda$ is the Planck function. Therefore, the largest emission contribution will come from $\tau_\lambda\sim1$\footnote{Note that this description is a simplification of the full radiative transfer and neglects self-absorption. For $\tau_\lambda>1$, the observed intensity will be $S_\lambda \approx B_\lambda\cdot\exp(-\tau_\lambda)$. Including this self-absorption effect, the emitting surface would be more correctly set at $\tau_\lambda = 2/3$.}. The observed ``surface'' of a mass distribution is, therefore, wavelength dependent, which means that different wavelengths will trace different regions of the mass distribution.
\paragraph{2. For typical conditions in the circumnuclear environment, the gas and dust will not have the same temperature, i.e. the gas temperature $T_\mathrm{g}$ is greater than the dust temperature $T_\mathrm{d}$.} \citet{Wil19} show from their radiation-hydrodynamical (RHD) simulations that even if dust and gas are hydrodynamically coupled, the temperature may be quite different. From their Fig.~9, an approximate relation of $$\log T_\mathrm{g} \approx 1.7\cdot \log T_\mathrm{d} - 0.25$$ can be derived for non-shocked dusty gas. This means that hot gas with $T_\mathrm{g} \sim 4500$\,K can be co-spatial with dust radiating in the mid-IR at $T_\mathrm{d} \sim 200$\,K.
\paragraph{3. As the dust opacity $\kappa_\mathrm{d} \sim 10^{2-3}\cdot\kappa_\mathrm{g}$ is much greater than the gas opacity\footnote{Here, the term ``gas opacity'' refers to the opacity of ionised Hydrogen gas as used in the Eddington limit, $\kappa_\mathrm{g}=\sigma_\mathrm{Thom}/m_p$, defined by the Thomson cross-section $\sigma_\mathrm{Thom}$ to electron scattering and the proton mass $m_p$.} $\kappa_\mathrm{g}$, AGN with Eddington ratios $\ell_\mathrm{Edd} \ga 0.01$ will inevitably drive dusty winds by radiation pressure.} Several authors have noted this fact for the direct UV/optical AGN emission \citep[e.g.][]{Pie92,Hon07,Fab09,Ric17} and RHD simulations confirm it for both single dusty clumps as well as massive dusty molecular disks \citep[e.g.][]{Dor11,Sch11,Wad12,Nam14,Nam16,Cha17,Wil19}. Such radiation pressure driving is most effective for regions with $\tau_V\sim1$. It is important to note that this principle also applies to other wavelengths where the AGN emits a significant fraction of its overall luminosity. As such, dusty gas with near-IR optical depth of at least $\tau_\mathrm{NIR}\sim1$ will also contribute to wind driving \citep[e.g.][]{Kro07,Cha17,Ven19}. As a consequence, IR radiation pressure will affect the distribution of dusty gas \citep{Kro07,Nam16}.
\paragraph{4. Absent significant vertical pressure, a geometrically thick configuration $H/R\sim1$ cannot be easily maintained.} While being a trivial statement from a theoretical point of view, the practical consequence of it is that the simplest approaches to producing a geometrically thick torus have often failed. \citet{Kro88} already noted that for typical gas and dust temperatures observed in AGN, thermal pressure is too low to sustain the observed velocity dispersion. Alternatively, a clumpy medium requiring fully elastic cloud-cloud collisions \citep[e.g.][see discussion in Sect.~\ref{subsubsec:thinmoldisk_vert}]{Kro88,Vol04,Bec04}, high supernova rates \citep[e.g.][]{Wad12}, turbulence from accretion shocks \citep[e.g.][]{Vol18} or warps \citep[e.g.][]{Sch00a} have been explored.
\subsection{A physical picture}\label{subsec:phys_pic}
The empirical picture laid out in Sects.~\ref{sec:ir_cont_view} \& \ref{sec:mol_view} implies the presence of a relatively thin disk and a polar hollow cone towards the edge of the ionisation region. This hollow cone may be interpreted as a dusty, molecular wind on parsec scales. The following will discuss physical mechanisms related to the disk and wind regions, based on the principles laid out in Sect.~\ref{subsec:phys_cond}. While the remainder of Sect.~\ref{sec:phys_desc} will be based on theoretical arguments, Sect.~\ref{sec:unify} will discuss the implications of the emerging picture in the context of observations.
\subsubsection{The geometrically thin molecular disk: vertical density stratification of molecular lines}\label{subsubsec:thinmoldisk_vert}
The high-density molecular tracer lines imply that on pc scales (H$_2$O disk masers) to 10s pc scales (e.g. CO, HCN), the accretion flow is settled to a relatively thin disk. This molecular disk appears to be co-planar with the thin-disk-like mid-IR emission seen in Circinus and NGC1068. Following from principle 1, it is reasonable to assume that the observed disk-like mid-IR emission originates from the $\tau_\mathrm{MIR}\sim1$ surface of this dusty molecular disk \citep[see also][]{Sta19}. The mass in such a disk can be inferred to be $\ga$10\% of the mass of the black hole (see Sect.~\ref{subsec:uni_colddisk}), which makes it at least vertically self-gravitating, although on 10s of pc scales, the total gravitational potential consists of a mix of black hole and nuclear star cluster potentials.
As per principle 4, the scale height of such cool disks is small. A commonly invoked physical model in these cases is an isothermal disk. Observations have shown that the cool, high-density tracer lines such as CO (few 10\,K) are approximately co-spatial with hotter, lower-density tracers such as H$_2$. Therefore, the dense disks are multi-phase media. Notwithstanding the choice of tracer, the vertical density distribution of an isothermal disk can be inferred from the hydrostatic equilibrium equation
\begin{equation}\label{eq:hse}
\frac{\partial P}{\partial z} = -g_z \ \rho(z)
\end{equation}
with the pressure $P=\rho(z) \cdot k_B T_\mathrm{gas}/m_\mathrm{gas}$, gas temperature $T_\mathrm{gas}$, and gas particle mass $m_\mathrm{gas}$. Solving the differential equation provides the well-known Gaussian profile $\rho(z) \propto \exp(-z^2/2h^2)$ with the square of the scale height $(h/r)^2=k_B T_\mathrm{gas}/(m_\mathrm{gas} \cdot v_\mathrm{rot}^2)$.
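To make the integration explicit (a standard step, included here for completeness): near the mid-plane, the vertical gravity of a dominant central potential is $g_z \simeq \Omega_\mathrm{K}^2\,z$ with $\Omega_\mathrm{K}=v_\mathrm{rot}/r$, so that eq.~(\ref{eq:hse}) becomes
\begin{equation}
c_s^2\,\frac{\partial \ln\rho}{\partial z} = -\Omega_\mathrm{K}^2\, z
\qquad\Rightarrow\qquad
\rho(z)=\rho_0\,\exp\!\left(-\frac{z^2}{2h^2}\right),\qquad h=\frac{c_s}{\Omega_\mathrm{K}}\,,
\end{equation}
with sound speed $c_s^2=k_B T_\mathrm{gas}/m_\mathrm{gas}$, reproducing the quoted scale height via $(h/r)^2=c_s^2/v_\mathrm{rot}^2$.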
Such a vertical density structure implies that lower-density gas will appear thicker than higher-density gas.
As already pointed out in Sect.~\ref{subsec:phys_cond}, for typical gas temperatures of H$_2$ or CO, the scale height of an isothermal disk would be vanishingly small, requiring either a hotter embedding medium ($T_\mathrm{gas}\sim10^6\,$K), or additional turbulence, e.g. from cloud-cloud collisions \citep[e.g.][]{Kro88,Vol04,Bec04}, which may be interpreted as a dynamical temperature. Following \citet{Vol04}, the scale height for a turbulent medium with clouds obeying the Jeans criterion (i.e. being stable, or, in the limit, marginally stable) can be calculated as
\begin{equation}
h/r \le \frac{\sqrt{9kT_\mathrm{gas}}}{\Phi_V r m_\mathrm{gas} \sqrt{8Gn_H}},
\end{equation}
where $\Phi_V$ is the volume filling factor of the turbulent medium, and $n_H$ the Hydrogen number density of the gas in the cloud. In more convenient units, this resolves to $h/r \le 0.93 \cdot (T_{1000}/n_{H;5})^{1/2} \cdot r_{10}^{-1}$ with the gas temperature in units of 1000\,K, the Hydrogen number density in units of $10^{5}$\,cm$^{-3}$ and the distance from the centre in units of 10\,pc.
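The quoted coefficient can be cross-checked numerically; the following minimal sketch (Python, SI units) reproduces it when assuming atomic hydrogen for $m_\mathrm{gas}$ and the filling factor $\Phi_V=0.1$ adopted below:
\begin{verbatim}
import numpy as np

k_B, G = 1.381e-23, 6.674e-11          # SI units
m_H, pc = 1.673e-27, 3.086e16

T, n_H, r, Phi_V = 1000.0, 1e5 * 1e6, 10 * pc, 0.1   # K, m^-3, m, filling factor
h_over_r = np.sqrt(9 * k_B * T) / (Phi_V * r * m_H * np.sqrt(8 * G * n_H))
print(f"h/r <= {h_over_r:.2f}")        # ~0.93, the quoted coefficient
\end{verbatim}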
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{fig3.pdf}
\end{center}
\caption{\label{fig:max_hr}
Maximum achievable scale height for H$_2$-emitting gas clouds ($3.5 < \log n_H < 5$, $1000\,\mathrm{K} < T_\mathrm{gas} < 2000\,\mathrm{K}$; red-shaded area) and CO-emitting gas clouds ($5 < \log n_H < 7$, $20\,\mathrm{K} < T_\mathrm{gas} < 50\,\mathrm{K}$, blue-shaded area) for the collisional disk model. The red circles with error bars show the approximate average of scale heights observed by \citet{Hic09} for H$_2$ while the blue circle with error bars indicates the observed CO scale height from Table~\ref{tab:scaleheight}. The inner cut-offs mark distances where the clouds at the respective densities cannot withstand the shear of the gravitational potential of the central black hole.}
\end{figure}
Fig.~\ref{fig:max_hr} shows the maximum scale height for clouds in the density ranges where the near-IR H$_2$ line observed by \citet{Hic09} and the sub-mm CO or HCN lines are emitted. The shaded theoretical areas are compared to averages inferred from \citet[][Fig.~7]{Hic09} for H$_2$ and the range of CO scale heights listed in Table~\ref{tab:scaleheight}. For this, a volume filling factor of the gas of $\Phi_V=0.1$ was assumed with an H$_2$ gas mass fraction of 0.5 and a CO-to-H$_2$ mass ratio of 20\%. This shows that the observed density stratification leading to a multi-phase medium can be reproduced quantitatively by a collisional model.
Note that even though the scale height of the H$_2$-emitting clouds can become geometrically thick, the maximum Hydrogen column density per cloud implied by the Jeans radius and density criteria is $N_{H;\mathrm{max}} < 10^{22}$\,cm$^{-2}$. Given the drop of the radial density profile with radius (discussed in the following section), the number of clouds along a typical line of sight at distances $\ga$10\,pc at inclination $45^\circ$ is $\ll$1. Together with the low column density, the geometrically thick part of the H$_2$-emitting gas is not expected to play a major role in obscuring the AGN, i.e. it can hardly count as \textit{``the torus''} (see also Sect.~\ref{subsec:uni_obsc}).
\subsubsection{The geometrically thin molecular disk: radial density profile of molecular lines}\label{subsubsec:thinmoldisk_rad}
The radial density structure of an accretion disk can be inferred from Poisson's equation,
$$\frac{\partial g_r}{\partial r} = -4\pi G \rho(r)$$
where $g_r = -\partial \Phi(r)/\partial r$ is the acceleration due to the gravitational potential $\Phi(r)$, which combines the various mass contributions, i.e. black hole, stellar cluster and the disk mass. For a radially non-self-gravitating disk, the density will follow $\rho(r) \propto r^{-1\ldots-3}$, with index $-3$ if the potential is dominated by the black hole and approximately $-1$ for domination by a stellar cluster \citep[e.g.][]{Bec04}.
The implication of this profile is that one can expect higher densities to be more prominent at smaller distances from the AGN. However, disks may be clumpy, as discussed in the previous section, in which case the density profile corresponds to the number density of clouds rather than the gas density of the clouds themselves, which in this case determines the emission from a particular region in the disk. Aside from the Jeans limit discussed above, clouds in a gravitational potential are also limited by shear and can be disrupted by tidal forces. Based on the Roche limit, clouds need to be denser than $n_H > 3 M_\mathrm{BH}/(8\pi m_\mathrm{gas} r^3)$. For a black hole with $M_\mathrm{BH}=10^7\,M_\sun$, clouds in the H$_2$-emitting density range will only be able to exist at distances $\ga8$\,pc from the AGN while CO-emitting clouds can reach down to $\sim3$\,pc (limits shown in Fig.~\ref{fig:max_hr}), unless these clouds are stabilised by some mechanism (e.g. magnetic fields) or the gravitational field is shallower (e.g. due to dominance of a stellar cluster). Hence, observed disk sizes differ amongst different molecular emission lines and are expected to depend on typical density, with the highest density tracer lines showing the most compact emission. Indeed, the observed H$_2$O maser disks in Circinus and NGC1068 reach to sub-parsec scales, at which shear-stable densities are $n_H \ga 10^{10}$\,cm$^{-3}$, conducive to such maser emission.
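The quoted limits follow directly from this density criterion; a short numerical sketch (Python, assuming $m_\mathrm{gas}=m_p$) for illustration:
\begin{verbatim}
import numpy as np

G, m_p = 6.674e-11, 1.673e-27
M_sun, pc = 1.989e30, 3.086e16
M_BH = 1e7 * M_sun

def r_min(n_H_cgs):
    # smallest radius at which a cloud of density n_H survives tidal shear,
    # from n_H > 3 M_BH / (8 pi m_gas r^3)
    n_H = n_H_cgs * 1e6                                      # cm^-3 -> m^-3
    return (3 * M_BH / (8 * np.pi * m_p * n_H))**(1/3) / pc

print(f"H2-emitting clouds (n_H ~ 1e5 cm^-3): r > {r_min(1e5):.0f} pc")  # ~8 pc
n_lim = 3 * M_BH / (8 * np.pi * m_p * (0.1 * pc)**3) / 1e6               # cm^-3
print(f"shear-stable density at 0.1 pc: n_H > {n_lim:.0e} cm^-3")        # ~5e10
\end{verbatim}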
Another consequence of molecular clouds becoming shear-unstable at $\la$10\,pc is that the dusty molecular gas in this region is expected to become more smoothly distributed, with most substructure being filamentary or transitional in nature. This is consistent with radiation-(magneto-)hydrodynamic simulations \citep[e.g.][]{Dor12,Wad12,Dor17,Cha16,Cha17,Wil19}. In such an environment, the assumption of extreme clumpiness with very low volume filling factors and the treatment of the material in a purely stochastic framework breaks down. This affects commonly used radiative transfer models as well as the collisional disk model that provides the scale height via (elastic) cloud scattering. Therefore, at $r<10$\,pc, the disks are expected to flatten significantly as opposed to what is implied by the rising maximum $h/r$ in Fig.~\ref{fig:max_hr} at these distances.
\subsubsection{Geometrically thick obscuration and wind launching via IR radiation pressure}\label{subsubsec:windlaunch}
Thermal pressure is not able to sustain a thick disk over 10s of parsec scales, nor do high-density tracer lines show evidence for large column densities at large scale heights. Therefore, angle-dependent obscuration needs to occur on smaller scales. In addition, at $r<10$\,pc, collisional support is not viable as the medium becomes relatively smooth, so that other mechanisms are required to provide geometrical thickness (see previous section).
Towards the inner rim of the dusty disk, radiation pressure from the AGN gets stronger, with the radiation pressure force being $F_\mathrm{rad} \propto r^{-2}$. Radiation pressure on dust has been discussed as a potent mechanism to drive winds \citep[e.g.][]{Pie92,Hon07,Fab09,Ric17}, but most work focuses on the effects of the central AGN UV/optical radiation. \citet{Kro07} noted that, as the hot dust is optically thick to its own emission, near-IR radiation pressure can cause the inner rim of a dusty disk to puff up, providing some obscuration.
A dynamical analysis of the effect of IR radiation pressure is presented in \citet{Kro07}. In the present work, a simplified 1-dimensional description of the vertical structure in the innermost, hottest dust disk will be developed to demonstrate the effects of IR radiation pressure. The main interest lies in the IR radiation pressure's ability to increase the scale height of the otherwise thin disk. Therefore, near-IR radiation pressure is treated as a local pressure term. Following a similar description as for the pressure balance within stars, the near-IR radiation pressure can be written as $\mathrm{d}P_\mathrm{NIR}/\mathrm{d}z = \kappa_\mathrm{NIR}L_\mathrm{NIR}\rho(z)/(4\pi c r^2)$, with $L_\mathrm{NIR}$ being the near-IR luminosity of the dusty molecular disk and $\kappa_\mathrm{NIR}$ denoting the (Rosseland) mean opacity of the dust in the near-IR. The hydrostatic equilibrium (eq.~(\ref{eq:hse})) then leads to
$$ c_s^2\frac{\partial\rho}{\partial z} = -g_z \rho(z) \left(1-\ell_\mathrm{Edd} C_\mathrm{NIR} \frac{\kappa_d}{\kappa_g}\right) $$
where $\ell_\mathrm{Edd}$ is the Eddington ratio, $C_\mathrm{NIR}$ is the fraction of near-IR luminosity as compared to the AGN luminosity (= NIR ``covering factor''), $\kappa_d/\kappa_g$ is the ratio of the near-IR opacity $\kappa_d$ and the gas opacity $\kappa_g$, and the sound speed $c_s = \sqrt{k_B T_\mathrm{gas}/m_\mathrm{gas}}$. This differential equation is very similar to the standard case of isothermal gas, with the scale height of the dusty molecular gas $(h/r)_d$ being a modified version of the scale height for isothermal gas $(h/r)$,
\begin{equation}\label{eq:hseir}
(h/r)_d = h/r \cdot \left(1-\ell_\mathrm{Edd} C_\mathrm{NIR} \frac{\kappa_d}{\kappa_g}\right)^{-1/2}.
\end{equation}
Gas in the sublimation region ($T_d\sim1500$\,K) has a temperature of up to $T_g\sim10^5$\,K (see Sect.~\ref{subsec:pheno_ir}). Accordingly, the scale height without dust would be $h/r \sim 0.04$ for a $10^7\,M_\sun$ black hole. On the other hand, a Seyfert AGN with $\ell_\mathrm{Edd}\sim0.05$ and a typical $3-5\,\micron$ bump with $C_\mathrm{NIR}\sim0.2$ \citep[e.g.][]{Mor12} will have $(h/r)_d \rightarrow \infty$ for dust grains with size $a<0.1\,\micron$, i.e. the IR radiation pressure will blow away such particles unless they are shielded.
When interpreting the opacity as the area per unit mass of a dusty gas cloud, $\kappa_{dc} = 1/(N_\mathrm{H} m_\mathrm{gas})$, then $\kappa_{dc}/\kappa_g = 1/(N_\mathrm{H} \sigma_T)$, with $\sigma_T$ denoting the Thomson cross section. Dusty gas clouds with a column density $N_H\sim10^{23}$\,cm$^{-2}$ remain settled in the disk with a similar scale height as for gas. However, gas clouds with $N_H$ of a few $\times10^{22}$\,cm$^{-2}$ or below will be dominated by the IR radiation pressure and puff up to high scale heights. As such column densities correspond to optical depths of $\tau_V\ga10$, these clouds will obscure the AGN and give the appearance of an optically thick absorber. The corresponding optical depth in the near-IR is $\tau_\mathrm{NIR}\sim1$, conditions favourable for dominating the near-IR emission. Accordingly, it is expected that the puff-up region of the disk dominates the observed $3-5\,\micron$ bump.
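Both numerical claims above can be illustrated in a few lines of Python; a sublimation radius $r_\mathrm{sub}\approx0.1$\,pc and $m_\mathrm{gas}=m_p$ are assumed here for definiteness:
\begin{verbatim}
import numpy as np

G, k_B = 6.674e-11, 1.381e-23
m_p, sigma_T = 1.673e-27, 6.652e-29            # kg, m^2
M_sun, pc = 1.989e30, 3.086e16

# thermal scale height of the hot inner disk, without radiation pressure
M_BH, r_sub, T_g = 1e7 * M_sun, 0.1 * pc, 1e5
c_s = np.sqrt(k_B * T_g / m_p)
v_K = np.sqrt(G * M_BH / r_sub)
print(f"h/r (no dust) ~ {c_s / v_K:.2f}")      # ~0.04

# critical column below which IR radiation pressure unbinds a cloud,
# from l_Edd * C_NIR / (N_H * sigma_T) >= 1
l_Edd, C_NIR = 0.05, 0.2
N_H_crit = l_Edd * C_NIR / sigma_T             # m^-2
print(f"N_H,crit ~ {N_H_crit * 1e-4:.1e} cm^-2")   # ~1.5e22
\end{verbatim}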
It is important to point out that once dusty gas becomes vertically unbound (i.e. $h/r \rightarrow \infty$), it will eventually be exposed to the radiation from the AGN. As the optical/UV luminosity of the AGN $L_\mathrm{UVO} > L_\mathrm{NIR}$ and $\tau_V \ga\tau_\mathrm{NIR}$, unshielded material lifted above the disk's mid-plane will experience strong radially outward radiation pressure, leading to the generation of a dusty wind, as shown by radiation-hydrodynamical simulations. More detailed dynamic analyses of this scenario are presented elsewhere \citep[e.g.][]{Kro07,Ven19}. Here, the important point is that while the near-IR radiation pressure will puff up the disk, it will not dominate the dynamics of the uplifted material once exposed to the AGN radiation, i.e. eq.~(\ref{eq:hseir}) illustrates the near-IR radiation's ability to make the disk geometrically thick, but any interpretation of the resulting structure beyond this should be treated with caution.
To summarize, the near-IR radiation observed in the $3-5\,\micron$ bump is sufficient to puff up the dusty molecular disk in its innermost region and produce conditions favourable for launching a dusty wind driven by the optical/UV radiation from the central AGN. Black-body-equivalent temperatures of the $3-5\,\micron$ emission suggest that the diameter of this region is $\sim5-10\,r_\mathrm{sub}$, corresponding to about $0.2-1$\,pc in nearby AGN.
\section{Discussion: Unifying molecular and IR continuum observations}\label{sec:unify}
The previous sections showed the seemingly different pictures emerging in the IR and sub-mm and presented a simple framework to assess the physical conditions that dominate at the (sub-)parsec scales and tens of parsec scales. Consequences of disk turbulence and radiation pressure point towards a multi-phase, multi-density structure forming around the AGN, as opposed to a literal interpretation of a ``dusty torus'' as a single physical entity. Indeed, the original cartoon depicting the obscurer in \citet{Ant85} does not allude to a large-scale geometric torus.
Fig.~\ref{fig:multiphase} shows a schematic of all the observed phases plotted on top of each other (see Sects.~\ref{sec:ir_cont_view} \& \ref{sec:mol_view}, with the colours matching those of Figs.~\ref{fig:uv_ir} \& \ref{fig:uv_submm}). In addition, arrows denote the dynamics of the respective component, combining the observed kinematics and theoretical expectations (see Sect.~\ref{subsec:phys_pic}). Two-sided arrows indicate turbulent motion, either due to cloud scattering, thermal processes, or IR radiation pressure, while single-sided arrows mark wind/outflow motion primarily due to AGN radiation pressure. The schematic contains similar elements as the one inferred by \citet{Ric17} based on the X-ray column-density distribution. This is not a surprise as the same fundamental physical effect -- radiation pressure on dusty gas -- has been invoked to explain those X-ray observations.
The following will sort the multiple phases into four regions that are distinguished by the physical mechanisms that dominate their shape and/or dynamics, based on the physical description in Sect.~\ref{sec:phys_desc}. Some consequences and relations to other phenomena are discussed.
\begin{figure*}
\begin{center}
\includegraphics[width=1.78\columnwidth]{fig4.pdf}
\end{center}
\caption{\label{fig:multiphase}
Schematic view of the multi-phase dusty molecular environment of an AGN. The central picture has all empirically identified components plotted on top of each other (see Sects.~\ref{sec:ir_cont_view} \& \ref{sec:mol_view}). The top row shows those components of the multi-phase structure that can be detected by IR continuum observations (top left: near-IR, top right: mid-IR) and near-IR H$_2$ emission lines (top-middle panel). The bottom row shows the view in commonly observed sub-mm molecular lines (bottom-left) as well as in the H$_2$O maser emission. The arrows indicate the kinematics of the respective emission lines (without the rotational component).}
\end{figure*}
\subsection{The cold, outer part of the equatorial disk}\label{subsec:uni_colddisk}
Summarising the results from Sects.~\ref{subsubsec:thinmoldisk_vert} \& \ref{subsubsec:thinmoldisk_rad}, at scales of $\ga5-10$\,pc, the dusty molecular mass is settled into a disk with a vertical density stratification. The dust temperature in this region is relatively low (a few 10\,K) as (self-)absorption of the central AGN radiation and its probable anisotropic radiation profile do not supply significant heating. Molecular excitation is dominated by collisions in a medium with high density contrast.
The cold disk in a relatively shallow gravitational potential is prone to disturbances by host-galaxy conditions. For example, resonances, bars, bulge/pseudo-bulge dynamics, central star formation, or misalignment of the host galaxy with the plane of accretion can induce warps or cut-offs to the mass distribution. In such a potential, it is not necessarily expected that the rotational velocity component follows a Keplerian profile of a point-like mass, $v_\mathrm{rot} \propto r^{-1/2}$. This will mostly affect the larger scales of the cold-disk mass distribution ($\sim50-100$\,pc). However, even on the 10\,pc scale with a massive black hole dominating the gravitational potential, the rotational kinematics can appear non-Keplerian as the radiation pressure from the central AGN will flatten the effective potential. Given the vertical density profile, it is expected that lower density tracer lines will have lower $v_\mathrm{rot}$ than higher density tracer lines emitted from the same AGN distances as the corresponding gas is lifted higher above the mid-plane, thus being more exposed to the AGN radiation.
On the larger scales, the cold disk can be considered the component linking the inner host galaxy to the accretion structure around the black hole. The total mass on tens of parsec scales is at least a significant fraction of the black hole mass \citep[e.g. $\sim5-10$\% in NGC3227, $\sim100$\% in NGC5643,][]{Alo18,Alo19}, hence providing the mass reservoir for black hole growth. However, free-fall time scales for these distances are of the order of $10^7$ years, meaning that the observed gas masses are not directly linked to the current accretion/luminosity state of the AGN. Rather, they signify the mass available for the ongoing AGN duty cycle of the galaxy. The cold disk may therefore be considered the AGN feeding region.
An important note to make is that while the feeding mass may originate from galactic scales, some form of dynamic processing needs to happen on scales of the (pseudo-)bulge or nuclear star cluster. If the gas fell straight in, one would expect the rotation axis of the AGN to be aligned with the angular momentum direction of the host galaxy. It is well established, however, that the direction of the AGN rotation axis is randomly distributed with regards to the host galaxy axis \citep[e.g.][]{Kin00}.
\subsection{The hot inner part of the equatorial disk}\label{subsec:uni_hotdisk}
In typical Seyfert-type AGN, scales of $<$10\,pc are wholly within or close to the sphere-of-influence of the supermassive black hole. In this region, the potential well will be steeper, leading to stronger shear forces, a smoother medium, as well as a more centrally-concentrated mass distribution (see Sect.~\ref{subsubsec:thinmoldisk_rad}). Incidentally, at the same spatial scales, the energy density of the AGN radiation field will be stronger than that of the stars within the same volume \citep[e.g.][]{Jen17}.
The inner ``hot'' part of the dusty molecular gas distribution is defined by sublimation of dust at $T_\mathrm{dust}\sim1\,500-2\,000$\,K. As shown in Sect.~\ref{subsubsec:windlaunch}, the dynamics of the dusty gas in this region will be strongly influenced by AGN and IR radiation pressure, leading to an inflation of the inner $1-5\,r_\mathrm{sub}$ and ideal conditions to launch a wind. Such puff-up regions are not unique to AGN, but are also seen in other accreting systems with significant amounts of dust around them, e.g. young stellar objects \citep[e.g.][]{Nat01,Dul01}. This puffed-up disk/wind-launching region will dominate the near-infrared emission of the AGN, creating the $3-5\,\micron$ bump seen in unobscured AGN. The observed covering factor of $\sim15-30$\% of the puffed-up region implies that the hot disk contributes to the angle-dependent obscuration of the AGN, which will be further discussed in Sect.~\ref{subsec:uni_obsc}.
As the puff-up/wind launching region of the disk is confined to the inner few $r_\mathrm{sub}$, some AGN emission will penetrate through and illuminate the surface of the cooler parts of the disk. This is seen in IR interferometry as the sub-dominant mid-IR disk in some nearby AGN \citep[e.g.][]{Tri14}. In a disk with a vertical density profile, the emission is primarily emerging from its ``mid-IR surface'' (see principle 1 in Sect.~\ref{subsec:phys_cond}) and, given the steep radial profile, will naturally appear more compact than the wind region.
In Sect.~\ref{subsec:ir_intf}, it was pointed out that the hot dust region is dominated by emission from large graphite grains. As the dusty winds are launched in this region, one may expect that the parsec-scale dusty winds will be dominated by the same grain chemistry and size distribution. This will naturally lead to a reduction in observed silicate emission features as the mid-IR emission is dominated by the wind. Radiative transfer models taking into account differential dust destruction (i.e. silicate/graphite chemistry and grain sizes) and wind launching from the hot-dust region are able to reproduce the observed small near-IR sizes as well as the shallow silicate features \citep{Hon17,Gar17}. Further evidence for this scenario comes from extinction curves of type 1 quasars, which suggest a dearth of small grains in the outflow region \citep{Gas04}. At the same time, the wind region will be seen in both type 1 and type 2 sources, giving rise to the low anisotropy observed in the mid-IR (see Sect.~\ref{subsec:basics}). Notwithstanding the properties on parsec scales, it is quite possible that on larger scales silicate dust may re-form in the wind \citep{Elv02} or that the ISM of the host galaxy may become entrained in the outflows.
\subsection{The dusty molecular wind region}\label{subsec:uni_wind}
As described in Sect.~\ref{subsubsec:windlaunch}, IR radiation pressure will either increase the scale height of the hot disk (=``puff-up region'') or unbind dusty gas completely from the gravitational potential of the supermassive black hole. While this gas will experience a vertical pressure force, it will also be exposed to the AGN radiation. The AGN radiation pressure will be stronger than the near-IR radiation pressure by about a factor of $\tau_V/\tau_\mathrm{NIR}$, so that if dusty gas becomes unbound by IR radiation pressure and lifted upward, it will start to be radially pushed away from the AGN. Qualitatively, the shape is expected to be hyperbolic \citep[e.g.][]{Ven19}, as has been implied by radiative transfer modelling of IR interferometry \citep{Sta19}. This naturally leads to a hollow-cone configuration with dust confined towards the edges of the cone. A similar configuration is also obtained assuming X-ray heating instead of radiation pressure \citep[e.g.][]{Bal93}, and is consistent with edge-brightened narrow-line regions seen in some AGN \citep[e.g.][]{Can03}.
In the Seyfert regime with $\ell_\mathrm{Edd} \sim 0.01-0.2$, wind launching and driving will be sustained by dusty gas with Hydrogen column densities of $10^{22}-10^{23}$\,cm$^{-2}$. This corresponds to optical depths of $\tau_V \sim 10-100$. Therefore, the dusty outflow will contribute to obscuring the AGN (see Sect.~\ref{subsec:uni_obsc} for a more comprehensive discussion). As dusty gas being exposed to the AGN radiation may heat up and expand adiabatically, the highest columns are probably expected closer to the AGN. However, as radiation-hydrodynamical simulations show, dense optically-thick gas is lifted from sub-parsec scales to parsec scales and beyond, providing the required $\tau_\lambda\sim1$-surface and covering factors in the mid-IR to contribute or even dominate the mid-IR emission of AGN within the central 100\,pc \citep[e.g.][]{Wad16,Wil19}.
While most of the discussion focused on radiation pressure on dusty gas, magnetic driving in dusty gas has also been considered as a possible mechanism \citep[e.g.][]{Kon94,Cha17,Vol18}. However, these models often invoke radiation pressure to initiate or maintain the outflow, suggesting that mechanisms causing dusty winds require at least some degree of radiation support.
\subsection{Mass outflow rate from dusty molecular winds}\label{subsec:mdotw}
Winds are a major factor in AGN feedback on galaxies and several driving mechanisms, including radiation driving, have been proposed. Hence, it is worth putting the parsec-scale dusty molecular winds into the context of feedback and determining typical mass-loss rates from these winds. Radiation pressure is a form of momentum-driving, so that the mass outflow rate of an optically-thick, spherical wind is $\dot{M}_w \approx L_\mathrm{AGN}/ (v_w c)$, where $v_w$ is the terminal speed of the wind and $v_w\sim v_\mathrm{esc}=\sqrt{2GM/R}$ is approximately of the same order as the escape velocity at the radius $R=R_{\tau=1}$ where the wind transitions from optically-thick to optically-thin \citep{Mur05,Tho15}. Replacing the black hole mass $M=\kappa_g L /(4\pi \ell_\mathrm{Edd} G c)$ leads to
\begin{equation}\label{eq:mdotw_full}
\dot{M}_w = \left(\frac{2\pi}{c \kappa_g}\right)^{1/2} \ \ell_\mathrm{Edd}^{1/2} \ L_\mathrm{AGN}^{1/2} \ R_{\tau=1}^{1/2}
\end{equation}
Evaluating eq.~(\ref{eq:mdotw_full}) requires an estimate for the radius where the wind becomes optically thin. A lower limit can be inferred from the significant contribution of the dusty winds to the mid-IR interferometry flux, which implies that the wind (or clumps/filaments in the wind) have at least $\tau_\mathrm{12\,\micron}\sim1$, i.e. $\tau_V\sim10$, on parsec scales. Observationally, the observed 12\,$\micron$ emission sizes have been found to be of the order of a few pc, mostly independent of AGN properties \citep[e.g.][]{Kis11b,Bur13}. Given the relatively small AGN parameter range probed by past IR interferometric samples, it is possible that the 12\,$\micron$ sizes do nevertheless scale with $L$ or $\ell_\mathrm{Edd}$ \citep{Lef19}. An upper limit for $R_{\tau=1}$ can be obtained from the observation that some AGN do show low surface brightness polar features in single-telescope mid-IR images \citep[e.g.][]{Asm16,Asm19}. Those faint features on 10s to 100s pc scales may be emitted by optically-thin dusty gas, setting $R_{\tau=1}<10-100$\,pc. From radiative transfer modelling of mid-IR images of the Circinus galaxy, \citet{Sta17} derive $\tau_V\sim1$ out to 40\,pc. Based on these constraints, a normalisation of $R_{\tau=1}=5$\,pc may be considered a reasonable estimate in the absence of resolved mid-IR observations, but it may require adjustment if such constraints are available.
There are further factors of order unity that are not accounted for yet: First, eq.~(\ref{eq:mdotw_full}) assumes a spherical shell, while the dusty wind launching region has a covering factor of $C_\mathrm{NIR}\sim0.2$ (see Sect.~\ref{subsubsec:windlaunch}). Second, near-IR radiation from the disk and the optically-thick wind itself will boost the momentum transfer to the dusty gas by up to a factor of $b_\mathrm{NIR}\la10$ \citep[e.g.][]{Tho15,Ish15}. The resulting correction factor to eq.~(\ref{eq:mdotw_full}) is $f_c = b_\mathrm{NIR} C_\mathrm{NIR} \sim 0.8$ for $b_\mathrm{NIR}=4$ and $C_\mathrm{NIR}=0.2$. Accounting for this correction and setting $R_{\tau=1}=5$\,pc, eq.~(\ref{eq:mdotw_full}) can be re-written as
\begin{equation}\label{eq:mdotwind}
\dot{M}_w = 2.5\,M_\sun/\mathrm{yr} \times L_{\mathrm{AGN};44}^{1/2}\ \ell_{0.05}^{1/2} \ R_{\tau=1;5}^{1/2}
\end{equation}
with $L_\mathrm{AGN}$ in units of $10^{44}$\,erg/s, $\ell_\mathrm{Edd}$ in units of 0.05, and $R_{\tau=1}$ in units of 5\,pc.
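As a numerical check, evaluating eq.~(\ref{eq:mdotw_full}) with the correction factor $f_c$ at the fiducial values indeed reproduces the normalisation of eq.~(\ref{eq:mdotwind}); a minimal sketch:
\begin{verbatim}
import numpy as np

c, sigma_T, m_p = 2.998e8, 6.652e-29, 1.673e-27
M_sun, pc, yr = 1.989e30, 3.086e16, 3.156e7

kappa_g = sigma_T / m_p                       # Thomson opacity, m^2/kg
L, l_Edd, R = 1e44 * 1e-7, 0.05, 5 * pc       # erg/s -> W; fiducial values
f_c = 4 * 0.2                                 # b_NIR * C_NIR
Mdot_w = f_c * np.sqrt(2 * np.pi / (c * kappa_g) * l_Edd * L * R)   # kg/s
print(f"Mdot_w ~ {Mdot_w * yr / M_sun:.1f} Msun/yr")                # ~2.5
\end{verbatim}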
The anticipated mass outflow rates are broadly consistent with the order of magnitude of observed small-scale nuclear outflows. As an example, \citet{Alo19} determine the mass outflow rate in NGC~3227 from ALMA CO observations within the inner 15\,pc around the AGN. They find $\dot{M}_w\sim0.6-5\,M_\sun$/yr for a typical Seyfert AGN with a luminosity of $L_\mathrm{AGN} \sim 2\times10^{43}$\,erg/s. The expected outflow rate from eq.~(\ref{eq:mdotwind}) would be $\dot{M}_w \sim 0.7\,M_\sun$/yr. Similarly, \citet{Zsc16} report a potentially AGN-driven molecular outflow in the Circinus Galaxy in the range of 0.35$-$12.3\,M$_\sun$/yr. Using the probable luminosity range and Eddington ratio of Circinus \citep[$\log L($erg/s$)=43.36-44.43$, $\ell_\mathrm{Edd}=0.2$,][]{Sta19} implies that AGN radiation pressure on dusty gas can drive an outflow of 1.7$-$10.5\,M$_\sun$/yr, consistent with the observations. On the higher-luminosity end, \citet{Aal15} presented ALMA HCN and HCO$^+$ observations of Mrk 231. For their potentially highly accreting AGN ($\ell_\mathrm{Edd}\sim0.5$) with a luminosity $L_\mathrm{AGN}\sim10^{46}$\,erg/s, an outflow rate of $\dot{M}_w\sim250\,M_\sun$/yr is estimated from eq.~(\ref{eq:mdotwind}). The authors report a mass outflow rate of $80-800\,M_\sun$/yr, consistent with the presented model of a dusty wind being driven off the pc-scale environment.
Comparing the outflow rate on ``torus'' scales to the accretion rate on accretion-disk scales, $\dot{M}_\mathrm{acc} = L_\mathrm{acc}/(\eta c^2)\sim0.02\,M_\sun/\mathrm{yr}\cdot L_{\mathrm{AGN};44}$, with $\eta\sim0.1$ being the accretion efficiency, shows that $\dot{M}_w$ exceeds the mass supply to the black hole through the inner accretion disk by a factor of 100. This is not too surprising as observed mass inflow rates of cold molecular gas in Seyfert galaxies on tens to hundreds of parsecs scales are of the order of $1-10\,M_\sun$/yr \citep[e.g.][]{Sto19}, exceeding the central accretion rate by a similar factor. As such, mass conservation requires the presence of massive outflows on parsec scales \citep[see also][]{Eli06}, probably combined with a fraction of the inflowing mass lost to star formation\footnote{On scales of $<$50$-$100\,pc, it is difficult to estimate the star formation rate as many tracer lines experience contamination from the AGN \citep[e.g.][]{Jen17}.}.
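The quoted comparison follows from a one-line estimate (taking the wind rate of eq.~(\ref{eq:mdotwind}) at its normalisation):
\begin{verbatim}
c, M_sun, yr = 2.998e8, 1.989e30, 3.156e7
L, eta = 1e44 * 1e-7, 0.1                    # W, accretion efficiency
Mdot_acc = L / (eta * c**2) * yr / M_sun     # ~0.02 Msun/yr for L_AGN,44 = 1
print(f"Mdot_acc ~ {Mdot_acc:.2f} Msun/yr -> Mdot_w/Mdot_acc ~ {2.5 / Mdot_acc:.0f}")
\end{verbatim}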
Note that ``outflow'' in the context of this paper refers to the parsec to hundreds of parsec scales, i.e. escape from the sphere of influence of the black hole. It is quite possible that some of these outflows will not escape the host galaxy but will deposit or distribute the entrained mass within the bulge or galaxy. The outflow rates are significant enough to consider dust-driven molecular winds from near the torus region as a mode of AGN feedback and a mechanism for self-regulation of black hole growth.
While the present work focuses on radio-quiet Seyfert-type AGN, molecular outflows are also observed for radio-loud \citep[e.g. 3C273,][]{Hus19a} or radio-intermediate AGN \citep[e.g. IC5063, HE 1353-1917,][]{Das16,Oos17,Hus19b}. In these objects, the molecular outflows are often collimated and co-spatial with the jet. While energy and momentum arguments do not always allow for unequivocally pinning down the dominant physical mechanism \citep[e.g.][]{Das16,Hus19b}, it is likely that those outflows are driven by a jet mode rather than radiation pressure. However, the presence of a jet by itself is not always a clear discriminator between jet- or radiation-driven outflows \citep{Wyl18}.
\subsection{The dark fall-back region}\label{subsec:uni_fallback}
As the dusty molecular wind is optically thick to the AGN radiation at least close to the wind launching region, it will provide some obscuration (see Sect.~\ref{subsec:uni_obsc}). In order to sustain wind driving over some time, dusty gas must remain exposed to the AGN radiation near the launching region for maximum momentum deposition (see Sect.~\ref{subsec:mdotw}). Gas being swept up in a region that becomes self-obscured to the AGN radiation will have less momentum deposited, giving rise to a ``failed wind''. Such mass fallback has been seen in radiation-hydrodynamic simulations \citep[e.g.][]{Wad12,Wil19}, with observational evidence also found in low-velocity molecular outflow components of the Circinus galaxy \citep{Izu18}. Gas presence in this fall-back region is transitional, probably of lower column density, and the lack of direct AGN radiation makes the region dark in the IR. The molecular gas in this region is co-spatial with the hotter, lower-density gas of the equatorial disk with which it may interact dynamically. As such, it is difficult to pin down the exact physical properties of gas falling back.
The simulations by \cite{Wad12} suggest that the fall-back material may induce strong turbulence in the disk, making it geometrically thick. However, other models do not find this effect \citep[e.g.][]{Dor12,Cha16,Wil19} as the shocked gas in the disk rapidly cools radiatively.
\subsection{Reproducing toroidal obscuration}\label{subsec:uni_obsc}
The original reason to postulate the torus was the observed angle-dependent obscuration of AGN. However, toroidal obscuration may be caused by a range of mass distributions with axisymmetric geometry and does not imply geometrical thickness over a large radial range. Indeed, \citet{Ant85} only postulate geometrical thickness as a requirement, without any preference as to how the optically-thick mass is distributed. The structure discussed in this present paper does provide the required angle-dependent obscuration.
In the structure proposed here, the highest column densities, probably exceeding the Compton-thick limit with optical depth $\tau_V\ga1000$, are encountered when viewing the equatorial disk edge-on. As discussed in Sect.~\ref{subsubsec:thinmoldisk_vert}, the scale height of this region for the dense gas and $N_H\ga10^{23}$\,cm$^{-2}$ as seen in CO is typically below $\sim0.15-0.3$, with lower density gas reaching higher. The wind and wind-launching regions provide additional obscuration with higher covering factors. While the puff-up in the inner hot disk will have covering factors of $C_\mathrm{NIR}\sim0.2-0.3$, Sect.~\ref{subsubsec:windlaunch} discusses that, depending on $\ell_\mathrm{Edd}$, dense clouds will be elevated and driven away by the dusty wind. For the range of typical Seyfert Eddington ratios of $\ell_\mathrm{Edd}=0.01-0.2$, up-lifted dust clouds/filaments will have column densities of the order of $N_H=10^{22}-10^{23}$\,cm$^{-2}$ (see Sect.~\ref{subsec:uni_wind}). Hence, the wind will contribute to the obscuration of the AGN. The opening angle of the wind, therefore, delineates the obscured from the unobscured region and sets the observed covering factor.
With the wind contributing to obscuration, it can be expected that a subset of AGN will be viewed close to the edge of the hollow cone, with dense gas moving towards the observer. This outflowing material may be related to the warm absorber seen in some type 1 AGN in the X-rays \citep[e.g.][]{Tur93,Ric10}, with NGC~3783 being such a candidate object with a warm absorber where modelling of the mid-IR interferometry implies a viewing angle along the edges of the outflow cone \citep{Hon17}.
\subsection{Relation to the accretion structure inside the dust sublimation radius}\label{subsec:inside_rsub}
The presented picture of the mass distribution around the AGN is limited to the structure from tens to hundreds of parsec scales down to the sublimation region, with the physics being linked to specific properties of dusty molecular gas. It is interesting that a similar structure emerges when considering the primarily dust-free\footnote{Note that it has been suggested that a small amount of dust may still be present inside the observed sublimation radius that could control some of the gas dynamics \citep[e.g.][]{Cze11,Bas18}.}, atomic and ionised gas phase inside the sublimation radius. \citet{Elv00} discuss the distribution of gas leading to broad and narrow absorption/emission lines in quasars. Similarly to the present work, radiation forces on the gas via its continuum and line opacity cause a wind to emerge from the accretion disk. Most of the absorption line phenomena are attributed to different phases of this wind. The dusty winds discussed in the present manuscript probably define the boundary to the outflows emerging from the dust-free region. While their velocities can be much higher, considering mass conservation, the mass load of the dusty winds discussed in Sect.~\ref{subsec:mdotw}, and the resulting reduced accretion rate in the accretion disk region as compared to the dusty region, these flows presumably escape close to the skin of the dusty winds.
\section{Conclusions}\label{sec:conc}
This paper aimed at reviewing the general properties of radio-quiet Seyfert-type AGN as seen in the infrared (IR) and sub-mm on scales $<$100\,pc. Those scales refer to the dusty molecular environment commonly referred to as the ``torus''. The observations in both wavelength regimes have been unified with a simple set of physical principles, drawing the picture of a multi-phase, multi-component region. The major conclusions are as follows:
\begin{itemize}
\item The dusty molecular gas flows in from galactic scales of $\sim$100\,pc to the sub-parsec environment (with the sublimation radius $r_\mathrm{sub}$ as the inner boundary) via a disk with small to moderate scale height. Higher-density gas is found closer to the AGN in the radial direction and closer to the mid-plane in the vertical direction.
\item The disk puffs up within $\sim$5\,$r_\mathrm{sub}$ of its inner edge due to IR radiation pressure. In this region, gas with column densities $N_H\la10^{22}-10^{23}$\,cm$^{-2}$ becomes unbound and is swept out in a dusty wind by radiation pressure from the AGN.
\item The radiation-pressure-driven dusty molecular wind carries significant amounts of mass. The $\sim$pc-scale wind outflow rate is estimated as $$\dot{M}_w = 2.5\,M_\sun/\mathrm{yr} \times L_{\mathrm{AGN};44}^{1/2}\ \ell_{0.05}^{1/2} \ R_{\tau=1;5}^{1/2},$$
which is broadly consistent with molecular outflows seen by ALMA on these scales. Such rates can explain the difference of a factor $\sim$100 between galactic-scale inflow rates onto AGNs and the small-scale accretion rates from the accretion disk. Interestingly, for a given black hole mass, $\dot{M}_w \propto L \cdot R_{\tau=1}^{1/2}$. If the sizes of the dusty winds do increase with luminosity (or Eddington ratio), then higher luminosity AGN will remove a larger fraction of the inflowing gas than their lower luminosity counterparts, thus limiting their own mass supply towards the black hole. Therefore, dusty molecular winds are a mechanism to self-regulate AGN activity and will provide feedback from the AGN to the host galaxy.
\item Angle-dependent obscuration is caused primarily by the cool disk (circumnuclear $N_H\ga10^{24}$\,cm$^{-2}$) as well as the wind-launching region and hollow-cone wind ($N_H\sim10^{22}-10^{23}$\,cm$^{-2}$). Hence, even when defining the ``torus'' simply as the obscurer of the AGN, it will still consist of multiple spatial and dynamical components rather than a single entity.
\end{itemize}
It is important to point out that the picture drawn in this paper is derived from the similarities shared by the various AGN observed with IR interferometry and in the sub-mm. Individual sources will show some degree of deviation from this picture, specifically on the 10s of parsec scales, as orientation of and interaction with the host galaxy and varying degrees of nuclear star formation will affect the mass flow. In the framework of radio-quiet, local Seyfert-type AGN, it is consistent with the structure proposed to explain the X-ray obscuration of similar types of AGN \citep{Ric17}.
\acknowledgements
The author wants to thank C. Ramos Almeida, who inspired this paper by inviting the author to give a talk on this topic at the EWASS 2019 Session ``The ALMA view of nearby AGN: lessons learnt and future prospects.'' Further, the author is thankful to Ski Antonucci for many insightful comments and suggestions, D. Williamson for discussions on the outflow properties, and P. Gandhi for input from the X-rays. This work was supported by the EU Horizon 2020 framework programme via the ERC Starting Grant \textit{DUST-IN-THE-WIND} (ERC-2015-StG-677117).
Stabilizing all moduli of a 4D string compactification, especially in the presence of supersymmetry (SUSY) breaking and positive cosmological constant, is notoriously difficult. Already the simplest realistic models~\cite{Kachru:2003aw,Balasubramanian:2005zx} involve several ingredients and significant tuning. As a result, some skepticism concerning these models may be justified (see~\cite{Bena:2009xk, McOrist:2012yc, Dasgupta:2014pma, Bena:2014jaa, Quigley:2015jia, Cohen-Maldonado:2015ssa, Junghans:2016abx, Moritz:2017xto, Sethi:2017phn, Danielsson:2018ztv, Moritz:2018sui, Cicoli:2018kdo, Kachru:2018aqn, Kallosh:2018nrk, Bena:2018fqc, Kallosh:2018psh, Hebecker:2018vxz, Gautason:2018gln, Heckman:2018mxl, Junghans:2018gdb, Armas:2018rsy, Gautason:2019jwq, Blumenhagen:2019qcg, Bena:2019mte, Dasgupta:2019gcd} for a selection of papers criticizing and defending de Sitter constructions). Recently, this has culminated in the proposal of a no-go theorem against stringy quasi-de Sitter constructions. Concretely, in the single-modulus case, this includes the claim that~\cite{Obied:2018sgi,Garg:2018reu,Ooguri:2018wrx}
\begin{equation}
\abs{V'}\geq c\cdot V \qquad \text{or}\qquad V''\leq - c'V\,, \label{conjecture}
\end{equation}
where $c$ and $c'$ are order-one numbers.\footnote{
We set $M_{\rm P}=1$ except in equations with units and when its explicit appearance enhances readability.} This may be taken as an incentive to better understand the KKLT and Large-Volume-Scenario (LVS) constructions and improve on them (see \cite{Hamada:2018qef, Kallosh:2019oxv, Hamada:2019ack, Carta:2019rhx, Kachru:2019dvo} for progress in refuting some of the criticism based on 10D considerations). However, it is also interesting to take the opposite perspective: Accept the above de Sitter swampland conjecture as true and see what would be left of string phenomenology.
The most direct way out has already been emphasized in~\cite{Obied:2018sgi, Agrawal:2018own}: The presently observed cosmic acceleration would have to come from a stringy version of quintessence \cite{Wetterich:1987fm,Peebles:1987ek,Caldwell:1997ii}.\footnote{
For the purpose of this paper we are generous concerning the parameter $c$, allowing it to be significantly smaller than unity to match experimental restrictions~\cite{Agrawal:2018own, Heisenberg:2018yae, Akrami:2018ylq, Raveri:2018ddi}.
}
The latter is, however, not easy to realize (see e.g.~\cite{Hellerman:2001yi, Chiang:2018jdg, Cicoli:2018kdo, Marsh:2018kub, Han:2018yrk, Acharya:2018deu, Hertzberg:2018suv, vandeBruck:2019vzd, Baldes:2019tkl} for discussions). The most promising candidates for stringy quintessence are moduli (see e.g.~\cite{Cicoli:2012tz, Olguin-Tejo:2018pfq, Emelin:2018igk}) and axions (see e.g.~\cite{Nomura:2000yk, Svrcek:2006hf, Panda:2010uq, Chiang:2018jdg, Cicoli:2018kdo, DAmico:2018mnx, Ibe:2018ffn}), which are both ubiquitous in string compactifications.
In the present paper, we attempt to make progress not so much towards providing an explicit model but at least towards carefully specifying the challenges that have to be overcome. Our focus will be on ultra-light K\"ahler moduli in type IIB flux compactification, following the most explicit examples available~\cite{Cicoli:2011yy,Cicoli:2012tz}. We will postpone comments on axion quintessence to section 5.
Quintessence models rely on a scalar slowly rolling down a potential. Cosmology constrains its mass, which we define as $\sqrt{V''}$, to be smaller than the Hubble scale: $\abs{m_\phi}\lesssim H_0\approx10^{-33}\text{ eV}\sim\order{10^{-60}}M_{\rm{P}}$ \cite{Tsujikawa:2013fta}. This lightness makes the quintessence scalar susceptible to fifth-force constraints, ruling out in particular the overall-volume modulus. Our main candidates will hence be ratios of certain 4-cycle volumes.
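For concreteness, the conversion of $H_0$ into (reduced) Planck units behind this estimate, as a minimal Python sketch (assuming $H_0=67.4$\,km/s/Mpc):
\begin{verbatim}
hbar_eVs = 6.582e-16                 # eV s
H0 = 67.4e3 / 3.086e22               # s^-1, assuming H0 = 67.4 km/s/Mpc
H0_eV = H0 * hbar_eVs                # ~1.4e-33 eV
M_P_eV = 2.435e27                    # reduced Planck mass in eV
print(f"H0 ~ {H0_eV:.1e} eV ~ {H0_eV / M_P_eV:.0e} M_P")   # ~6e-61 M_P
\end{verbatim}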
Stringy quintessence needs large hierarchies between the mass of the quintessence scalar, the volume-modulus mass, and the mass scale of Standard-Model (SM) superpartners. In the spirit of~\cite{Cicoli:2011yy, Cicoli:2012tz}, we use a large volume $\mathcal{V}$ and an anisotropic geometry to suppress the loop corrections which make the quintessence scalar massive. However, this also lowers the mass scale of the volume modulus, leading to what we want to call the ``light volume problem''.
Moreover, even if some new effect making the volume sufficiently heavy could be established (see~\cite{Cicoli:2011yy, Cicoli:2012tz} for suggestions), another problem remains: The SM-superpartner masses induced by the available K\"ahler modulus $F$-terms are too low. This can be overcome by introducing a dedicated SUSY-breaking sector on the SM brane. Yet, even taking the corresponding mediation and hence $F$-term energy scale as low as possible, a significant uplifting effect on the full scalar potential is induced. We call this the ``$F$-term problem''. In the given setting, the corresponding energy density is comparable to the positive and negative energy scales cancelling each other in the underlying no-scale model and much above the residual $1/{\cal V}^3$ AdS-potential of the LVS stabilization mechanism.
The rest of the paper is structured as follows: We introduce the phenomenological requirements in section 2 and translate them to model-building restrictions in section 3, where we re-derive the light volume problem. In section 4 we present the $F$-term problem arising from the phenomenologically required SUSY breaking. A discussion of possible loopholes, axion quintessence and alternative approaches follows in section 5 before we conclude in section 6.
\section{Preliminaries and Requirements}
We will focus on compactifications of type IIB string theory on Calabi-Yau orientifolds with O3/O7 planes. One reason is that this setting is particularly well-studied and has proven to be phenomenologically promising (see~e.g.~\cite{Kachru:2003aw, Balasubramanian:2005zx, Giddings:2001yu, Denef:2008wq}). A closely related reason is the no-scale structure arising after the flux stabilization of complex-structure moduli. This allows one to go to a large volume and make use of different small corrections to the K\"ahler-moduli scalar potential. As we will see, this appears to be precisely what one needs for the large hierarchies required in the present context.
The 4D effective theory arising at the classical level is characterized by ${\cal N}=1$ supergravity (SUGRA) with K\"ahler and superpotential
\begin{equation}
K_{\rm{tot}}=-2\ln{\cal V}(T+\bar{T})+K_{\rm{cs}}(z,\bar{z})\qquad \mbox{and} \qquad W=W(z)\,.
\end{equation}
Here $T$ stands symbolically for all K\"ahler moduli and $z$ for the complex-structure moduli together with the axio-dilaton. After solving the $F$-term equations $D_zW=(\partial_z+K_z)W=0$, by which the $z$-moduli get stabilized, one ends up with
\begin{equation}
K=-2\ln{\cal V}(T+\bar{T})\qquad \mbox{and} \qquad W=W_0=\,\mbox{const.}\,,
\end{equation}
where we have absorbed any additive constants in $K$ into a redefinition of $W$. Since the volume ${\cal V}$ is a homogeneous function of degree 3/2 of the K\"ahler moduli $T=\{T_1,T_2,\cdots\}$, the scalar potential vanishes identically,
\begin{equation}
V=e^K(K^{i\bar{\jmath}}D_i WD_{\bar{\jmath}}\bar{W}-3\abs{W}^2)=K_{i\bar{\jmath}}F^i \bar{F}^{\bar{\jmath}}-3e^K\abs{W}^2=0\,.\label{noscale}
\end{equation}
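As a one-line illustration of this cancellation, consider a single overall K\"ahler modulus with ${\cal V}=(T+\bar{T})^{3/2}$, so that
\begin{equation}
K=-3\ln(T+\bar{T})\,,\qquad K_T=-\frac{3}{T+\bar{T}}\,,\qquad K^{T\bar{T}}=\frac{(T+\bar{T})^2}{3}\,,
\end{equation}
such that $K^{T\bar{T}}K_T K_{\bar{T}}=3$ and, for constant $W=W_0$, the two terms in (\ref{noscale}) cancel identically.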
This no-scale structure breaks down due to quantum corrections, giving
\begin{equation}
V=\delta V_{\rm{np}}+\delta V_{\alpha'}+\delta V_{\rm{loop}}\neq 0\,,\label{corrs}
\end{equation}
where one distinguishes:
\begin{itemize}
\item \textbf{Non-perturbative corrections} due to D7-brane gaugino condensation or E3-brane instantons. While they generically correct both K\"ahler and superpotential, their main effect on the scalar potential comes from $W\,\,\to \,\, W=W_0+A_ie^{-a_iT^i}\,.$
\item \textbf{$\alpha'$ corrections}, which arise from higher-order terms in the 10D action. The established leading effect~\cite{Becker:2002nn} can be accounted for by $K\,\,\to\,\, K=-2\ln(\mathcal{V}+\xi)\,.$
\item \textbf{String-loop corrections}, which can also be viewed as field-theoretic loop corrections in a Kaluza-Klein (KK) compactification and would naively affect the K\"ahler potential more strongly than the $\alpha'$ corrections: $K\,\,\to\,\, K+\delta K_{\rm{loop}}\,.$ However, due to an extended no-scale cancellation, their effect on the scalar potential is subdominant~\cite{vonGersdorff:2005bf, Berg:2005ja, Berg:2005yu,Cicoli:2007xp}.
\end{itemize}
At large volume, the terms in (\ref{noscale}) scale as $1/{\cal V}^2$ and the no-scale structure may be viewed as an exact cancellation of scalar potential terms at this order. The terms in (\ref{corrs}) are suppressed by further volume powers, as we will discuss in more detail below. As a result, K\"ahler moduli are parametrically light at large ${\cal V}$, which makes them natural candidates for the quintessence scalar. Conversely, the extreme lightness of quintessence enforces ${\cal V}\gg 1$.
Possibilities for including the SM are fractional D3-branes at a singularity or D7-branes wrapping a 4-cycle~\cite{Conlon:2005ki}. In the best-understood examples, this will give rise to a SUSY version of the SM. SUSY will then have to be broken at least at about 1~TeV$\, \sim 10^{-15}M_{\rm{P}}$.
With this general setting fixed, we proceed by listing the phenomenological requirements, to be justified momentarily:
\begin{enumerate}
\item \textbf{Light quintessence modulus} $\phi$ with $m_{\phi}\lesssim 10^{-60}M_{\rm{P}}\,.$
\item \textbf{Heavy superpartners} with $m_S\gtrsim 10^{-15} M_{\rm{P}}\,.$
\item \textbf{Heavy KK scale} with $m_{\rm{KK}}\gtrsim 10^{-30}M_{\rm{P}}\,.$
\item \textbf{Heavy volume modulus} with $m_{\mathcal{V}}\gtrsim 10^{-30} M_{\rm{P}}\,.$
\end{enumerate}
The first two requirements are obvious from what has been said above: the need for a slowly rolling scalar and consistency with the LHC. The third requirement follows from the fact that standard 4D Newtonian gravity has been tested at scales below $0.2$~meV~$\sim$~$1$~mm$^{-1}$\cite{Kapner:2006si}.
Finally, the fourth requirement is obtained if one notices that, after compactification, the Ricci scalar of the 4D theory obtains a prefactor ${\cal V}$. Then, after Weyl rescaling to the 4D Einstein frame, the scalar field corresponding to ${\cal V}$ couples to matter fields (both from D3 and D7 branes) with approximately gravitational strength. However, such fifth-force effects are ruled out by the very same experiments that test gravity at the sub-millimeter scale \cite{Damour:2010rp, Acharya:2018deu, Kapner:2006si} (measuring the Eddington parameter in the post-Newtonian expansion). Hence the volume modulus must be sufficiently heavy.
Comparing the first and last requirement, it is immediately clear that $\phi$ cannot be the volume modulus. It can, however, be one of the K\"ahler moduli measuring the relative size of different 4-cycles. We will see below that, while these can be much lighter than ${\cal V}$, reaching the extreme level of $10^{-60}M_{\rm{P}}$ proves non-trivial. We also note that such K\"ahler moduli couple to matter, though not as strongly as ${\cal V}$. These couplings tend to violate the equivalence principle, forcing them to remain about a factor of $10^{-11}$ below gravitational strength~\cite{Damour:2010rp}. Fifth-force constraints on stringy quintessence models have recently been studied in detail in~\cite{Acharya:2018deu}, where a lower bound on the compactification volume, which suppresses the couplings to other K\"ahler moduli, was found for a number of models. Our focus in this paper is different and concerns the more elementary issue of mass hierarchies in the scalar potential and the SUSY-breaking scale. The volume needed for these hierarchies is in general even larger than prescribed by the bounds from fifth-force constraints.
\section{Mass Hierarchies and resulting Bounds}
As explained, we focus on K\"ahler moduli and rely on the corrections of (\ref{corrs}) to generate a non-zero potential. It will hence be useful to recall their generic volume-scaling (e.g. from \cite{Cicoli:2009zh}). In doing so, we suppress all ${\cal O}(1)$ coefficients and write $\tau^i:=\frac{1}{2}(T^i+\bar{T}^i)$:
\begin{equation}
\delta V_{\rm{np}}\sim \frac{\sqrt{\tau_\text{s}}{\rm{e}}^{-2a_s\tau_\text{s}}}{\cal V}+\frac{W_0 \tau_\text{s} {\rm{e}}^{-a_s\tau_\text{s}}}{{\cal V}^2}\,\,\to\,\,\frac{W_0^2}{{\cal V}^3}\log^{3/2}({\cal V}/W_0) \,,\qquad
\delta V_{\alpha'}\sim\frac{W_0^2}{{\cal V}^3}\,,
\qquad \delta V_{\rm{loop}}\sim \frac{W_0^2}{{\cal V}^{10/3}}\,.
\label{ecorr}
\end{equation}
Naively, the non-perturbative correction is always subleading due to its exponential suppression. However, it may be relevant if it is induced by a `small cycle' $\tau_\text{s}$. In this case, after the modulus $\tau_\text{s}$ is integrated out, a volume-dependent effect arises which (up to a log-enhancement) scales in the same way as the $\alpha'$ correction. The interplay of these two effects may then provide the celebrated volume stabilization in LVS \cite{Balasubramanian:2005zx, Conlon:2005ki, Cicoli:2009zh} with an AdS minimum at ${\cal V}={\cal V}_0$ and
\begin{equation}
V_{\rm{LVS}}\,\,\sim\,\, \delta V_{\rm{np}}+\delta V_{\alpha'}\,\,\sim \,\,\frac{W_0^2}{{\cal V}_0^3}\,.\label{LVS}
\end{equation}
Here ${\cal V}_0$ can be exponentially large, with the exponent being $\sim \chi^{2/3}/g_\text{s}$ (where $\chi$ is the Euler characteristic of the Calabi-Yau and $g_\text{s}$ the string coupling).
As explained before, this is exactly what we need: The volume must be very large but stabilized at a sufficiently high scale to avoid fifth-force constraints. Crucially, even though ${\cal V}={\cal V}(T)$ is in general a complicated function of all K\"ahler moduli, $V_{\rm{LVS}}$ depends only on the overall volume. The role of quintessence can then be played by any combination of K\"ahler moduli other than the overall volume (and excluding any `small cycles' -- i.e.~those for which $\exp(-\tau)$ is not negligibly small).
We now need to discuss moduli masses in more detail. First, $\tau_\text{s}$ (and similar moduli stabilized by their non-perturbative corrections) are heavy: $m_{\tau_\text{s}}\sim W_0/{\cal V}$. We will not discuss them any further and also neglect their contributions to the volume. In the moduli space of the remaining `large cycles' $T^i$, one direction (corresponding to the overall volume ${\cal V}$) is stabilized by the non-perturbative and $\alpha'$ corrections. The other moduli receive a mass from $V_{\rm loop}$. Although other corrections, such as the poly-instanton corrections of \cite{Cicoli:2011yy}, could also contribute to the moduli masses, we will only discuss loop corrections here, since they generically contribute to any modulus and thus provide a lower limit on the moduli masses. To discuss them, we focus on the submanifold defined by ${\cal V}=\,$const.~and, in addition, ignore the axions. The kinetic term is then defined by the metric $K_{i\bar{\jmath}}=K_{ij}$, restricted to that submanifold. After canonical normalization of the kinetic terms, the moduli masses are obtained from the second-derivative matrix of the scalar potential $\partial_i \partial_{\bar{\jmath}}V$. The specific structure of $K_{ij}$ for large-cycle volumes allows one to estimate the masses simply by the square root of the relevant potential term (see the appendix and \cite{Skrzypek} for more details). This also holds for the volume modulus so that,
according to \eqref{ecorr} (see also \cite{Conlon:2005ki}), one finds parametrically
\begin{equation}
m_{\mathcal{V}}\sim\sqrt{\delta V_{\alpha'}}\sim\frac{W_0}{\mathcal{V}^{3/2}}\,\,,\qquad m_{\mathcal{\tau }^i}\sim m_\phi \sim\sqrt{\delta V_{\rm{loop}}}\sim\frac{W_0}{\mathcal{V}^{5/3}}\,.\label{masses}
\end{equation}
Here we use the notation $m_\phi$ since we already know that the quintessence field $\phi$ will be one of those large-cycle volumes (more precisely volume ratios) present in addition to ${\cal V}$.
Combining (\ref{masses}) with the required scales listed in the previous section, one finds
\begin{equation}
\order{10^{30}}\lesssim\frac{m_{\mathcal{V}}}{m_{\phi}}\sim\mathcal{V}^{1/6}\qquad \Rightarrow \qquad\mathcal{V}\gtrsim\order{10^{180}}\,.
\label{iso1}
\end{equation}
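To spell out the arithmetic (suppressing all $\order{1}$ factors): the ratio in \eqref{iso1} follows directly from \eqref{masses}, while the numbers are fixed by requirement 1 ($m_\phi\sim 10^{-60}M_{\rm{P}}$) and requirement 4, which demands a volume-modulus mass above the scale probed by sub-millimeter gravity tests, roughly the meV scale, i.e.~$m_{\mathcal{V}}\gtrsim\order{10^{-30}}M_{\rm{P}}$:
\begin{equation*}
\frac{m_{\mathcal{V}}}{m_{\phi}}\sim\frac{W_0/\mathcal{V}^{3/2}}{W_0/\mathcal{V}^{5/3}}=\mathcal{V}^{1/6}\gtrsim\frac{10^{-30}}{10^{-60}}=10^{30}
\qquad\Rightarrow\qquad
\mathcal{V}\gtrsim\left(10^{30}\right)^6=10^{180}\,.
\end{equation*}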
This is a very large volume and will result in very small KK scales given by
\begin{equation}
m_{\rm{KK}}=\frac{M_{\rm{s}}}{R}\sim\frac{M_{\rm{P}}}{\mathcal{V}^{1/2+1/6}}\lesssim\order{10^{-120}}M_{\rm{P}}\,,
\label{iso2}
\end{equation}
which is in conflict with requirement 3. Here we have used that the string scale $M_{\rm{s}}$ of the 10D Einstein frame is given by $M_{\rm{s}}=M_{\rm{P}}/\sqrt{\mathcal{V}}$ and the typical radius $R$ of the compactification is the sixth root of the volume, assuming isotropy.
The loop corrections involving the quintessence modulus thus have to be suppressed more strongly than by $\mathcal{V}^{-10/3}$. As suggested in~\cite{Cicoli:2011yy, Cicoli:2012tz}, anisotropic compactifications may provide the required suppression. To understand this idea, a heuristic argument for the power of $-10/3$ in the loop corrections is useful~\cite{Cicoli:2009zh, Cicoli:2011yy}: From a 4D point of view, loop corrections arise from loops of all light fields below a cutoff $\Lambda$, where the 4D description breaks down. This $\Lambda$ is assumed to be given by the lowest KK scale, where the theory becomes effectively higher-dimensional.\footnote{
This is a non-trivial assumption since loop corrections may, of course, also arise in higher-dimensional field theory or directly at the string level. In fact, one probably has to assume that the restoration of a sufficiently high level of SUSY above the KK scale cuts off the loop integrals. However, in the present case SUSY is broken by fluxes, and these penetrate not just the large-radius but {\it all} extra dimensions. So further scrutiny may in fact be required to justify the use of the {\it lowest} KK scale as a cutoff.}
The fields running in the loops contribute with different masses and signs and the potential at 1-loop order will be the SUSY analogue of the Coleman-Weinberg potential \cite{Coleman:1973jx, Ferrara:1994kg}:
\begin{equation}
V=V_\text{tree}+\frac{1}{64\pi^2}\text{STr}\mathcal{M}^0\cdot\Lambda^4\log\frac{\Lambda^2}{\mu^2}+\frac{1}{32\pi^2}\text{STr}\mathcal{M}^2\cdot\Lambda^2+\frac{1}{64\pi^2}\text{STr}\mathcal{M}^4\log\frac{\mathcal{M}^2}{\Lambda^2}+...\,.
\label{cw}
\end{equation}
The second term disappears due to SUSY. The third term involves the supertrace $\text{STr}\mathcal{M}^2$ of all fields running in the loops. In general 4D $\mathcal{N}=1$ SUGRA, this supertrace is given by $\text{STr}\mathcal{M}^2=2Qm_{3/2}^2$, where $Q$ is a model dependent $\order{1}$ coefficient, while $m_{3/2}$ is the gravitino mass given by $\abs{W}/\mathcal{V}$. This allows us to estimate the lowest order loop corrections by
\begin{equation}
\delta V_{\text{loop}}\sim Am_{\rm{KK}}^2m_{3/2}^2+Bm_{3/2}^4\sim A m_{\rm{KK}}^2 \frac{W_0^2}{\mathcal{V}^2}+ B\frac{W_0^4}{\mathcal{V}^4}
\label{potential}
\end{equation}
with $\order{1}$ constants $A$ and $B$.\footnote{Although the terms in \eqref{potential} could in principle cancel each other, we will not discuss such cancellations here and refer to the discussion of loopholes below.} As discussed earlier, in an isotropic compactification the first term gives exactly the familiar $\mathcal{V}^{-10/3}$ dependence which results in too small KK scales. Therefore, we now assume an anisotropic compactification with $l$ large dimensions of radius $R\sim\mathcal{V}^{1/l}$ and the other $6-l$ dimensions at string scale for highest possible suppression. This creates a hierarchy between the KK scales so that the heavy KK modes have masses at string scale while the light ones have masses of order $m_{\rm{KK}}\sim\mathcal{V}^{-(1/2+1/l)}$. Looking only at the first term in \eqref{potential}, we observe that smaller $l$ makes the quintessence field lighter. However, this improvement ends when the value of the first term falls below that of the second, $m_{\rm{KK}}$-independent term. This occurs at $l=2$, which is hence the optimal value on which we now focus. We note that further suppression can apparently be achieved if $l=1$ and, in addition, $W_0$ is tuned small. But, as we will explain below, this does not resolve the problems we will face.
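To see explicitly why $l=2$ is optimal (a simple scaling sketch, with $W_0\sim\order{1}$ and $M_{\rm{P}}=1$): inserting $m_{\rm{KK}}\sim\mathcal{V}^{-(1/2+1/l)}$, the two terms of \eqref{potential} scale as
\begin{equation*}
A\, m_{\rm{KK}}^2\,\frac{W_0^2}{\mathcal{V}^2}\sim\mathcal{V}^{-3-2/l}\,,\qquad\quad
B\,\frac{W_0^4}{\mathcal{V}^4}\sim\mathcal{V}^{-4}\,,
\end{equation*}
so lowering $l$ suppresses the first term until $3+2/l=4$, i.e.~$l=2$, where both terms become comparable and the total loop correction scales as $\mathcal{V}^{-4}$.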
Thus, in the anisotropic scenario with $l=2$, the quintessence scalar gets loop corrections only at order $\mathcal{V}^{-4}$, which, in contrast to \eqref{masses}, induces a quintessence mass\footnote{
We again refer to the appendix for a justification of the formula $m_\phi\sim\sqrt{\delta V_{\rm loop}}$.}
\begin{equation}
m_\phi \sim\sqrt{\delta V_{\rm{loop}}}\sim\frac{W_0}{\mathcal{V}^2}\,.\label{suppressed}
\end{equation}
Since requirement 3 bounds the volume to $\mathcal{V}\lesssim \order{10^{30}}$, we can marginally obtain the right quintessence mass. However, using $m_{\mathcal{V}}$ from \eqref{masses} and $m_\phi$ from \eqref{suppressed} together with our phenomenological requirements 1 and 4, we conclude
\begin{equation}
\order{10^{-30}}\gtrsim\frac{m_\phi}{m_{\mathcal{V}}}\sim\mathcal{V}^{-1/2}\sim m_{\rm{KK}}^{1/2}\qquad \Rightarrow \qquad \order{10^{-60}}\gtrsim m_{\rm{KK}}\label{h1}\,,
\end{equation}
where in the last step, we see a contradiction with requirement 3 arising as the KK scale becomes too low.
So even in the anisotropic case the required hierarchy cannot be achieved through the standard LVS approach.\footnote{As mentioned above, we can further suppress $V_\text{loop}$ by choosing $l<2$ and tuning $W_0$ small. The obvious possibility is $l=1$ corresponding to one large and five small dimensions. One may also consider more complicated geometries where several radii between $1/M_{\rm{s}}$ and some maximal radius $1/M_{\rm{KK}}$ are used. This latter case may be treated by using an effective $l$ with $1 \leq l \leq 2$ in the crucial formula for $m_{\rm{KK}}$. Either way, repeating the analysis which led to \eqref{h1} one arrives at $m_{\rm{KK}} \leq \mathcal O (10^{-30-15l})$ for general $l$. Thus, requirement 3 is always violated and the light volume problem cannot be resolved by going to $l\leq2$.}
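For completeness, the general-$l$ estimate quoted in the footnote follows from a one-line computation (again suppressing $\order{1}$ factors and setting $M_{\rm{P}}=1$): when the first term of \eqref{potential} dominates, $m_\phi\sim m_{\rm{KK}}\,m_{3/2}\sim m_{\rm{KK}}\,W_0/\mathcal{V}$, and the tuned value of $W_0$ drops out of the ratio
\begin{equation*}
\frac{m_\phi}{m_{\mathcal{V}}}\sim\frac{m_{\rm{KK}}\,W_0/\mathcal{V}}{W_0/\mathcal{V}^{3/2}}=m_{\rm{KK}}\,\mathcal{V}^{1/2}\sim\mathcal{V}^{-1/l}\lesssim 10^{-30}
\qquad\Rightarrow\qquad
m_{\rm{KK}}\sim\mathcal{V}^{-(1/2+1/l)}\lesssim 10^{-30-15l}\,,
\end{equation*}
always in violation of requirement 3.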
We will refer to this problem, which has already been noted in~\cite{Cicoli:2011yy, Cicoli:2012tz}, as the ``light volume problem''. To resolve it, one needs an extra contribution to the scalar potential, which gives the volume modulus a higher mass. This is already critical. However, as we will see momentarily, things get even more challenging if we take into account SUSY breaking. This will provide an independent argument for a new scalar-potential term, fixing also its sign and prescribing a significant overall magnitude.
\section{The $F$-term Problem}
It is necessary to ensure that the SM superpartners are sufficiently heavy (requirement 2). This will prove to be very challenging. For instance, the gaugino mass is given by
\begin{equation}
m_{1/2} = \frac{1}{2}\frac{F^m \partial_m f}{\text{Re} f},
\end{equation}
where $f$ is the gauge-kinetic function. If the SM gauge group is realized on D7-branes, $m_{1/2}$ scales as $|W|/\mathcal{V}$. For D3 realizations, the soft scale is suppressed more strongly~\cite{Conlon:2005ki} -- so this does not help. Due to the aforementioned phenomenological requirements 1 and 2, the hierarchy between the quintessence field and the gaugino must fulfill
\begin{equation}
\frac{m_\phi}{m_{1/2}} \lesssim \mathcal{O}(10^{-45}).
\end{equation}
We can furthermore use the first term in (\ref{potential}) to conclude that $m_\phi\gtrsim m_{\rm{KK}}m_{3/2}$ and observe that $m_{3/2}\sim m_{1/2}$ in the present setting. This implies $m_\phi/m_{1/2} \gtrsim m_{\rm{KK}}$ which, given the lower bound on the KK scale from requirement 3, is in conflict with the hierarchy required above. We conclude that the gaugino mass cannot be generated by the SUSY breaking of the K\"ahler moduli alone.
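Numerically, in units of $M_{\rm{P}}$:
\begin{equation*}
\frac{m_\phi}{m_{1/2}}\gtrsim\frac{m_{\rm{KK}}\,m_{3/2}}{m_{3/2}}=m_{\rm{KK}}\gtrsim\order{10^{-30}}\,,
\end{equation*}
where the last step is the lower bound on the KK scale from requirement 3; this overshoots the required ratio of $\order{10^{-45}}$ by some fifteen orders of magnitude.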
Instead, to obtain large enough gaugino masses, we need a further source of SUSY breaking. One can realize this on the SM brane through mediation from a hidden sector where SUSY is broken spontaneously by the non-vanishing $F$-term of a spurion field $X$. Without loss of generality, we will use the language of spontaneous SUSY breaking even in the case that this breaking is realized locally (at the same Calabi-Yau singularity as the SM) and directly at the string scale.\footnote{
In this case one may speak of non-linearly realized SUSY (see~\cite{Ferrara:2016een} for recent progress in this context). One may, however, also continue to use the language of e.g.~$F$-term SUSY breaking in SUGRA, sending the masses of the fields in the SUSY-breaking sector to infinity.
}
According to \cite{Conlon:2005ki}, the moduli $X_{\alpha}$ of D3-branes enter the K\"ahler potential $K(T+\overline{T})$ through the replacement
\begin{equation}
2\tau^i=T^i+\bar{T}^{\bar{\imath}}\quad\to\quad2\tau'^i=T^i+\bar{T}^{\bar{\imath}}+k^i(X^\alpha,\bar{X}^{\bar{\alpha}})\,,
\end{equation}
where $k^i(X^\alpha,\bar{X}^{\bar{\alpha}})$ are some real-valued functions. These may be chosen quadratic or higher-order since any linear components can be absorbed into the definition of the $T^i$ or removed via a K\"ahler transformation. We will call the resulting new K\"ahler potential $K'$. Now computing the scalar potential involves inverting a $2\times 2$ block matrix, with the blocks corresponding to the $T^i$ or $X^\alpha$ variables. One finds that the $F$-term contribution from the K\"ahler moduli cancels against the gravitational term $-3{\rm{e}}^{K'}\abs{W}^2$ in standard no-scale fashion, leaving behind a term\footnote{
Here
we assume that $X=0$ in the vacuum. To be completely explicit, one may think of $k\sim X\overline{X}-a(X\overline{X})^2$ and $W=bX$ in the single-field case.
}
\begin{equation}
V\supset \delta V_X=K'_{\alpha\bar{\beta}}F_X^\alpha \bar{F}_X^{\bar{\beta}} \qquad{\rm{where}}\quad K'_{\alpha\bar{\beta}}=K_i\partial_\alpha\partial_{\bar{\beta}}k^i\,,\quad F_X^\alpha={\rm{e}}^{K'/2}K'^{\alpha\bar{\beta}}\partial_{\bar{\beta}}\bar{W}\,.
\end{equation}
Thus, SM-brane SUSY breaking gives a positive contribution to the scalar potential, which is added on top of the zero potential resulting from the K\"ahler-moduli no-scale structure. Now consider a simple toy model with a single spurion field $X$ and $F$-term $F_X\equiv F$. Let SUSY breaking be mediated through higher-dimension operators suppressed by $M$, which we define to be the mediation scale of the flat SUSY limit (see \cite{Skrzypek} for details). After canonical normalization of $X$ and its $F$-term, one has $m_{1/2}\sim F/M$
(and similarly for the other soft terms), which implies
\begin{equation}
\delta V_X\sim F^2\sim M^2m_{1/2}^2\,.
\label{fterm}
\end{equation}
In the D7-brane case, a similar substitution, $S+\bar{S}\to S+\bar{S}+k(X,\bar{X})$, is applied to the dilaton term in $K$. Since the dilaton $S$ is stabilized by fluxes it can be treated as a constant, so the scalar potential is simply $\abs{D_XW}^2$. This generates the positive $F$-term even more directly, so we will not discuss this case separately.
Soft masses are phenomenologically constrained to be at least $\sim\,$TeV$\,\sim\order{10^{-15}}M_{\rm{P}}$. Moreover, $M$ should be high enough to hide the SUSY-breaking sector. It is then natural to assume $M\gtrsim\order{10^{-15}}M_{\rm{P}}\,,$\footnote{We will more carefully exclude lower values in Section \ref{sec:limits_on_delta_V}.}
which implies $\delta V_X \sim M^2m_{1/2}^2 \sim \order{10^{-60}}M_{\rm{P}}^4\,.$ This is of the same order of magnitude as the cancellation in the standard no-scale scenario, i.e.~far larger than the first-order LVS corrections.\footnote{
Indeed,
as noted earlier $m_\phi\gtrsim m_{\rm{KK}}m_{3/2}$ so that the canceling terms in the no-scale potential are of order $V_{\rm no-scale}\sim m_{3/2}^2\lesssim m_\phi^2/m_{\rm{KK}}^2\lesssim 10^{-60}M_{\rm P}^4\,,$ where we enforce requirements 1 and 3.}
Thus $\delta V_X$ raises the height of the scalar potential to very large positive values which cannot be canceled by the terms in $V_{\rm{LVS}}$ of \eqref{LVS}.
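To verify the footnote's estimate (in units of $M_{\rm{P}}$): with $m_\phi\gtrsim m_{\rm{KK}}\,m_{3/2}$ and requirements 1 and 3 ($m_\phi\sim 10^{-60}$, $m_{\rm{KK}}\gtrsim 10^{-30}$),
\begin{equation*}
V_{\rm{no\text{-}scale}}\sim m_{3/2}^2\lesssim\frac{m_\phi^2}{m_{\rm{KK}}^2}\lesssim\frac{(10^{-60})^2}{(10^{-30})^2}=10^{-60}\,,
\end{equation*}
which is indeed of the same order as $\delta V_X$ itself.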
\subsection{Limits on $\delta V_X$} \label{sec:limits_on_delta_V}
Since $\delta V_X$ has emerged as a key issue for the most popular stringy quintessence models, we want to evaluate more carefully whether this hidden-sector contribution to the scalar potential can be consistently tuned to smaller values. Recall from \eqref{fterm} that it scales as $\delta V_X \sim m_{1/2}^2 M^2$. Since the gaugino mass should not be smaller than $\order{10^{-15}} M_{\rm{P}}$, the only option is to reduce $M$ and $F$ at the same time.
While explicit model building is not our main goal, we note in passing that realistic scenarios with small $F$-terms and correspondingly small mediation scale are not easy to get. For successful constructions in the 5D context and a discussion of the problems one encounters see~\cite{Dimopoulos:2014aua, Dimopoulos:2014psa, Garcia:2015sfa}.
A simultaneous reduction of $M$ and $F$ implies a reduction of the gravitino mass. In the past, there have been many investigations aimed at constraining the latter using data from electroweak colliders \cite{Brignole:1997sk,Luty:1998np, Abbiendi:2000hh,Heister:2002ut,Achard:2003tx,Abdallah:2003np, Antoniadis:2012zz,Mawatari:2014cja} like LEP or hadronic ones \cite{Brignole:1998me,Acosta:2002eq,Klasen:2006kb,deAquino:2012ru,Aad:2015zva} like the Tevatron. These bounds on $m_{3/2}$ translate into lower limits on the SUSY-breaking scale, which typically constrain $\sqrt{F}$ to be larger than a few $100 \, \text{GeV}$.
The most recent and stringent bounds result from missing-momentum signatures in $pp$ collisions at the LHC. To understand the emergence of such bounds, let us consider an exemplary toy model where SUSY is spontaneously broken in a hidden sector through a non-vanishing $F$-term in the vacuum and mediated to the SM sector via the interaction terms
\begin{equation}
\mathcal{L}_\text{int} = \frac{a}{M^2} \int d^4 \theta X^\dagger X \Phi^\dagger \Phi + \frac{b}{M} \int d^2 \theta X W^\alpha W_\alpha + \text{h.c.}\,, \label{eq:lagrangian_interaction}
\end{equation}
where $\Phi$ is a chiral superfield representing quarks $q$ and squarks $\tilde{q}$ whereas $W^\alpha$ is the supersymmetric field-strength tensor of a vector superfield $V$ representing gluons $g$ and gluinos $\tilde{g}$. A non-zero $F$ in the vacuum will generate soft masses for the squarks and gluinos, which are given by $m_{\tilde{q}}^2 = a F^2/M^2$ and $m_{\tilde{g}} \sim b F/M$, respectively. The hidden-sector field $X$ contains the goldstino $\tilde{G}$, which gets eaten by the gravitino due to the super-Higgs mechanism. In the limit $\sqrt{s}/m_{3/2} \gg 1$, the helicity-1/2 modes dominate over the helicity-3/2 modes and, according to the gravitino-goldstino equivalence theorem \cite{Casalbuoni:1988kv, Casalbuoni:1988qd}, yield the same S-matrix elements as the goldstinos. Hence, in this simple discussion, we identify the gravitino with the goldstino. We are now interested in processes which turn two hadrons into a hadronic shower plus gravitinos, where the latter induce a missing-momentum signature. For instance, we can consider a process with two quarks in the initial state and two gravitinos in the final state, with a gluon radiated from one of the initial quarks, resulting in a hadronic shower. The gluon radiation costs a factor $\sqrt{\alpha_S}$. Several beyond-SM processes contribute to the crucial $qq$-$\tilde{G}\tilde{G}$-amplitude. One of them is the direct 4-particle coupling from \eqref{eq:lagrangian_interaction}:
\begin{equation}
\sim \frac{a}{M^2} \bar{\tilde{G}} \tilde{G} \bar{q} q \subset \frac{a}{M^2} \int d^4 \theta X^\dagger X \Phi^\dagger \Phi\,.
\end{equation}
Due to the prefactor $a/M^2$, this vertex contributes a factor $1/F^2$ to the amplitude so that the cross section will be proportional to $\alpha_S/F^4$. This $F^{-4}$-dependence of the cross section is typical
for such processes and therefore the upper limits on them, provided by measurements at hadron colliders, translate into lower bounds on $F$.
In a recent experimental analysis of the ATLAS collaboration \cite{Aad:2015zva}, the process $pp \rightarrow \tilde{G} + \tilde{q}/\tilde{g}$ is considered, whereupon the squark or gluino decays into a gravitino and a quark or gluon, respectively. Depending on the squark and gluino masses, as well as on their ratios, the authors derive lower bounds on the gravitino mass around $m_{3/2} \approx (1 - 5) \times 10^{-4} \, \text{eV}$ corresponding to SUSY-breaking scales $\sqrt{F} \approx (650 - 1460) \, \text{GeV}$.
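The quoted correspondence between $m_{3/2}$ and $\sqrt{F}$ is just the standard tree-level SUGRA relation $m_{3/2}=F/(\sqrt{3}\,M_{\rm{P}})$, with $M_{\rm{P}}\simeq 2.4\times 10^{18}\,\text{GeV}$ the reduced Planck mass. For example,
\begin{equation*}
\sqrt{F}=650\,\text{GeV}\qquad\Rightarrow\qquad
m_{3/2}=\frac{(650\,\text{GeV})^2}{\sqrt{3}\,M_{\rm{P}}}\simeq 1.0\times 10^{-4}\,\text{eV}\,.
\end{equation*}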
In \cite{Maltoni:2015twa}, not only the process $pp \rightarrow \tilde{G} + \tilde{q}/\tilde{g} \rightarrow 2 \tilde{G} + q/g$ but also direct gravitino-pair production with a quark or gluon emitted from the initial proton, as well as squark or gluino pair production with a subsequent decay into gravitinos and quarks or gluons, are considered. Taking into account all three processes, the authors of \cite{Maltoni:2015twa} use the model-independent 95\% confidence-level upper limits by ATLAS \cite{ATLAS:2012zim} on the cross section for gravitino + squark/gluino production to constrain $\sqrt{F} > 850 \, \text{GeV}$. This is done for the case when the squark and gluino masses are much larger than those of the SM particles so that they can effectively be integrated out (in the paper, the value $m_{\tilde{q}/\tilde{g}} = 20 \, \text{TeV}$ is used). In other scenarios, where one or both of these two types of superpartners have lower masses, the bound becomes even higher.
We conclude that, in accordance with the current experimental status, the mass scale of SUSY breaking $\sqrt{F}$ cannot be lowered significantly below $100 \, \text{GeV} - 1 \, \text{TeV}$ so that $\delta V_X$ can be at most a few orders of magnitude below $\order{10^{-60}}M_{\rm{P}}^4$. Such a contribution cannot be canceled by any known term in our scenario as has been discussed already.
\subsection{Need for a new contribution}
We have seen that requirement 2 of heavy superpartners implies the presence of a large positive contribution $\delta V_X$ to the scalar potential. This would raise the potential far above the observed energy density $\order{10^{-120}}M_{\rm{P}}^4$, rendering this whole scenario unviable. Since we do not know how to avoid this effect, it appears logical to assume the presence of a further negative contribution of equal magnitude, which fine-tunes $V$ to a level consistent with observations. In the preferred case of $l=2$ and for $W_0\sim{\cal O}(1)$, the required magnitude is $\delta V_{\text{new}}\sim\mathcal{V}^{-2}$. Such a contribution may also solve the light volume problem \eqref{h1}. Indeed, if its volume dependence is generic, one expects an induced volume-modulus mass $m_{\mathcal{V}}\sim\mathcal{V}^{-1}$. This is just enough to build all required hierarchies.
We emphasize that this contribution remains largely hypothetical and that the nature of its generation and form is not understood. Possible effects suggested in \cite{Cicoli:2011yy,Cicoli:2012tz} are loop corrections from open strings on the SM brane and the back-reaction of the bulk on the brane tension along the lines of the SLED models \cite{Burgess:2004xk}. Open-string loops may induce a Coleman-Weinberg potential with cutoff at the string scale $M_{\rm{s}}\sim M_{\rm{P}}/\sqrt{\mathcal{V}}$, such that the leading term scales as $M_{\rm{s}}^4\sim M_{\rm{P}}^4/\mathcal{V}^2$. Although this is the correct order of magnitude for $\delta V_{\text{new}}$, the volume dependence appears to be too simple to allow for volume-modulus stabilization. Moreover, being a higher-order correction to the brane sector, we would assume it to already be part of the low-energy effective K\"ahler potential for $X$ and the SM fields which we used to derive $F$-terms and induce superpartner masses. As such it could not contribute the required negative energy to cancel the critical $F$-term.
As mentioned above, a counteracting contribution could also be found in the bulk back-reaction. Since the SM-brane tension is the origin of the large $F$-term, a back-reaction to this tension from the bulk appears to be promising. Still, as our analysis shows, it remains a challenge to include this in the 4D effective theory, specifically in the 4D effective SUGRA, which we expect to arise at low energies in the string theoretic settings we consider (see also \cite{Nilles:2003km,Burgess:2005wu, Burgess:2011mt,Cicoli:2011yy, Cicoli:2012tz} for related discussions).
Finally, in the context of the de Sitter swampland conjecture \eqref{conjecture}, our $F$-term implies yet another difficulty. Even if the new term $\delta V_{\text{new}}$ cancels the $F$-term to leave a sufficiently small potential, a small change in the SM or SUSY-breaking parameters can raise the $F$-term and with it the residual scalar potential to violate the conjecture. This is also problematic in other models and we will come back to this issue in the following sections.
\section{Loopholes and Alternative Approaches}
There are several potential loopholes in our analysis. The first one is the possibility that the quintessence modulus is extremely light (i.e.~the loop-induced potential is extremely flat) by fine-tuning.\footnote{For example, one could imagine a model where the two terms in \eqref{potential} cancel to a very small residue.}
However, this seems implausible for the following reason:
The flatness must hold on a time scale of order $H_0^{-1}$. In quintessence models which respect the de Sitter conjecture \eqref{conjecture}, the scalar field has to run sufficiently far during such a period. Indeed, from the Klein-Gordon equation in a Friedmann-Robertson-Walker background together with $|V'|/V\lesssim 1$ it follows that $\Delta\phi\sim\order{1}$ in one Hubble time.
In a Taylor expansion of $\delta V_{\rm{loop}}$, we therefore have to take into account all orders of $\Delta\phi$. It is thus not enough to fine-tune $\delta V_\text{loop}$ at one point but we must tune an infinite number of derivatives to small values. This cannot be coincidental but has to be based on some mechanism or symmetry. Although in our specific model such a perfect decoupling of one K\"ahler modulus from the loop corrections seems implausible, there might of course be other constructions where the required sequestering can be achieved (see \cite{Acharya:2018deu, Heckman:2019bzm} for discussions).
Another possibly critical point is the approximation of loop corrections through the Coleman-Weinberg potential \eqref{cw} with $m_{\rm{KK}}$ as a cutoff. Here, one has to ensure that no other, stronger corrections arise. Such corrections seem possible, for example, because the KK scale is far below the weak scale. Thus, when applying the formula, one has to do so in a setting where the SM~brane (with SUSY broken at a higher scale) has already been integrated out. This needs further scrutiny. Another concern is that even in the bulk SUSY may not be fully restored above $m_{\rm{KK}}$ due to the effect of bulk fluxes.
Still, we trust the formula to give at least a lower bound on the loop corrections; this bound cannot be evaded and thus makes our conclusions unavoidable.
A number of alternative approaches to quintessence building from string theory have been proposed.
Let us first comment on the possibility of axion quintessence. Based on the SUGRA scalar potential, one generically expects an axion potential
\begin{equation}
V=\Lambda^4\cos\left(\frac{\phi}{f}\right)+a\,, \qquad \Lambda^4\sim M_{\rm{P}}^2m_{3/2}^2\rm{e}^{-S_{\rm inst.}}\,.
\label{axion}
\end{equation}
This could provide the required dark energy if $\phi$ is at the ``hilltop'' and, at the same time, satisfy the second condition of \eqref{conjecture} (assuming reasonably small $c'$). For simplicity, let us start the discussion taking $a=0$. Then the slow-roll condition, which we need phenomenologically, requires a trans-Planckian axion decay constant $f$~\cite{Panda:2010uq}. But this is in conflict with quantum-gravity expectations or, more concretely, the weak gravity conjecture for axions~\cite{Banks:2003sx, ArkaniHamed:2006dz}:
\begin{equation}
f\leq\order{1}M_{\rm{P}}\qquad\mbox{or}\qquad S_{\rm{inst.}}\leq \alpha \frac{M_{\rm{P}}}{f}\,.
\end{equation}
The conflict is strengthened if one recalls that the potential must be tiny, i.e. $M_{\rm{P}}^2m_{3/2}^2{\rm{e}}^{-\alpha M_{\rm{P}}/f}\lesssim 10^{-120}M_{\rm{P}}^4$. For $\alpha \sim\order{1}$, this implies $f\sim\order{10^{-2}}M_{\rm{P}}$, which is in conflict with slow-roll. As suggested in \cite{Cicoli:2018kdo}, one might hope to ease the tension by employing the constant contribution $a$ to the potential \eqref{axion}.\footnote{Another idea to resolve the conflict would be to move away from the hilltop to a point in field space where both slow-roll conditions are as weak as possible. This turns out not to work.} If $a$ is negative, the slow-roll condition is violated even more strongly. Positive $a$ greater than $\Lambda^4$ leads to a violation of the de Sitter conjecture at the minimum. The best option is then $a=\Lambda^4$ which, however, does not help much: The slow-roll requirements on $f$ change only by a factor $\sqrt{2}$, so $f$ still needs to be at the Planck scale.
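The arithmetic behind the quoted value of $f$ is simple. Taking for illustration $m_{3/2}\sim\text{TeV}\sim 10^{-15}M_{\rm{P}}$ (the result depends on this choice only logarithmically), the smallness of the potential requires
\begin{equation*}
M_{\rm{P}}^2m_{3/2}^2\,{\rm{e}}^{-\alpha M_{\rm{P}}/f}\lesssim 10^{-120}M_{\rm{P}}^4
\quad\Rightarrow\quad
{\rm{e}}^{-\alpha M_{\rm{P}}/f}\lesssim 10^{-90}
\quad\Rightarrow\quad
\frac{f}{M_{\rm{P}}}\lesssim\frac{\alpha}{90\ln 10}\simeq\frac{\alpha}{207}\,,
\end{equation*}
i.e.~$f\sim\order{10^{-2}}M_{\rm{P}}$ for $\alpha\sim\order{1}$.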
With this naive approach we would have to violate the weak gravity conjecture by assuming an unacceptably large $S_{\rm inst.}$. However, the weak gravity conjecture is presumably on stronger footing than the de Sitter conjecture, so this is against the spirit of the swampland discussion. Instead, alternative elements of model building may be invoked to save axion quintessence. An option is the use of axion monodromy~\cite{Panda:2010uq}. Another idea developed and discussed in \cite{Nomura:2000yk, delaFuente:2014aca, Hebecker:2017uix, Ibe:2018ffn, Hebecker:2019vyf} is a further suppression of the prefactor of the axion potential. A specific model with a highly suppressed axion potential for an electroweak axion has been developed in \cite{Nomura:2000yk, Ibe:2018ffn}. We note that the most obvious suppression effects are related to high-quality global symmetries in the fermion sector, suggesting a relation between the weak gravity conjecture and global-symmetry censorship~\cite{Hebecker:2019vyf,Fichet:2019ugl}.
If such models succeed in providing a sufficiently flat potential, we still have to account for large enough SUSY breaking in the full model to generate heavy SM superpartners. The large $F$-term required has to be canceled to allow for the flat axion potential to dominate. Assuming this cancellation to be implemented, we can again slightly change the SUSY-breaking contributions to shift the axion potential to positive values and violate the de Sitter conjecture at the minima. The full model would need to balance out these changes by some intricate mechanism.
An alternative approach to building a quintessence potential from KKLT-like ingredients has been taken in \cite{Emelin:2018igk} where the quintessence field is given by the real part of a complexified K\"ahler modulus. This K\"ahler modulus runs down a valley of local axionic minima in the real direction. Since the universe is assumed to be in a non-supersymmetric non-equilibrium state today, it can evolve at positive potential energies. However, since the potential has to be sufficiently small to constitute a quintessence model, the superpotential has to be tuned to very small values, which results in a small gravitino mass. It appears that one needs further SUSY breaking and the $F$-term problem re-emerges.
An interesting alternative to quintessence has been introduced in \cite{Hardy:2019apu}: The zero-temperature scalar potential is assumed to satisfy the de Sitter conjecture, but a thermally excited hidden sector stabilizes a scalar field at a positive-energy hilltop. The authors illustrate this idea using a simple Higgs-like potential $V=-m_\phi^2\phi^2/2+\lambda\phi^4+C$. Since the hidden sector must not introduce too much dark radiation, the temperature and hence also $m_\phi$ are bounded from above by today's CMB temperature, which is roughly $0.24$~meV.
Since this model does not need an approximate no-scale structure to ensure an extremely flat potential at large ${\cal V}$, our $F$-term problem does not immediately arise.
However, it makes an indirect appearance as follows: Both the present toy-model potential and more general models of this type are expected to have a minimum somewhere. In the present case, its depth is $m_\phi^4/16\lambda$, which is very small unless $\lambda$ is truly tiny. Now, since some $F$-term effect $\delta V_X$ must be present somewhere in the complete model, a small de-tuning of this $\delta V_X$ will be sufficient to lift the model into the swampland. Thus, some form of conspiracy must again be at work for this model to describe our world and the de Sitter conjecture to hold simultaneously.
A way out is provided by assuming that $\lambda\sim \order{10^{-64}}$ and available $\delta V_X$ are bounded at $\sim\,$TeV. Then the minimum is too deep to be lifted to de Sitter by de-tuning. Even then, one has to be careful to ensure that $|V''|/V$ does not become too small as one uplifts the model by de-tuning the SUSY-breaking effect. We approximate the possible de-tuning by the order of magnitude of the $F$-term itself: $\Delta(\delta V_X)\sim\delta V_X\sim F^2$. As a result $|V''|/\Delta(\delta V_X) \sim m_\phi^2/F^2\sim \order{(10^{-31})^2/10^{-60}} \sim \order{10^{-2}}$, which is critical in view of the de Sitter conjecture. Thus, even in this rather extreme case, a version of the $F$-term problem can at best be avoided only marginally.
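For orientation, the unit conversions entering this estimate are
\begin{equation*}
m_\phi\lesssim T_{\rm{CMB}}\simeq 0.24\,\text{meV}\simeq 10^{-31}M_{\rm{P}}\,,\qquad
\Delta(\delta V_X)\sim F^2\sim(\text{TeV})^4\sim 10^{-60}M_{\rm{P}}^4\,,
\end{equation*}
which is where the ratio of $\order{10^{-2}}$ quoted above originates.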
\section{Conclusion}
We have analyzed stringy quintessence on the basis of the phenomenologically required hierarchies between quintessence mass, volume-modulus mass, SUSY-breaking scale and KK scale. Within the type IIB framework, one is naturally led to the setting of~\cite{Cicoli:2012tz}, where quintessence corresponds to the rolling in K\"ahler moduli space at fixed overall volume. One also immediately notices the light volume problem, which requires a new ingredient (see~\cite{Cicoli:2011yy} for a suggestion) to make the volume modulus sufficiently heavy.
In addition, we have identified what one might call an $F$-term problem. It derives from the fact that SUSY breaking by the $F$-terms of K\"ahler moduli is far too weak phenomenologically. Thus, an additional SUSY-breaking sector on the SM brane is required. This generates a sizable uplift contribution to the scalar potential. The well-known negative contributions associated with $\alpha'$-, loop and non-perturbative effects are much too small to cancel this uplift, given that we are at very large values of the volume modulus.
The situation can then be summarized as follows: The construction of quintessence from a K\"ahler modulus in Type IIB flux compactifications requires a yet unknown contribution to the scalar potential. This is not only needed to stabilize the volume modulus but, in addition, it must be negative and of the order $\delta V_{\text{new}}\sim\mathcal{V}^{-2}$ to compensate for the effect of SUSY breaking. Moreover, this correction may not raise the mass of the other K\"ahler moduli.
Finally, if the above requirements can be met, a further issue arises: In the framework envisioned above, today's tiny vacuum energy is the result of a precise cancellation between the SM-related $F$-term uplift and $\delta V_{\text{new}}$. It would then appear that models with a slightly higher $F$-term
uplift, induced by a tiny change in the SM or SUSY-breaking sector parameters, should also exist. Such models would have an unchanged tiny slope $V'$ but a much higher potential $V$, violating even a mild form of the de Sitter swampland conjecture (such as (\ref{conjecture}) with a fairly small $c$ and $c'$).
Possibilities to go forward include the specification and study of the missing potential effect $\delta V_{\text{new}}$, the construction of models which completely evade the effective-4D-SUGRA logic that we used, or the study of entirely different string-theoretic settings. The latter may, for example, use type IIA or the heterotic framework or appeal to different quintessence candidates, like the rolling towards large complex structure or small string coupling. Of course, in the first case one may find oneself at large volume after all, as suggested by mirror symmetry. In the second case, one faces the risk that the string scale falls below the KK scale. Returning to our analysis in this paper, we suspect that in many cases some variant of our $F$-term problem, rooted in the strong SUSY breaking in the SM, is likely to be relevant.
\section*{Acknowledgements}
We would like to thank Pablo Soler and Michele Cicoli for fruitful discussions. This work is supported by the Deutsche Forschungsgemeinschaft (DFG) under Germany's Excellence Strategy EXC-2181/1 - 390900948 (the Heidelberg STRUCTURES Excellence Cluster). Furthermore, M.W. thanks the DFG for support through the Research Training Group ``Particle Physics beyond the Standard Model'' (GRK 1940).
\section*{Appendix: Estimating Moduli Masses from the Potential}
We will argue that under reasonable assumptions the mass scale of a physical modulus is usually set by the leading term $\delta V$ in the scalar potential that involves the respective modulus:
\begin{equation}
m^2\gtrsim \delta V\,.
\end{equation}
This is easy to see for the volume modulus but requires justification for the other moduli. Although heavier masses can easily arise for `small-cycle' moduli which correspond to small terms in ${\cal V}$, much lighter masses require some kind of cancellation, which will generally involve tuning.
To illustrate the idea, consider the toy-model Lagrangian
\begin{equation}
\mathcal{L}=\frac{\partial_\mu X \partial^\mu X}{2X^2} + V(X)\,,\qquad\mathrm{where}\qquad V''(X)\sim\frac{V(X)}{X^2}\,.
\end{equation}
The canonical field is introduced through $X=\exp(\phi)$. Then the physical mass squared is the second derivative of the potential w.r.t. $\phi$. Given our assumption about $V''(X)$, this is of the same order of magnitude as the potential itself. Thus, suppressing $\order{1}$ coefficients, the approximation $m^2\sim \delta V$ is justified.
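Explicitly, with $X={\rm{e}}^{\phi}$ the chain rule gives
\begin{equation*}
m^2=\frac{\partial^2 V}{\partial\phi^2}=\frac{\partial}{\partial\phi}\Big(X\,V'(X)\Big)=X\,V'(X)+X^2\,V''(X)\sim V(X)\,,
\end{equation*}
where, in addition to the stated assumption on $V''(X)$, we used the natural expectation $V'(X)\sim V(X)/X$, valid whenever $V$ depends on $X$ through powers and at most mild logarithms.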
For the volume modulus the argument is basically as in the toy model above. So we now restrict our attention to the submanifold of constant ${\cal V}$ in the space of real moduli $\tau^1,...,\tau^n$. We choose an arbitrary trajectory on this submanifold and parameterize it as
\begin{equation}
(\tau^1(\phi),...,\tau^n(\phi))=(\tau^1(0)\mathrm{e}^{\xi^1(\phi)\phi},...,\tau^n(0)\mathrm{e}^{\xi^n(\phi)\phi})\,.
\end{equation}
We normalize our parameter $\phi$ so that it takes the value $0$ at the point of interest $\tau^i\equiv \tau^i(0)$. The coefficient vector $\xi^i\equiv \xi^i(0)$ is chosen to be $\order{1}$ valued.
Now the Lagrangian for motion along the trajectory contains the kinetic term
\begin{equation}
\mathcal{L}\supset \mathcal{L}_{\mathrm{kin}}=\sum_{ij}K_{ij}\tau^i\tau^j\xi^i\xi^j \partial_\mu \phi \partial^\mu \phi\,.
\end{equation}
We can compute the K\"ahler metric from the K\"ahler potential $K=-2\ln(\mathcal{V}(\tau^i))$ and since we are moving along the submanifold of constant volume we can use
\begin{equation}
\sum_{i}\mathcal{V}_i\tau^i\xi^i=0 \qquad \text{such that} \qquad \mathcal{L}_{\mathrm{kin}}=-2\sum_{ij}\frac{\mathcal{V}_{ij}}{\mathcal{V}}\tau^i\tau^j\xi^i\xi^j \partial_\mu \phi \partial^\mu \phi\,.
\end{equation}
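Here we used that $K=-2\ln\mathcal{V}$ implies
\begin{equation*}
K_{ij}=-2\left(\frac{\mathcal{V}_{ij}}{\mathcal{V}}-\frac{\mathcal{V}_i\mathcal{V}_j}{\mathcal{V}^2}\right),
\end{equation*}
so that the $\mathcal{V}_i\mathcal{V}_j$ piece drops out upon contraction with $\tau^i\xi^i$ by the constant-volume condition.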
Unless there is significant cancellation between terms in $\mathcal{V}$ we can assume
\begin{equation}
\mathcal{V}_{ij}\lesssim \frac{\mathcal{V}}{\tau^i\tau^j}
\label{heur}
\end{equation}
and since $\xi^i$ was chosen $\order{1}$, the whole prefactor of $\partial_\mu \phi \partial^\mu \phi$ can be assumed to be $\order{1}$ or smaller. A small prefactor can arise from a small contribution in $\mathcal{V}(\tau^i)$ as for example in the standard LVS example of $\mathcal{V}=\tau_\text{b}^{3/2}-\tau_\text{s}^{3/2}$ where $\tau_\text{s}$ is a small modulus and gets a small prefactor in the kinetic term. The canonical normalization will thus either not change or even increase the order of magnitude of the modulus mass.
Turning to the potential, we see that, since we move along the submanifold, any contribution only involving the volume does not contribute to the mass, as for example $V_{\rm{LVS}}$ in \eqref{LVS}. Considering the leading-order contribution $\delta V$ involving the other moduli (in our case string-loop corrections), we rewrite the potential in the coordinates $(\mathcal{V}, \tau^1,...,\tau^{n-1})$, where we have solved the constraint of staying on the submanifold for a suitable $\tau^n$. We introduce indices $k$ and $l$ which run over $\{1,...,n-1\}$, in contrast to $i$ and $j$. The mass squared of our modulus is now determined by the Hessian of the potential contracted with the vector $\delta\tau^k$ corresponding to an infinitesimal shift in $\phi$:
\begin{equation}
m^2\sim \delta V_{kl}\frac{\delta\tau^k}{\delta\phi}\frac{\delta\tau^l}{\delta\phi}=\sum_{kl}\delta V_{kl}\tau^k\tau^l\xi^k\xi^l\sim\order{\delta V}\,.
\end{equation}
Here we have to assume that after rewriting the potential in terms of $(\mathcal{V}, \tau^1,...\tau^{n-1})$ it is still sufficiently well behaved to allow for an order of magnitude estimate $\delta V_{kl}\sim\delta V/\tau^k\tau^l$, resembling \eqref{heur}. Since the choice of trajectory was arbitrary, we assume a similar scaling for all moduli involved except for the volume modulus. Bearing in mind the possible mass enhancement from the canonical normalization, we estimate
\begin{equation}
m^2\gtrsim \delta V\,.
\end{equation}
We note that the requirements are met in many simple cases, for example the models of \cite{Cicoli:2011yy, Cicoli:2012tz}. A more detailed analysis can be found in \cite{Skrzypek}.
\section*{Introduction}
The Standard Model (SM) is an incomplete description of observed phenomena in nature. However, explicit evidence of new
long-distance propagating states is lacking. Consequently, the SM is usefully thought of as an Effective Field Theory (EFT)
for measurements and data analysis, with characteristic energies proximate to the Electroweak scale
($\sqrt{2 \, \langle H^\dagger H \rangle} \equiv \bar{v}_T$) -- such as those made at the LHC or lower energies.
The Standard Model Effective Field Theory (SMEFT) is based on assuming that physics beyond the SM
is present at scales $\Lambda >\bar{v}_T$. The SMEFT also assumes
that there are no light hidden states in the spectrum with couplings
to the SM, and that a $\rm SU(2)_L$ scalar doublet ($H$) with hypercharge
$\hyp_h = 1/2$ is present in the EFT.
A power-counting expansion in
the ratio of scales $\bar{v}_T/\Lambda <1$ defines the SMEFT Lagrangian as
\begin{align}
\Lagr_{\textrm{SMEFT}} &= \Lagr_{\textrm{SM}} + \Lagr^{(5)}+\Lagr^{(6)} +
\Lagr^{(7)} + \dots, \\ \nonumber \Lagr^{(d)} &= \sum_i \frac{C_i^{(d)}}{\Lambda^{d-4}}\mathcal{Q}_i^{(d)}
\quad \textrm{ for } d>4.
\end{align}
The higher-dimensional operators $\mathcal{Q}_i^{(d)}$ are labelled with a mass dimension $d$ superscript,
and multiply unknown, dimensionless Wilson coefficients $C_i^{(d)}$. The sum over $i$, after redundant operators are removed with field redefinitions
of the SM fields, runs over the operators in a particular operator basis. In this paper we use
the Warsaw basis \cite{Grzadkowski:2010es}. However, the main results are formulated in a basis-independent manner
and constrain relationships between Lagrangian parameters that follow from the linear realization of
$\rm SU(2)_L \times U(1)_Y$ in the SMEFT.
The SMEFT is a powerful practical tool, but it is also a well-defined
field theory. Many formal field-theory issues also have a new representation in the SMEFT. This can lead
to interesting subtleties, particularly when developing SMEFT analyses beyond leading order.
When calculating beyond leading order in the loop ($\hbar$) expansion, renormalization is required.
The counterterms for the SMEFT at dimension five \cite{Babu:1993qv, Antusch:2001ck}, and six \cite{Jenkins:2013zja,Jenkins:2013wua,Alonso:2013hga,Alonso:2014zka}
are known and preserve the $\rm SU(3) \times SU(2) \times U(1)$ symmetry
of the SM. Such unbroken (but non-manifest in some cases) symmetries are also represented in the naive Ward-Takahashi
identities \cite{Ward:1950xp,Takahashi:1957xn} when the
Background Field Method (BFM) \cite{DeWitt:1967ub,tHooft:1973bhk,Abbott:1981ke,Shore:1981mj,Einhorn:1988tc,
Denner:1994xt} is used to gauge fix the theory. In Ref.~\cite{Helset:2018fgq} it was shown how to gauge fix the SMEFT in the BFM in $R_\xi$ gauges, and we use this gauge-fixing procedure
in this work.
The BFM splits the fields in the theory into quantum and classical background fields ($F \rightarrow F + \hat{F}$),
with the latter denoted with a hat superscript. By performing a gauge-fixing procedure that preserves
the background-field gauge invariance, while breaking
explicitly the quantum-field gauge invariance, the Ward identities \cite{Ward:1950xp} are present
in a ``naive manner'' -- i.e.~the identities are related to those that would be directly inferred from the classical Lagrangian.
This approach is advantageous, as otherwise the gauge-fixing term, and ghost term, of the theory can make
symmetry constraints non-manifest in intermediate steps of calculations.
The BFM gauge-fixing procedure in the SMEFT relies
on a geometric description of the field connections, and real representations for the $\rm SU(2)_L \times U(1)_Y$ generators.
Using this formulation of the SMEFT allows a simple Ward-Takahashi identity to be derived, that constrains the $n$-point
vertex functions. The purpose of this paper is to report this result and derivation.\footnote{Modified Ward identities
in the SMEFT have been discussed in an on-shell scheme in Ref.~\cite{Cullen:2019nnr}.}
{\bf Path integral formulation.}
The BFM generating functional of the SMEFT is given by
\begin{align}
Z[\hat{F},J]=\int \mathcal{D} F \,{\rm det}\left[\frac{\Delta \mathcal{G}^A}{\Delta \alpha^B}\right]e^{i \left(S[F + \hat{F}] + \Lagr_{\textrm{GF}} +
{\rm source \, terms} \right)} \nonumber.
\end{align}
The integration over $d^4x$ is implicit in $\mathcal L_{\rm GF}$.
The generating functional is integrated over the quantum field configurations via $\mathcal{D} F$,
with $F$ field coordinates describing all long-distance propagating states.
$J$ stands for the dependence on the sources which
only couple to the quantum fields \cite{tHooft:1975uxh}. The background fields also effectively act as sources of the quantum fields.
$S$ is the action, initially classical, and augmented with a renormalization prescription to define loop corrections.
The scalar Higgs doublet is decomposed into field coordinates $\phi_{1,2,3,4}$, defined with the normalization
\begin{align}
H = \frac{1}{\sqrt{2}}\begin{bmatrix} \phi_2+i\phi_1 \\ \phi_4 - i\phi_3\end{bmatrix}.
\end{align}
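With this normalization one checks directly that
\begin{equation*}
H^\dagger H=\frac{1}{2}\left(\phi_1^2+\phi_2^2+\phi_3^2+\phi_4^2\right),
\end{equation*}
and the vacuum expectation value can be chosen to lie in the $\phi_4$ direction, $\langle\phi^J\rangle=(0,0,0,\bar{v}_T)$, consistent with $\sqrt{2\,\langle H^\dagger H\rangle}=\bar{v}_T$.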
The scalar kinetic term is defined with a field space metric introduced as
\begin{align}\label{scalarL6}
\Lagr_{\textrm{scalar,kin}} = & \frac{1}{2}h_{IJ}(\phi)\left(D_{\mu}\phi\right)^I\left(D^{\mu}\phi\right)^J,
\end{align}
where
$(D^{\mu}\phi)^I = (\partial^{\mu}\delta_J^I - \frac{1}{2}\mathcal{W}^{A,\mu}\tilde\gamma_{A,J}^I)\phi^J$, with real generators ($\tilde\gamma$)
and structure constants ($\tilde\epsilon^A_{\,\,BC}$) defined in the Appendix.
The corresponding kinetic term for the $\rm SU(2)_L \times U(1)_Y$ spin-one fields
is
\begin{align}\label{WBlagrangian}
\Lagr_{\textrm{gauge,kin}} &=-\frac{1}{4}g_{AB}(\phi) \mathcal{W}_{\mu\nu}^A \mathcal{W}^{B,\mu\nu},
\end{align}
where $A,B,C, \dots$ run over $\{1,2,3,4\}$ (as do $I,J$),
and $\mathcal{W}_{\mu\nu}^4=B_{\mu\nu}$. Extending this definition to include the gluons is
straightforward.
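For orientation, the $\phi$-dependence of these field-space metrics at $\mathcal{L}^{(6)}$ is generated in the Warsaw basis by a small set of operators: $g_{AB}(\phi)$ receives contributions from
\begin{align*}
\mathcal{Q}_{HW}&=(H^\dagger H)\, W^{A}_{\mu\nu}W^{A\,\mu\nu}\,, &
\mathcal{Q}_{HB}&=(H^\dagger H)\, B_{\mu\nu}B^{\mu\nu}\,, &
\mathcal{Q}_{HWB}&=(H^\dagger \tau^A H)\, W^{A}_{\mu\nu}B^{\mu\nu}\,,
\end{align*}
while $h_{IJ}(\phi)$ is corrected by $\mathcal{Q}_{H\square}=(H^\dagger H)\square(H^\dagger H)$ and $\mathcal{Q}_{HD}=(H^\dagger D_\mu H)^\ast(H^\dagger D^\mu H)$; the explicit form of the corrected metrics is given in Ref.~\cite{Helset:2018fgq}.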
A quantum-field gauge transformation involving these fields is indicated with a $\Delta$, with an infinitesimal quantum gauge parameter $\Delta\alpha^A$.
Explicitly, the transformations are
\begin{align}
\Delta \mathcal{W}^A_\mu &= - \tilde{\epsilon}^A_{\, \,BC} \, \Delta \alpha^B \, \left(\hat{\mathcal{W}}^{C, \mu} +\mathcal{W}^{C, \mu} \right) -\partial^\mu (\Delta \alpha^A), \nonumber \\
\Delta \phi^I &= - \Delta\alpha^A \, \frac{\tilde\gamma_{A,J}^{I}}{2}\, (\phi^J+ \hat{\phi}^J).
\end{align}
The BFM gauge-fixing term {\it of the quantum fields $\mathcal{W}^{A}$} is \cite{Helset:2018fgq}
\begin{align}\label{gaugefix}
\Lagr_{\textrm{GF}} &= -\frac{\hat{g}_{AB}}{2 \, \xi} \mathcal{G}^A \, \mathcal{G}^B, \\
\mathcal{G}^A &\equiv \partial_{\mu} \mathcal{W}^{A,\mu} -
\tilde\epsilon^{A}_{ \, \,CD}\hat{\mathcal{W}}_{\mu}^C \mathcal{W}^{D,\mu}
+ \frac{\xi}{2}\hat{g}^{AC}
\phi^{I} \, \hat{h}_{IK} \, \tilde\gamma^{K}_{C,J} \hat{\phi}^J. \nonumber
\end{align}
The introduction of field space metrics in the kinetic terms reflects the geometry of the field space due to the power-counting expansion.
These metrics are the core conceptual difference, compared to the SM, in the relations between Lagrangian parameters appearing in the Ward identities we derive.
The field spaces defined by these metrics are curved, see Refs.~\cite{Burgess:2010zq,Alonso:2015fsp,Alonso:2016oah}.
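As a consistency check, in the SM limit the metrics flatten, $g_{AB}\to\delta_{AB}$ and $h_{IJ}\to\delta_{IJ}$, and the gauge-fixing term \eqref{gaugefix} collapses to a conventional background-field 't~Hooft gauge fixing,
\begin{equation*}
\Lagr_{\textrm{GF}}\Big|_{\rm SM}=-\frac{1}{2\,\xi}\sum_A\left(\partial_{\mu}\mathcal{W}^{A,\mu}
-\tilde\epsilon^{A}_{\,\,CD}\hat{\mathcal{W}}_{\mu}^{C}\mathcal{W}^{D,\mu}
+\frac{\xi}{2}\,\phi^{K}\,\tilde\gamma^{K}_{A,J}\,\hat{\phi}^{J}\right)^2,
\end{equation*}
so all SMEFT effects in the gauge-fixing sector enter through the field-space metrics.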
The background-field gauge fixing relies on the basis independent transformation properties of
$g_{AB}$ and $h_{IJ}$,\footnote{The explicit forms of $g_{AB}$ and $h_{IJ}$ are basis dependent. The forms of the corrections
for the Warsaw basis at $\mathcal{L}^{(6)}$ are given in Ref.~\cite{Helset:2018fgq}.} and the fields, under background-field gauge transformations ($\delta \hat{F}$)
with infinitesimal local gauge parameters $\delta \hat{\alpha}_A(x)$
given by
\begin{align}\label{backgroundfieldshifts}
\delta \, \hat{\phi}^I &= -\delta \hat{\alpha}^A \, \frac{\tilde{\gamma}_{A,J}^I}{2} \hat{\phi}^J, \nonumber \\
\delta \hat{\mathcal{W}}^{A, \mu} &= - (\partial^\mu \delta^A_B + \tilde{\epsilon}^A_{\, \,BC} \, \, \hat{\mathcal{W}}^{C, \mu}) \delta \hat{\alpha}^B, \nonumber \\
\delta \hat{h}_{IJ} &= \hat{h}_{KJ} \, \frac{\delta \hat{\alpha}^A \, \tilde{\gamma}_{A,I}^K}{2}+ \hat{h}_{IK} \, \frac{\delta \hat{\alpha}^A \, \tilde{\gamma}_{A,J}^K}{2}, \nonumber \\
\delta \hat{g}_{AB} &= \hat{g}_{CB} \,\tilde{\epsilon}^C_{\, \,DA} \, \delta \hat{\alpha}^D + \hat{g}_{AC} \,\tilde{\epsilon}^C_{\, \,DB} \, \delta \hat{\alpha}^D, \nonumber \\
\delta \mathcal{G}^X &= -\tilde{\epsilon}^X_{\, \,AB} \, \delta \hat{\alpha}^A \mathcal{G}^B,\nn
\delta f_i &= \Lambda_{A,i}^{j}\, \delta\hat{\alpha}^A \, f_{j}, \nonumber \\
\delta \bar{f}_i &= \delta\hat{\alpha}^A \, \bar{f}_j \bar{\Lambda}^{j}_{A,i},
\end{align}
where we have left the form of the transformation of the fermion fields implicit.
Here $i,j$ are flavour indices.
The background-field gauge invariance of the generating functional, i.e.
\begin{align}
\frac{\delta Z [\hat{F},J]}{\delta \hat{\alpha}^A} &= 0,
\end{align}
is established by using these gauge transformations in conjunction with
the linear change of variables on the quantum fields.
The generating functional of connected Green's functions is given by
\begin{align}
W[\hat{F},J] &= - i \log Z[\hat{F},J],
\end{align}
where $J = \{J^A_{\mu}, J^I_{\phi},J_{f},J_{\bar{f}}\}$.
As usual the effective action is the Legendre transform
\begin{align}
\Gamma [\hat{F},\tilde{F}] &= W[\hat{F},J] - \int d^4x \, J \cdot \tilde{F} \vert_{\tilde{F} =\frac{\delta W}{\delta J}}.
\end{align}
Here our notation is chosen to match Ref.~\cite{Dekens:2019ept}.
$S$-matrix elements are constructed via \cite{Abbott:1983zw,Denner:1996gb,Dekens:2019ept}
\begin{align}
\label{eq:gammaFull}
\Gamma^{\rm full} [\hat{F},0] &= \Gamma [\hat{F},0]+ i \int d^4 x \mathcal{L}_{\textrm{GF}}^{\textrm{BF}}.
\end{align}
The last term in Eq.~\eqref{eq:gammaFull} is a gauge-fixing term for the background fields, formally independent from Eq.~\eqref{gaugefix}, and introduced to define the propagators of the background fields.
Finally, we define a generating functional of connected Green's functions $W_c[\hat{J}]$
as a further Legendre transform \cite{Denner:1996gb}
\begin{align}
W_c[\hat{J}] &= \Gamma^{\rm full} [\hat{F}] + i \int d^4 x \left[\sum_{\hat{F}} \hat{J}_{\hat{F}^\dagger} \hat{F} + \sum_f(\bar{f} \hat{J}_{\bar{f}} + \hat{J}_{f} f) \right].
\end{align}
with $ \hat{F} =\{\mathcal{W}^A, \phi^I\}$ and
\begin{align}
i \hat{J}_{\hat{F}^\dagger} &= - \frac{\delta \Gamma^{\rm full}}{\delta \hat{F}}, & i \hat{J}_{f} &= - \frac{\delta \Gamma^{\rm full}}{\delta \bar{f}},
& i \hat{J}_{\bar{f}} &= \frac{\delta \Gamma^{\rm full}}{\delta f}, \nonumber \\
\hat{F} &= \frac{\delta W_c}{i \delta \hat{J}_{\hat{F}^\dagger}},
& f &= \frac{\delta W_c}{i \delta \hat{J}_{\bar{f}}},
& \bar{f} &= - \frac{\delta W_c}{i \delta \hat{J}_{f}}.
\end{align}
{\bf{Weak eigenstate Ward identities.}}
The BFM Ward identities follow from the invariance of $\Gamma [\hat{F},0]$ under background-field gauge transformations,
\begin{align}
\frac{\delta \Gamma [\hat{F},0]}{\delta \hat{\alpha}^B} &= 0.
\end{align}
In position space, the identities are
\begin{align}
0 =& \left(\partial^\mu \delta^A_B - \tilde{\epsilon}^A_{\, \,BC} \, \, \hat{\mathcal{W}}^{C, \mu}\right)
\frac{\delta \Gamma}{\delta \hat{\mathcal{W}}_A^{\mu}} - \frac{\tilde{\gamma}_{B,J}^I}{2} \hat{\phi}^J \frac{\delta \Gamma}{\delta \hat{\phi}^I} \nonumber \\
& +\sum_j \left(\bar{f}_j \bar{\Lambda}_{B,i}^{j} \, \frac{\delta \Gamma}{\delta \bar{f}_{i}}
- \frac{\delta \Gamma}{\delta f_{i}} \Lambda_{B,j}^{i} f_j \right).
\end{align}
For some $n$-point function Ward identities, the background fields are set to their vacuum expectation values.
These are defined through the minimum of the classical action $S$, where the scalar potential is a function of $H^\dagger H$; we denote this vacuum expectation by $\langle \,\rangle$.
For example, the scalar vev defined in this manner is $\sqrt{2 \, \langle H^\dagger H \rangle} \equiv \bar{v}_T$,
and explicitly $\langle \phi^J \rangle$ with an entry set to the numerical value of the vev does not transform via $\tilde{\gamma}_{A,J}^I$.
A direct relation follows between the tadpoles (i.e.~the one-point functions $\delta\Gamma/\delta\hat\phi^I$) and $\langle\hat\phi^J\rangle$, given by
\begin{align}
0 = \partial^\mu\frac{\delta\Gamma}{\delta\hat{\mathcal{W}}^{B,\mu}} - \frac{\tilde\gamma^I_{B,J}}{2}
\langle\hat\phi^J\rangle \frac{\delta\Gamma}{\delta\hat\phi^I}.
\end{align}
Requiring a Lorentz-invariant vacuum sets the tadpoles for the gauge fields to zero. Thus, for
the scalars
\begin{align}
\label{eq:tadpole}
0 = \frac{\tilde\gamma^I_{B,J}}{2}\langle\hat\phi^J\rangle \frac{\delta\Gamma}{\delta\hat\phi^I}.
\end{align}
In general $\tilde\gamma_{B} \langle \phi^J \rangle \neq 0$,
while the unbroken combination $(\tilde\gamma_3 + \tilde\gamma_4) \langle \phi^J \rangle = 0$ corresponds to $\rm U(1)_{em}$.
Consequently, Eq.~\eqref{eq:tadpole} with $B=3,4$ does not give linearly independent constraints.
This leads to the requirement of a further renormalization condition, defining the tadpole $\delta\Gamma/\delta\hat\phi^4$ to vanish.
The Ward identities for the two-point functions are
\begin{align}
0 =& \partial^\mu \frac{\delta^2\Gamma}{\delta \hat{\mathcal{W}}^{A,\nu}\delta \hat{\mathcal{W}}^{B,\mu}}
- \frac{\tilde\gamma^I_{B,J}}{2}\langle\hat\phi^J\rangle
\frac{\delta^2\Gamma}{\delta \hat{\mathcal{W}}^{A,\nu}\delta\hat\phi^I}, \\
0 =& \partial^\mu \frac{\delta^2\Gamma}{\delta\hat\phi^K\delta \hat{\mathcal{W}}^{B,\mu}}
- \frac{\tilde\gamma^I_{B,J}}{2}\left(\langle\hat\phi^J\rangle
\frac{\delta^2\Gamma}{\delta\hat\phi^K\delta\hat\phi^I}
+ \delta^J_K\frac{\delta \Gamma}{\delta\hat\phi^I}\right).
\end{align}
The three-point Ward identities are
\begin{align}
0 =& \partial^\mu \frac{\delta^3\Gamma}{\delta \overline{f}_k\delta f_l\delta \hat{\mathcal{W}}^{B,\mu}}
- \frac{\tilde\gamma^I_{B,J}}{2}\langle\hat\phi^J\rangle \frac{\delta^3 \Gamma}{\delta
\overline{f}_k\delta f_l \delta\hat\phi^I} \nonumber \\
&+ \overline{\Lambda}^k_{B,i} \frac{\delta^2\Gamma}{\delta\overline{f}_i\delta f_l}
- \frac{\delta^2\Gamma}{\delta\overline{f}_k\delta f_i} \Lambda^i_{B,l},\\
0 =& \partial^\mu\frac{\delta^3\Gamma}{\delta \hat{\mathcal{W}}^{A,\nu}\delta\hat{\mathcal{W}}^{B,\mu}
\delta\hat{\mathcal{W}}^{C,\rho}}
- \tilde\epsilon^D_{\,\,BA}\frac{\delta^2\Gamma}{\delta\hat{\mathcal{W}}^{D,\nu}
\delta\hat{\mathcal{W}}^{C,\rho}}
- \tilde\epsilon^D_{\,\,BC}\frac{\delta^2\Gamma}{\delta\hat{\mathcal{W}}^{D,\rho}
\delta\hat{\mathcal{W}}^{A,\nu}} \nonumber \\
&-\frac{\tilde\gamma^I_{B,J}}{2}\langle\hat\phi^J\rangle
\frac{\delta^3\Gamma}{\delta\hat\phi^I\delta\hat{\mathcal{W}}^{A,\nu}\delta\hat{\mathcal{W}}^{C,\rho}},
\end{align}
\begin{align}
0 =& \partial^\mu \frac{\delta^3\Gamma}{\delta\hat{\mathcal{W}}^{A,\nu}\delta\hat{\mathcal{W}}^{B,\mu}
\delta\hat \phi^K} - \tilde\epsilon^D_{\,\,BA}\frac{\delta^2\Gamma}{\delta\hat{\mathcal{W}}^{D,\nu}
\delta\hat\phi^K} \nonumber \\
&- \frac{\tilde\gamma^{I}_{B,J}}{2}\left(\langle\hat\phi^J\rangle
\frac{\delta^3\Gamma}{\delta\hat{\mathcal{W}}^{A,\nu}\delta\hat\phi^I\delta\hat\phi^K}
+ \delta^J_K \frac{\delta^2\Gamma}{\delta\hat{\mathcal{W}}^{A,\nu}\delta\hat\phi^I}\right), \\
0 =& \partial^\mu\frac{\delta^3\Gamma}{\delta\hat{\mathcal{W}}^{B,\mu}\delta\hat\phi^K\delta\hat\phi^L}
- \frac{\tilde\gamma^I_{B,J}}{2}\langle\hat\phi^J\rangle
\frac{\delta^3\Gamma}{\delta\hat\phi^I\delta\hat\phi^K\delta\hat\phi^L} \nonumber\\
&- \frac{\tilde\gamma^I_{B,J}}{2}\left(
\delta^J_K\frac{\delta^2\Gamma}{\delta\hat\phi^I\delta\hat\phi^L}
+ \delta^J_L\frac{\delta^2\Gamma}{\delta\hat\phi^I\delta\hat\phi^K}\right).
\end{align}
{\bf{Mass eigenstate Ward identities.}}
The mass eigenstate SM Ward identities in the BFM are summarized in Ref.~\cite{Denner:1994xt}.
The transformation of the gauge fields, gauge parameters and scalar fields into mass eigenstates in the SMEFT is
\begin{align}\label{basicrotations}
\hat{\mathcal{W}}^{A,\nu} &= \sqrt{g}^{AB} U_{BC} \mathcal{\hat{A}}^{C,\nu}, \\
\hat{\alpha}^{A} &= \sqrt{g}^{AB} U_{BC} \mathcal{\hat{\beta}}^{C},\\
\hat{\phi}^{J} &= \sqrt{h}^{JK} V_{KL} \hat{\Phi}^{L},
\end{align}
with $\hat{\mathcal{A}}^C =(\hat{\mathcal{W}}^+,\hat{\mathcal{W}}^-,\hat{\mathcal{Z}},\hat{\mathcal{A}})$,
$\hat{\Phi}^L = \{\hat{\Phi}^+,\hat{\Phi}^-,\hat{\chi},\hat{H}^0 \}$.
This follows directly from the formalism in Ref.~\cite{Helset:2018fgq} (see also Ref.~\cite{Misiak:2018gvl}).
The matrices $U,V$ are unitary, with $\sqrt{g}^{AB}\sqrt{g}_{BC} \equiv \delta^A_C$ and
$ \sqrt{h}^{AB} \sqrt{h}_{BC}\equiv \delta^A_C$. The square-root metrics are understood as matrix square roots,
with entries given by the vacuum expectation values $\langle \, \rangle$ of the corresponding field-space metric entries.
The combinations $ \sqrt{g} U$ and $\sqrt{h} V$ perform the mass eigenstate rotation for the vector and scalar fields, and bring the corresponding
kinetic term to canonical form, including higher-dimensional-operator corrections.
We define the mass-eigenstate transformation matrices
\begin{align}\label{wardrotations}
{\mathcal U}^{A}_C &= \sqrt{g}^{AB} U_{BC},& ({\mathcal U^{-1}})^{D}_F &= U^{DE} \sqrt{g}_{\, EF} , \\
{\mathcal V}^{A}_C &= \sqrt{h}^{AB} V_{BC}, & ({\mathcal V^{-1}})^{D}_F &= V^{DE} \sqrt{h}_{\, EF} ,
\end{align}
to avoid a proliferation of index contractions.
The structure constants and generators, transformed to those corresponding to the mass eigenstates, are defined as
\begin{align*}
{ {\bm \epsilon}}^{C}_{\, \,GY} &= ({\mathcal U^{-1}})^C_A \tilde{\epsilon}^{A}_{\, \,DE} \, {\mathcal U}^D_G \,
{\mathcal U}^E_Y, &
{\bm \gamma}_{G,L}^{I} &= \frac{1}{2}\tilde{\gamma}_{A,L}^{I} \, {\mathcal U}^A_G,\nn
{\bm \Lambda}^i_{X,j} &=\Lambda_{A,j}^{i} \, {\mathcal U}^A_X.
\end{align*}
The background-field gauge transformations in the mass eigenstate are
\begin{align}
\delta \hat{\mathcal{A}}^{C,\mu} &= - \left[\partial^\mu \delta^C_G + { {\bm \epsilon}}^{C}_{\, \,GY} \hat{\mathcal{A}}^{Y,\mu} \right] \delta \hat{\beta}^G, \nn
\delta \hat{\Phi}^{K} &=- ({\mathcal V^{-1}})^K_I \,{\bm \gamma}_{G,L}^{I} \, {\mathcal V}^L_N \hat{\Phi}^{N} \delta \hat{\beta}^G.
\end{align}
The Ward identities are then expressed compactly as
\bea
0 &=& \frac{\delta \Gamma}{\delta \hat{\beta}^G}
\\
&=& \partial^\mu \frac{\delta \Gamma}{\delta \hat{\mathcal{A}}^{X,\mu}}
+\sum_j \left(\bar{f}_j \overline{\bm \Lambda}^j_{X,i} \, \frac{\delta \Gamma}{\delta \bar{f}_{i}}
- \frac{\delta \Gamma}{\delta f_{i}} {\bm \Lambda}^i_{X,j} f_j \right) \nn
&-&
\frac{\delta \Gamma}{\delta \hat{\mathcal{A}}^{C\mu}} {\bm \epsilon}^{C}_{\, \,XY} \hat{\mathcal{A}}^{Y \mu}
- \frac{\delta \Gamma}{\delta \hat{\Phi}^K} ({\mathcal V^{-1}})^K_I {\bm \gamma}_{X,L}^{I} {\mathcal V}^L_N \hat{\Phi}^N. \nonumber
\eea
In this manner, the ``naive'' form of the Ward identities is maintained.
The BFM Ward identities in the SMEFT take the same
form as those in the SM up to
terms involving the tadpoles.
This is the case once a consistent redefinition of couplings, masses and fields is made.
{\bf Two-point function Ward Identities.}
The Ward identities for the two-point functions take the form
\bea
0 &=& \partial^\mu \frac{\delta^2 \Gamma}{\delta \hat{\mathcal{A}}^{X \mu} \delta \hat{\mathcal{A}}^{Y \nu}} -
\frac{\delta^2 \Gamma}{\delta \hat{\mathcal{A}}^{Y \nu} \delta \hat{\Phi}^K} ({\mathcal V^{-1}})^K_I {\bm \gamma}_{X,L}^{I} {\mathcal V}^L_N \langle \hat{\Phi}^N \rangle, \nn
0 &=& \partial^\mu \frac{\delta^2 \Gamma}{\delta \hat{\mathcal{A}}^{X \mu} \delta \hat{\Phi}^O}
-
\frac{\delta^2 \Gamma}{\delta \hat{\Phi}^{K} \delta \hat{\Phi}^O} ({\mathcal V^{-1}})^K_I {\bm \gamma}_{X,L}^{I} {\mathcal V}^L_N \langle \hat{\Phi}^N \rangle \nn
&-&
\frac{\delta \Gamma}{\delta \hat{\Phi}^{K}} ({\mathcal V^{-1}})^K_I {\bm \gamma}_{X,L}^{I} {\mathcal V}^L_O.
\eea
{\bf Photon Identities.}
The Ward identities for the two-point functions involving the photon are given by
\begin{align}
0 &= \partial^\mu \frac{\delta^2 \Gamma}{\delta \hat{\mathcal{A}}^{4 \mu} \delta \hat{\mathcal{A}}^{Y \nu}}, &
0 &= \partial^\mu \frac{\delta^2 \Gamma}{\delta \hat{\mathcal{A}}^{4 \mu} \delta \hat{\Phi}^{I}}.
\end{align}
Using the convention of Ref.~\cite{Denner:1994xt} for the decomposition of the vertex function
\begin{align}
-i \Gamma^{\hat{V},\hat{V}'}_{\mu \nu}(k,-k)&=
\left(-g_{\mu \nu} k^2 + k_\mu k_\nu + g_{\mu \nu} M_{\hat{V}}^2\right)\delta^{\hat{V} \hat{V}'} \nonumber \\
&+\left(-g_{\mu \nu} +\frac{k_\mu k_\nu}{k^2} \right) \Sigma_{T}^{\hat{V},\hat{V}'}- \frac{k_\mu k_\nu}{k^2}
\Sigma_{L}^{\hat{V},\hat{V}'},\nonumber
\end{align}
an overall normalization
factors out of the photon two-point Ward identities compared to the SM, and
\begin{align}
\Sigma^{\mathcal{\hat{A}},\mathcal{\hat{A}}}_{L,\textrm{SMEFT}}(k^2) &= 0, & \Sigma^{\mathcal{\hat{A}},\mathcal{\hat{A}}}_{T,\textrm{SMEFT}}(0) &= 0.
\end{align}
The latter result follows from analyticity at $k^2 =0$.
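The former follows by contracting the decomposition above with $k^{\mu}$: with $M_{\hat{\mathcal{A}}}=0$, the tree-level and $\Sigma_{T}$ terms are annihilated by the contraction, leaving
\begin{align}
-i k^{\mu}\,\Gamma^{\hat{\mathcal{A}},\hat{\mathcal{A}}}_{\mu \nu}(k,-k) = -k_{\nu}\, \Sigma^{\mathcal{\hat{A}},\mathcal{\hat{A}}}_{L,\textrm{SMEFT}}(k^2), \nonumber
\end{align}
which the Ward identity sets to zero for all $k^{2}$.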
{\bf $\boldsymbol{\mathcal{W}}^\pm, \boldsymbol{\mathcal{Z}}$ Identities.}
Directly, one finds the identities
\bea
0 &=& \partial^\mu \frac{\delta^2 \Gamma}{\delta \hat{\mathcal{A}}^{3 \mu} \delta \hat{\mathcal{A}}^{Y \nu}} -
\bar{M}_Z \, \frac{\delta^2 \Gamma}{\delta \hat{\Phi}^{3} \delta \hat{\mathcal{A}}^{Y \nu}}, \\
0 &=& \partial^\mu \! \! \frac{\delta^2 \Gamma}{\delta \hat{\mathcal{A}}^{3 \mu} \delta \hat{\Phi}^{I} }
-\bar{M}_Z \frac{\delta^2 \Gamma}{\delta \hat{\Phi}^{3} \delta \hat{\Phi}^{I} } \\
&+& \frac{\bar{g}_Z}{2} \frac{\delta \Gamma}{\delta \hat{\Phi}^{4}} \left(\sqrt{h}_{[4,4]} \sqrt{h}^{[3,3]} - \sqrt{h}_{[4,3]} \sqrt{h}^{[4,3]}\right) \delta^3_I \nonumber \\
&-& \frac{\bar{g}_Z}{2} \frac{\delta \Gamma}{\delta \hat{\Phi}^{4}} \left(\sqrt{h}_{[4,4]} \sqrt{h}^{[3,4]} - \sqrt{h}_{[4,3]} \sqrt{h}^{[4,4]}\right) \delta^4_I, \nonumber
\eea
and
\bea
0 &=& \partial^\mu \frac{\delta^2 \Gamma}{\delta \hat{\mathcal{W}}^{\pm \mu} \delta \hat{\mathcal{A}}^{Y \nu}} \pm
i \bar{M}_W \frac{\delta^2 \Gamma}{\delta \hat{\Phi}^{\pm} \delta \hat{\mathcal{A}}^{Y \nu}}, \\
0 &=& \partial^\mu \frac{\delta^2 \Gamma}{\delta \hat{\mathcal{W}}^{\pm \mu} \delta \hat{\Phi}^{I}}
\pm i \bar{M}_W \frac{\delta^2 \Gamma}{\delta \hat{\Phi}^{\pm} \delta \hat{\Phi}^{I}} \\
&\mp& \frac{i \bar{g}_2}{4} \frac{\delta \Gamma}{\delta \hat{\Phi}^{4}}
\left(\sqrt{h}_{[4,4]}\mp i \sqrt{h}_{[4,3]} \right) \times \nn
&\,&\left[(\sqrt{h}^{[1,1]}+ \sqrt{h}^{[2,2]} \mp i \sqrt{h}^{[1,2]} \pm i \sqrt{h}^{[2,1]}) \delta^{\mp}_I \right.\nn
&-& \left.(\sqrt{h}^{[1,1]}- \sqrt{h}^{[2,2]} \pm i \sqrt{h}^{[1,2]} \pm i \sqrt{h}^{[2,1]}) \delta^{\pm}_I\right]. \nonumber
\eea
These identities have the same structure as in the SM. The main differences are the factors multiplying the tadpole terms.
The vev is defined as
$\sqrt{2 \, \langle H^\dagger H \rangle} \equiv \bar{v}_T$. The substitution of
the vev leading to the $\hat{\mathcal{Z}}$ boson mass in the SMEFT ($\bar{M}_Z$)
absorbs a factor in the scalar mass-eigenstate transformation matrix as
$\sqrt{2 \, \langle H^\dagger H \rangle} = \sqrt{2 \, \langle H^\dagger {\mathcal V^{-1}}{\mathcal V} H \rangle}$.
If a scheme is chosen so that $\delta\Gamma/\delta\hat\phi^4$ vanishes, then the rotations to the mass eigenstate basis of the one-point vector
$\delta\Gamma/\delta\hat\phi^I$ also vanish in each equation above.
One way to tackle tadpole corrections is to use the
FJ tadpole scheme; for discussion see Refs.~\cite{Fleischer:1980ub,Denner:2018opp}.
{\bf $\boldsymbol{\mathcal{A}},\boldsymbol{\mathcal{Z}}$ Identities.}
The mapping of the SM Ward identities for $\Gamma_{AZ}$ in the background field method given in Ref.~\cite{Denner:1994xt}
to the SMEFT is
\begin{align}
0 = \partial^\mu \frac{\delta^2\Gamma}{\delta \mathcal{\hat{A}}^{\nu}\delta \hat{\mathcal{Z}}^{\mu}}.
\end{align}
As an alternative derivation, the mapping between the mass eigenstate $(Z,A)$ fields in the
SM and the SMEFT ($\mathcal{Z},\mathcal{A}$) reported in Ref.~\cite{Brivio:2017btx} directly follows
from Eq.~\eqref{wardrotations}. Input parameter scheme dependence drops out when considering the two-point function
$\Gamma_{AZ}$ in the SM mapped to the SMEFT, and a different overall normalization factors out.
One still finds $\Sigma^{\mathcal{\hat{A}},\mathcal{\hat{Z}}}_{L,{\rm SMEFT}}(k^2) = 0$ and, as a consequence of analyticity at $k^2 =0$,
$\Sigma^{\mathcal{\hat{A}},\mathcal{\hat{Z}}}_{T,{\rm SMEFT}}(0) = 0$. This result has been used in the BFM calculation
reported in Refs.~\cite{Hartmann:2015oia,Hartmann:2015aia}.
{\bf Conclusions.}
We have derived Ward identities for the SMEFT,
constraining both the perturbative and power-counting expansions. The results presented here already provide a clarifying explanation of
some aspects of the structure of the SMEFT that has been determined at tree level. The utility of these results is expected
to become clear as studies of the SMEFT advance to include sub-leading corrections.
\begin{acknowledgments}
We acknowledge support from the Carlsberg Foundation, the Villum Fonden and the Danish National Research Foundation (DNRF91)
through the Discovery center.
We thank W. Dekens, A. Manohar, G. Passarino and P. Stoffer for discussions and/or comments on the draft, and P. van Nieuwenhuizen for his AQFT notes.
\end{acknowledgments}
\newpage
{\bf Notation.}
The field-space metrics and rotation matrices, including $\mathcal{L}^{(6)}$ corrections in the Warsaw basis, are explicitly
\cite{Grinstein:1991cd,Alonso:2013hga}
\begin{align}
\sqrt{g}^{AB} &= \begin{bmatrix}
1+\tilde{C}_{HW} & 0 & 0 & 0 \\
0 & 1+\tilde{C}_{HW} & 0 & 0 \\
0 & 0 & 1+\tilde{C}_{HW} & -\frac{\tilde{C}_{HWB}}{2} \\
0 & 0 & -\frac{\tilde{C}_{HWB}}{2} & 1+\tilde{C}_{HB}
\end{bmatrix}, \nonumber \\
U_{BC} &= \begin{bmatrix}
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 & 0 \\
\frac{i}{\sqrt{2}} & \frac{-i}{\sqrt{2}} & 0 & 0 \\
0 & 0 & c_{\overline{\theta}} & s_{\overline{\theta}} \\
0 & 0 & -s_{\overline{\theta}} & c_{\overline{\theta}}
\end{bmatrix}, \nonumber \\
\sqrt{h}^{IJ} &= \begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1-\frac{1}{4}\tilde{C}_{HD} & 0 \\
0 & 0 & 0 & 1+\tilde{C}_{H\Box}-\frac{1}{4}\tilde{C}_{HD}
\end{bmatrix}, \nonumber \\
V_{JK} &= \begin{bmatrix}
\frac{-i}{\sqrt{2}} & \frac{i}{\sqrt{2}} & 0 & 0 \\
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 & 0 \\
0 & 0 & -1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}.
\end{align}
The notation for dimensionless Wilson coefficients is $\tilde{C}_i = \bar{v}_T^2 C_i/\Lambda^2$.
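To first order in the $\tilde{C}_i$, the inverse square-root metric simply reverses the signs of these corrections,
\begin{align}
\sqrt{g}_{AB} &= \begin{bmatrix}
1-\tilde{C}_{HW} & 0 & 0 & 0 \\
0 & 1-\tilde{C}_{HW} & 0 & 0 \\
0 & 0 & 1-\tilde{C}_{HW} & \frac{\tilde{C}_{HWB}}{2} \\
0 & 0 & \frac{\tilde{C}_{HWB}}{2} & 1-\tilde{C}_{HB}
\end{bmatrix} + \mathcal{O}(\tilde{C}^2), \nonumber
\end{align}
so that $\sqrt{g}^{AB}\sqrt{g}_{BC}=\delta^{A}_{C}$ holds to this order, and similarly for $\sqrt{h}_{IJ}$.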
The convention for
$s_{\bar{\theta}}$ here has a sign consistent with Ref.~\cite{Alonso:2013hga}, which is
opposite to that of Ref.~\cite{Denner:1994xt}. For details and explicit
results on couplings for the SMEFT including $\mathcal{L}^{(6)}$ corrections in the Warsaw basis,
we follow the notational conventions of Ref.~\cite{Alonso:2013hga}.
The generators are given as
\begin{align}
\gamma_{1,J}^{I} &= \begin{bmatrix}
0 & 0 & 0 & -1 \\
0 & 0 & -1 & 0 \\
0 & 1 & 0 & 0 \\
1 & 0 & 0 & 0
\end{bmatrix}, &
\gamma_{2,J}^{I} &= \begin{bmatrix}
0 & 0 & 1 & 0 \\
0 & 0 & 0 & -1 \\
-1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0
\end{bmatrix}, \nonumber \\
\gamma_{3,J}^{I} &= \begin{bmatrix}
0 & -1 & 0 & 0 \\
1 & 0 & 0 & 0 \\
0 & 0 & 0 & -1 \\
0 & 0 & 1 & 0
\end{bmatrix}, &
\gamma_{4,J}^{I} &= \begin{bmatrix}
0 & -1 & 0 & 0 \\
1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & -1 & 0
\end{bmatrix}.
\end{align}
The $\gamma_{4}$ generator is used for the $\rm U(1)_Y$ embedding.
The couplings are absorbed into the structure constants and generators leading to tilde superscripts,
\begin{align}
\tilde{\epsilon}^{A}_{\, \,BC} &= g_2 \, \epsilon^{A}_{\, \, BC}, \text{ \, \, with } \tilde{\epsilon}^{1}_{\, \, 23} = +g_2, \nonumber \\
\tilde{\gamma}_{A,J}^{I} &= \begin{cases} g_2 \, \gamma^{I}_{A,J}, & \text{for } A=1,2,3 \\
g_1\gamma^{I}_{A,J}, & \text{for } A=4.
\end{cases}
\end{align}
In the mass eigenstate basis, the transformed generators are
\begin{align}
{\bm \gamma}_{1,J}^{I} &= \frac{\overline{g}_2}{2\sqrt{2}}\begin{bmatrix}
0 & 0 & i & -1 \\
0 & 0 & -1 & -i \\
-i & 1 & 0 & 0 \\
1 & i & 0 & 0
\end{bmatrix}, \nonumber \\
{\bm \gamma}_{2,J}^{I}&= \frac{\overline{g}_2}{2\sqrt{2}}\begin{bmatrix}
0 & 0 & -i & -1 \\
0 & 0 & -1 & i \\
i & 1 & 0 & 0 \\
1 & -i & 0 & 0
\end{bmatrix}, \nonumber
\end{align}
\begin{align}
{\bm \gamma}_{3,J}^{I} &= \frac{\overline{g}_Z}{2}\begin{bmatrix}
0 & -(c_{\overline{\theta}}^2 - s_{\overline{\theta}}^2) & 0 & 0 \\
(c_{\overline{\theta}}^2 - s_{\overline{\theta}}^2) & 0 & 0 & 0 \\
0 & 0 & 0 & -1 \\
0 & 0 & 1 & 0
\end{bmatrix}, \nonumber \\
{\bm \gamma}_{4,J}^{I} &= \overline{e}\begin{bmatrix}
0 & -1 & 0 & 0 \\
1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{bmatrix}.
\end{align}
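As an illustration of the definitions, the first of these follows directly from ${\mathcal U}^{A}_{\ 1}=(1+\tilde{C}_{HW})\,(1,i,0,0)/\sqrt{2}$ together with $\tilde\gamma_{1,2}=g_{2}\,\gamma_{1,2}$,
\begin{align}
{\bm \gamma}_{1,J}^{I} = \frac{1}{2}\tilde{\gamma}_{A,J}^{I}\,{\mathcal U}^{A}_{\ 1}
= \frac{\overline{g}_2}{2\sqrt{2}}\left(\gamma_{1}+i\gamma_{2}\right)^{I}_{\ J}, \nonumber
\end{align}
with $\overline{g}_{2}=g_{2}(1+\tilde{C}_{HW})$.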
{\bf Connected Green's functions formulation.}
An alternative approach is to derive the Ward identities in terms of the generating functional for connected Green's functions, $W_c$.
The non-invariance of $\mathcal{L}_{\textrm{GF}}^{\textrm{BF}}$ under background-field gauge transformations leads to
\begin{align}
\frac{\delta W_c}{\delta \hat\alpha^B} = i \int d^4 x \, \frac{\delta}{\delta\hat\alpha^B}\Lagr_{\textrm{GF}}^{\textrm{BF}}.
\end{align}
We choose the gauge-fixing term {\it for the background fields}
\begin{align}
\Lagr_{\textrm{GF}}^{\textrm{BF}} &= -\frac{1}{2\xi} \langle{g}_{AB}\rangle {G}^A {G}^B, \\
G^X &= \partial_\mu \hat W^{X,\mu} + \frac{\xi}{2}\langle {g}^{XC}\rangle (\hat{\phi}^I - \langle \hat \phi^I
\rangle ) \langle{h}_{IK}\rangle\tilde\gamma^K_{C,J}\langle{\hat{\phi}}^J\rangle. \nonumber
\end{align}
The variation of the gauge-fixing term with respect to the background-gauge parameter is
\begin{align}
& \frac{\delta}{\delta\hat\alpha^B}\Lagr_{\textrm{GF}}^{\textrm{BF}} = \frac{1}{\xi}\langle{g}_{AD}\rangle
\left(
\Box\delta^A_B + i\partial^{\mu} \tilde\epsilon^A_{\,\,BC}\frac{\delta W_c}{\delta J_{\hat W^{C,\mu}}} \right. \\
& \left. + \frac{\xi}{2}\langle{g}^{AE}\rangle \frac{\tilde\gamma^I_{B,J}}{2}
\left( -i\frac{\delta W_c}{\delta J_{\hat\phi^J}} \right)
\langle{h}_{IK}\rangle
\tilde\gamma^{K}_{E,L}\langle{\phi}^L\rangle
\right)
G^D_{\mathcal{J}}, \nonumber
\end{align}
where
\begin{align}
G^D_{\mathcal{J}} = -i\partial^\nu \frac{\delta W_c}{\delta J_{\hat W^{D,\nu}}}
-i \frac{\xi}{2}\langle{g}^{DX}
\rangle \frac{\delta W_c}{\delta J_{\hat\phi^I}}\langle{h}_{IK}\rangle
\tilde\gamma^{K}_{X,J}\langle{\phi}^J\rangle. \nonumber
\end{align}
Consider the difference between the vev defined by $\langle \,\rangle$
and an alternate vev denoted by $\langle \phi^J\rangle^\prime$, whose numerical value is still dictated by the
minimum of the action, but which in addition transforms as $\delta \langle \phi^I\rangle^\prime = \tilde{\gamma}_{A,J}^I \langle \phi^J\rangle^\prime \, \hat{\alpha}^A$.
Replacing all instances of $\langle \, \rangle$ in the above equations with this expectation value, with the corresponding transformation properties for the modified metrics, one finds
\begin{align}
\frac{\delta}{\delta\hat\alpha^B}{\Lagr}_{\textrm{GF}}^{\textrm{BF}} = \frac{1}{\xi}\langle{g}_{BD}\rangle^\prime
\Box G^D_{\mathcal{J}}.
\end{align}
The two results coincide for on-shell observables; for further discussion of this point, and of tadpole schemes, see Ref.~\cite{Denner:1996gb}.
We postpone a detailed
discussion of these two approaches to a future publication.\\
\section{Introduction}\label{sec:1}
Magnetic reconnection is a fundamental process of energy release that lies at the core of many dynamic phenomena in the solar system such as solar flares, coronal heating events, geomagnetic substorms and flux transfer events. Reconnection in three dimensions has been shown to be completely different in many fundamental respects from the classically studied process in two dimensions \citep{schindler88,hesse88,priest03}. The main thrust of reconnection theory at present is to understand the different ways in which it may take place in three dimensions \citep[e.g., the books][]{priest00,birn07}. A key point is that in three dimensions reconnection occurs where a component of the electric field parallel to the magnetic field is present -- and this can be in many different field configurations. For example, reconnection may occur either at null points \citep[e.g.,][]{craig95,priest96a,craig98,craig00,hornig03,heerikhuisen04,pontin05a,pontin06a} or in the absence of null points at quasi-separatrix layers or hyperbolic flux tubes \citep{priest95a,demoulin96a,demoulin96b,demoulin97d,hornig98,titov03a,hesse05,demoulin06a,titov07a,titov09} or it may occur along separators that join one null point to another \citep{priest96a,longcope96a,galsgaard96b,galsgaard00b,longcope01,parnell04,priest05a,longcope05,parnell08a}.
Null points are common in the solar atmosphere \citep{filippov99,schrijver02,longcope03,close04c} and are sometimes implicated in solar flares and coronal mass ejections \citep{longcope96b,fletcher01,aulanier00,aulanier05a,aulanier06a,cook09}. Three-dimensional collapse of a null has been described \citep{bulanov84,bulanov97,klapper96,parnell96,parnell97} and stationary resistive flows near them have been modelled \citep{craig96,craig99,titov00}. In particular, for a linear null and uniform magnetic diffusivity, \citet{titov00} discovered field-aligned flows when the spine current is small and spiral field-crossing flows which do not cross the spine or fan when the spine current exceeds a critical value.
A three-dimensional null point possesses two different classes of field lines that connect to the null: for a so-called {\it positive null point}, a surface of field lines (called a {\it fan} by \citet{priest96a}) recede from the null, while an isolated field line (called the {\it spine} of the null) approaches it from two directions; for a {\it negative null point}, on the other hand the fan approaches the null, while the spine recedes from it. (For an alternative nomenclature see Ref.~\cite{lau90}.) The different types of linear null were categorised by \citet{parnell96}. The generic null in a potential magnetic field is an improper radial null, with the fan perpendicular to the spine and the field lines in the fan approaching or receding from essentially two directions (Fig.~\ref{fig:1}b). A particular case is the proper radial null in which the field lines in the fan are radial (Fig.~\ref{fig:1}a). The effect of a current along the fan is to make the fan and spine no longer perpendicular (Fig.~\ref{fig:3}b), whereas a strong enough current along the spine makes the fan field lines spiral (Fig.~\ref{fig:3}a).
There have been three steps towards categorising reconnection at a null point due to (i) analytical ideal modelling, (ii) kinematic resistive modelling and (iii) computational experiments. The initial analytical ideal treatment by \citet{priest96a} aimed to understand the types of ideal motions that are possible in the environment of a null point. They supposed that the nature of reconnection is determined to a large extent by the nature of the large-scale flows: they suggested that an ideal flow across the fan would drive {\it spine\ reconnection}, in which a current forms along the spine, whereas an ideal flow across the spine would drive {\it fan\ reconnection} with a strong current in the fan. They also proposed {\it separator\ reconnection} with a strong current along a separator joining two nulls.
Since then, as we shall see in this paper, although behaviour reminiscent of the early spine and fan models may be observed in certain limiting situations, recent numerical experiments have suggested different forms of spine and fan reconnection and also a hybrid spine-fan regime as being the generic modes that occur in practice. However, the existence of separator reconnection has been well confirmed by a series of numerical experiments \citep{galsgaard96b,galsgaard00a,parnell04,parnell08a} and its importance in the solar corona has been stressed \citep{longcope01,longcope05}. In addition, quasi-separatrix layer reconnection (called slip-running reconnection by \citet{aulanier06a}) has been confirmed in numerical experiments \cite{birn00,pontin05c,aulanier05a,mellor05,demoortel06a,demoortel06b,wilmot07a} and in bright point and flare simulations \citep{demoulin93b,demoulin97a,demoulin97d,masson09a,torok09}.
Our aim here is simply to look more closely at the nature of reconnection at a 3D null point and to propose a new categorisation to replace spine reconnection and fan reconnection. In the next section it is necessary to summarise the main results from theory and computational experiments on null-point reconnection and to reinterpret them in the light of the new regimes of reconnection that we are proposing. In the following sections we consider in turn the properties of the three new types of reconnection, namely, {\it torsional\ spine\ reconnection}, {\it torsional\ fan\ reconnection} and the most common regime {\it spine-fan\ reconnection}.
\section{Theory and numerical experiments} \label{sec:2}
\subsection{Null Points}\label{sec:2A}
The simplest linear null point (for which the magnetic field increases linearly from the null) has field components
\begin{equation}
(B_{x},B_{y},B_{z})=\frac{B_{0}}{L_{0}}(x,y,-2z)
\label{eq:1}
\end{equation}
in Cartesian coordinates or
\begin{displaymath}
(B_{R},B_{\phi},B_{z})=\frac{B_{0}}{L_{0}}(R,0,-2z)
\end{displaymath}
in cylindrical polars, so that ${\bm \del} \cdot {\bf B}=0$ identically, where $B_{0}$ and $L_{0}$ are constant. The field lines are given by
\begin{displaymath}
y=cx,\ \ \ \ \ \ \ \ \ z=k/x^{2},
\end{displaymath}
where $c$ and $k$ are constants. The $z$-axis is the spine and the $xy$-plane is the fan.
For this so-called {\it proper\ radial\ null} the fan field lines are straight (Figure \ref{fig:1}a).
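These forms follow directly from integrating the field line equations
\begin{displaymath}
\frac{dx}{B_{x}}=\frac{dy}{B_{y}}=\frac{dz}{B_{z}}, \ \ \ \ \ {\rm i.e.,} \ \ \ \ \ \frac{dx}{x}=\frac{dy}{y}=-\frac{dz}{2z},
\end{displaymath}
the first equality giving $y=cx$ and the second $zx^{2}=k$.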
\begin{figure}
\centering
\includegraphics[height=.25\textheight]{fig1a.eps}
\includegraphics*[height=.25\textheight]{fig1b.eps}
\caption{Field lines for (a) a proper radial null and (b) an improper radial null.}
\label{fig:1}
\end{figure}
It is a particular member (with $a=1$) of a wider class of current-free improper radial null points ($a\neq 1$) with curved fan field lines, having field components
\begin{displaymath}
(B_{x},B_{y},B_{z})=\frac{B_{0}}{L_{0}}[x,ay,-(a+1)z].
\end{displaymath}
This is the generic form for a current-free null, since the proper radial null is structurally unstable in the sense that it occurs only for a particular value of $a$, but for simplicity much of the theory so far has used a proper radial null.
More generally, each of the three field components of a linear null may be written in terms of three constants, making nine in all. However, \citet{parnell96} built on earlier work \citep{cowley73b,fukao75,greene88} and showed, by using ${\bm \del} \cdot {\bf B}=0$, by normalising and by rotating the axes, that the nine constants may be reduced to four constants $(a,b,j_{\parallel},j_{\perp})$ such that
\begin{gather*}
\begin{pmatrix}B_{x} \\ B_{y}\\ B_{z}\end{pmatrix}=
\frac{B_{0}}{L_{0}}\begin{pmatrix}1 & \textstyle{\frac{1}{2}}(b-j_{\parallel}) & 0\\
\textstyle{\frac{1}{2}}(b+j_{\parallel}) & a & 0\\
0 & j_{\perp} & -a-1
\end{pmatrix}
\begin{pmatrix}x \\ y\\ z\end{pmatrix},
\end{gather*}
where $ j_{\parallel}/\mu$ is the current parallel to the spine and
$j_{\perp}/\mu$ is the current perpendicular to the spine. Furthermore, both nulls and separators are susceptible to collapse to form current sheets when the boundary conditions allow it \citep{bulanov84,longcope96a,pontin05a}.
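For example, the choice $a=1$, $b=0$, $j_{\perp}=0$ with $j_{\parallel}={\bar j_{0}}$ gives
\begin{displaymath}
(B_{x},B_{y},B_{z})=\frac{B_{0}}{L_{0}}\left(x-\textstyle{\frac{1}{2}}{\bar j_{0}}y,\ y+\textstyle{\frac{1}{2}}{\bar j_{0}}x,\ -2z\right),
\end{displaymath}
a spiral null with uniform current ${\bar j_{0}}B_{0}/(\mu L_{0})$ directed along the spine, the configuration adopted in Equation (\ref{eq:10}) below.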
\subsection{Kinematic Ideal Models}\label{sec:2B}
The effects in the ideal region around a 3D null of steady reconnection were studied in the kinematic regime by \citet{priest96a} extending earlier ideas \cite{lau90}. They solved the equations
\begin{eqnarray}
{\bf E}+{\bf v} \times {\bf B}={\bf 0}
\label{eq:2}
\end{eqnarray}
and
\begin{eqnarray}
{\bm \del} \times {\bf E} = {\bf 0}
\label{eq:3}
\end{eqnarray}
for ${\bf v}$ and ${\bf E}$ when ${\bf B}$ is given by Equation (\ref{eq:1}) and a variety of different boundary conditions are imposed.
In particular, Eq.~(\ref{eq:3}) implies that ${\bf E}={\bm \nabla} \Phi$ and then the component of Equation (\ref{eq:2}) parallel to ${\bf B}$ yields
\begin{eqnarray}
{\bf B} \. {\bm \nabla} \Phi=0,
\label{eq:4}
\end{eqnarray}
which, for certain imposed boundary conditions, may be integrated along field lines (characteristics) to determine the value of $\Phi$ (and therefore ${\bf E}$) throughout the volume. Then the component of Eq. (\ref{eq:2}) perpendicular to ${\bf B}$ determines the plasma velocity normal to ${\bf B}$ everywhere as
\begin{eqnarray}
{\bf v}_{\perp}=\frac{{\bm \nabla} \Phi \times {\bf B}}{B^{2}}.
\label{eq:5}
\end{eqnarray}
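Equation (\ref{eq:5}) follows on taking the vector product of Equation (\ref{eq:2}) with ${\bf B}$, since
\begin{displaymath}
{\bf B} \times ({\bf v} \times {\bf B})=B^{2}{\bf v}-({\bf v} \. {\bf B})\,{\bf B}=B^{2}{\bf v}_{\perp},
\end{displaymath}
so that ${\bf v}_{\perp}={\bf E} \times {\bf B}/B^{2}$ with ${\bf E}={\bm \nabla} \Phi$.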
\begin{figure}
\centering
\includegraphics[height=.30\textheight]{fig2a.eps}
\includegraphics[height=.30\textheight]{fig2b.eps}
\caption{Regimes envisaged from ideal motions: (a) Spine reconnection with a strong spine current driven by continuous motions across the fan. (b) Fan reconnection with a strong fan current and flipping of field lines above and below the fan produced by continuous motions across the spine.}
\label{fig:2}
\end{figure}
If a continuous flow is imposed across the fan (Figure \ref{fig:2}a), singularities in ${\bf E}$ and ${\bf v}$ are produced at the spine. \citet{priest96a} speculated that this would produce a strong current at the spine in what they dubbed {\it spine\ reconnection}. They considered the effect of diffusion in a preliminary manner, but they were unable at the time to resolve the singularities at the spine. As an example, they considered flows with no $\phi$-component and an electric field of the form $E_{\phi}=v_{e}B_{0} \sin \phi$ giving rise to a velocity
\begin{eqnarray}v_{\perp R}=\frac{2E_{\phi}L_{0}^{2}z/B_{0}}{R(R^{2}+4z^{2})},\ \ \ \ \ \ \ \ \ v_{\perp z}=\frac{E_{\phi}L_{0}^{2}z/B_{0}}{R^{2}+4z^{2}},
\nonumber
\end{eqnarray}
for which $v_{\perp z}$ is continuous at the fan $z=0$, while $v_{\perp R}$ is singular at the spine $R=0$.
If, on the other hand, a continuous flow is imposed across the spine (Figure \ref{fig:2}b), singularities are produced at the fan together with a strong {\it flipping\ flow} (that \citet{priest92b} had previously discovered). \citet{priest96a} suggested that this would produce a strong current at the fan in what they dubbed {\it fan\ reconnection}. A particular example is given in terms of ${\bar x}=x/L$, ${\bar y}=y/L$, ${\bar z}=z/L$ by a potential of the form $\Phi=v_{e}B_{e}\,{\bar x}\,{\bar z}^{1/2}/(4+{\bar y}^{2}{\bar z})^{1/2}$, which produces a flow field
\begin{eqnarray}
(v_{\perp {\bar x}},v_{\perp {\bar y}},v_{\perp {\bar z}})=\frac{v_{e}}{({\bar x}^{2}+{\bar y}^{2}+4{\bar z}^{2})(4+{\bar y}^{2}{\bar z})^{3/2}}\times\ \ \ \ \ \ \ \ \ \
\nonumber\\
\left(\frac{2{\bar x}{\bar y}({\bar z}^{3}-1)}{{\bar z}^{1/2}},\frac{2({\bar x}^{2}+4{\bar z}^{2}+{\bar y}^{2}{\bar z}^{3})}{{\bar z}^{1/2}},(4+{\bar y}^{2}{\bar z}+{\bar x}^{2}{\bar z}){\bar y}{\bar z}^{1/2}\right),
\nonumber
\end{eqnarray}
for which $v_{\perp {\bar y}}$ is continuous on the planes ${\bar z}=\pm 1$, while $v_{\perp {\bar x}}$ and $v_{\perp {\bar y}}$ are singular at the fan (${\bar z}=0$). However, this analysis left open the questions of whether it is possible to resolve the singularity and whether these pure states are likely to be set up in practice.
\subsection{Kinematic Resistive Models}\label{sec:2C}
The next step in the theory was to consider the effect in 3D of an isolated diffusion region where frozen-in flux breaks down and the induction equation is typically of the form
\begin{eqnarray}
\frac{\partial {\bf B}}{\partial t}={\bm \del} \times ({\bf v} \times {\bf B})+\eta \nabla^{2}{\bf B}.
\nonumber
\end{eqnarray}
Reconnection in 3D is very different in many respects from that in 2D.
In 2D, a differentiable flux-transporting velocity ${\bf w}$ \citep{hornig96} satisfying
\begin{eqnarray}
\frac{\partial{\bf B}}{\partial t}={\bm \del} \times ({\bf w} \times {\bf B})
\nonumber
\end{eqnarray}
always exists apart from at the X-point itself. This velocity has a hyperbolic singularity at an X-type null point, where the reconnection takes place. The magnetic flux moves at the velocity ${\bf w}$ and slips through the plasma, which itself moves at ${\bf v}$. Furthermore, the mapping of the field lines in 2D is discontinuous at the separatrix field lines that thread the X-point. This mapping discontinuity is associated with the fact that field lines break and reconnect at one point, namely, the X-point. While they are in the diffusion region, field lines preserve their connections everywhere, except at the X-point. Two flux tubes that move into the diffusion region break and rejoin perfectly to form two new flux tubes that move out.
In 3D, surprisingly, none of the above properties carry over and so the nature of reconnection is profoundly different \citep{priest03}. First of all, a single flux tube velocity (${\bf w}$) does not generally exist \citep{hornig96,hornig01} since ${\bf E} \. {\bf B} \neq 0$, but it may be replaced by a pair of flux velocities describing separately what happens to field lines that enter or leave the diffusion region \citep{priest03}. Secondly, the mapping of field lines is continuous if there is no 3D null point or separatrix surface. Thirdly, as they move through a 3D diffusion region, magnetic field lines continually change their connections. Fourthly, two tubes do not generally break and reform perfectly to give two other flux tubes: rather, when the two flux tubes are partly in the diffusion region and so are in the process of reconnecting, they split into four parts, each of which flips in a different manner, a manifestation of the continual change of connections. (Note that in general in 2D and 3D the flux velocity ${\bf w}$ is non-unique. We choose here to consider the case in which we select ${\bf w}$ by insisting that ${\bf w}={\bf v}$ in the ideal region. The crucial distinction is that in 2D a single ${\bf w}$ exists (and is singular), while in 3D reconnection {\it no single} velocity ${\bf w}$ exists that satisfies the flux-transport equation above together with the constraint that ${\bf w}={\bf v}$ in the ideal region. See Refs.~\citep{hornig96,hornig01} for further discussion.)
The first attempt to model kinematically the effect of an isolated diffusion region was by \citet{hornig03} who set up a formalism and applied it to a case without null points.
They solved
\begin{eqnarray}
{\bf E}+{\bf v} \times {\bf B}=\eta\ {\bf j},
\label{eq:6}
\end{eqnarray}
where ${\bm \del} \times {\bf E} = {\bf 0}$, ${\bf j}={\bm \del} \times {\bf B}/\mu$ and ${\bm \del} \cdot {\bf B}=0$.
The idea was to impose a sufficiently simple magnetic field that both the mapping and the inverse mapping of the field can be found analytically. Then, after writing ${\bf E}={\bm \nabla} \Phi$, the integral of the component of (\ref{eq:6}) parallel to ${\bf B}$ determines $\Phi$ everywhere as an integral
\begin{eqnarray}
\Phi =\int \frac{\eta\ {\bf j} \. {\bf B}}{B}\ ds +\Phi_{e}
\nonumber
\end{eqnarray}
along field lines, in terms of the values ($\Phi_{e}$) at one end of the field lines and the distance $s$ along field lines. More simply in terms of a dimensionless stretched distance $S$ such that $ds/B=L_{0}dS/B_{0}$,
\begin{equation}
\Phi =\int \frac{\eta\ L_{0}\ {\bf j} \. {\bf B}}{B_{0}}\ dS +\Phi_{e}.
\label{eq:7}
\end{equation}
One way of isolating the reconnection region in these kinematic solutions is by choosing a form of $\eta$ that is localised. So-called {\it pure} solutions have $\Phi_{e}\equiv 0$ and produce counter-rotating (or flipping) flows of field lines that link the diffusion region. The rate of flux reconnection is calculated by evaluating the integral
\begin{eqnarray}
\frac{d\Phi_{mag}}{dt} =\int E_{\parallel} ds
\label{eq:8}
\end{eqnarray}
along a field line through the diffusion region \citep{schindler91,hesse05}. Then the flow normal to the field lines is determined by the component of Equation (\ref{eq:6}) perpendicular to ${\bf B}$ as
\begin{eqnarray}
{\bf v}_{\perp}=\frac{({\bm \nabla} \Phi -\eta\ {\bf j}) \times {\bf B}}{B^{2}}.
\label{eq:9}
\end{eqnarray}
These solutions may be regarded as either kinematic (i.e., satisfying just the induction equation) or as fully dynamic in the limit of uniform density and slow flow (since they also satisfy the equations ${\bm \del} \cdot {\bf v}=0$ and ${\bm \nabla} p={\bf j} \times {\bf B}$).
\begin{figure}
\centering
\includegraphics[height=.35\textheight]{fig3a.eps}
\includegraphics[height=.35\textheight]{fig3b.eps}
\caption{The field near a null point with (a) uniform spine current and (b) uniform fan current.}
\label{fig:3}
\end{figure}
\citet{pontin04a} applied this formalism to determine the behaviour of the magnetic flux when an isolated diffusion region contains a spiral null point, i.e.~a null with current directed parallel to the spine line. The imposed magnetic field was
\begin{eqnarray}
(B_{x},B_{y},B_{z})=\frac{B_{0}}{L_{0}}\left(x-\textstyle{\frac{1}{2}} {\bar j_{0}} y,y+\textstyle{\frac{1}{2}} {\bar j_{0}} x,-2z\right)
\nonumber
\end{eqnarray}
or
\begin{eqnarray}
(B_{R},B_{\phi},B_{z})=\frac{B_{0}}{L_{0}}\left(R,\textstyle{\frac{1}{2}} {\bar j_{0}} R,-2z\right)
\label{eq:10}
\end{eqnarray}
in cylindrical polars, with the spine and current both directed along the $z$-axis, where ${\bar j_{0}}$ is a dimensionless current density. The diffusion region was assumed to be a cylinder of radius $a$ and height $2b$ (Figure \ref{fig:3}a).
First of all, a pure elementary solution which describes the core of the reconnection process was obtained by setting the flow to zero outside the volume defined by the `envelope' ($F$) of flux that threads the diffusion region. Inside $F$ the flow and flux velocities are purely rotational
(i.e., in the $\phi$-direction), so that there is no flow across either the spine or the fan. The reconnection rate is $\int E_{\parallel}dl$ along the spine, and measures the rate of rotational mis-matching of the flux velocities of field lines entering and leaving the diffusion region.
To this solution any ideal solution ($\Phi_{id}$) may be added and in particular they considered a stagnation-point flow of the form $\Phi_{id}=\phi_{0} x_{0}y_{0}$, which brings flux into $F$ and carries it out again. The result is a transition from O-type to X-type flow near the null when $\phi_{0}$ exceeds a critical value. What this solution suggests, therefore, is that a type of spine reconnection with strong current along the spine direction is possible when there are twisting flows about the spine. This is quite different from the spine reconnection that was envisaged in \citet{priest96a} and so here we propose to call it {\it torsional\ spine\ reconnection} and discuss its properties further in Section \ref{sec:3}.
Next, \citet{pontin05b} applied the same approach to a diffusion region ($D$) containing a null point having a uniform fan-aligned current ($B_{0}{\bar j_{0}}/(\mu L_{0})$) in the $x$-direction and field components
\begin{eqnarray}
(B_{x},B_{y},B_{z})=\frac{B_{0}}{L_{0}}(x,y-{\bar j_{0}}z,-2z).
\nonumber
\end{eqnarray}
The diffusion region was assumed to have the shape of a disc of radius $a$ and height $2b$ (see Fig.~\ref{fig:3}(b)), inside which the magnetic diffusivity decreases smoothly and monotonically from the null to zero at its boundary. Outside $D$ it vanishes.
The resulting plasma flow was surprisingly found to be quite different from the fan reconnection of \citet{priest96a}, since it is found to cross both the spine and fan of the null. Field lines traced from footpoints anchored in the fan-crossing flow are found to flip up and down the spine, whereas those that are traced from the top and bottom of the domain flip around the spine in the fan plane, as envisaged by \citet{priest96a}. The reconnection rate is again given by an integral of the form (\ref{eq:8}), this time along the fan field line parallel to the direction of current flow (here the $x$-axis). For such a mode of reconnection this expression can be shown to coincide with the rate of flux transport across the fan (separatrix) surface \cite{pontin05b}.
It is possible to find a solution that has similar field line behaviour to the pure fan reconnection envisaged by \citet{priest96a}, with flow across the spine but not the fan, by adopting instead a field of the form $(B_{0}/L_{0})(x,y-{\bar j_{0}}z^{3}/L_{0}^{2},-2z)$
with a fan $x$-current $3B_{0}{\bar j_{0}}z^{2}/(\mu L_{0}^{3})$
(see Ref.~\cite{pontin05b}). It is also possible to model pure spine reconnection with flow across the fan but not the spine by considering $(B_{0}/L_{0})(x,y,{\bar j_{0}}y^{3}/L_{0}^{2}-2z)$,
with a fan $x$-current $3B_{0}{\bar j_{0}}y^{2}/(\mu L_{0}^{3})$.
Both of these fields have a vanishing current at the null.
However, a key property of a null point is the hyperbolic field structure, which tends to focus disturbances and thus generate non-zero currents at the null for the {\it primary\ reconnection\ modes}. The above pure spine and fan solutions should therefore not be considered as fundamental or primary reconnection modes but as {\it secondary\ reconnection\ modes} in the sense that the current vanishes at the null.
It has been suggested that solutions for spine reconnection in incompressible plasmas \cite{craig96} may not be dynamically accessible, and while incompressible fan solutions \cite{craig95} are dynamically accessible \cite{craig98, pontin07c, titov04, tassi05}, this breaks down when the incompressibility assumption is relaxed \cite{pontin07c}.
It turns out that the generic null point reconnection mode that is observed in numerical experiments in response to shearing motions is one in which there is a strong fan current with flow across both spine and fan, and which is in some sense a combination of the spine and fan reconnection of \citet{priest96a}. We propose here to call it {\it spine-fan\ reconnection} and discuss its properties further in Section \ref{sec:5}.
\subsection{Numerical Experiments}\label{sec:2D}
Several numerical experiments have been conducted in order to go beyond the constraints of analytical theory and to shed more light on the nature of reconnection at a 3D null. The aim was also to see whether the types of reconnection envisaged qualitatively could indeed take place in practice and to discover whether any other regimes are possible.
First of all, \citet{galsgaard03a} investigated propagation of a helical Alfv{\'e}n wave towards the fan plane, launched by a rotational driving of the field lines around the spine. This led to the concentration of current in the fan plane and suggests the possibility of {\it torsional\ fan\ reconnection} which we shall propose in Section \ref{sec:4}. (For highly impulsive driving coupling to a fast mode wave that wraps around the null was also observed.) On the other hand \citet{pontin07a} used a resistive MHD code to show how rotational disturbances of field lines in the vicinity of the fan plane can also produce the strong currents along the spine that are symptomatic of {\it torsional\ spine\ reconnection} (Section \ref{sec:3}).
Then \citet{pontin05a} used an ideal Lagrangian relaxation code to follow the formation of current sheets by the collapse of a line-tied 3D null in a compressible plasma. This was a result of the focussing of externally generated large-scale stresses in the field in response to an initial shearing of either the spine axis or fan plane. Building on a previous linear theory by \citet{rickard96}, they found that locally the fan and spine collapse towards each other to form a current sheet singularity.
\begin{figure}
\centering
\includegraphics[height=.29\textheight]{fig4a.eps}
\includegraphics*[height=.26\textheight]{fig4b.eps}
\caption{(Color) (a) A shearing motion of a spine that is situated on the $z$-axis. (b) The resulting collapse of spine and fan to form {\it spine-fan\ reconnection}, showing the current-density contours (colour) and flow velocity (white) in the $x=0$ plane \citep[After][]{pontin07b}.}
\label{fig:4}
\end{figure}
This was followed up by \citet{pontin07b}, who used a resistive MHD code to investigate the formation and dissipation of the current sheet in response to shearing of the spine, as shown in Figure \ref{fig:4}. The results support the idea of {\it spine-fan\ reconnection} in which current concentrates around the null (in a sheet spanning the spine and fan). Including compressibility does not affect the results qualitatively, except that in the incompressible limit the spine-fan current is found to reduce purely to a fan current \citep{pontin07c} with behaviour closely resembling earlier fan reconnection models \cite{priest96a,craig95}. So pure {\it fan\ reconnection} can be either an incompressible limit of {\it spine-fan\ reconnection} or, as we have seen in Section \ref{sec:2C}, the result of a secondary fan current which vanishes at the null.
\section{Torsional spine reconnection}\label{sec:3}
The type of reconnection set up at a 3D null depends crucially on the nature of the flows and boundary conditions that are responsible for the reconnection. Let us suppose first that a rotation of the fan plane drives a current along the spine and gives rise to torsional spine reconnection, as sketched in Figure \ref{fig:5}a. The nature of the reconnection is that in the core of the spine current tube there is rotational slippage, with the field lines becoming disconnected and rotating around the spine (see \citet{pontin07a}): Figure \ref{fig:5}b shows on the left side a particular magnetic field line and its plasma elements at $t=t_{0}$; in the upper part of the figure (above the shaded diffusion region) this field line and its attached plasma elements rotate about the spine through positions at times $t_{1}$, $t_{2}$ and $t_{3}$; in the lower part of the figure (below the diffusion region) the plasma elements that were on the field line at $t_{0}$ rotate to positions at $t_{1}$, $t_{2}$ and $t_{3}$ that are on different field lines.
\begin{figure}
\centering
\includegraphics*[height=.29\textheight]{fig5a.eps}
\includegraphics*[height=.29\textheight]{fig5b.eps}
\includegraphics*[width=.50\textwidth]{fig5c.eps}
\caption{(a) A rotational motion of the fan (open arrows) driving torsional spine reconnection with a strong current (solid arrows) along the spine. (b) Rotational slippage of fields entering through the top of the diffusion region on a curved flux surface, showing as solid curves the locations of the plasma elements at $t=t_{1}$, $t=t_{2}$, $t=t_{3}$, that initially ($t=t_{0}$) lay on one field line. (c) The reconnection rate measures a rotational mismatching of flux threading the diffusion region, namely the difference between the rates of flux transport through surfaces A and B.}
\label{fig:5}
\end{figure}
A steady kinematic solution may be found following the approach of Section \ref{sec:2C}. The electric field may be written as the sum (${\bf E}={\bm \nabla} \Phi = {\bm \nabla} \Phi_{nid} + {\bm \nabla} \Phi_{id}$) of a nonideal pure (elementary) solution satisfying
\begin{equation}
{\bm \nabla} \Phi_{nid}+{\bf v}_{nid} \times {\bf B}=\eta {\bm \del} \times {\bf B},
\nonumber
\end{equation}
and an ideal solution satisfying
\begin{equation}
{\bm \nabla} \Phi_{id}+{\bf v}_{id} \times {\bf B}={\bf 0}.
\nonumber
\end{equation}
Consider a spiral null point (Equation \ref{eq:10}) and suppose the diffusion region is a cylinder of radius $a$ and height $2b$ and that the magnetic diffusivity has the form $\eta = \eta_{0}f(R,z)$, where $f(0,0)=1$ and $f(R,z)$ vanishes on the boundary of the diffusion region and outside it.
The field lines for this spiral null may be obtained by solving
\begin{equation}
\frac{dR}{dS}=\frac{L_{0}B_{R}}{B_{0}}=R, \ \ \ \ R\frac{d\phi}{dS}= {\textstyle{\frac{1}{2}}} {\bar j_{0}}R, \ \ \ \ \frac{dz}{dS} = -2 z. \nonumber
\end{equation}
Suppose we start a field line at the point $(R,\phi,z)=(R_{0},\phi_{0},b)$ at $S=0$. Then the field line equations are
\begin{equation}
R=R_{0}\ e^{S}, \ \ \ \ z=b\ e^{-2S}, \ \ \ \ \phi=\phi_{0}+\textstyle{\frac{1}{2}}\ {\bar j_{0}}\ S.
\label{eq:11}
\end{equation}
These give a mapping from an initial point $(R_{0},\phi_{0},b)$ to any other point $(R,\phi,z)$ along a field line. The inverse mapping is
\begin{equation}
R_{0}=R\ e^{-S}, \ \ \ \phi_{0}=\phi-\textstyle{\frac{1}{2}}\ {\bar j_{0}}\ S.
\label{eq:12}
\end{equation}
where $S=-{\textstyle{\frac{1}{2}}}\log(z/b)$.
\subsection{Pure Non-Ideal Solution}\label{sec:3.1}
The pure elementary solution describes the core of the reconnection process. It is obtained following Refs.~\cite{hornig03,pontin04a} by solving ${\bf E}+{\bf v} \times {\bf B}=\eta\ {\bf j},$ with ${\bm \del} \times {\bf E} = {\bf 0}$, ${\bf j}={\bm \del} \times {\bf B}/\mu$ and ${\bm \del} \cdot {\bf B}=0$. Thus we write ${\bf E}={\bm \nabla}\Phi_{nid}$ with $\Phi_{nid}$ given by Equation (\ref{eq:7}) and set $\Phi_{e}\equiv 0$ so that the flow vanishes outside the diffusion region. Inside the diffusion region the flow and flux velocities have no component across either the spine or the fan. For the spiral magnetic field $(B_{R},B_{\phi},B_{z})=(B_{0}/L_{0})(R,\textstyle{\frac{1}{2}} {\bar j_{0}} R,-2z)$ and the mapping (\ref{eq:11}), $\Phi_{nid}$ becomes
\begin{equation}
\Phi_{nid}\ =\ -\Phi_{nid0}\int \eta/\eta_{0} \ e^{-2S}dS,
\nonumber
\end{equation}
where $\Phi_{nid0}=2B_{0}b{\bar j_{0}}\eta_{0}/(\mu L_{0})$. Then, once a form for $\eta$ is assumed, this may be integrated to give $\Phi_{nid}(S,R_{0},\phi_{0})$. After using the inverse mapping (\ref{eq:12}), we can then deduce $\Phi_{nid}(R,\phi,z)$ and therefore ${\bf E}$ and ${\bf v}_{\perp}$ everywhere.
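The form of the integrand follows since the spiral field (\ref{eq:10}) carries a uniform current $(0,0,{\bar j_{0}}B_{0}/(\mu L_{0}))$ directed along the spine, so that
\begin{displaymath}
\frac{\eta\ L_{0}\ {\bf j} \. {\bf B}}{B_{0}}=-\frac{2B_{0}{\bar j_{0}}}{\mu L_{0}}\,\eta\, z=-\frac{2B_{0}b\,{\bar j_{0}}}{\mu L_{0}}\,\eta\, e^{-2S},
\end{displaymath}
on substituting $z=b\,e^{-2S}$ from the mapping (\ref{eq:11}).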
If a diffusion region is isolated, a change of connectivity of field
lines may be studied, by following field lines anchored in the ideal
region on either side of the diffusion region. A diffusion region is
in general isolated if $\eta {\bf j}$ is localised in space. In practical
cases in astrophysics, this is likely to be mainly because ${\bf j}$ is
localised, but sometimes in addition because, as a consequence, $\eta$ is also
localised. Some numerical simulations have a localised $\eta$, whereas
others have a uniform $\eta$ or a purely numerical dissipation. But
the important feature in all these cases is that the product $\eta
{\bf j}$ is localised. Now, in each of our solutions below, we follow
Refs.~\cite{hornig03,pontin04a,pontin05b} in choosing a spatially
localised $\eta {\bf j}$ by imposing a spatially localised resistivity
profile together with a ${\bf j}$ that is not localised. The reason for
doing this is to render the mathematical equations tractable, since we
have not yet discovered a way to do so with a localised ${\bf j}$. The
quantitative spatial profiles of physical quantities will depend on
the $\eta$ profile, but the qualitative topological properties of the
field line behaviour in such models are expected to be generic and
independent of the particular profile chosen for $\eta$. Indeed, the
topological properties of the reconnection models of
Refs.~\cite{hornig03,pontin04a,pontin05b} have been verified by the
numerical simulations \cite{pontin05c,pontin07a,pontin07b}.
There are four regions with different forms for $\Phi_{nid}$, as illustrated in Figure~\ref{fig:6}, which shows a vertical cut in the first quadrant of the $Rz$-plane. In region (1) threaded by field lines that enter the diffusion region (shaded) from above, we assume $\Phi_{nid}(R,z)\equiv 0$, so that there is no electric field or flow. The same is true in region (2) which lies above the flux surface $zR^{2}=ba^{2}$ that touches the upper corner $(a,b)$ of the diffusion region. We calculate below the forms of $\Phi_{nid}(R,z)$ in the diffusion region (3) and in the region (4) threaded by field lines that leave the diffusion region through its sides.
\begin{figure}
\centering
\includegraphics*[width=.49\textwidth]{fig6.eps}
\caption{The projection of magnetic field lines and the diffusion region in the first quadrant of the $R$-$z$ plane, showing 4 different regions (1)-(4) in which $\Phi_{nid}(R,z)$ is calculated. A magnetic field line whose projection intersects the top of the diffusion region in $T(R,b)$ and the side in $Q(a,z_{s})$ contains typical points $P(R,z)$ inside and beyond the diffusion region. The bounding field line $zR^{2}=ba^{2}$ is shown dashed.}
\label{fig:6}
\end{figure}
For example, let us assume that $\eta$ vanishes outside the diffusion region ($D$) and that inside $D$ it has the form
\begin{equation}
\eta=\eta_{0}\left(1-\frac{R^{4}}{a^{4}}\right)\left(1-\frac{z^{2}}{b^{2}}\right),
\nonumber
\end{equation}
which peaks at the origin and vanishes on the boundary of $D$. First, we use the mapping (\ref{eq:11}) to substitute for $R$ and $z$, and integrate with respect to $S$ from the point $T(R,b)$ on the top of $D$ to the point $P(R,z)$ inside $D$ (Figure \ref{fig:6}). Then we use the inverse mapping (\ref{eq:12}) to replace $R_{0}$ and $S$, and finally we obtain the potential throughout $D$ (region (3) in Figure \ref{fig:6}) as
\begin{eqnarray}
\Phi_{nid}(R,z)=-{\textstyle{\frac{1}{2}}}\Phi_{nid0}\left[\left(1-\frac{z}{b}\right)-\frac{R^{4}}{a^{4}}\left(\frac{z}{b}-\frac{z^{2}}{b^{2}}\right)\right.\nonumber\\
\left. +\ {\textstyle{\frac{1}{3}}}\left(\frac{z^{3}}{b^{3}}-1\right)+\frac{R^{4}}{a^{4}}\left(\frac{z^{2}}{b^{2}}-\frac{z^{3}}{b^{3}}\right)\right].\ \ \ \ \
\label{eq:13}
\end{eqnarray}
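In detail, substituting the mapping (\ref{eq:11}) into $\eta$, with $R_{0}$ the radius at $T$, gives $\eta/\eta_{0}=(1-R_{0}^{4}e^{4S}/a^{4})(1-e^{-4S})$, so the integrand reduces to a sum of exponentials,
\begin{displaymath}
\frac{\eta}{\eta_{0}}\,e^{-2S}=e^{-2S}-e^{-6S}-\frac{R_{0}^{4}}{a^{4}}\,e^{2S}+\frac{R_{0}^{4}}{a^{4}}\,e^{-2S},
\end{displaymath}
whose elementary integrals, rewritten in terms of $R$ and $z$ via the inverse mapping (\ref{eq:12}), give Equation (\ref{eq:13}).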
This then determines the components of the electric field (${\bf E}={\bm \nabla} \Phi_{nid}$) everywhere in $D$ as
\begin{equation}
E_{R}=\frac{\partial\Phi_{nid}}{\partial R}=\frac{2\Phi_{nid0}R^{3}}{a^{4}}\left(\frac{z}{b}-\frac{2z^{2}}{b^{2}}+\frac{z^{3}}{b^{3}}\right), \nonumber
\end{equation}
\begin{equation}
E_{z}=\frac{\partial\Phi_{nid}}{\partial z}=\frac{\Phi_{nid0}}{2b}\left(1+\frac{R^{4}}{a^{4}}-\frac{z^{2}}{b^{2}}-\frac{4zR^{4}}{ba^{4}}+\frac{3z^{2}R^{4}}{b^{2}a^{4}}\right). \nonumber
\end{equation}
In order to find $\Phi_{nid} (R,z)$ in region (4) of Figure~\ref{fig:6}, we start with the values of $\Phi_{nid}$ at the point $Q(a,z_{s})$ on the side of the diffusion region (Figure~\ref{fig:6}) and then calculate $\Phi_{nid}$ at any point $P(R,z)$ that lies on the same field line in region (4) to the right of $Q$. Thus, after putting $(R,z)=(a,z_{s})$ in the expression (\ref{eq:13}) for $\Phi$ that holds in the diffusion region, we obtain
\begin{equation}
\Phi_{nid}(a,z_{s})\equiv f(z_{s})=-\Phi_{nid0}\left[\frac{1}{3}-\frac{z_{s}}{b}+\frac{z_{s}^{2}}{b^{2}}-\frac{z_{s}^{3}}{3b^{3}}\right].
\label{eq:14}
\end{equation}
Since ideal MHD holds in region (4), $\Phi_{nid} (R,z)$ is constant along the field line ($zR^{2}=z_{s}a^{2}$) joining $Q$ to $P$, and so the value of $\Phi_{nid}$ at $P$ is simply
\begin{eqnarray}
\Phi_{nid}(R,z)=f\left(\frac{zR^{2}}{a^{2}}\right)\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \nonumber\\
=-\Phi_{nid0}\left[\frac{1}{3}-\frac{z}{b}\frac{R^{2}}{a^{2}}+\frac{z^{2}}{b^{2}}\frac{R^{4}}{a^{4}}-\frac{z^{3}}{3b^{3}}\frac{R^{6}}{a^{6}}\right].
\label{eq:15}
\end{eqnarray}
The solution for $z<0$ can be obtained in a similar manner by integrating from $z = -b$.
We may now make various deductions from the solution. The reconnection rate depends on the form of $\eta$ and is given in order of magnitude by
\begin{equation}
\int E_{\parallel}\ ds\ \sim\ 2E_{0}\ b,\nonumber
\end{equation}
where $E_{0}$ is the electric field at the centre of the diffusion region and $2 b$ is the dimension of the diffusion region along the magnetic field direction. In our example, $E_{0}=E_{z}(0,0,0)=\Phi_{nid0}/(2b)=\eta_{0} j_{0}$, where $j_{0}={\bar j_{0}}B_{0}/(\mu L_{0})$ is the value of the current at the origin, and along the spine (\ref{eq:13}) implies that
\begin{equation}
E_{z}(0,0,z)=\frac{\Phi_{nid0}}{2b}\left(1-\frac{z^{2}}{b^{2}}\right),\nonumber
\end{equation}
and so the reconnection rate becomes more accurately
\begin{equation}
\int_{-b}^{b} E_{z}(0,0,z)\ dz\ = \ {\textstyle{\frac{4}{3}}}\ E_{0}\ b = \ {\textstyle{\frac{2}{3}}}\Phi_{nid0}.\label{eq:16}
\end{equation}
The other feature that we can deduce from the electric field components is the perpendicular plasma velocity given by Eq.~(\ref{eq:9}).
In particular, on the fan plane ($z=0$) inside $D$, $E_{R}=0$, $E_{z}=(\Phi_{nid0}/2b)(1+R^{4}/a^{4})$, $\eta j_{z}= (\Phi_{nid0}/2b)(1-R^{4}/a^{4})$ and $B_{R}=B_{0}R/L_{0}$ so that there is a rotational component given by
\begin{equation}
v_{\phi}=\frac{(E_{z}-\eta j_{z})B_{R}}{B^{2}}=v_{0}\frac{R^{3}}{a^{3}},\nonumber
\end{equation}
where $v_{0}=\Phi_{nid0}L_{0}/[baB_{0}(1+{\textstyle{\frac{1}{4}}}\ {\bar j_{0}}^{2})].$ The nature of the flow becomes clear if we subtract a component parallel to ${\bf B}$ in order that $v_z=0$ (we are free to do this since the component of ${\bf v}$ parallel to ${\bf B}$ is arbitrary in the model). After doing this we find that $v_R$ vanishes, leaving ${\bf v}=(0, v_\phi, 0)$, i.e., the flow corresponds to a pure rotation (as in the solutions of Refs.~\cite{hornig03, pontin04a}).
\subsection{Extra Ideal Solution}\label{sec:3.2}
To the above pure diffusive solution any ideal solution may be added satisfying ${\bf E}+{\bf v} \times {\bf B}={\bf 0}$ and ${\bm \del} \times {\bf E}={\bf 0}$, for which the potential ($\Phi_{id}$) satisfies
\begin{equation}
{\bf B} \. {\bm \nabla} \Phi_{id}=0.
\nonumber
\end{equation}
Thus, once the functional form $\Phi_{id}(R_{0},\phi_{0})$ is chosen at the points $(R_{0},\phi_{0},b)$ on $z=b$, say, that form of $\Phi_{id}$ is constant along field lines given by the mapping (\ref{eq:11}). The resulting variation of $\Phi_{id}(R,\phi,z)$ throughout space is given by substituting for $R_{0}$ and $\phi_{0}$ from the inverse mapping (\ref{eq:12}).
As an example, suppose
\begin{equation}
\Phi_{id}(R_{0},\phi_{0})=\Phi_{id0}\frac{R_{0}^{2}}{a^{2}}
\nonumber
\end{equation}
on the plane $z=b$.
Then throughout the volume we find
\begin{equation}
\Phi_{id}(R,\phi,z)=\Phi_{id0}\frac{R^{2}z}{a^{2}b},
\nonumber
\end{equation}
which implies electric field components
\begin{equation}
E_{R}=\frac{\Phi_{id0}}{a^{2}b}2Rz,\ \ \ \ \ \ E_{z}=\frac{\Phi_{id0}}{a^{2}b}R^{2}.
\nonumber
\end{equation}
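It may be checked directly that this is an ideal solution, since $\Phi_{id}$ is constant along field lines,
\begin{displaymath}
{\bf B} \. {\bm \nabla} \Phi_{id}=\frac{B_{0}\Phi_{id0}}{L_{0}\,a^{2}b}\left[R\,(2Rz)-2z\,R^{2}\right]=0,
\end{displaymath}
the $\phi$-derivative contributing nothing because $\Phi_{id}$ is axisymmetric.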
Then the plasma velocity components follow from ${\bf v}_{\perp}={\bf E}\times{\bf B}/B^{2}$ as
\begin{equation}
(v_{\perp R},v_{\perp \phi},v_{\perp z})=\frac{\Phi_{id0}L_{0}}{a^{2}bB_{0}}\frac{(-\textstyle{\frac{1}{2}} {\bar j_{0}} R^{3},R^{3}+4Rz^{2}, {\bar j_{0}}R^{2}z)}{(\alpha^{2}R^{2}+4z^{2})},
\label{eq:17}
\end{equation}
where $\alpha^{2}=1+{\textstyle\frac{1}{4}}{\bar j_{0}}^{2}$.
In particular, we notice that the flow vanishes on the spine $R=0$, and that in the fan $z=0$ there is a rotational flow that linearly increases with distance, $v_{\phi}(R,\phi,0)=\Phi_{id0}L_{0}R/(a^{2}bB_{0}\alpha^{2})$.
The reconnection of field lines takes the form of a rotational slippage.
Field lines entering the diffusion region have a flux velocity ${\bf w}_{in}=-{\bm \nabla} \Phi_{in}\times {\bf B}/B^{2}$, while those that leave it have a flux velocity ${\bf w}_{out}=-{\bm \nabla} \Phi_{out}\times {\bf B}/B^{2}$. $\Phi_{in}$ is obtained by integrating along field lines that enter from the ideal region on one side, while $\Phi_{out}$ is obtained by integrating backwards along field lines that leave from the other side. The rate of slippage between inward and outward flux bundles is given by $\Delta {\bf w}= {\bf w}_{out} - {\bf w}_{in}$ and represents the rate of reconnection, which we have evaluated directly above in Equation (\ref{eq:16}).
This reconnection rate, obtained by integrating $E_\parallel$ along the spine, measures the difference between the rates of flux transport across surface A and surface B in Fig.~\ref{fig:5}(c).
Note that the extra ideal solution does not change the rate of relative slippage. However, it does allow for different external conditions, such as rotation above and below the diffusion region in the same or opposite senses. To see the effect of a non-rotational ideal flow see Ref.~\cite{pontin04a}. In the solution given above the physical quantities ${\bf E}$ and ${\bf v}$ are continuous but not differentiable at the boundary between regions (3) and (4). This is a sacrifice made for tractability and pedagogic purposes. For a solution with differentiable physical quantities, see \citet{pontin04a}.
In the above, the diffusion region was imposed to be a cylinder whose width ($a$) and height ($2b$) are parameters of the solution. The formation, in a self-consistent fashion, of such a cylindrical diffusion region was observed in the simulations described by \citet{pontin07a}. In one of their simulations they imposed a twisting perturbation of the magnetic field in the vicinity of the fan plane. As the disturbance propagated inwards towards the null, it was dominated by a helical Alfv{\' e}nic wave -- travelling along the field lines and thus stretching out along the spine. The result was a tube of current focussed around the spine, giving a large aspect ratio to the diffusion region ($b \gg a$). During the process of torsional spine reconnection the narrowing and elongation of the current tube is likely to continue until the rotational advection that twists the field and intensifies the current is balanced by the rotational slippage.
\begin{figure}
\centering
\includegraphics*[width=.29\textheight]{fig7.eps}
\caption{A rotational motion of the spine (open arrows) driving torsional fan reconnection with a strong current in the fan and slippage of field lines (solid arrow).}
\label{fig:7}
\end{figure}
\section{Torsional fan reconnection}\label{sec:4}
Now suppose that we rotate the field lines near the spine in opposite directions above and below the fan. Then a current builds up in the fan. Within the fan current sheet, field lines experience rotational slippage \cite{galsgaard03a,pontin07a}, in the opposite sense above and below the fan, in what we propose to term {\it torsional fan reconnection} (Figure \ref{fig:7}). Again there is no flow across either spine or fan.
The counter-rotation (above and below the fan) of the region around the spine builds up a double-spiral structure near the null point, with a current that possesses two components: an axial component that reverses sign at the fan plane and a radial component. A counter-rotating part of the diffusion velocity ($\eta j_{R} B_{z}$) is set up in the $\phi$-direction, reversing sign at the fan.
In order to model such reconnection, we consider what we term a {\it double-spiral\ null\ point} with field components
\begin{eqnarray}
(B_{R},B_{\phi},B_{z})=\frac{B_{0}}{L_{0}}\left(R,\ 2{\bar j_{0}}\frac{z^{2M+1} R^{N-1}}{b^{2M+N-1}},\ -2z\right)
\label{eq:18}
\end{eqnarray}
where $M$ and $N$ are positive integers and the corresponding current components are
\begin{eqnarray}
(j_{R},j_{\phi},j_{z})=\frac{2B_{0}{\bar j_{0}}}{\mu b^{2M+N-1}L_{0}}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \nonumber \\
\left(-(2M+1)z^{2M}R^{N-1},\ 0,\ Nz^{2M+1}R^{N-2}\right).\nonumber
\end{eqnarray}
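These expressions follow from the cylindrical curl and divergence formulas; a minimal symbolic check (a sketch, not from the original work) is
\begin{verbatim}
# Verify that (18) is divergence-free and gives the quoted current.
import sympy as sp
R, phi, z, b, B0, L0, j0, mu = sp.symbols('R phi z b B0 L0 j0 mu',
                                          positive=True)
M, N = sp.symbols('M N', positive=True, integer=True)
BR  = B0*R/L0
Bph = 2*B0*j0*z**(2*M+1)*R**(N-1)/(L0*b**(2*M+N-1))
Bz  = -2*B0*z/L0
divB = sp.diff(R*BR, R)/R + sp.diff(Bph, phi)/R + sp.diff(Bz, z)
jR  = (sp.diff(Bz, phi)/R - sp.diff(Bph, z))/mu
jph = (sp.diff(BR, z) - sp.diff(Bz, R))/mu
jz  = (sp.diff(R*Bph, R)/R - sp.diff(BR, phi)/R)/mu
print(sp.simplify(divB), sp.simplify(jph))  # -> 0, 0
print(sp.simplify(jR))  # -(2M+1)*2*B0*j0*z^(2M)*R^(N-1)/(mu*L0*b^(2M+N-1))
print(sp.simplify(jz))  #       N*2*B0*j0*z^(2M+1)*R^(N-2)/(mu*L0*b^(2M+N-1))
\end{verbatim}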
An alternative solution to the one presented below is outlined in the Appendix.
The field line equations for a mapping from an initial point $(R_{0},\phi_{0},b)$ to any other point $(R,\phi,z)$ are
\begin{eqnarray*}
R&=&R_{0}\ e^{S}, \ \ \ \ z=b\ e^{-2S}, \ \ \ \ \ \nonumber \\
\phi&=&\phi_{0}+\frac{2{\bar j_{0}}}{4M-N+4}\frac{R_{0}^{N-2}}{b^{N-2}}\left(1-e^{-(4M-N+4)S}\right),
\end{eqnarray*}
and the inverse mapping is
\begin{eqnarray*}
R_{0}&=&R\ e^{-S}, \ \ \nonumber \\
\phi_{0}&=&\phi-\frac{2{\bar j_{0}}}{4M-N+4}\frac{R_{0}^{N-2}}{b^{N-2}}\left(1-e^{-(4M-N+4)S}\right),
\end{eqnarray*}
where $S=-{\textstyle{\frac{1}{2}}}\log(z/b)$.
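This mapping is easily checked numerically (a sketch, not the original analysis); for instance, with the sample values $M=2$, $N=6$ used below:
\begin{verbatim}
# Along a field line dR/dS = R, dphi/dS = B_phi/B_R, dz/dS = -2z.
import numpy as np
from scipy.integrate import solve_ivp

M, N, j0, b = 2, 6, 0.7, 1.0          # illustrative values
R0, phi0, S1 = 0.5, 0.0, 1.3

def rhs(S, y):
    R, phi, z = y
    return [R, 2*j0*z**(2*M+1)*R**(N-2)/b**(2*M+N-1), -2*z]

sol = solve_ivp(rhs, [0.0, S1], [R0, phi0, b], rtol=1e-10, atol=1e-12)
phi_exact = phi0 + 2*j0/(4*M-N+4)*(R0/b)**(N-2)*(1 - np.exp(-(4*M-N+4)*S1))
print(sol.y[1, -1], phi_exact)        # agree to ~1e-8
\end{verbatim}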
Let us follow the approach of Section IIIA and calculate the pure non-ideal solution. We shall assume the diffusion region to be a disc of radius $a$ and height $2b$, with the same diagram as before (Figure 6), except that the diffusion region is now expected to be in the shape of a thin disc (with $b \ll a$) rather than a thin tube (with $b\gg a$). Assuming, as before, that $\Phi(R,z)$ vanishes in regions (1) and (2), we evaluate it in region (3) by integrating from a point $T(R,b)$ on the top of the disc to a point $P(R,z)$ inside the diffusion region. After using Equation (\ref{eq:7}) and the mapping and setting $\Phi_{e}=0$, the expression for the potential at $P(R,z)$ then becomes
\begin{eqnarray}
\Phi=-\Phi_{nid0}\int \frac{\eta}{\eta_{0}}\left( (2M+1)\frac{R_{0}^{N}}{b^{N}}e^{-(4M-N)S}\right. \nonumber \\
\left.+2N\frac{R_{0}^{N-2}}{b^{N-2}}e^{-(4M-N+6)S}\right) dS.
\label{eq:21}
\end{eqnarray}
We adopt the following general form for the magnetic diffusivity inside the diffusion region (D)
\begin{equation}
\eta=\eta_{0}\left(1-\frac{R^{m}}{a^{m}}\right)\left(1-\frac{z^{n}}{b^{n}}\right),
\nonumber
\end{equation}
which peaks at the null point and vanishes on the boundary of D when $m$ and $n$ are positive and $n$ is even. After substituting into (\ref{eq:21}) and using the mapping and inverse mapping, we find the potential throughout the diffusion region. In particular, it transpires that an important constraint on the constants $M$, $N$, $m$ and $n$ is that $E_{z}$ be finite and continuous at the fan plane. As an example, one set of such constants that works is $M=2$, $N=6$, $m=4$ and $n=2$, for which
\begin{eqnarray}
\Phi_{nid}(R,z)=-\Phi_{nid0}\left\{
\frac{z^2 R^4}{2b^6}
+ \frac{5z^{3}R^{6}}{3b^{9}}
-\frac{5z^{4}R^{6}}{2b^{10}} \right. \ \ \ \ \ \ \ \ \ \ \nonumber \\
\left. +\frac{5z^{5}R^{10}}{b^{11}a^{4}}
+\frac{5z^{6}R^{6}}{6b^{12}}
-\frac{5z^{6}R^{10}}{2b^{12}a^{4}}
+\frac{z^{8}R^{4}}{b^{12}}
-\frac{5z^{4}R^{10}}{2a^{4}b^{10}} \right. \ \ \ \ \ \nonumber \\
\left.
- \frac{3z^{6}R^{4}}{2b^{10}}
-\frac{3z^{4}R^{8}}{2b^{8}a^{4}}
- \frac{3z^{6}R^{8}}{b^{10}a^{4}}
- \frac{3z^{8}R^{8}}{2b^{12}a^{4}}
\right\}.\ \ \ \ \
\label{eq:22}
\end{eqnarray}
The corresponding components of electric field are
\begin{eqnarray}
E_{R}=\frac{\partial\Phi_{nid}}{\partial R}=-\frac{\Phi_{nid0}}{b}\left\{
\frac{2z^{2}R^{3}}{b^{5}}
+\frac{10z^{3}R^{5}}{b^{8}}
\right. \nonumber \\
-\frac{15z^{4}R^{5}}{b^{9}}
\left. +\frac{50z^{5}R^{9}}{b^{10}a^{4}}
+\frac{5z^{6}R^{5}}{b^{11}}
-\frac{25z^{6}R^{9}}{b^{11}a^{4}}
\right. \nonumber \\
\left. +\frac{4z^{8}R^{3}}{b^{11}}
-\frac{25z^{4}R^{9}}{a^{4}b^{9}}
-\frac{6z^{6}R^{3}}{b^{9}}
\right. \nonumber \\
\left.
-\frac{12z^{4}R^{7}}{b^{7}a^{4}}
-\frac{24z^{6}R^{7}}{b^{9}a^{4}}
-\frac{12z^{8}R^{7}}{b^{11}a^{4}}\right\}, \nonumber
\end{eqnarray}
\begin{eqnarray}
E_{z}=\frac{\partial\Phi_{nid}}{\partial z}=
-\frac{\Phi_{nid0}}{b}\left\{
\frac{zR^{4}}{b^{5}}
+\frac{5z^{2}R^{6}}{b^{8}}
-\frac{10z^{3}R^{6}}{b^{9}}\right. \nonumber \\
\left.+\frac{25z^{4}R^{10}}{b^{10}a^{4}}
+\frac{5z^{5}R^{6}}{b^{11}}
-\frac{15z^{5}R^{10}}{b^{11}a^{4}} \right. \nonumber \\
\left. +\frac{8z^{7}R^{4}}{b^{11}}
-\frac{10z^{3}R^{10}}{a^{4}b^{9}}
-\frac{9z^{5}R^{4}}{b^{9}}\right. \nonumber \\
\left.-\frac{6z^{3}R^{8}}{b^{7}a^{4}}
-\frac{18z^{5}R^{8}}{b^{9}a^{4}}
-\frac{12z^{7}R^{8}}{b^{11}a^{4}}\right\}. \nonumber
\end{eqnarray}
In order to find $\Phi(R,z)$ in region (4) of Figure 6, as before, we calculate its value at any point $Q(a,z_{s})$ where a field line leaves the diffusion region, and then project that value along that field line. Thus, after putting $(R,z)=(a,z_{s})$ into Equation (\ref{eq:22}), we obtain
\begin{eqnarray}
\Phi_{nid}(a,z_{s})\equiv f(z_{s})\
=-\Phi_{nid0}\left\{
\frac{z_{s}^2 a^4}{2b^6}
+\frac{5z_{s}^{3}a^{6}}{3b^{9}}
\right. \nonumber \\
\left. -\frac{5z_{s}^{4}a^{6}}{2b^{10}}
+\frac{5z_{s}^{5}a^{6}}{b^{11}}
+\frac{5z_{s}^{6}a^{6}}{6b^{12}}
-\frac{5z_{s}^{6}a^{6}}{2b^{12}}
+\frac{z_{s}^{8}a^{4}}{b^{12}}
\right. \nonumber \\
\left.
-\frac{5z_{s}^{4}a^{6}}{2b^{10}}
-\frac{3z_{s}^{6}a^{4}}{2b^{10}}
-\frac{3z_{s}^{4}a^{4}}{2b^{8}}
- \frac{3z_{s}^{6}a^{4}}{b^{10}}
- \frac{3z_{s}^{8}a^{4}}{2b^{12}}
\right\}.
\end{eqnarray}
Since ideal MHD holds in region (4), $\Phi_{nid} (R,z)$ is constant along the field line ($zR^{2}=z_{s}a^{2}$) joining $Q$ to $P$, and so the value of $\Phi_{nid}$ at $P$ is simply
\begin{eqnarray}
\Phi_{nid}(R,z)=f\left(\frac{zR^{2}}{a^{2}}\right)
=-\Phi_{nid0}\left\{
\frac{z^2 R^4}{2b^6}
+\frac{5z^{3}R^{6}}{3b^{9}}
\right. \nonumber \\
\left. -\frac{5z^{4}R^{8}}{2a^{2}b^{10}}
+\frac{5z^{5}R^{10}}{a^{4}b^{11}}
+\frac{5z^{6}R^{12}}{6a^{6}b^{12}}
-\frac{5z^{6}R^{12}}{2a^{6}b^{12}}
+\frac{z^{8}R^{16}}{a^{12}b^{12}}
\right. \nonumber \\
\left.
-\frac{5z^{4}R^{8}}{2a^{2}b^{10}}
-\frac{3z^{6}R^{12}}{2a^{8}b^{10}}
-\frac{3z^{4}R^{8}}{2a^{4}b^{8}}
- \frac{3z^{6}R^{12}}{a^{8}b^{10}}
- \frac{3z^{8}R^{16}}{2a^{12}b^{12}}
\right\}.
\end{eqnarray}
The electric field components vanish in both the spine and the fan but are strong just above and below the fan, which is where the reconnection of field lines occurs by rotational slippage in a similar fashion to torsional spine reconnection. Near the spine and fan we have to lowest order in $R$ and $z$
\begin{equation}
E_{R}=-\frac{\Phi_{nid0}}{b}\left(\frac{2z^{2}R^{3}}{b^{5}}\right),\ \ \ \ \ \
E_{z}=-\frac{\Phi_{nid0}}{b}\left(\frac{zR^{4}}{b^{5}}\right).\nonumber
\end{equation}
The reconnection rate is the maximum value of ($\int E_{\parallel}\ ds$) along any field line ($R^{2}z=R_{0}^{2}b$), each of which enters the diffusion region from above at $T(R_{0},b)$ and leaves at $Q(a,z_{s})$. Along such field lines the integral is a function of $R_{0}/a$ and $b/a$, namely:
\begin{eqnarray*}
\int E_{\parallel}\ ds &=& \int \frac{{\bf E} \cdot {\bf B}}{B} ds = \int \frac{{\bf E} \cdot {\bf B}}{B_{R}} dR\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \nonumber \\
&=& -\frac{\Phi_{nid0}}{b}\int
\frac{5z^{4}R^{5}}{b^{9}}
- \frac{5z^{6}R^{5}}{b^{11}}
+ \frac{5z^{6}R^{9}}{b^{11}a^{4}} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \nonumber \\
&\ \ \ \ \ \ \ \ \ \ -& \frac{12z^{8}R^{3}}{b^{11}}
- \frac{5z^{4}R^{9}}{b^{9}a^{4}}
+ \frac{12z^{6}R^{3}}{b^{9}} \nonumber \\
&\ \ \ \ \ \ \ \ \ \ +& \frac{12z^{6}R^{7}}{b^{9}a^{4}}
+ \frac{12z^{8}R^{7}}{b^{11}a^{4}}
dR \nonumber \\
&=& \Phi_{nid0}
\left[
\frac{R_{0}^{4}}{2b^{4}}
+ \frac{5R_{0}^{6}}{3b^{6}}
+ \frac{9R_{0}^{8}}{2b^{8}}\frac{b^{4}}{a^{4}}
\right.
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \nonumber \\
&\ \ \ \ \ \ \ \ \ \ +& \left.
\frac{5R_{0}^{10}}{b^{10}}\frac{b^{4}}{a^{4}}
-\frac{5R_{0}^{8}}{b^{8}} \frac{b^{2}}{a^{2}}
-\frac{5R_{0}^{12}}{3b^{12}} \frac{b^{6}}{a^{6}}
\right.
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \nonumber \\
&\ \ \ \ \ \ \ \ \ \ -& \left.
\frac{9R_{0}^{12}}{2b^{12}}\frac{b^{8}}{a^{8}}
- \frac{R_{0}^{16}}{2b^{16}}\frac{b^{12}}{a^{12}} \right] .
\end{eqnarray*}
When $R_{0}\sim a \gg b$ for a slender disc-shaped diffusion region, this reduces to
\begin{eqnarray*}
\int E_{\parallel}\ ds &=& -\frac{\Phi_{nid0} a^{4}}{2b^{4}}\left[
\frac{R_{0}^{4}}{a^{4}}
+ \frac{9R_{0}^{8}}{a^{8}}
- \frac{9R_{0}^{12}}{a^{12}}
- \frac{R_{0}^{16}}{a^{16}} \right].
\end{eqnarray*}
If $a$ and $b$ are held fixed and $R_{0}$ is varied, the maximum value of this occurs at $R_{0}\approx 0.90 a$, giving a reconnection rate of
\begin{equation}
\left(\int E_{\parallel}\ ds\right)_{max} = 0.9 \ \Phi_{nid0}\ \frac{a^{4}}{b^{4}}.
\end{equation}
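This value is easily reproduced numerically (a sketch, not part of the original work):
\begin{verbatim}
# Maximise the magnitude of the slender-disc expression over x = R0/a.
import numpy as np
x = np.linspace(0.0, 1.0, 200001)
g = x**4 + 9*x**8 - 9*x**12 - x**16   # bracket in the previous display
i = np.argmax(g)
print(x[i], g[i]/2)            # -> 0.901, 0.902: R0 ~ 0.90a, rate ~ 0.9
\end{verbatim}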
As for torsional spine reconnection, the reconnection rate is proportional to the potential $\Phi_{nid0}=2B_{0}b{\bar j_{0}}\eta_{0}/(\mu L_{0})$, but in this case, as well as being proportional to the current density ${\bar j_{0}}$ and diffusion region height ($b$), it also depends on its aspect ratio ($a/b$).
Again, as before, a wide range of ideal solutions ($\Phi_{id}$) may be added to the diffusive solution. Thus, if for example, $\Phi_{id}(R_{0},\phi_{0})=\Phi_{id0}R_{0}^{n}/a^{n}$ on the top ($z=b$) of the diffusion region, the fact that it remains constant along field lines ($R^{2}z=R_{0}^{2}b$) determines
\begin{equation}\Phi_{id}(R,\phi,z)=\Phi_{id0}\frac{R^{n}z^{n/2}}{a^{n}b^{n/2}},\nonumber\end{equation}from which the electric field components can be deduced.
For instance, the case $n=4$ gives an electric field of\begin{equation}(E_{R},E_{\phi},E_{z})=\frac{2\Phi_{id0}}{a^{4}b^{2}}\left(2R^{3}z^{2},0,R^{4}z\right),\nonumber\end{equation}which implies a plasma velocity with components normal to the magnetic field of
\begin{eqnarray}
(v_{R},v_{\phi},v_{z})=\frac{1}{B^{2}}\left(-E_{z}B_{\phi},E_{z}B_{R}-E_{R}B_{z},E_{R}B_{\phi}\right)\ \ \ \ \ \ \ \ \ \ \nonumber \\
=\frac{\Phi_{id0}}{g(R,z)}\left(-4{\bar j_{0}}z^{6}R^{9},(2R^{5}z+8R^{3}z^{3})b^{9},8{\bar j_{0}}z^{7}R^{8}\right),
\nonumber
\end{eqnarray}where $g(R,z)\equiv [R^{2}+4R^{10}z^{10}{\bar j_{0}}^{2}/(b^{18})+4z^{2}]B_{0}a^{4}b^{11}/L_{0}$. In particular, it gives a rotational component ($v_{\phi}$) that is odd in $z$ and so represents the kind of counter-rotation that is typical of torsional fan reconnection.
\section{Spine-fan reconnection}\label{sec:5}
\begin{figure}
\centering
\includegraphics[height=.29\textheight]{fig8a.eps}
\includegraphics[height=.29\textheight]{fig8b.eps}
\caption{(a) The magnetic structure of the field in spine-fan reconnection, showing the field lines and the (shaded) diffusion region. (b) The corresponding motion of flux across both the spine and fan (large light arrows). The current sheet is shaded (with the part below the fan having a lighter shading than the part above) and contains a current flowing in the $x$-direction (large dark arrows): its width is $l$, its total length $L_{tot}$ in the $yz$-plane, and its length $L_{c}$ common to spine and fan. }
\label{fig:8}
\end{figure}
In general, if the driving motions tend to shear a null point rather than rotate it, then the result will be {\it spine-fan reconnection}. A shear disturbance of either the spine or fan of the null will tend to make the null `collapse'. That is, the resulting Lorentz force acts to increase the displacement, just as at a 2D null \cite[see Ref.][]{pontin05a} and as at a separator \citep{galsgaard96b,galsgaard00b}. This collapse is opposed by the line-tying at the boundaries, and what occurs is that the shear distortion tends to focus in the weak field region in the vicinity of the null point, forming a localised current sheet \cite{pontin05a, pontin07b}.
What distinguishes spine-fan reconnection from the other null point reconnection modes is that flux is transferred across both the spine and fan. Furthermore, the current concentration is in the form of a localised sheet that is inclined at an intermediate angle between the spine and fan -- indeed the current sheet contains part of both the spine and the fan (see Figures \ref{fig:4}b and \ref{fig:8}a).
As mentioned above, the reconnection rate for this mode of reconnection is obtained by integrating $E_\|$ along the fan field line with the maximum value of $\int E_{\|} ds$. By the symmetry of the simple models described herein, this is the field line parallel to the current orientation at the null (perpendicular to the applied shear). The reconnection rate thus obtained measures exactly the rate of flux transport in the ideal region across the fan separatrix surface. To illustrate the properties of this mode of null point reconnection, we describe here briefly the results of resistive MHD simulation runs (see \citet{pontin07b} for an initial description).
In the simulations, a shear velocity is prescribed on the (line-tied) $z$-boundaries, which advects the spine footpoints, see Fig.~\ref{fig:4}(a) (the results are qualitatively the same if the fan is distorted instead). The current sheet that forms in response to the shearing
is localised in all three directions about the null. However, in the plane of the applied shear (perpendicular to the current orientation at the null) the magnetic field and current patterns have a similar appearance to a 2D X-point configuration. As one moves away from the null in the fan along the direction of current flow, the magnetic field strength parallel to the current (sometimes known as the `guide field') strengthens, while the current intensity weakens -- see Fig.~\ref{fig:8}.
The boundary shearing velocity is ramped up to a constant value ($v_0$) at which it is held until being ramped down to zero, at $t=\tau=3.6$ (space and time units in the code are such that an Alfv{\' e}n wave would travel one space unit in one time unit for uniform density and magnetic field $|{\bf B}|=1, \rho=1$). The resistivity is uniform. Current focusses at the null during the period when the driving occurs, and when the driving ceases both the current and reconnection rate peak, after which the null gradually relaxes towards its original potential configuration. Under continuous driving, it is unclear whether a true steady state would be set up, or whether the current sheet would continually grow in dimensions and intensity \cite[see Ref.][]{pontin07b}. This is an open question for future study. For the case of transient driving, the peak current and reconnection rate increase linearly with the driving velocity. Here we examine more closely the sheet geometry, and its scaling with the driving velocity, and also investigate the scaling of this geometry, the peak current and peak reconnection rate with resistivity.
As previously noted, the current sheet that forms in spine-fan reconnection is focussed at the null, locally spanning both the spine and fan. The sheet has a tendency to spread along the fan depending on the parameters chosen (spreading is enhanced by lowering $v_0$ or increasing the plasma-$\beta$). We examine four spatial measurements (see Figure \ref{fig:8}b) associated with the current sheet, focussing on the time when the current magnitude is a maximum and defining the boundary of the sheet to be an isosurface at 50\% of $|{\bf j}|_{max}$. The sheet {\it thickness} is $l$, the {\it length} $L_{tot}$ is the total extension in the $yz$-plane (normal to ${\bf j}$), $L_c$ is the length of the `collapsed' section (within which the sheet contains both spine and fan), and the {\it width} $w$ is the extension of the sheet along the $x$-direction (parallel to ${\bf j}$).
\begin{figure}[t]
\centering
\includegraphics[width=.5\textwidth]{fig9.eps}
\caption{Scaling with the driving velocity $v_0$ of (top left--bottom right) $L_{tot}$, $L_c$, $l$ and $\theta$ (see Figure \ref{fig:8}b for notation).}
\label{fig:9}
\end{figure}
The scaling of these dimensions with (peak) driving velocity $v_0$ is shown in Figure \ref{fig:9} (we fix $\eta=5\times10^{-4}$). The angle $\theta$ between the current sheet and the $z=0$ plane can be seen to increase as the driving velocity increases. This can be put down to the fact that the stronger driving creates a stronger Lorentz force---the force that is responsible for the collapse.
As expected, $L_c$ increases as $v_0$ increases. This is a result of the fact that the spine footpoints are sheared further for larger $v_0$, and there exists in fact a close correspondence: $ (L_c \cos \theta)/2 \sim v_0 \tau $. In contrast to $L_c$, $L_{tot}$ shows a linear {\it decrease} with $v_0$ (as does $w$, see Ref.~\cite{pontin07b}), showing that as the collapse becomes stronger the distortion of the magnetic field focusses closer and closer around the null itself. The decline in $L_{tot}$ with increasing $v_0$ must of course cease once $L_{tot}=L_c$, as is the case for the strongest driving considered. Examining finally the sheet thickness $l$, any variation is within the error bars of our measurements, and moreover the resolution is not sufficient for firm conclusions to be drawn.
We turn now to consider the scaling of the current sheet with $\eta$, setting $v_0=0.02$, see Figure \ref{fig:10}. As $\eta$ decreases, $j_{max}$ increases, while the reconnection rate decreases. In both cases, with the limited data of this preliminary study, the proportionality appears to be somewhere between power law and logarithmic. That the run with the largest resistivity does not seem to fit the trend for the reconnection rate is likely to be because the current significantly dissipates before reaching the null itself due to the high resistivity ($\eta=0.002$).
Accompanying the increase in $j_{max}$ as $\eta$ decreases is, as expected, a decrease in the thickness $l$. On the other hand, the overall dimensions of the sheet, $L_{tot}$ and $w$, seem to be unaffected by $\eta$, to within our measurement accuracy. Finally, as $\eta$ decreases and the current becomes more intense, the collapse becomes more pronounced as evidenced by increases in both $L_c$ and $\theta$.
\begin{figure}[t]
\centering
\includegraphics*[width=.49\textwidth]{fig10.eps}
\caption{Scaling with $\eta$ of (top left--bottom right) the peak current density, the peak reconnection rate, $L_c$, $l$ and $\theta$ (see Figure \ref{fig:8}b for notation).}
\label{fig:10}
\end{figure}
The relationships discussed briefly above certainly warrant further investigation with carefully designed, higher resolution simulations, as do the corresponding scalings for the continuously driven case.
\section{Conclusion}\label{sec:6}
We have here outlined a new categorisation of reconnection regimes at a 3D null point. In place of the two previous types, namely, spine and fan reconnection, we suggest that three distinct generic modes of null point reconnection are likely to occur. The first two are caused by rotational motions, either of the fan or of the spine, leading to either {\it torsional spine reconnection} or {\it torsional fan reconnection}. These involve slippage of field lines in either the spine or the fan, which is quite different from classical 2D reconnection, but does involve a change of magnetic connection of plasma elements.
Even though pure spine or fan reconnection may occur in special situations (such as when $\nabla \cdot {\bf v}=0$ or there are high-order currents), it is much more likely in practice that a hybrid type of reconnection takes place that we refer to as {\it spine-fan reconnection}. This is the most common form of reconnection that we expect to see in three dimensions at a null point, since it is a natural response to a shearing of the null point. It is most similar of all the 3D reconnection regimes to classical 2D reconnection and involves transfer of magnetic flux across both the spine and the fan. It possesses a diffusion region in the form of a current sheet that is inclined to the fan and spine and has current localised in both the spine and fan, focussed at the null.
In future, much remains to be determined about these new regimes of reconnection that have been observed in numerical experiments. One is the shape and dimensions of the diffusion regions and their relation to the driving velocity and the magnetic diffusivity. Another key question is: what is the rate of reconnection at realistic plasma parameters, and is there a maximum value? Since the analytical theory is so hard in three dimensions, progress is likely to be inspired by future carefully designed numerical experiments.
\section{Acknowledgments}
We are grateful to Guillaume Aulanier, Klaus Galsgaard, and our colleagues in the St Andrews and Dundee MHD Groups for stimulating discussions, especially Gunnar Hornig and Clare Parnell, and to the EU SOLAIRE network and UK Particle Physics and Astronomy Research Council for financial support. ERP is also grateful to Jiong Qiu, Dana Longcope and Dave McKenzie for inspiring suggestions in Bozeman where this work was completed.
\section{Introduction}
The question of how the adsorption of foreign particles affects the
properties of materials, and of how to control this, is of central
importance in domains ranging from separation processes to nanotechnology.
This motivates the continuing investigation of the factors determining the
adsorption process and the search for the conditions most favorable for its
control (composition of the adsorbing fluid, adsorption geometry, etc.) \cite%
{Fundamentals_Adsorp}. The purpose of this communication is to propose a
novel method that allows such control, on the basis of Monte Carlo simulations of a
fluid--slit-pore equilibrium. To avoid having to consider specific
interactions, as in molecular adsorption, we choose to illustrate the basic
mechanisms with a simple model involving only hard-sphere and dipolar interactions.
The situation closest to this model is then the adsorption of
macroparticles. Another reason is the recent development of studies of
colloidal adsorption \cite{Bechinger}. Indeed, while numerous studies exist
on molecular adsorption (see e.g. \cite{Gubbins,LachetFuchs} and Refs.
\cite{Select_ads} for more recent work), one practical advantage of using
colloids is the possibility of tuning their effective interaction (e.g. by
adding polymeric depletants) and their coupling with external fields,
possibly in confined geometry \cite{VanBlaad}. The actual behavior may,
however, be complicated by the interplay of different effects (see for
example the role of static and hydrodynamic forces in microfluidic devices
\cite{Ajdari}). We thus propose here a method that allows a fine control of
the composition of the adsorbed fluid, while remaining very simple.
\section{Method}
We start with the simplest confinement geometry: an open slit pore with
parallel walls in equilibrium with a bulk fluid. It has been used in several
theoretical studies to determine the parameters affecting the behavior of
the confined fluid (see for example \cite%
{Gubbins,Sarkisov,Duda1,JCPAbd,JPCGub,Virgiliis,Duda2,Kim} and references
therein). Since we seek a method in which the pore geometry, the
interactions (among the particles, and between the particles and the confining
medium) as well as the bulk thermodynamic state are fixed, one alternative
is the coupling with an external field. This should always be possible since
besides particles having a permanent dipole such as magnetic colloids,
colloidal particles are always polarizable to some extent. We thus took a
uniform electric field $\bm{E}=E\ \bm{u_z}$ normal to the walls. As in \cite%
{JCPCharles} we considered a mixture in which one species bears a dipole
moment $\bm{\mu}$, taken permanent for simplicity. The field is then not
applied in the bulk. We thus have pure hard-spheres (species 1) and dipolar
hard-spheres (species 2), possibly with a non-additive diameter$\ \sigma
_{12}$ in the potential $u_{12}^{HS}$ (see also the discussion of figure 5).
Both species have hard-sphere interactions with the walls. This makes the
model more appropriate \cite{JPCGub} to a mixture of hard-sphere-like
colloidal particles, than to a molecular mixture (see however the final
remarks). The effect of an external field (and temperature) on the filling
of a cylindrical pore was also studied in \cite{Rasaiah2} (see also \cite%
{Bratko} for a slit pore), but not from a bulk mixture. Previous studies
considered the role of the pressure in one-component fluids (e.g. \cite%
{Sarkisov}), or the total density and the mole fractions \cite%
{Duda1,Duda2,Kim} in bulk mixtures but without field. As shown below, the
combination of both will play here a crucial role. An inhomogeneous
multicomponent mixture with anisotropic interactions being difficult to
study by analytical methods (we are aware of one study by density functional
theory of the adsorption from a mixture of polar molecules \cite{Kotdawala}%
), we used Monte Carlo simulation (see also, e.g., \cite%
{Rasaiah,Klapp,Rasaiah2,Weis,JCPCharles}). We already pointed out how the
structure can be modulated by the combination of various interactions \cite%
{JCPAbd,JPCGub} and by the action of the field \cite{JCPCharles}. However,
only the density profile of the particles through the pore could be
modulated in \cite{JCPCharles} since the total number of particles was kept
fixed (simulations in the canonical ensemble). An important difference here
is that we consider an open pore which exchanges particles with a reservoir.
One may then achieve much stronger variations of the density of each
component in the pore. The physical pore is assumed large enough that the
interfacial region in which it is in contact with the reservoir plays a
negligible role \cite{Panagiotop}. For this reason we will refer to the
fluid in the reservoir far from this region as the ``bulk''. The pore/fluid
equilibrium is determined by the equality of the chemical potentials $\mu
_{1}$ and $\mu _{2}$ of both species in the bulk and in the pore. But since
the practical control variables are the total density $\rho _{b}$ and the mole
fraction $x_{2}$ of the dipolar species in the bulk, the bulk fluid is studied
in the canonical ensemble. By considering only homogeneous states or
metastable states very close to the coexistence boundary, $\mu _{1}$ and $%
\mu _{2}$ in the bulk are determined with sufficient accuracy from Widom's
insertion method (see e.g. \cite{Neimark} for this point; a minimal sketch of this estimator is given at the end of this section). $\mu _{1}$ and $%
\mu _{2}$ are then used to study the fluid in the pore in the
grand-canonical ensemble. We can then compute the average density of each
species in the pore as a function of $\rho _{b}$ and $x_{2}$. Hereafter,
reduced variables $E^{\ast }=E(\sigma ^{3}/kT)^{1/2}$ and $\mu ^{\ast }=\mu
/(kT\sigma ^{3})^{1/2}$ will be used. The reduced density in the pore is $%
\rho =\bar{N}\sigma ^{3}/V$, with $\bar{N}$ the average number of particles
for a lateral surface $S$ with periodic conditions in the $x$ and $y$
directions and $V=S(H-\sigma )$ the accessible volume in the pore. We took a
pore width $H=3\sigma $. In the bulk, $N=N_{1}+N_{2}$ is fixed.
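As announced above, a minimal sketch of the Widom estimator for the bulk chemical potentials (illustrative only, not the actual simulation code) is:
\begin{verbatim}
# Hard-core contribution only: for hard cores exp(-beta dU) is 1 if a
# ghost particle fits and 0 otherwise; for the dipolar species the
# Boltzmann factor of the dipolar energy would multiply this indicator.
import numpy as np
rng = np.random.default_rng(0)

def widom_mu_ex(pos, sigma, box, trials=20000):
    """pos: (N,3) particle centres; sigma: ghost contact distance.
    Returns the excess chemical potential in units of kT (inf if no
    insertion succeeds; dense states need more trials)."""
    hits = 0
    for _ in range(trials):
        ghost = rng.uniform(0.0, box, 3)
        d = pos - ghost
        d -= box*np.rint(d/box)          # minimum-image convention
        if np.all(np.sum(d*d, axis=1) >= sigma**2):
            hits += 1
    return np.inf if hits == 0 else -np.log(hits/trials)
\end{verbatim}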
\section{Results}
We show in figure 1 the first basic mechanism: a field-induced filling of
the pore by a one-component dipolar fluid. At increasing field strength $%
E^{\ast }$, the pore is progressively filled by the dipoles at a rate that
depends on $\rho_{b}$. For the values of $E^{\ast}$ and $\mu ^{\ast}$
used here, the explanation seems to be that the field-dipole interaction energy $%
-\sum_{i}\bm{\mu_{i}}\cdot\bm{E}$ offsets the entropy loss due to their
orientation in the direction of the field. This is more visible at low $\rho
_{b}$ in which case the slope is nearly constant beyond $E^{\ast }=8$. For a
particle diameter of $1\mu m$ and $T=300K$, for example, this corresponds to
$E=49\ 10^{-3}V/\mu m$ and $\mu =2\ 10^{5}D$. Thanks to the scaling factor $%
\sigma ^{-3/2}$ in its definition, the same value of $\mu^{\ast }$ is also
appropriate for the dipolar interaction between \textit{molecular} species.
\begin{figure}[htbp]
\centering
\includegraphics[angle=270,totalheight=7cm]{fig1.eps}
\caption{Effect of the applied field on the filling of the pore by a
one-component dipolar fluid.\newline
$\rho $ is the total density in the pore and $E^*$ the field strength in
reduced units. The bulk density is $\rho_b=0.51$ (\emph{filled squares}) and $\rho_b=0.0102 $ (\emph{empty squares}). The lines are a guide to the eye.}
\label{f:pore}
\end{figure}
The dipole moment being then of the order of one Debye, a field strength of
the order $10V/nm$ is needed to obtain the same reduced energy $-\mu ^{\ast
}E^{\ast }$. Such field strengths are not unusual for confinement at the
molecular scale (for water in nanopores see for example \cite{Rasaiah2} and
\cite{Bratko}, in particular figure 1 in the latter). Note that the
equilibrium state in the presence of the field may not always be the filled one
at other parameters or if more complex interactions are considered (see
Ref.~\cite{Rasaiah2} for example), due to the competition between energetic and
entropic contributions.
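These magnitudes can be checked directly; a minimal sketch in Gaussian units (taking $\sigma\approx 3$~\AA\ as a typical molecular diameter, an assumption not stated above) is:
\begin{verbatim}
# 1 statvolt = 299.79 V; 1 D = 1e-18 statC cm.
import math
kT = 1.381e-16*300                    # erg at T = 300 K
s3 = (1e-4)**3                        # sigma = 1 micron, in cm^3
print(8*math.sqrt(kT/s3)*299.79/1e4)  # E for E*=8, in V/micron: ~0.049
print(math.sqrt(kT*s3)/1e-18)         # mu for mu*=1, in Debye:  ~2e5
s3 = (3e-8)**3                        # assumed molecular sigma ~ 3 Angstrom
print(math.sqrt(kT*s3)/1e-18)         # mu for mu*=1: ~1 D
print(8*math.sqrt(kT/s3)*299.79/1e7)  # E for E*=8, in V/nm: ~9
\end{verbatim}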
Figure 2 illustrates the second mechanism: the relative population of a pore
in equilibrium with a mixture having a natural tendency to demix. The
simplest one is the mixture of non-additive hard spheres \cite{amar} in
which the cross diameter is $\sigma_{ij}=1/2(\sigma_i+\sigma_j)(1+\delta).$
Previous studies \cite{Duda1,Duda2,Kim} have shown that when the pore is in
equilibrium with a mixture in which one species is in the minority (say $%
x_2=0.02$ for a non-additivity parameter $\delta =0.2$) a population
inversion occurs in the pore when the total bulk density is varied. This
occurs here for $\rho_b$ between $0.55$ and $0.56$ for a pure non-additive
HS mixture and between $0.54$ and $0.55$ for the hard-sphere dipole mixture.
\begin{figure}[htbp]
\begin{center}
\subfloat[ Mixture of symmetric non-additive hard-spheres. \emph{Filled
circles}: adsorption; \emph{Open circles} : desorption; the bulk
concentration of the adsorbing species is $x_2=0.02$. The non-additivity
parameter is $\delta=0.2$
] {\label{f:fig2a}\includegraphics[angle=270,origin=br,totalheight=7cm]{fig2a.eps}}
\subfloat[ Mixture of hard-spheres and dipolar hard-spheres with $\delta=0.2$%
, $\mu^*=1$ and $x_2=0.02$ ; \emph{filled circles:} adsorption of the
dipolar hard-spheres.]{\label{f:fig2b}\includegraphics[angle=270,origin=br,totalheight=7cm]{fig2b.eps}}
\caption{Population inversion in a pore in equilibrium with a bulk
mixture close to demixing. }
\label{f:2}
\end{center}
\end{figure}
Having the basic ingredients, we may now combine them to produce the desired
effect: by choosing the composition of the bulk fluid so as to be close to
the population inversion in the pore, we anticipate that the closer we are
to the threshold density the weaker will be the external field $E^*_{tr}$
required to trigger it. This is shown in figure 3. In the most favorable
case shown the actual value of $E_{tr}$ is about $3 \ 10^{-3} V/\mu m$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[angle=270,totalheight=7cm]{fig3.eps}
\caption{The curves show the dipoles mole fraction in the pore as a function of the
reduced applied field strength. The curves are for a bulk mole fraction $%
x_2=0.02$ and bulk densities (from right to left) $\rho_b=0.51, 0.53, 0.54 $%
. The inset shows the corresponding dipoles and hard-spheres density in the
pore for $\rho_b= 0.53 $. }
\label{f:fig3}
\end{center}
\end{figure}
A small variation of the applied field produces the inversion: the dipolar
particles are selectively adsorbed upon a weak change of the field about $E_{tr}$, the
converse being possible, perhaps with some hysteresis \cite{Duda2}. Near the
adsorption jump ($\rho _{b}=0.53$, $E^{\ast }=0.5$ for $\mu ^{\ast }=1$) a
slight change in temperature ($\delta T=25^{\circ}C$ in figure 4) produces a
detectable change in adsorption.
\begin{figure}[htbp]
\begin{center}
\includegraphics[angle=270,totalheight=7cm]{fig4.eps}
\caption{ Adsorption-field strength curves at different temperatures.\newline
Dipoles mean density in the pore as a function of the
applied field strength at $T=285K$, $300K$ and $325K$ from left to right.
The bulk mole fraction is $x_{2}=0.02$ and bulk densities
are $\rho_b=0.53$. Here $E^*(T)=E^*(300)(T/300)^{1/2}$. }
\label{f:fig4}
\end{center}
\end{figure}
This observation may be important for some
applications (since the reduced variables combine $E$ and $\mu $ with $T$,
while $E=0$ in the bulk, one has to rescale $E^{\ast }$ by the factor $%
T^{1/2}$ to compare two temperatures at a given value of $E$).
Finally, in order to check the effect of a different interaction $u_{12}$
between species 1 and 2, we also show in figure 5 the result for a Yukawa
repulsive potential.
\begin{figure}[htbp]
\begin{center}
\includegraphics[angle=270,totalheight=7cm]{fig5.eps}
\caption{Effect of the applied field on the filling of the pore by
different model fluids.\newline
$\rho $ is the density of the dipolar species, in the pore. \emph{Empty
squares} : one-component dipolar fluid (for $\rho _{b}=0.0102$ as in figure
1); \emph{open circles} : additive mixture of hard spheres and dipoles (with
$x_{2}=0.02$ and $\rho _{b}=0.51$ in the bulk). \emph{Diamonds} : same for a
Yukawa repulsion between the dipoles and the hard-spheres; \emph{filled
circles} : same for the non-additive hard-spheres - dipolar hard-spheres
mixture. The range of the Yukawa potential (with $\epsilon^{\ast} =8$) gives
the same contribution to the second virial coefficient as the non-additive
hard-spheres potential with $\delta =0.2$. \emph{Filled squares} :
one-component dipolar fluid with $\rho _{b}=0.51$ (as in figure 1). }
\label{f:fig5}
\end{center}
\end{figure}
We observe that the phenomenon is quite general. The
requirement for observing a sensitive field effect is that the self-coordination
should be more favored in the mixture. The poor miscibility can
be favored by a suitable chemical composition of the particles' surface layers \cite%
{Pusey,Hennequin}. Weak specific interaction with the
pore walls can be achieved similarly.
As our main goal was to demonstrate the phenomenon of field-activated
adsorption from an unstable mixture, we used a simple simulation strategy.
Accordingly, we did not conduct a detailed study of the behavior of the
confined fluid. For instance, phase equilibrium in the pore may take place
before the spontaneous condensation (adsorption jump) predicted in the grand
canonical simulation \cite{NeimarkPRE}. According to Duda et al.~\cite{Duda2},
the inversion line for non-additive hard spheres is close to the bulk fluid
coexistence line but the two phenomena are different. We actually observed
that the inversion corresponds to a bulk fluid close to demixing or slightly
in the two-phase region. Regardless of this, the essential point is that the
density in the liquid-like phase should be close to the value after the
adsorption jump, as in one-component systems \cite{NeimarkPRE}. The precise
relation between these observations and other phenomena such as capillary
condensation, wetting, hysteresis, etc. (see e.g. \cite%
{Wetting,NeimarkPRE,Kierlik}) will be discussed in future work.
\section{Conclusion}
In conclusion, these results show that the combination of two generic
mechanisms allows a quite sensitive control of the pore filling. Although
this method has been demonstrated for particles that are closest to the
optimum conditions (i.e. hard-sphere-like colloids), none of these conditions is exclusive,
and since the basic mechanisms (demixing instability and coupling with an
external field) are quite generic, this prediction should apply to a broader
class of systems (including molecular ones). In order to benefit from the
field effect, one species should be either polar (e.g. ferrocolloid in
magnetic fields) or much more polarizable than the other (the results given
here being relative to permanent dipoles). The solution should also not
contain free charges to avoid particle motion due to the action of the field
(electrohydrodynamic flows), not considered in this simple model. Polar
molecules being on the other hand rather common, one should consider in this
case also the role of specific interactions. We believe that further
experimental studies and simulations of this method are worthwhile given the
diversity of possible applications of this field-controlled composition of
the confined fluid, and hence of the flexible control of the physical properties
that depend on it. Just as an example,
one may consider modulating in this way the dielectric response of the
confined fluid for optical applications.
\section{Introduction}
Let $F$ be a non-Archimedean local field or a finite field.
Let $n$ be a natural number and $k$ be $1$ or $2$.
Consider $G:=\operatorname{GL}_{n+k}(F)$ and let $M:=\operatorname{GL}_n(F) \times \operatorname{GL}_k(F)<G$ be a maximal Levi subgroup. Let $U< G$ be the corresponding unipotent subgroup and let $P=MU$ be the corresponding parabolic subgroup.
Let $J:=J_M^G: \mathcal{M}(G) \to \mathcal{M}(M)$ be the Jacquet functor (i.e. the functor of coinvariants w.r.t. $U$).
We will fix the notations $F,n,G,M$ and $U$ throughout the paper.
In this paper we prove the following theorem.
\begin{introtheorem} \label{thm:PiRho}
Let $\pi$ be an irreducible representation of $G$ and $\rho$ be an irreducible representation of $M$. Then $$\dim \operatorname{Hom}_M(J(\pi),\rho)\leq 1.$$
\end{introtheorem}
As we will show in \S \ref{sec:ImpRes}, this theorem is equivalent to the following one.
\begin{introtheorem} \label{thm:Sc}
Let $G\times M$ act on $G/U$ by $(g,m)([g'])=[g g' m^{-1}].$ This action is well defined since $M$ normalizes $U$.
Consider the space of Schwartz measures $\mathcal{H}(G/U)$ (i.e. compactly supported measures which are locally constant w.r.t. the action of $G$) as a representation of $G\times M$.
Then this representation is multiplicity free, i.e. for any irreducible representation $\pi$ of $G\times M$ we have
$$\dim \operatorname{Hom}_{G \times M}(\mathcal{H}(G/U),\pi)\leq 1.$$
\end{introtheorem}
By Frobenius reciprocity, this theorem is in turn equivalent to the following one.
\begin{introtheorem} \label{thm:GP}
Consider $P$ to be diagonally embedded in $G \times M$. Then the pair $(G \times M,P)$ is a Gelfand pair i.e. for any irreducible representation $\pi$ of $G\times M$ we have
$$\dim \operatorname{Hom}_{P}(\pi, {\mathbb C})\leq 1.$$
\end{introtheorem}
Theorem \ref{thm:PiRho} implies also the following theorem.
\begin{introtheorem} \label{thm:GL}
Suppose $k=1$ and let $H=\operatorname{GL}_n(F)$ be standardly embedded inside $G$. Let $\pi$ be an irreducible representation of $G$ and $\rho$ be an irreducible representation of $H$.
Then $$\dim \operatorname{Hom}_H(J(\pi),\rho)\leq 1.$$
\end{introtheorem}
We will prove the implications mentioned above between theorems \ref{thm:PiRho}, \ref{thm:Sc}, \ref{thm:GP} and \ref{thm:GL} in \S \ref{sec:ImpRes}.
\subsection{A sketch of the proof} $ $
Using a version of the Gelfand-Kazhdan criterion we deduce Theorem \ref{thm:Sc} from the following one
\begin{introtheorem}
Any distribution on $(U^t \setminus G) \times (G/U)$ which is invariant with respect to the action of $G\times M$ given by $(g,m)([x],[y]):=([mxg^{-1}],[gym^{-1}])$ is also invariant with respect to the involution $([x],[y]) \mapsto ([y^t],[x^t])$.
\end{introtheorem}
By the method of Bernstein-Gelfand-Kazhdan-Zelevinsky (Theorem \ref{thm:BGKZ}) it is enough to prove that the involution preserves all $G \times M$ orbits. This we deduce from the following geometric statement.
\begin{introprop}
\label{intthm:Geo}
Let $X:=X_{n,k}:=\{(A,B)\in \operatorname{Mat}_{n+k}\times \operatorname{Mat}_{n+k} \mid AB=BA=0,\ \operatorname{rank}(A)=n,\ \operatorname{rank}(B)=k\}$. Let $G$ act on $X_{n,k}$ by simultaneous conjugation.
Define the transposition map $\theta:=\theta_{n,k}:X_{n,k} \to X_{n,k}$ by $\theta(A,B):=(A^t,B^t)$.
Then any $G$-orbit in $X_{n,k}$ is $\theta$-invariant.
\end{introprop}
We deduce this geometric statement from the key Lemma \ref{lem:Key}, which states that every $M$-orbit in $U^t \setminus \operatorname{GL}_k(F)/U$ is transposition invariant, where $M<\operatorname{GL}_k(F)$ is a Levi subgroup and $U$ is the corresponding unipotent subgroup. This lemma is a straightforward computation since $k\leq 2$, but for larger $k$ it fails.
\subsection{Related problems}
\subsubsection{Case $k=1$} In the case when $k=1$ and $F$ is a local field, a stronger theorem holds. Namely, the functor of restriction from $\operatorname{GL}_{n+1}(F)$ to $\operatorname{GL}_n(F)$ is multiplicity free. This is proven in \cite{AGRS} for $F$ of characteristic $0$ and in \cite{AAG} for $F$ of positive characteristic. It is also proven for Archimedean $F$ in \cite{AG_AMOT,SZ}.
This stronger statement does not hold for finite fields already for $n=1$.
Theorem \ref{thm:GL} may be viewed as a weaker form of this statement that works uniformly for local and finite fields.
Note that in the case when $k=1$ and $F$ is a finite field, there is an alternative proof of Theorem \ref{thm:GL} which is based on the classification of irreducible representations of $\operatorname{GL}_n(F)$, see \cite{Fad,Gre,Zel}.
\subsubsection{The Archimedean case} We believe that the analog of Theorem \ref{thm:PiRho} for Archimedean $F$ holds. For $k=1$ it holds as explained above. For $k=2$ we believe that the proof given in this paper can be adapted to the Archimedean case. However this will require additional analysis.
\subsubsection{Higher rank cases} One can ask whether an analog of Theorem \ref{thm:PiRho} holds when $M$ is an arbitrary Levi subgroup of $G$. If $F$ is a local field, we do not know the answer to this question.
If $F$ is a finite field, such an analog of Theorem \ref{thm:PiRho} holds only in the cases at hand. This is related to the fact that the restriction of any irreducible representation of
the permutation group $S_{n_1+...+n_l} $ to $S_{n_1} \times ... \times S_{n_l}$ is multiplicity free if and only if $l \leq 2$ and $\min(n_1,n_2) \leq 2$. We discuss those questions in \S \ref{sec:HiRank}.
\subsection{Contents of the paper}$ $
In \S \ref{sec:Prel} we give the necessary preliminaries.
In \S\S \ref{subsec:GenNot} we introduce notation that we will use throughout the paper.
In \S\S \ref{subsec:ell} we give some preliminaries and notation on $l$-spaces,
$l$-groups and their representations, based on \cite{BZ}. In \S\S
\ref{subsec:MultFreeFun} we define multiplicity free functors and formulate two theorems that enable one to reduce the ``multiplicity free'' property of a strongly right exact functor between the categories of smooth representations of two $l$-groups to
the ``multiplicity free'' property of a certain representation of the product of those groups. We prove those theorems in Appendix \ref{app:MultFree}. In \S\S \ref{subsec:GK} we formulate a version of the Gelfand-Kazhdan criterion for the ``multiplicity free'' property of representations of the form
${\mathcal S}(X)$.
We prove this version in Appendix \ref{app:GK}.
In \S\S \ref{subsec:BGKZ} we recall a criterion for vanishing of equivariant distributions in terms of stabilizers of points.
In \S\S \ref{subsec:DelFilt} we recall the Deligne (weight) filtration attached to a nilpotent operator on a vector space.
In \S \ref{sec:ImpRes} we prove equivalence of Theorems \ref{thm:PiRho}, \ref{thm:Sc} and \ref{thm:GP} and deduce Theorem \ref{thm:GL} from them.
In \S \ref{sec:RedGeo} we reduce Theorem \ref{thm:Sc} to the geometric statement.
In \S \ref{sec:PFGeoStat} we prove the geometric statement.
In \S \ref{sec:HiRank} we discuss whether an analog of Theorem \ref{thm:PiRho} holds when $M$ is an arbitrary Levi subgroup.
In \S\S \ref{subsec:Perm} we answer an analogous question for permutation groups. In \S\S \ref{subsec:Con} we discuss the connection between the questions for permutation groups and general linear groups over finite fields. In \S\S \ref{subsec:LocHighRank} we discuss the local field case.
In Appendix \ref{app:MultFree} we prove theorems on strongly right exact functors between the categories of smooth representations of two reductive groups from \S\S \ref{subsec:MultFreeFun}.
In Appendix \ref{app:GK} we prove a version of the Gelfand-Kazhdan criterion for the ``multiplicity free'' property of geometric representations from \S\S \ref{subsec:GK}.
\subsection{Acknowledgments}$ $
We thank {\bf Joseph Bernstein} for initiating this work by telling us the case $k=1$.
We also thank {\bf Joseph Bernstein}, {\bf Evgeny Goryachko} and {\bf Erez Lapid} for useful discussions.
This work was conceived while the authors were visiting
the Max Planck Institute
f\"ur Mathematik (MPIM) in Bonn. We wish to thank the MPIM for its hospitality.
D.G. also worked on this paper when he was a post-doc at the Weizmann Institute of Science. He wishes to thank the Weizmann Institute for wonderful working conditions during this post-doc and during his graduate studies.
Both authors were partially supported by a BSF grant, a GIF grant, and an ISF Center
of excellence grant. A.A. was also supported by ISF grant No. 583/09 and
D.G. by NSF grant DMS-0635607. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
\section{Preliminaries} \label{sec:Prel}
\subsection{General notation} \label{subsec:GenNot}
\begin{itemize}
\item For a group $H$ acting on a set $X$ and a point $x \in X$ we denote by $Hx$ or by $H(x)$ the orbit of $x$ and by $H_x$ the stabilizer of $x$.
We also denote by $X^H$ the set of fixed points in $X$.
\item For a representation $V$ of a group $H$ we denote by $V^H$ the space of invariants and by $V_H$ the space of coinvariants, i.e.
$V_H:=V/({\operatorname{Span}}\{v - gv\, | \, g\in H, \, v\in V\})$.
\item For a Lie algebra ${\mathfrak{g}}$ acting on a vector space $V$ we denote by $V^{{\mathfrak{g}}}$ the space of invariants. Similarly, for any element $X \in {\mathfrak{g}}$ we denote by $V^X$ the kernel of the action of $X$.
\item For a linear operator $A:V \to W$ we denote the cokernel of $A$ by $\operatorname{Coker} A:= W/\operatorname{Im} A$.
\item For a linear operator $A:V \to V$ and an $A$-invariant subspace $U \subset V$ we denote by $A|_U:U \to U$ and $A|_{V/U}:V/U \to V/U$ the natural induced operators.
\end{itemize}
\subsection{$l$-spaces and $l$-groups} \label{subsec:ell} $ $
We will use the standard terminology of $l$-spaces introduced in \cite{BZ}.
Let us recall it.
\begin{itemize}
\item An $l$-space is a Hausdorff locally compact totally disconnected topological space.
\item For an $l$-space $X$ we
denote by ${\mathcal S}(X)$ the space of Schwartz functions on $X$, i.e. locally constant, compactly supported functions on $X$. We denote by ${\mathcal S}^*(X)$ the dual
space and call its elements distributions.
\item In \cite{BZ} there was introduced the notion of an ``$l$-sheaf''. As it was later realized (see e.g. \cite[\S\S 1.3]{Ber_Lec}) this notion is equivalent to the usual notion of a sheaf on an $l$-space, so we will use the results of \cite{BZ} for sheaves.
\item For a sheaf $\mathcal{F}$ on an $l$-space $X$ we denote by ${\mathcal S}(X,\mathcal{F})$ the space of compactly supported sections of $\mathcal{F}$ and ${\mathcal S}^*(X,\mathcal{F})$ denote its dual space.
\item Note that ${\mathcal S}(X_1,\mathcal{F}_1) \otimes {\mathcal S}(X_2,\mathcal{F}_2) \cong {\mathcal S}(X_1 \times X_2,\mathcal{F}_1 \boxtimes \mathcal{F}_2)$ for any $l$-spaces $X_i$ and sheaves $\mathcal{F}_i$ on them.
\item An $l$-group is a topological group which has a basis of topology at $1$ consisting of open compact
subgroups. In fact, any topological group which is an $l$-space is an $l$-group.
\item Let an $l$-group $G$ act (continuously) on an $l$-space $X$. Let $a:G \times X \to X$ be the action map and $p:G \times X \to X$ be the projection. A $G$-equivariant sheaf on $X$ is a sheaf $\mathcal{F}$ on $X$ together with an isomorphism $a^{*} \mathcal{F} \to p^{*} \mathcal{F}$ satisfying the natural compatibility conditions.
\item For a representation $V$ of an $l$-group $H$ we denote by $V^{\infty}$ the space of smooth vectors, i.e. vectors whose stabilizers
are open.
\item We denote $\widetilde{V}:=(V^*)^{\infty}$.
\item For an $l$-group $H$ we denote by $\mathcal{H}(H)$ the convolution algebra of smooth (i.e. locally constant w.r.t. the action of $H$) compactly supported measures on $H$.
\item Similarly for a transitive $H$-space $X$ we denote by $\mathcal{H}(X)$ the space of smooth compactly supported measures on $X$.
\item For an $l$-group $H$ we denote by $\mathcal{M}(H)$ the category of smooth representations of $H$.
\item Recall that if an $l$-group $H$ acts (continuously) on an $l$-space $X$ and $\mathcal{F}$ is an $H$-equivariant sheaf on $X$ then ${\mathcal S}(X,\mathcal{F})$
is a smooth representation of $H$.
\end{itemize}
\begin{defn}
A representation $V$ of an $l$-group $H$ is called {\bf admissible} if one of the following equivalent conditions
holds.
\begin{enumerate}
\item For any open compact subgroup $K<H$ we have $\dim V^K < \infty$.
\item There exists an open compact subgroup $K<H$ such that $\dim V^K < \infty$.
\item For any open compact subgroup $K<H$, $V|_K = \bigoplus \limits _{\rho \in \operatorname{Irr} K} n_{\rho} \rho$, where $n_{\rho}$ are finite numbers and $\operatorname{Irr} K$ denotes the collection of isomorphism classes of irreducible representations of $K$.
\item The natural morphism $V \to \widetilde{\widetilde{V}}$ is an isomorphism.
\end{enumerate}
\end{defn}
\begin{thm}[Harish-Chandra]
Let $H$ be a reductive (not necessarily connected) group defined over $F$.
Then every smooth irreducible representation of $H(F)$ is admissible.
\end{thm}
\begin{defn}
Let $H$ be an $l$-group.
An $\mathcal{H}(H)$-module $M$ is called {\bf unital} if $\mathcal{H}(H)M=M$.
\end{defn}
\begin{thm}[Bernstein-Zelevinsky]\label{thm:RepUMod}
Let $H$ be an $l$-group. Then\\
(i) the natural functor between $\mathcal{M}(H)$ and the category of unital $\mathcal{H}(H)$-modules is an equivalence of
categories.\\
(ii) The category $\mathcal{M}(H)$ is abelian.
\end{thm}
\subsection{Multiplicity free functors} \label{subsec:MultFreeFun}
\begin{defn}
Let $H$ be an $l$-group. We call a representation $\pi \in \mathcal{M}(H)$ {\bf multiplicity free} if for any irreducible
admissible representation $\tau \in \mathcal{M}(H)$ we have $\dim_{{\mathbb C}} \operatorname{Hom}(\pi, \tau) \leq 1$.
Let $H'$ be an $l$-group. We call a functor $\mathcal{F}:\mathcal{M}(H) \to \mathcal{M}(H')$ a {\bf multiplicity free functor} if for any irreducible
admissible representation $\pi \in \mathcal{M}(H)$, the representation $\mathcal{F}(\pi)$ is multiplicity free.
\end{defn}
\begin{rem}
Note that if $H$ is not reductive then the
"multiplicity free" property might be rather weak since there might be too few admissible representations.
\end{rem}
\begin{thm}\label{thm:FunctorIsModule}
Let $H$ and $H'$ be $l$-groups.\\
Let $\mathcal{F}:\mathcal{M}(H) \to \mathcal{M}(H')$ be a ${\mathbb C}$-linear functor that commutes with arbitrary direct limits (or, equivalently, is right exact and commutes with arbitrary direct sums). Let $\Pi:=\mathcal{F}( \mathcal{H}(H))$.
Consider the action of $H$ on $\mathcal{H}(H)$ given by
$g\mu:=\mu*\delta_{g^{-1}}$. It defines an action of $H$ on $\Pi$ which commutes with the action of $H'$. In this way $\Pi$ becomes a representation of
$H \times H'$.
Then\\
(i) $\Pi$ is a smooth representation. \\
(ii) $\mathcal{F}$ is canonically isomorphic to the functor given by $\pi \mapsto
(\Pi \otimes \pi)_H$.
\end{thm}
This theorem is known.
For the sake of completeness we include its proof in Appendix \ref{subapp:PfFunMod}.
\begin{thm}\label{thm:Mult1FunctorIsMult1Module}
Let $H$ and $H'$ be $l$-groups.\\
Let $\mathcal{F}:\mathcal{M}(H) \to \mathcal{M}(H')$ be a ${\mathbb C}$-linear functor that commutes with arbitrary direct limits.
Then $\mathcal{F}$ is a multiplicity free functor if and only
if $\mathcal{F}(\mathcal{H}(H))$ is a multiplicity free representation of $H \times H'$.
\end{thm}
For proof see Appendix \ref{subapp:PfFreeFunMod}.
\subsection{Gelfand Kazhdan criterion for "multiplicity free" property of geometric representations} \label{subsec:GK}
\begin{thm}\label{thm:GK}
Let $H$ be an $l$-group.
Let $X$ and $Y$ be $H$-spaces and $\mathcal{F}$ and $\mathcal{G}$ be $H$-equivariant sheaves on $X$ and $Y$ respectively.
Let $\tau:X \to Y$ be a homeomorphism (not necessarily $H$-invariant).
Suppose that we are given an isomorphism $\tau_*\mathcal{F} \simeq \mathcal{G}$.
Define $T:X \times Y \to X \times Y$ by $T(x,y):=(\tau^{-1}(y),\tau(x))$.
It gives an involution $T$ on the space ${\mathcal S}^*(X \times Y,\mathcal{F} \boxtimes \mathcal{G})$.
Suppose that any $\xi \in {\mathcal S}^*(X \times Y,\mathcal{F} \boxtimes \mathcal{G})$ which is invariant
with respect to the diagonal action of $H$ is invariant with respect to $T$.
Then for any irreducible admissible representation $\pi \in \mathcal{M}(H)$ we have
$$ \dim \operatorname{Hom} ({\mathcal S}(X,\mathcal{F}), \pi) \cdot \dim \operatorname{Hom} ({\mathcal S}(Y,\mathcal{G})), \widetilde{\pi}) \leq 1.$$
\end{thm}
In the case when $X$ and $Y$ are transitive and correspond to each other in a certain way, this theorem is a classical theorem by Gelfand and Kazhdan (see \cite{GK}).
For the general case the proof is the same and we repeat it in Appendix \ref{app:GK}.
In fact, in this paper we could use the classical formulation of this theorem, but we believe that this theorem is useful in the general formulation.
\begin{defn}
Let $H$ be an $l$-group.
Let $\theta:H \to H$ be an involution.
Let $X$ be an $H$-space. \\
(i) Denote by $\theta(X)$ the $H$-space which coincides with $X$ as an
$l$-space but with the action of $H$ twisted by $\theta$. \\
(ii) Similarly, for a representation $\pi$ of $H$ we denote by
$\theta(\pi)$ the
representation $\pi \circ \theta$.\\
(iii) Let $\mathcal{F}$ be an $H$-equivariant sheaf on $X$. Let us define an equivariant sheaf $\theta(\mathcal{F})$ on $\theta(X)$. As a sheaf, $\theta(\mathcal{F})$ coincides with $\mathcal{F}$ and the equivariant structure is defined in the following way. Let $a:H \times X \to X$ denote the action map and $p_2:H \times X \to X$ denote the projection.
Let $\alpha: a^*(\mathcal{F}) \to p_2^*(\mathcal{F})$ denote the equivariant structure of $\mathcal{F}$.
We have to define an equivariant structure $\theta(\alpha): (\theta(a))^*(\theta(\mathcal{F})) \to p_2^*(\theta(\mathcal{F}))$, where $\theta(a):H \times
\theta(X) \to \theta(X)$ is the action map.
Note that $(\theta(a))^*(\theta(\mathcal{F})) \cong (\theta \times Id)^*(a^*(\mathcal{F}))$. Since $\theta \times Id$ is an involution,
it is enough to define a map between $a^*(\mathcal{F})$ and $(\theta \times Id)^*(p_2^*(\mathcal{F}))$. Let $\beta$ denote the canonical isomorphism
between $(\theta \times Id)^*(p_2^*(\mathcal{F}))$ and $(p_2 \circ
(\theta\times Id))^*(\mathcal{F}) = p_2^*(\mathcal{F})$. Now, the desired map is given by $\beta^{-1} \circ \alpha$.
\end{defn}
\begin{remark}
Clearly, ${\mathcal S}(\theta(X),\theta(\mathcal{F})) \cong \theta({\mathcal S}(X,\mathcal{F})).$
\end{remark}
\begin{notn}
Let $H:=\operatorname{GL}_{n_1} \times ... \times \operatorname{GL}_{n_k}$. We denote by $\kappa$ the Cartan involution $\kappa(g)
:=(g^t)^{-1}$.
\end{notn}
\begin{thm}[\cite{GK}]\label{thm:DualKappa}
Let $H:=\operatorname{GL}_{n_1} \times ... \times \operatorname{GL}_{n_k}$. Let $\pi$ be an irreducible smooth representation of $H(F)$. Then
$\widetilde{\pi} \simeq \kappa(\pi)$.
\end{thm}
\begin{cor}\label{cor:MultFree}
Let $H:=\operatorname{GL}_{n_1} \times ... \times \operatorname{GL}_{n_k}$. Let $X$ be an $H(F)$-space. Let $\mathcal{F}$ be an $H(F)$-equivariant sheaf on $X$.
Suppose that any $\xi \in {\mathcal S}^*(X \times \kappa(X), \mathcal{F} \boxtimes \kappa(\mathcal{F}))$ which is invariant with respect to the diagonal action of $H(F)$ is invariant with respect to the swap of the coordinates. Then the representation ${\mathcal S}(X, \mathcal{F})$ is multiplicity free.
\end{cor}
\subsection{Bernstein-Gelfand-Kazhdan-Zelevinsky criterion for vanishing of invariant distributions} \label{subsec:BGKZ}
\begin{thm}[Bernstein-Gelfand-Kazhdan-Zelevinsky] \label{thm:BGKZ}
Let an algebraic group $H$ act on an algebraic variety $X$, both defined over $F$. Let $H'$ be an open subgroup of $H(F)$.
Let $\mathcal{F}$ be an $H'$-equivariant sheaf over $X(F)$. Suppose that for any $x\in X(F)$
we have $$(\mathcal{F}_x \otimes \Delta_{H'}|_{H'_x} \otimes \Delta_{H'_x} ^{-1})^{H'_x}=0.$$
Then ${\mathcal S}(X, \mathcal{F} )^{H'}=0.$
\end{thm}
This theorem follows from \cite[\S 6]{BZ} and \cite[\S\S 1.5]{Ber}.
\begin{cor}\label{cor:BGKZ}
Let an algebraic group $H$ act on an algebraic variety $X$, both defined over $F$.
Let $\sigma:X \to X$ be an involution defined over $F$. Suppose that $\sigma$ normalizes the action of $H$ and that any $H(F)$-orbit in $X(F)$ is $\sigma$-invariant.
Then each $H(F)$-invariant distribution on $X(F)$ is invariant under $\sigma$.
\end{cor}
\subsection{Deligne filtration} \label{subsec:DelFilt}
\begin{thm}[Deligne]
Let $A$ be a nilpotent operator on a vector space $V$. Then there exists a unique finite decreasing filtration $V^{\geq i}$ such that
\\
(i) $A$ is of degree 2 w.r.t. this filtration.\\
(ii) $A^l$ gives an isomorphism $V^{\geq -l}/V^{\geq -l+1} \simeq V^{\geq l}/V^{\geq l+1}$.
\end{thm}
For the proof see \cite[Proposition 1.6.1]{Del}.
\begin{defn}
We will denote this filtration by ${\mathcal{D}}_A^{\geq i}(V)$ and call it the Deligne filtration.
\end{defn}
\begin{notn}
The filtration ${\mathcal{D}}_A^{\geq i}(V)$ induces filtrations on $\operatorname{Ker} A$ and $\operatorname{Coker} A$ in the following way
$${\mathcal{D}}_{A,+}^{\geq i}(\operatorname{Ker} A):= {\mathcal{D}}_A^{\geq i}(V) \cap \operatorname{Ker} A \quad \text{ and }
\quad {\mathcal{D}}_{A,-}^{\leq i}(\operatorname{Coker} A):= {\mathcal{D}}_A^{\geq -i}(V)/ (\Im A \cap {\mathcal{D}}_A^{\geq -i}(V)).$$
Denote by $\mu_A: \operatorname{Gr}^i_{A,-}(\operatorname{Coker} A) \to \operatorname{Gr}^i_{A,+}(\operatorname{Ker} A)$ the isomorphism given by $A^i$.
\end{notn}
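\begin{remark}
To illustrate these notions in the smallest nontrivial case, let $V=F^3$ and let $A$ be a single Jordan block, i.e.\ $Ae_1=0$, $Ae_2=e_1$, $Ae_3=e_2$. Then
$${\mathcal{D}}_A^{\geq i}(V)=\begin{cases} V, & i\leq -2,\\ \operatorname{span}(e_1,e_2), & -1\leq i\leq 0,\\ \operatorname{span}(e_1), & 1\leq i\leq 2,\\ 0, & i\geq 3.\end{cases}$$
Indeed, $A$ is of degree 2 w.r.t.\ this filtration and $A^2$ maps $V^{\geq -2}/V^{\geq -1}$ isomorphically onto $V^{\geq 2}/V^{\geq 3}$. Here $\operatorname{Ker} A=\operatorname{span}(e_1)$ is concentrated in degree $2$, $\operatorname{Coker} A$ is spanned by the image of $e_3$ in degree $-2$, and $\mu_A$ is the isomorphism induced by $A^2$, sending the class of $e_3$ to $e_1$.
\end{remark}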
\section{Implications between the main results} \label{sec:ImpRes}
\setcounter{lemma}{0}
In this section we prove that Theorems \ref{thm:PiRho}, \ref{thm:Sc} and \ref{thm:GP} are equivalent and imply Theorem \ref{thm:GL}.
\begin{proof}[Proof that Theorem \ref{thm:PiRho} $\Leftrightarrow$ Theorem \ref{thm:Sc}]
Note that $J_M^G(\mathcal{H}(G)) \cong \mathcal{H}(U\backslash G)$ where the action of $M$ is from the left and the action of $G$ is from the right. Clearly this representation of $G \times M$ is isomorphic to the representation $\mathcal{H}(G/U)$ that was described in Theorem \ref{thm:Sc}.
The equivalence follows now from Theorem \ref{thm:Mult1FunctorIsMult1Module}.
\end{proof}
\begin{proof}[Proof that Theorem \ref{thm:Sc} $\Leftrightarrow$ Theorem \ref{thm:GP}]
Note that $(G \times M)/P = G/U$. Hence $\mathcal{H}(G/U) = \mathcal{H}((G \times M)/P)$.
Now
\begin{multline*}
\operatorname{Hom}_{G \times M}(\mathcal{H}(G/U),\pi)=\operatorname{Hom}_{G \times M}(\mathcal{H}((G \times M)/P),\pi)=\operatorname{Hom}_{G \times M}(\widetilde{\pi},C^{\infty}((G \times M)/P))=\\=\operatorname{Hom}_{G \times M}(\widetilde{\pi},\operatorname{Ind}_P^{G \times M}({\mathbb C}))= \operatorname{Hom}_{P}(\widetilde{\pi},{\mathbb C}).
\end{multline*}
\end{proof}
\begin{proof}[Proof that Theorem \ref{thm:PiRho} implies Theorem \ref{thm:GL}]
Note that the center $Z(G)$ of $G$ lies in $M$, and that
$M \cong Z(G) \times H$. Now, let $\pi$ be an irreducible representation of $G$. Then $Z(G)$ acts on it by a character $\chi$. Let $\rho$ be an irreducible representation of $H$. Extend it to a representation of $M$ by letting $Z(G)$ act by $\chi$. Then
$ \operatorname{Hom}_H(J(\pi),\rho) = \operatorname{Hom}_M(J(\pi),\rho),$ which is at most one dimensional by Theorem \ref{thm:PiRho}.
\end{proof}
\section{Reduction to the geometric statement} \label{sec:RedGeo}
\setcounter{lemma}{0}
\begin{defn}
Let $X:=X_{n,k}:=\{(A,B)\in \operatorname{Mat}_{n+k} \times \operatorname{Mat}_{n+k}\, | \, AB=BA=0,\ \operatorname{rank}(A)=n,\ \operatorname{rank}(B)=k\}$. Let $G$ act on $X_{n,k}$ by simultaneous conjugation.
We define the transposition map $\theta:=\theta_{n,k}:X_{n,k} \to X_{n,k}$ by $\theta(A,B):=(A^t,B^t)$.
\end{defn}
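Note that $\theta$ is well defined: $A^tB^t=(BA)^t=0$, $B^tA^t=(AB)^t=0$ and transposition preserves ranks. Moreover, $\theta$ normalizes the action of $G$, since $\theta(gAg^{-1},gBg^{-1})=(hA^th^{-1},hB^th^{-1})$ for $h:=(g^t)^{-1}$.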
In this section we deduce Theorem \ref{thm:Sc} from the following geometric statement.
\begin{prop}[geometric statement]\label{thm:Geo}
Any $G$-orbit in $X_{n,k}$ is $\theta$-invariant.
\end{prop}
\begin{defn}$ $\\
(i) We denote by $E_{n,k}$ the $l$-space of exact sequences of the form
$$ 0 \to F^n \overset{\phi}{\to} F^{n+k} \overset{\psi}{\to} F^k \to 0.$$
We consider the natural action of $G\times M$ on $E_{n,k}$ given by $$(g,(h_1,h_2))(\phi,\psi):= (g\phi h_1^{-1},h_2\psi g^{-1}).$$
(ii) We denote by $\tau:E_{n,k} \to E_{k,n}$ the map given by $\tau(\phi,\psi):=(\psi^t,\phi^t).$ \\
(iii) We denote by $T:E_{n,k} \times E_{k,n} \to E_{n,k} \times E_{k,n}$ the map given by $T(e_1,e_2):=(\tau(e_2),\tau(e_1)).$
\end{defn}
The following lemma is straightforward.
\begin{lem} \label{lem:Geo}$ $\\
(i) $G/U \cong E_{n,k}$ as a $G\times M$-space.\\
(ii) The transposition map $\tau$ defines an isomorphism of $G\times M$-spaces $\tau:E_{n,k} \to \kappa(E_{k,n})$.
\end{lem}
\begin{notn}
Denote by $C_{n,k}:E_{n,k}\times E_{k,n} \to X_{n,k}$ the composition map given by
$$C_{n,k}((\phi_1,\psi_1),(\phi_2,\psi_2)):=(\phi_1 \circ \psi_2,\phi_2 \circ \psi_1).$$
\end{notn}
The following lemma is straightforward.
\begin{lem}
$ $\\
(i) $C_{n,k}$ defines a bijection between $G\times M$-orbits on $E_{n,k}\times E_{k,n}$ and $G$-orbits on $X_{n,k}$.\\
(ii) $C_{n,k} \circ T = \theta \circ C_{n,k}$.
\end{lem}
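Indeed, part (ii) is a one-line computation: for $e_1=(\phi_1,\psi_1)$ and $e_2=(\phi_2,\psi_2)$,
$$C_{n,k}(T(e_1,e_2))=C_{n,k}\big((\psi_2^t,\phi_2^t),(\psi_1^t,\phi_1^t)\big)=(\psi_2^t \circ \phi_1^t,\psi_1^t \circ \phi_2^t)=\big((\phi_1 \circ \psi_2)^t,(\phi_2 \circ \psi_1)^t\big)=\theta(C_{n,k}(e_1,e_2)).$$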
\begin{cor}
The geometric statement implies that all $G\times M$-orbits on $E_{n,k}\times E_{k,n}$ are $T$-invariant.
\end{cor}
\begin{cor}
The geometric statement implies Theorem \ref{thm:Sc}.
\end{cor}
This corollary follows from the previous corollary, Lemma \ref{lem:Geo}, Corollary \ref{cor:BGKZ} and Corollary \ref{cor:MultFree}.
\section{Proof of the geometric statement (Proposition \ref{thm:Geo})} \label{sec:PFGeoStat}
\setcounter{lemma}{0}
The proof is by induction on $n$. From now on we assume that the geometric statement holds for all dimensions
smaller than $n$.
\begin{rem}
The proof that will be given here is valid for any field $F$.
\end{rem}
We will use the following lemma.
\begin{lemma}[Key Lemma]\label{lem:Key}
Let $G':=\operatorname{GL}_k$. Let $P_+'$ be a parabolic subgroup of $G'$ and let $P_-'$ be the opposite parabolic. Let $P''$ be the subgroup of $P_+'\times P_-'$ consisting of pairs with the same
Levi part. Consider the two-sided action of $P_+'\times P_-'$ on $G'$
(given by $(p_1,p_2)g:=p_1gp_2^{-1}$) and its restriction to $P''$.
Then any $P''$-orbit on $G'$ is invariant under transposition.
\end{lemma}
Since $k \leq 2$, this lemma is a straightforward computation.
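Let us indicate the main case. If $k=2$ and $P_+'$ is the Borel subgroup of upper triangular matrices, write $g=\begin{pmatrix} a & b\\ c & d\end{pmatrix}\in G'$. For $bc\neq 0$ the pair $(\operatorname{diag}(t,s),\operatorname{diag}(t,s))\in P''$ with $t/s=c/b$ maps $g$ to
$$\begin{pmatrix} a & (t/s)b\\ (s/t)c & d\end{pmatrix}=\begin{pmatrix} a & c\\ b & d\end{pmatrix}=g^t;$$
the degenerate cases $b=0$ or $c=0$ (in which $ad \neq 0$) are treated similarly using the unipotent radicals of $P_+'$ and $P_-'$. For $P_+'=G'$ the action is by conjugation and the claim reduces to the fact that every matrix is conjugate to its transpose.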
\begin{remark}
The analogous statement for $k \geq 3$ is not true. In fact, this lemma is the only place where we use the assumption $k \leq 2$.
\end{remark}
\begin{notn}
Denote $X':=X'_{n,k}:=\{(A,B) \in X\, | \, A \text{ is nilpotent}\}$.
\end{notn}
We will show that the geometric statement follows from the following proposition.
\begin{prop}\label{prop:GeoNilp}
Any $G$-orbit in $X'_{n,k}$ is $\theta$-invariant.
\end{prop}
\begin{proof}[Proof that Proposition \ref{prop:GeoNilp} implies Theorem \ref{thm:Geo}]
Let $(A,B) \in X -X'.$ We have to show that there exists $g \in G$ such that $gAg^{-1}=A^t$ and $gBg^{-1}=B^t$.
Decompose $F^{n+k}:=V \oplus W$ such that $A=A' \oplus A''$ where $A'$ is a nilpotent operator on $V$ and $A''$
is an invertible operator on $W$. Since $AB=BA=0$, we have $B = B' \oplus 0$, where $B'$ is an operator on $V$ and $0$ denotes the
zero operator on $W$. Without loss of generality we may assume that $V$ and $W$ are coordinate spaces.
By the induction assumption, there exists $g_1 \in \operatorname{GL}(V)$ such that $g_1A'g_1^{-1}=A'^t$ and $g_1B'g_1^{-1}=B'^t$.
It is well known that there exists $g_2 \in \operatorname{GL}(W)$ such that $g_2A''g_2^{-1}=A''^t$. Take $g:=g_1\oplus g_2$.
\end{proof}
\begin{notn}
Let $A$ be a nilpotent operator on a vector space $V$. Let $\nu_A:\operatorname{GL}(V)_A \to \operatorname{GL}(\operatorname{Ker} A)\times \operatorname{GL}(\operatorname{Coker} A)$ denote the map defined by
$\nu_A(g):=(g|_{\operatorname{Ker} A}, g|_{\operatorname{Coker} A})$. Denote also
\begin{multline*}
{\mathcal P}_A:= \{(g,h) \in \operatorname{GL}(\operatorname{Ker} A)\times \operatorname{GL}(\operatorname{Coker} A) \, | \, g \text{ preserves } \mathcal{D}_{A,+},\ h \text{ preserves } \mathcal{D}_{A,-}
\text{ and }\\ \operatorname{Gr}_{\mathcal{D}_{A,+}}(g) \text{ corresponds to } \operatorname{Gr}_{\mathcal{D}_{A,-}}(h) \text{ under the identification } \mu_A \}.
\end{multline*}
\end{notn}
\begin{lem} \label{lem:nuAPA}
Let $A$ be a nilpotent operator on a vector space $V$. Then $\Im(\nu_A) = {\mathcal P}_A$.
\end{lem}
\begin{proof}
Clearly $\Im(\nu_A) \subset {\mathcal P}_A$.
Let $\mathfrak{p}$ denote the Lie algebra of ${\mathcal P}_A$. It is enough to show that the map $d\nu_A:{\mathfrak{gl}}(V)_A \to \mathfrak{p}$ is onto.
Let $V=\bigoplus V_i$ be the decomposition of $V$ into Jordan blocks w.r.t.\ the action of $A$.
We have
\begin{align}
& {\mathfrak{gl}}(V)_A = (V^*\otimes V)^A = \bigoplus_{i,j} (V_i^* \otimes V_j)^A\\
& {\mathfrak{gl}}(\operatorname{Ker} A)= (V^A)^* \otimes V^A = \bigoplus_{i,j} (V_i^A)^* \otimes V_j^A\\
& {\mathfrak{gl}}(\operatorname{Coker} A)= (V/AV)^* \otimes (V/AV) = \bigoplus_{i,j} (V_i/AV_i)^* \otimes (V_j/AV_j)
\end{align}
The filtration $\mathcal{D}_{A,+}$ on $\operatorname{Ker} A$ gives a natural filtration on ${\mathfrak{gl}}(\operatorname{Ker} A)$. It is easy to see that the 1-dimensional space $(V_i^A)^* \otimes V_j^A$ is of degree $\dim V_j - \dim V_i$ w.r.t.\ this filtration. Similarly, $(V_i/AV_i)^* \otimes (V_j/AV_j)$ is of degree $\dim V_i - \dim V_j$.
Hence $\mathfrak{p} = \bigoplus \mathfrak{p}_{ij}$, where
$$\mathfrak{p}_{ij}=
\begin{cases} (V_i^A)^* \otimes V_j^A, & \dim V_j > \dim V_i,\\
(V_i/AV_i)^* \otimes (V_j/AV_j), & \dim V_j < \dim V_i,\\
\{(X,Y) \in (V_i^A)^* \otimes V_j^A \oplus (V_i/AV_i)^* \otimes (V_j/AV_j) \, | \, X \text{ corresponds to } Y & \\
\quad \text{under the identification given by } A^{\dim V_i -1} \}, & \dim V_j = \dim V_i. \\
\end{cases}$$
This decomposition gives a decomposition $d\nu_A = \bigoplus \nu_{ij}$, where $\nu_{ij}:(V_i^* \otimes V_j)^A \to \mathfrak{p}_{ij}$.
It is enough to show that $\nu_{ij}$ is surjective for any $i$ and $j$.
Choose gradations on $V_i$ and $V_j$ which are compatible with the Deligne filtration. Let $L_{ij} \subset (V_i^* \otimes V_j)^A$ be the 1-dimensional subspace of vectors of weight $|\dim V_j - \dim V_i|$ w.r.t.\ these gradations.
It is easy to see that $\nu_{ij}|_{L_{ij}}$ is surjective.
\end{proof}
The following lemma is a reformulation of the Key Lemma.
\begin{lem} \label{lem:DualKey}
Let $V$ and $W$ be linear spaces of dimension $k$. Suppose that we are given a non-degenerate pairing between $V$ and $W$. Let
$\mathcal{F}$ be a descending filtration on $V$ and $\mathcal{G}$ be the dual, ascending, filtration on $W$.
Suppose that we are given an isomorphism of graded linear spaces $\mu : Gr_{\mathcal{F}}(V) \to Gr_{\mathcal{G}}(W)$.
Let
\begin{multline*}
{\mathcal P}:= \{(g,h) \in \operatorname{GL}(V)\times \operatorname{GL}(W) \, | \, g \text{ preserves } \mathcal{F},\ h \text{ preserves } \mathcal{G}
\text{ and }\\ \operatorname{Gr}_{\mathcal{F}}(g) \text{ corresponds to } \operatorname{Gr}_{\mathcal{G}}(h) \text{ under the identification } \mu \}.
\end{multline*}
Let ${\mathcal P}$ act on $\operatorname{Hom}(V,W)$ by $(g,h)(\phi):= h \circ \phi \circ g^{-1}$. Note that the pairing between $V$ and $W$ defines a notion of
transposition on $\operatorname{Hom}(V,W)$.
Then any ${\mathcal P}$-orbit on $\operatorname{Hom}(V,W)$ is invariant under transposition.
\end{lem}
\begin{proof}
[Proof of Proposition \ref{prop:GeoNilp}]
Let $(A,B) \in X'$. We have to show that there exists $g \in G$ such that $gAg^{-1}=A^t$ and $gBg^{-1}=B^t$.
Fix a non-degenerate symmetric bilinear form $Q$ on $F^{n+k}$ such that $A^t_Q=A$, where $A^t_Q$ denotes the transpose with respect to the form $Q$. It is enough
to show that there exists $g \in G_A$ such that $gBg^{-1} = B_Q^t$. Note that $\operatorname{Ker} A =\Im B$ and $\operatorname{Ker} B =\Im A$. Denote by
$B':\operatorname{Coker} A \to \operatorname{Ker} A$ the map induced by $B$. Consider the natural action of $\operatorname{GL}(\operatorname{Coker} A) \times \operatorname{GL}(\operatorname{Ker} A)$ on
$\operatorname{Hom}(\operatorname{Coker} A, \operatorname{Ker} A)$.
Note that $\operatorname{Ker} B_Q^t = \Im A$ and $\operatorname{Ker} A = \Im B_Q^t$ and hence $B_Q^t$ also induces a map $\operatorname{Coker} A \to \operatorname{Ker} A$. Denote this map
by $B''$. Note that $B''$ is the transposition of the map $B'$ with respect to the non-degenerate pairing between $\operatorname{Coker} A$ and $\operatorname{Ker} A$ given by $Q$.
The assertion follows now from Lemma \ref{lem:nuAPA} and Lemma \ref{lem:DualKey}.
\end{proof}
\section{Discussion of the higher rank cases} \label{sec:HiRank}
\setcounter{lemma}{0}
In this section we discuss whether an analog of Theorem \ref{thm:PiRho} holds when $M$ is an arbitrary Levi subgroup.
If $F$ is a finite field, a negative answer to this question can be obtained from a negative answer to an analogous question for permutation groups. We discuss permutation groups in \S\S \ref{subsec:Perm} and the connection between the two questions in \S\S \ref{subsec:Con}.
The answer we obtain is that
such an analog of Theorem \ref{thm:PiRho} holds only in the cases at hand.
We discuss the case when $F$ is a local field in \S\S \ref{subsec:LocHighRank}, but we do not reach a conclusion.
Since the results here are negative and mostly known, the
discussion is rather informal and some details are omitted.
\subsection{The analogous problems for the permutation groups} \label{subsec:Perm}$ $
Let $M'=S_{n_1} \times ... \times S_{n_l}$ and $G':=S_{n_1+...+n_l} $. One can ask when $(G',M')$ is a strong Gelfand pair, i.e. when the restriction functor from $G'$ to $M'$ is multiplicity free. The answer is: $(G',M')$ is a strong Gelfand pair if and only if $l \leq 2$ and $\min(n_1,n_2) \leq 2$.
This is well known,
but
let us indicate the proof.
The fact that the pairs $(S_{n+1},S_{n})$ and $(S_{n+2},S_{n}\times S_2)$ are strong Gelfand pairs follows by Theorems \ref{thm:Mult1FunctorIsMult1Module} and \ref{thm:GK} from the fact that every permutation from $G'$ is conjugate by $M'$ to its inverse.
In order to show that other pairs mentioned above are not strong Gelfand pairs,
we have to show that the algebra of $Ad(M')$-invariant functions on $G'$ with respect to convolution is not commutative unless $l \leq 2$ and $\min(n_1,n_2) \leq 2$.
If $l \geq 3$ then consider the transpositions $\sigma_1=(1,n_1+1)$ and $\sigma_2=(n_1+1,n_1+n_2+1)$. The characteristic functions of their $M'$-conjugacy classes $C_1,C_2$ do not commute: $(1_{C_1} \ast 1_{C_2})(\sigma_1\sigma_2) \geq 1$, whereas $(1_{C_2} \ast 1_{C_1})(\sigma_1\sigma_2)=0$, since any product $ba$ with $b\in C_2$, $a \in C_1$ which is a $3$-cycle maps its point in the first block into the third block, while $\sigma_1\sigma_2=(1,n_1+1,n_1+n_2+1)$ maps $1$ into the second block.
If $l = 2$ and $n_1,n_2 \geq 3$ then consider the cyclic permutations $\sigma_1=(1,2,3,n_1+1,n_1+2,n_1+3)$ and $\sigma_2=(1,n_1+1,n_1+2)$. It is easy to see that the characteristic functions of their $M'$-conjugacy classes do not commute.
\subsection{Connection with our problem for the finite fields} \label{subsec:Con}$ $
Suppose that $F$ is a finite field.
Let $M=\operatorname{GL}_{n_1}(F) \times ... \times \operatorname{GL}_{n_l}(F)$ and $G:=\operatorname{GL}_{n_1+...+n_l}(F)$. Then the multiplicities problem of the Jacquet functor between $\mathcal{M}(G)$ and $\mathcal{M}(M)$ can be considered as a generalization of a deformation of the multiplicities problem of the restriction functor from $\mathcal{M}(G')$ to $\mathcal{M}(M')$.
Indeed, the multiplicities problem of the Jacquet functor is equivalent to the multiplicities problem of the parabolic induction from $\mathcal{M}(M)$ to $\mathcal{M}(G)$. Let $\Sigma:=i_{T_M}^M({\mathbb C})$, where $i_{T_M}^M$ denotes the parabolic induction from the torus of $M$ to $M$.
Let $\Pi:=i_{T_G}^G({\mathbb C})$. Let ${\mathcal A}$ be the subcategory of
$\mathcal{M}(M)$
generated by $\Sigma$ and ${\mathcal B}$ be the subcategory of
$\mathcal{M}(G)$
generated by $\Pi$. Then the multiplicities problem of the parabolic induction from ${\mathcal A}$ to ${\mathcal B}$ is a special case of the multiplicities problem of the parabolic induction from $\mathcal{M}(M)$ to $\mathcal{M}(G)$.
Let $A:=End_{M}(\Sigma)$ and $B:=End_{G}(\Pi)$. Clearly, ${\mathcal A}$ is equivalent to the category of $A$-modules and ${\mathcal B}$ is equivalent to the category of $B$-modules. It is well known that $A$ and $B$ are deformations of the group algebras of
$M'$ and $G'$
respectively.
Therefore the multiplicities problem of the parabolic induction from ${\mathcal A}$ to ${\mathcal B}$ is a deformation of the multiplicities problem of the induction from $M'$ to $G'$, which in turn is equivalent to the multiplicities problem of the restriction from $G'$ to $M'$.
In fact, one can show that those deformations are trivializable since those algebras are semisimple.
One can use this argument in order to show that $(G',M')$ is a strong Gelfand pair only if $l \leq 2$ and $\min(n_1,n_2) \leq 2$.
\subsection{Higher rank cases over local fields} \label{subsec:LocHighRank}$ $
First note that the reduction of Theorem \ref{thm:Sc} to the Key Lemma works without change for arbitrary $k$.
This reduction connects the Gelfand-Kazhdan criterion for the "multiplicity free" property of the Jacquet functor from $\operatorname{GL}_{n+k}(F)$ to $\operatorname{GL}_{n}(F) \times \operatorname{GL}_{k}(F)$ with the Gelfand-Kazhdan criterion for the "multiplicity free" property of the Jacquet functor from $\operatorname{GL}_{k}(F)$ to an arbitrary Levi subgroup. Therefore we believe that the "multiplicity free" properties themselves are connected, and that anyone who wants to consider the case of arbitrary $k$ will also have to consider arbitrary Levi subgroups. At the moment we do not have an opinion on when the Jacquet functor from $\operatorname{GL}_n(F)$ to an arbitrary Levi subgroup is multiplicity free.
\section{Introduction}
A classical question in probability theory is the following.
Suppose the ordinary resp.\ stochastic exponential $M=\exp(X)$
resp.\ $\scr E(X)$\footnote{The \emph{stochastic exponential} $\scr E(X)$ of a semimartingale $X$ is the unique solution of the linear SDE $d\scr E(X)_t=\scr E(X)_{t-}dX_t$ with $\scr E(X)_0=1$, cf., e.g., \cite[I.4.61]{js.87} for more details.} of some process $X$ is a
positive \emph{local martingale} and hence a supermartingale. Then
under what (if any) additional assumptions is it in fact a
\emph{true martingale}?
This seemingly technical question is of considerable interest in
diverse applications, for example, absolute continuity of
distributions of stochastic processes (cf., e.g.,
\cite{cheridito.al.05} and the references therein), absence of
arbitrage in financial models (see, e.g.,
\cite{delbaen.schachermayer.95c}) or verification of optimality in
stochastic control (cf., e.g., \cite{elkaroui.81}).
In a general semimartingale setting it has been shown in
\cite{foellmer.72} that any supermartingale $M$ is a martingale if
and only if it is non-explosive under the associated \emph{F\"ollmer
measure} (also cf.\ \cite{yoerp.76}). However, this general result
is hard to apply in concrete models, since it is expressed in purely
probabilistic terms. Consequently, there has been extensive research
focused on exploiting the link between martingales and non-explosion
in various more specific settings, see, e.g., \cite{wong.heyde.04}.
In particular, \emph{deterministic} necessary and sufficient
conditions for the martingale property of $M$ have been obtained if
$X$ is a one-dimensional diffusion (cf., e.g.,
\cite{delbaen.shirakawa.02, blei.engelbert.09} and the references
therein; also compare \cite{mijatovic.urusov.10}).
For processes with jumps, the literature is more limited and mostly
focused on sufficient criteria as in \cite{lepingle.memin.78,
kallsen.shiryaev.00b, protter.shimbo.08, kallsen.muhlekarbe.08b}. By
the independence of increments and the L\'evy-Khintchine formula, no
extra assumptions are needed for $M$ to be a true martingale if $X$
is a L\'evy process. For the more general class of \emph{affine
processes} characterized in \cite{duffie.al.03} the situation
becomes more involved. While no additional conditions are needed for
continuous affine processes, this no longer remains true in the
presence of jumps (cf.\ \cite[Example 3.11]{kallsen.muhlekarbe.08b}).
In this situation a necessary and sufficient condition for
one-factor models has been established in \cite[Theorem
2.5]{kellerressel.09}, whereas easy-to-check sufficient conditions
for the general case are provided by \cite[Theorem
3.1]{kallsen.muhlekarbe.08b}.
In the present study, we complement these results by sharpening
\cite[Theorem 3.1]{kallsen.muhlekarbe.08b} in order to provide
deterministic necessary and sufficient conditions for the martingale
property of $M=\scr E(X^i)$ resp.\ $\exp(X^i)$ in the case where $X^i$
is one component of a general non-explosive affine process $X$. As
in \cite{kellerressel.09,kallsen.muhlekarbe.08b} these conditions
are expressed in terms of the admissible \emph{parameters} which
characterize the distribution of $X$ (cf.\ \cite{duffie.al.03}).
Since we also use the linkage to non-explosion, we first complete
the characterization of \emph{conservative}, i.e.\ non-explosive,
affine processes from \cite[Section 9]{duffie.al.03}. Generalizing
the arguments from \cite{kallsen.muhlekarbe.08b}, we then establish
that $M$ is a true martingale if and only if it is a local
martingale and a related affine process is conservative. Combined
with the characterization of local martingales in terms of
semimartingale characteristics \cite[Lemma 3.1]{kallsen.03} this
then yields necessary and sufficient conditions for the martingale
property of $M$.
The article is organized as follows. In Section \ref{se: prelim}, we recall terminology and results on affine Markov processes from \cite{duffie.al.03}. Afterwards, we characterize conservative affine processes. Subsequently, in Section \ref{sec: exp}, this characterization is used to provide necessary and sufficient conditions for the martingale property of exponentially affine processes. Appendix \ref{sec: ODEs} develops ODE comparison results in a general non-Lipschitz setting that are used to establish the results in Section \ref{sec: cons}.
\section{Affine processes}\label{se: prelim}
For stochastic background and terminology, we refer to
\cite{js.87,revuz.yor.99}. We work in the setup of
\cite{duffie.al.03}, that is we consider a time-homogeneous Markov
process with state space $D:=\mathbb R _+^m \times \mathbb R^n$, where $m, n \geq
0$ and $d=m+n \geq 1$. We write $p_t(x,d\xi)$ for its transition
function and let $(X,\mathbb{P}_x)_{x \in D}$ denote its realization
on the canonical filtered space $(\Omega,\scr{F}^0,(\scr{F}^0_t)_{t
\in \mathbb R _+})$ of paths $\omega: \mathbb R _+ \to D_{\Delta}$ (the
one-point compactification of $D$). For every $x \in D$,
$\mathbb{P}_x$ is a probability measure on $(\Omega,\scr F^0)$ such that
$\mathbb{P}_x(X_0=x)=1$ and the Markov property holds, i.e.\
\begin{eqnarray*}
\mathbb{E}_x(f(X_{t+s})|\scr F^0_s)&=&\int_{D} f(\xi)p_{t}(X_{s},d\xi)\\
&=&\mathbb{E}_{X_s}(f(X_t)), \quad \mathbb{P}_x \textrm{--a.s.}\
\quad \forall t,s \in \mathbb R _+,
\end{eqnarray*}
for all bounded Borel-measurable functions $f: D \to \mathbb{C}$.
The Markov process $(X,\mathbb{P}_x)_{x \in D}$ is called
\emph{conservative} if $p_t(x,D)=1$, \emph{stochastically
continuous} if we have $p_s(x,\cdot) \to p_t(x,\cdot)$ weakly on
$D$, for $s \to t$, for every $(t,x) \in \mathbb R _+ \times D$, and
\emph{affine} if, for every $(t,u) \in \mathbb R _+ \times i\mathbb R^d$, the
characteristic function of $p_t(x,\cdot)$ is of the form
\begin{equation}\label{e:affine}
\int_D e^{\langle u,\xi
\rangle}p_t(x,d\xi)=\exp\left(\psi_0(t,u)+\langle \psi(t,u), x
\rangle\right), \quad \forall x \in D,
\end{equation}
for some $\psi_0(t,u) \in \mathbb{C}$ and
$\psi(t,u)=(\psi_1(t,u),\ldots,\psi_d(t,u)) \in \mathbb{C}^d$.
Note that $\psi(t,u)$ is uniquely specified by \eqref{e:affine}. But $\mathrm{Im}(\psi_0(t,u))$ is only determined up to multiples of $2\pi$. As usual in the literature, we enforce uniqueness by requiring the continuity of $u \mapsto \psi_0(t,u)$ as well as $\psi_0(t,0)=\log(p_t(0,D)) \in (-\infty,0]$ (cf., e.g., \cite[\S 26]{bauer.02}).
For every stochastically continuous affine process, the mappings
$(t,u) \mapsto \psi_0(t,u)$ and $(t,u) \mapsto \psi(t,u)$ can be
characterized in terms of the following quantities:
\begin{definition}\label{definition par}
Denote by $h=(h_1,\ldots,h_d)$ the truncation function on $\mathbb R^d$
defined by
$$h_k(\xi):=\begin{cases} 0, &\mbox{if } \xi_k=0, \\ (1 \wedge |\xi_k|)\frac{\xi_k}{|\xi_k|}, &\mbox{otherwise.} \end{cases} $$
Parameters $(\alpha,\beta,\gamma,\kappa)$ are called
\emph{admissible}, if
\begin{itemize}
\item $\alpha=(\alpha_0,\alpha_1,\ldots,\alpha_d)$ with symmetric positive semi-definite $d \times d$-matrices $\alpha_j$ such that $\alpha_j=0$ for $j \geq m+1$ and $\alpha_j^{kl}=0$ for $0 \leq j \leq m$, $1 \leq k,l \leq m$ unless $k=l=j$;
\item $\kappa=(\kappa_0,\kappa_1,\ldots,\kappa_d)$ where $\kappa_j$ is a Borel measure on $D \backslash \{0\}$ such that $\kappa_j=0$ for $j \geq m+1$ as well as $\int_{D \backslash \{0\}} ||h(\xi)||^2 \kappa_j(d\xi)<\infty$ for $0 \leq j \leq m$ and
$$\int_{D \backslash \{0\}} \vert h_k(\xi) \vert\kappa_j(d\xi)<\infty, \quad 0 \leq j \leq m, \quad 1 \leq k \leq m, \quad k \neq j;$$
\item $\beta=(\beta_0,\beta_1,\ldots,\beta_d)$ with $\beta_j \in \mathbb R^d$ such that $\beta_j^k=0$ for $j \geq m+1$, $1 \leq k \leq m$ and
$$ \beta_j^k -\int_{D \backslash \{0\}} h_k(\xi)\kappa_j(d\xi) \geq 0, \quad 0 \leq j \leq m, \quad 1 \leq k \leq m, \quad k \neq j.$$
\item $\gamma=(\gamma_0,\gamma_1,\ldots,\gamma_d)$, where $\gamma_j \in \mathbb R _+$ and $\gamma_j=0$ for $j=m+1,\dots,d$.
\end{itemize}
\end{definition}
Affine Markov processes and admissible parameters are related as
follows (cf.\ \cite[Theorem 2.7]{duffie.al.03} and \cite[Theorem
5.1]{kellerressel.al.09}):
\begin{theorem}\label{t:2.7}
Let $(X,\mathbb{P}_x)_{x \in D}$ be a stochastically continuous
affine process. Then there exist admissible parameters
$(\alpha,\beta,\gamma,\kappa)$ such that $\psi_0(t,u)$ and
$\psi(t,u)$ are given as solutions to the \emph{generalized Riccati
equations}
\begin{align}\label{e:riccati}
\partial_t \psi(t,u)&=R(\psi(t,u)), \qquad\;\,\psi(0,u)=u,\\
\partial_t\psi_0(t,u)&=R_0(\psi(t,u)),\quad\psi_0(0,u)=0,\label{e:riccati2}
\end{align}
where $R=(R_1,\dots, R_d)$ and for $0 \leq i \leq d$,
\begin{equation}\label{e:R}
R_i(u):=\frac{1}{2}\langle \alpha_i u, u\rangle+\langle \beta_i, u
\rangle-\gamma_i+\int_{D \backslash \{0\}} \left(e^{\langle
u,\xi\rangle} -1-\langle u, h(\xi) \rangle\right)\kappa_i(d\xi).
\end{equation}
Conversely, for any set $(\alpha,\beta,\gamma,\kappa)$ of admissible
parameters there exists a unique stochastically continuous affine
process such that \eqref{e:affine} holds for all $(t,u) \in \mathbb R _+
\times i\mathbb R^d$, where $\psi_0$ and $\psi$ are given by
\eqref{e:riccati2} and \eqref{e:riccati}.
\end{theorem}
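For instance, for $m=1$, $n=0$ and admissible parameters given by $\alpha_1=\sigma^2$, $\beta_0=b \geq 0$, $\beta_1=\lambda \neq 0$, all remaining parameters being zero (the Cox-Ingersoll-Ross model), \eqref{e:riccati} reduces to the classical one-dimensional Riccati equation $\partial_t \psi_1(t,u)=\frac{\sigma^2}{2}\psi_1(t,u)^2+\lambda \psi_1(t,u)$ with $\psi_1(0,u)=u$, which is solved explicitly by
$$\psi_1(t,u)=\frac{ue^{\lambda t}}{1-\frac{\sigma^2}{2\lambda}u(e^{\lambda t}-1)}, \qquad \psi_0(t,u)=b\int_0^t \psi_1(s,u)\,ds.$$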
Since any stochastically continuous affine process
$(X,\mathbb{P}_x)_{x \in D}$ is a Feller process (cf.\ \cite[Theorem
2.7]{duffie.al.03}), it admits a c\`adl\`ag modification and hence
can be realized on the space of c\`adl\`ag paths $\omega: \mathbb R _+ \to
D_{\Delta}$. If $(X,\mathbb{P}_x)_{x \in D}$ is also conservative it
turns out to be a semimartingale in the usual sense and hence can be
realized on the \emph{Skorokhod space}
$(\mathbb{D}^d,\scr{D}^d,(\scr{D}^d_t)_{t \in \mathbb R _+})$ of $D$- rather
than $D_{\Delta}$-valued c\`adl\`ag paths. Here, $\scr{D}^d_t=\bigcap_{s>t}\scr{D}^{0,d}_s$ for the filtration $(\scr{D}^{0,d}_t)_{t \in \mathbb{R}_+}$ generated by $X$. The semimartingale characteristics of $(X,\mathbb{P}_x)_{x \in D}$ are then given in terms of the admissible parameters:
\begin{theorem}\label{t:2.12}
Let $(X,\mathbb{P}_x)_{x \in D}$ be a conservative, stochastically
continuous affine process and let $(\alpha,\beta,\gamma,\kappa)$ be the
related admissible parameters. Then $\gamma=0$ and for any $x \in
D$, $X=(X^1,\ldots,X^d)$ is a semimartingale on
$(\mathbb{D}^d,\scr{D}^d,(\scr{D}^d_t)_{t \in \mathbb R _+},\mathbb{P}_x)$
with characteristics $(B,C,\nu)$ given by
\begin{eqnarray}
B_t &=& \int_0^t \left(\beta_0+\sum_{j=1}^d \beta_j X^j_{s-}\right) ds,\label{e:b}\\
C_t &=& \int_0^t \left(\alpha_0+\sum_{j=1}^d \alpha_j X^j_{s-}\right) ds,\label{e:c}\\
\nu(dt,d\xi) &=& \left(\kappa_0(d\xi)+\sum_{j=1}^d X^j_{t-}
\kappa_j(d\xi)\right) dt,\label{e:nu}
\end{eqnarray}
relative to the truncation function $h$. Conversely, let $X'$ be a
$D$-valued semimartingale defined on some filtered probability
space $(\Omega',\scr{F}',(\scr F'_t),\mathbb{P}')$. If
$\mathbb{P}'(X'_0=x)=1$ and $X'$ admits characteristics of the form
\eqref{e:b}-\eqref{e:nu} with $X_{-}$ replaced by $X_{-}'$, then
$\mathbb{P}' \circ X'^{-1}=\mathbb{P}_x$.
\end{theorem}
\begin{proof} $\gamma=0$ is shown in \cite[Proposition 9.1]{duffie.al.03}; the remaining assertions follow from \cite[Theorem 2.12]{duffie.al.03}.
\end{proof}
\section{Conservative affine processes}\label{sec: cons}
In view of Theorem \ref{t:2.12}, the powerful toolbox of
semimartingale calculus is made available for affine processes, provided that the Markov
process $(X,\mathbb P_x)_{x\in D}$ is
conservative. Hence, it is desirable to characterize this property
in terms of the parameters of $X$. This is done in the present
section. The main result is Theorem \ref{th: char can cons},
which completes the discussion of conservativeness in \cite[Section
9]{duffie.al.03}.
To prove this statement, we proceed as follows. First, we recall some properties of the generalized Riccati equations \eqref{e:riccati}, \eqref{e:riccati2} established by Duffie et al.\ \cite{duffie.al.03}. In the crucial next step, we use the comparison results developed in the appendix to show that whereas the characteristic exponent $\psi$ of the affine process $X$ is not the \emph{unique} solution to these equations in general, it is necessarily the \emph{minimal} one among all such solutions. Using this observation, we can then show that conservativeness of the process $X$ is indeed \emph{equivalent} to uniqueness for the specific initial value zero. Note that \emph{sufficiency} of this uniqueness property was already observed in \cite[Proposition 9.1]{duffie.al.03}; here we show that this condition is also \emph{necessary}.
Let us first introduce some definitions and notation. The partial
order on $\mathbb R^m$ induced by the natural cone $\mathbb R_+^m$
is denoted by $\preceqq$. That is, $x\preceqq 0$ if and only if
$x_i\leq 0$ for $i=1,\dots,m$. A function $g: D_g\rightarrow\mathbb
R^m$ is \emph{quasimonotone increasing} on $D_g\subset \mathbb R^m$
(\emph{qmi} in short; for a general definition see Section \ref{sec:
ODEs}) if and only if for all $x,y\in D_g$ and $i=1,\dots,m$ the
following implication holds true:
\[
(x\preceqq y,\quad x_i=y_i)\;\Rightarrow \; g_i(x)\leq g_i(y).
\]
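For instance, every map whose $i$-th component is of the form $g_i(x)=c_i(x_i)+\sum_{j \neq i} b_{ij}(x_j)$ with arbitrary $c_i$ and nondecreasing $b_{ij}$ is qmi. The example relevant in the sequel is $R_{\mathcal I}(\cdot,0)$, which is qmi by Lemma \ref{lem: properties affine on canonical state} below.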
In the sequel we write $\mathbb R_{--}:=(-\infty,0)$ and $\mathbb
C_{--}:=\{c\in \mathbb C\,\mid\,\Re(c)\in \mathbb R_{--}\}$.
Moreover, we introduce the index set $\mathcal I:=\{1,\dots, m\}$
and, accordingly, define by $u_{\mathcal I}=(u_1,\dots, u_m)$ the
projection of the $d$--dimensional vector $u$ onto the first $m$
coordinates. Similarly, $R_{\mathcal I}$ denotes
the first $m$ components of $R$, i.e.\ $R_{\mathcal I}=(R_1,\dots,R_m)$
and $R_{\mathcal I}(u_{\mathcal I},0):=(R_1(u_1,\dots,u_m,0,\dots,0), \dots,
R_m(u_1,\dots,u_m,0,\dots,0))$. Finally, $\psi_{\mathcal{I}}$ and $\psi_{\mathcal{I}}(t,(u_{\mathcal{I}},0))$ are defined analogously.
For this section the uniqueness of solutions to eqs.
\eqref{e:riccati}--\eqref{e:riccati2} is essential. It is addressed
in the following remark. For more detailed information, we refer to
\cite[Sections 5 and 6]{duffie.al.03}.
\begin{remark}\rm
\begin{enumerate}\label{rem: uniquenss}
\item \label{uniquenss issue 1} Due to the admissibility conditions
on the jump parameters $\kappa$, the domains of $R_0$ and $R$ can be extended from $i\mathbb
R^d$ to $\mathbb C_{-}^m\times i\mathbb R^n$. Moreover, $R_0, R$ are analytic functions on $\mathbb C_{--}^m\times
i\mathbb R^n$, and admit a unique continuous extension to $\mathbb C_{-}^m\times
i\mathbb R^n$.
\item In general, $R$ is not locally Lipschitz on $i\mathbb R^d$,
but only continuous (see \cite[Example 9.3]{duffie.al.03}). This
lack of regularity prohibits to provide well-defined
$\psi_0,\psi$ by simply solving
\eqref{e:riccati}--\eqref{e:riccati2}, because unique
solutions do not always exist, again cf.\ \cite[Example 9.3]{duffie.al.03}. Hence another
approach to construct unique characteristic exponents $\psi_0,\psi$
is required. Duffie et al.\ \cite{duffie.al.03} tackle this problem by
first proving the existence of unique global solutions
$\psi_0^\circ,\psi^\circ$ on $\mathbb C_{--}^m\times i\mathbb R^n$,
where uniqueness is guaranteed by the analyticity of $R$, see
\ref{uniquenss issue 1}. Their unique continuous extensions to the
closure $\mathbb C_{-}^m\times i\mathbb R^n$ are also differentiable
and solve \eqref{e:riccati}--\eqref{e:riccati2} for $u\in i\mathbb
R^d$. Moreover, they satisfy \eqref{e:affine}. Henceforth, $\psi_0,\psi$ denote these unique extensions.
\end{enumerate}
\end{remark}
\begin{lemma}\label{lem: properties affine on canonical state}
The affine transform formula \eqref{e:affine} also holds for
$u=(u_{\mathcal I},0)\in \mathbb R_-^d$ with characteristic
exponents $\psi_0(t,(u_{\mathcal I},0)): \,\mathbb R_+\times \mathbb
R_-^m\rightarrow\mathbb R_-$ and $\psi_{\mathcal I}(t,(u_{\mathcal
I},0)): \,\mathbb R_+\times \mathbb R_-^m\rightarrow\mathbb R_-^m$
satisfying
\begin{align}\label{eq phi}
\partial_t{\psi}_0(t,(u_{\mathcal I},0))&=R_0((\psi_{\mathcal I}(t,(u_{\mathcal I},0)),0)),\qquad
\psi_0(0,(u_{\mathcal I},0))=0,\\\label{eq psi}
\partial_t {\psi_{\mathcal I}}(t,(u_{\mathcal I},0))&=R_{\mathcal I}((\psi_{\mathcal I}(t,(u_{\mathcal I},0)),0)),\qquad
\psi_{\mathcal I}(0,(u_{\mathcal I},0))=u_{\mathcal I}.
\end{align}
Furthermore we have:
\begin{itemize}
\item $R_0, R_{\mathcal I}$ are continuous functions on $\mathbb
R_{-}^m$ such that $R_0(0)\leq 0$, $R_{\mathcal I}(0)\preceqq 0$
\item $R_{\mathcal I}((u_{\mathcal I},0))$ is locally Lipschitz continuous on
$\mathbb R_{--}^m$ and qmi on $\mathbb R_-^m$, \item $\psi_{\mathcal
I}(t,(u_{\mathcal I},0))$ restricts to
an $\mathbb{R}^m_{--}$-valued unique
global solution $\psi_{\mathcal I}^{\circ}(t,(u_{\mathcal I},0))$
of \eqref{eq psi} on $\mathbb R_+\times \mathbb R_{--}^m$.
\end{itemize}
\end{lemma}
\begin{proof}
By \cite{kellerressel.al.09}, any {\it stochastically continuous}
affine process is {\it regular} in the sense of
\cite{duffie.al.03}. Hence, the first statement is a consequence of
\cite[Proposition 6.4]{duffie.al.03}. The regularity of $R_0$ and
$R_{\mathcal I}$ follows from \cite[Lemma 5.3 (i) and
(ii)]{duffie.al.03}. Equation \eqref{e:R} shows $R_0(0)\leq 0$ and
$R_{\mathcal I}(0)\preceqq 0$. The mapping $v\mapsto R_{\mathcal
I}((v,0))$ is qmi on $\mathbb R_-^m$ by \cite[Lemma 4.6]{PhDMKR},
whereas the last assertion is stated in \cite[Proposition
6.1]{duffie.al.03}.
\end{proof}
In the following crucial step we establish the minimality of
$\psi_{\mathcal I}(t,(u_{\mathcal I},0))$ among all solutions of
\eqref{eq psi} with respect to the partial order $\preceqq$.
\begin{proposition}\label{prop: extremality}
Let $T>0$ and $u_{\mathcal{I}} \in \mathbb{R}_{-}^m$. If $g(t):
[0,T)\rightarrow \mathbb R_{-}^m$ is a solution of
\begin{equation}\label{e:eberhard1}
\partial_t g(t)=R_{\mathcal I}(g(t),0),
\end{equation}
subject to $g(0)=u_{\mathcal I}$, then $g(t)\succeqq \psi_{\mathcal
I}(t,(u_{\mathcal I},0))$, for all $t<T$.
\end{proposition}
\begin{proof}
The properties of $R_{\mathcal I}$ established in Lemma \ref{lem:
properties affine on canonical state} allow this conclusion by an application
of Corollary \ref{th1}. For an application of the latter, we make
the obvious choices $f=R_{\mathcal I}$, $D_f=\mathbb R_-^m$. Then we
know that for $u^\circ_{\mathcal I}\in \mathbb R_{--}^m$ with $u^\circ_{\mathcal I}\preceqq u_{\mathcal I}$ we have
$g(t)\succeqq \psi_{\mathcal I}^\circ(t,(u^\circ_{\mathcal I},0))$, for all
$t<T$. Now letting $u^\circ_{\mathcal I}\rightarrow u_{\mathcal I}$
and using the continuity of $\psi_{\mathcal I}$ as asserted in Lemma
\ref{lem: properties affine on canonical state} yields the
assertion.
\end{proof}
We now state the main result of this section, which is a full
characterization of conservative affine processes in terms of a
uniqueness criterium imposed on solutions of the corresponding
generalized Riccati equations. It is
motivated by a partial result of this kind provided in
\cite[Proposition 9.1]{duffie.al.03}, which gives a necessary
condition for conservativeness, as well as a sufficient one. Here, we show that their sufficient condition, which (modulo the assumption
$R(0)=0$) equals \ref{char1 point 3} below, is in fact also necessary for
conservativeness.
The proof is based on the comparison results for multivariate initial value problems developed in Appendix
\ref{sec: ODEs}.
\begin{theorem}\label{th: char can cons}
The following statements are equivalent:
\begin{enumerate}
\item \label{char1 point 1} $(X, \mathbb P_x)_{x\in D}$ is conservative,
\item \label{char1 point 3} $R_0(0)=0$ and there exists no non-trivial $ \mathbb R_-^m$-valued local solution $g(t)$ of \eqref{e:eberhard1} with $g(0)=0$.
\end{enumerate}
Moreover, each of these statements implies that $R(0)=0$.
\end{theorem}
\begin{proof}
\ref{char1 point 1}$\Rightarrow$\ref{char1 point 3}: By definition,
$X$ is conservative if and only if, for all $t\geq 0$ and $x\in D$, we
have
\[
1=p_t(x, D)=e^{\psi_0(t,0)+\langle
\psi(t,0),x\rangle}=e^{\psi_0(t,0)+\langle \psi_{\mathcal
I}(t,0),x_{\mathcal I}\rangle},
\]
because $\psi_i(t,(u_{\mathcal I},0))=0$, for $i=m+1,\dots,d$. By first putting $x=0$ and then using the arbitrariness of $x$, it follows that this is equivalent to
\begin{equation}\label{eq: ptD}
\psi_0(t,0)=0\textrm{ and }\psi_{\mathcal
I}(t,0)=0, \quad \forall t \geq 0.
\end{equation}
Let $g$ be a (local) solution of \eqref{e:eberhard1} on some
interval $[0,T)$, satisfying $g(0)=0$ and with values in $\mathbb
R_{-}^m$. By Proposition \ref{prop: extremality}, $\psi_{\mathcal I}(t,0)\preceqq
g(t)$, $0\leq t< T$. In view of \ref{char1 point 1} and
eq.~\eqref{eq: ptD}, the left side of the inequality is equal to
zero. Since $g$ takes values in $\mathbb R_-^m$, this yields $g\equiv 0$. Now by Lemma \ref{lem: properties
affine on canonical state} and \ref{char1 point 1} (see \eqref{eq:
ptD})
\[
0=\psi_0(t,0)=\int_0^t R_0((\psi_{\mathcal I}(s,0),0))\,ds=-\gamma_0
t,\quad t\in [0,T),
\]
which implies $\gamma_0=0$ and hence \ref{char1 point
3}.
\ref{char1 point 3} $\Rightarrow$\ref{char1 point 1}: By Lemma
\ref{lem: properties affine on canonical state}, $g:=\psi_{\mathcal
I}(\cdot,0)$ is a solution of \eqref{e:eberhard1} with $g(0)=0$ and
values in $\mathbb{R}^m_{-}$. Assumption \ref{char1 point 3} implies
$\psi_{\mathcal I}(\cdot,0)\equiv 0$. Now $R_0(0)=-\gamma_0=0$ as
well as $\psi_0(0,0)=0$ and \eqref{eq phi} yield $\psi_0(\cdot,
0)\equiv 0$. Hence \eqref{eq: ptD} holds and \ref{char1 point 1}
follows.
Finally, we show that either \ref{char1 point 1} or \ref{char1 point
3} implies $(\gamma_1,\dots,\gamma_m)=0$. Note that by
Definition \ref{definition par} we have
$\gamma_{m+1}=\dots=\gamma_d=0$. From \eqref{eq psi} for $u_{\mathcal
I}=0$ and from \eqref{eq: ptD} it follows that $0=R_j(0)\cdot t$ and
hence $R_j(0)=-\gamma_j=0$ for all $1\leq j\leq m$.
\end{proof}
\begin{remark}\label{consremark}\rm
\begin{enumerate}
\item By Definition \ref{definition par}, the conditions $R_0(0)=0$ and $R(0)=0$ are together equivalent to $\gamma=0$. This means that the infinitesimal generator of the associated Markovian semi-group has zero potential, see \cite[Equation (2.12)]{duffie.al.03}. If an affine process with $\gamma=0$ fails to be conservative, then it must have state-dependent jumps.
\item \label{consremark for ODE result} The comparison results established in Appendix \ref{sec: ODEs} are the major tool for proving Proposition
\ref{prop: extremality}. They are quite general and therefore allow
for a similar characterization of conservativeness of affine
processes on geometrically more involved state-spaces (as long as
they are proper closed convex cones). In particular, such a
characterization can be derived for affine processes on the cone of
symmetric positive semidefinite matrices of arbitrary dimension, see
also \cite[Remark 2.5]{CFMT}.
\item \label{expremark2cons} Conservativeness of $(X,\mathbb{P}_x)_{x \in D}$ and uniqueness for solutions of the ODE \eqref{e:eberhard1} can be ensured by requiring
\begin{equation}\label{e:sufficient}
\int_{D \backslash \{0\}} \left(|\xi_k| \wedge |\xi_k|^2\right)
\kappa_j(d\xi)<\infty, \quad 1 \leq k, j \leq m,
\end{equation}
as in \cite[Lemma 9.2]{duffie.al.03}, which implies that
$R_{\mathcal I}(\cdot,0)$ is locally Lipschitz continuous on
$\mathbb R^m_{-}$.
\item \label{expremark3cons} If $m=1$, conservativeness corresponds to uniqueness of a one-dimensional ODE and can be characterized more explicitly: \cite[Corollary 2.9]{duffie.al.03}, \cite[Theorem 4.11]{filipovic.01} and Theorem \ref{th: char can cons} yield that $(X,\mathbb{P}_x)_{x \in D}$ is conservative if and only if either \eqref{e:sufficient} holds or
\begin{equation}\label{eq osgood}
\int_{0-} \frac{1}{R_{1}(u_1,0)}du_1=-\infty,
\end{equation}
where $\int_{0-}$ denotes an integral over an arbitrarily small left
neighborhood of $0$.
\end{enumerate}
\end{remark}
The sufficient condition \eqref{e:sufficient} from \cite[Lemma 9.2]{duffie.al.03} is easy to check in applications, since it can be read off directly from the parameters of $X$. However, the following example shows that it is not necessarily satisfied for conservative affine processes.
This example is somewhat artificial and constructed so that the moment condition \eqref{e:sufficient} fails but the well-known Osgood condition \eqref{eq osgood} does not. While it is possible to extend the example in several directions (infinite activity, stable-like tails instead of discrete support, multivariate processes, etc.), we chose to present the simplest version in order to highlight the idea.
\begin{example}\label{crucial cons example}\rm
Define the measure
\[
\mu:= \sum_{n=1}^\infty \frac{\delta_n}{n^2},
\]
where $\delta_n$ is the Dirac measure supported by the one-point set
$\{n\}$. Then we have
\[
\beta_1:=\int_0^\infty h(\xi)\,d\mu(\xi)=\sum_{n=1}^\infty
\frac{1}{n^2}<\infty.
\]
Therefore the parameters $(\alpha,\beta,\gamma,\kappa)$ defined by
\[
\alpha=(0,0),\quad\beta=(0,\beta_1),
\quad\gamma=(0,0),\quad\kappa=(0,\mu)
\]
are admissible in the sense of Definition \ref{definition
par}.
Denote by $(X,\mathbb
P_x)_{x\in \mathbb R_+}$ the corresponding affine process provided
by Theorem \ref{t:2.7}. Then
\[
\int_0^\infty (\vert \xi \vert\wedge\vert
\xi\vert^2)\,d\mu(\xi)=\int_1^\infty \xi\,d\mu(\xi) =
\sum_{n=1}^\infty \frac{1}{n} = \infty,
\]
which violates the sufficient condition \eqref{e:sufficient} for
conservativeness. However, we now show that the necessary and sufficient condition \ref{char1 point 3}
of Theorem \ref{th: char can cons} is fulfilled, which in
turn ensures the conservativeness of $(X,\mathbb P_x)_{x\in \mathbb
R_+}$. By construction, $R_0(u)=0$ and \begin{equation}\label{1}
R(u)=R_1(u) = \int_1^\infty (e^{u\xi}-1)\,d\mu(\xi) =
\sum_{n=1}^\infty \frac{e^{un}-1}{n^2}.
\end{equation}
Clearly, $R(u)$ is smooth on $(-\infty,0)$, and differentiation of
the series on the right-hand side of~(\ref{1}) yields
\begin{align}
&R'(u)=\sum_{n=1}^\infty\frac{e^{un}}{n} \label{2} \\
&R''(u)=\sum_{n=1}^\infty e^{un} = \frac{e^{u}}{1-e^{u}}.\label{3}
\end{align}
By \eqref{3}, we have $R'(u)= -\ln(1-e^{u})+C$ and further by
\eqref{2}, $R'(u)$ tends to zero as $u\to-\infty$ and, therefore,
$C=0$. We thus obtain
\begin{equation}\label{4}
R'(u)= -\ln(1-e^{u}).
\end{equation}
Since $1-e^{u}=-u+O(u^2)$, we have $1-e^{u}\geq -u/2$ for $u\leq 0$
small enough. Hence,
\[
0 \leq R'(u)\leq -\ln \left(-\frac{u}{2}\right)
\]
for $u\leq 0$ small enough. As $R(0)=0$ by \eqref{1}, it follows
that
\begin{equation}\label{R est cons}
0 \geq R(u)=-\int_u^0 R'(u')\,du'\geq \int_u^0 \ln
\left(\frac{-u'}{2}\right)\,du' = -u\ln
\left(\frac{-u}{2}\right)+u\geq -2u\ln \left(\frac{-u}{2}\right)
\end{equation}
for $u\leq 0$ small enough. As $\int_{0+}\frac{ds}{s\ln(s/2)}=\big[\ln(-\ln(s/2))\big]_{0+}=-\infty$, the estimate \eqref{R est cons} implies
\begin{equation*}\label{int inf} \int_{-1}^{0-}\frac{du}{R(u)}=-\infty;
\end{equation*}
hence $(X,\mathbb P_x)_{x\in \mathbb R_+}$ is conservative by Remark
\ref{consremark} \ref{expremark3cons}.
\end{example}
\section{Exponentially affine martingales}\label{sec: exp}
We now turn to the characterization of exponentially affine
martingales. Henceforth, let $(X,\mathbb{P}_x)_{x\in D}$ be the
canonical realization on $(\mathbb{D}^d,\scr{D}^d,(\scr{D}^d_t)_{t
\in \mathbb R _+})$ of a conservative, stochastically continuous affine
process with corresponding admissible parameters
$(\alpha,\beta,0,\kappa)$.
We proceed as follows. First, we characterize the \emph{local} martingale property and the positivity of stochastic exponentials. Since these are ``local'' properties, they can be read directly from the parameters of the process. Afterwards, we consider the \emph{true} martingale property of $\scr E(X^i)$. Using Girsanov's theorem, we first establish that it is \emph{necessary} that a related affine process is conservative. Afterwards, we adapt the arguments from \cite{kallsen.muhlekarbe.08b} to show that this is also a \emph{sufficient} condition. Combined with the results of Section 3, this then characterizes the true martingale property of $\scr E(X^i)$ in terms of uniqueness of the solution of a system of generalized Riccati equations. Finally, we adapt our Example \ref{crucial cons example} to construct an exponentially affine local martingale $\scr E(X^i)$ for which the sufficient condition of \cite{kallsen.muhlekarbe.08b} fails, but uniqueness of the Riccati equations and hence the true martingale property of $\scr E(X^i)$ is assured by the Osgood condition \eqref{eq osgood}.
We begin with the local properties. Our first lemma shows that it can be read directly from the corresponding parameters whether $\scr E(X^i)$ is a local martingale.
\begin{lemma}\label{l:mloc}
Let $i \in \{1,\ldots,d\}$. Then $\scr E(X^i)$ is a local
$\mathbb{P}_x$-martingale for all $x \in D$ if and only if
\begin{equation}\label{e:integrable}
\int_{\{|\xi_i|>1\}}|\xi_i|\kappa_j(d\xi)<\infty, \quad 0 \leq j
\leq d,
\end{equation}
and
\begin{equation}\label{e:drift}
\beta_j^i +\int_{D \backslash \{0\}}
(\xi_i-h_i(\xi))\kappa_j(d\xi)=0, \quad 0 \leq j \leq d.
\end{equation}
\end{lemma}
\begin{proof} $\Leftarrow$: On any finite interval $[0,T]$, the mapping $t \mapsto X_{t-}$ is $\mathbb{P}_x$-a.s.\ bounded for all $x \in D$. Hence it follows from Theorem \ref{t:2.12} and \cite[Lemma 3.1]{kallsen.03} that $X^i$ is a local $\mathbb{P}_x$-martingale. Since $\scr E(X^i)=1+\scr E(X^i)_{-} \stackrel{\mbox{\tiny$\bullet$}}{} X^i$ by definition of the stochastic exponential, the assertion now follows from \cite[I.4.34]{js.87}, because $\scr E(X^i)_{-}$ is locally bounded.\\
$\Rightarrow$: As $\kappa_j=0$ for $j=m+1,\ldots, d$ and $X^j_{-}$
is nonnegative for $j=1,\ldots,m$, \cite[Lemma 3.1]{kallsen.03} and
Theorem \ref{t:2.12} yield that
$\int_{\{|\xi_i|>1\}}|\xi_i|\kappa_0(d\xi)<\infty$ and
\begin{equation}\label{e:component}
\int_{\{|\xi_i|>1\}}|\xi_i|\kappa_j(d\xi)X^j_{-}<\infty, \quad 1
\leq j \leq m,
\end{equation}
up to a $d\mathbb{P}_x \otimes dt$-null set on $\Omega \times \mathbb R _+$
for any $x \in D$. Now observe that \eqref{e:component} remains
valid if $X_{-}$ is replaced by $X$, because $X_{-}=X$ holds
$d\mathbb{P}_x \otimes dt$-a.e., for any $x \in D$. Setting
$\Omega_x=\{X_0=x\}$ for some $x \in D$ with $x^j>0$, the
right-continuity of $X$ shows that there exist $\epsilon>0$ and a
strictly positive random variable $\tau$ such that $X^j_{t}(\omega)
\geq \epsilon$ for all $0 \leq t \leq \tau(\omega)$ and for all
$\omega \in \Omega_x$. Denoting the set on which \eqref{e:component}
holds by $\widetilde{\Omega}_0$, it follows that the set $\widetilde{\Omega}_0
\cap [\![ 0,\tau ]\!] \cap (\Omega_x \times\mathbb{R}_+) \subset
\Omega \times \mathbb{R}_+$ has strictly positive $d\mathbb{P}_x
\otimes dt$-measure. Therefore it contains at least one $(\omega,t)$
for which
$$ \epsilon \int_{\{|\xi_i|>1\}} |\xi_i| \kappa_j(d\xi) \leq \int_{\{|\xi_i|>1\}} |\xi_i|\kappa_j(d\xi) X^j_t(\omega) < \infty.$$
Hence \eqref{e:integrable} holds. We now turn to \eqref{e:drift},
which is well-defined by \eqref{e:integrable}. Set
$$\widetilde{\beta}^i_j:= \beta^i_j +\int_{D \backslash \{0\}} (\xi_i-h_i(\xi)) \kappa_j(d\xi), \quad 0 \leq j \leq d.$$
Again by \cite[Lemma 3.1]{kallsen.03} and Theorem \ref{t:2.12}, we
have
\begin{equation}\label{e:componentdrift}
\widetilde{\beta}^i_0+\sum_{j=1}^d \widetilde{\beta}^i_j X^{j}_{-}=0,
\end{equation}
up to a $d\mathbb{P}_x \otimes dt$-null set on $\Omega \times \mathbb R _+$
for all $x \in D$. As above, \eqref{e:componentdrift} remains valid
if $X_{-}$ is replaced by $X$. But now, using Fubini's theorem and
the right-continuity of $X$ we find that \eqref{e:componentdrift}
holds for \emph{all} $t \geq 0$ and for all $\omega$ from a set
$\Omega_x$ with $\mathbb{P}_x(\Omega_x)=1$. For $x=0$ and $t=0$ this
yields $\widetilde{\beta}^i_0=0$. Next we choose $x=e_k$ (the $k$-th
unit-vector of the canonical basis in $\mathbb{R}^d$) and $t=0$. In
view of $\widetilde{\beta}^i_0=0$, \eqref{e:componentdrift} implies
$\widetilde{\beta}^i_k=0$. Hence \eqref{e:drift} holds and we are
done.
\end{proof}
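In the L\'evy case, where only $(\alpha_0,\beta_0,\kappa_0)$ are non-zero, conditions \eqref{e:integrable} and \eqref{e:drift} reduce to $\int_{\{|\xi_i|>1\}}|\xi_i|\kappa_0(d\xi)<\infty$ and $\beta_0^i+\int_{D \backslash \{0\}}(\xi_i-h_i(\xi))\kappa_0(d\xi)=0$, i.e.\ to the familiar requirement that the L\'evy process $X^i$ be integrable with zero mean.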
The nonnegativity of $\scr E(X^i)$ can also be characterized completely
in terms of the parameters of $X$.
\begin{lemma}\label{l:positive}
Let $i \in \{1,\ldots,d\}$. Then $\scr E(X^i)$ is $\mathbb{P}_x$-a.s.\
nonnegative for all $x \in D$ if and only if
\begin{equation}\label{e:positive}
\kappa_j(\{\xi \in D: \xi_i <-1\})=0, \quad 0 \leq j \leq m.
\end{equation}
\end{lemma}
\begin{proof} Fix $x \in D$ and let $T>0$. By \cite[I.4.61]{js.87}, $\scr E(X^i)$ is $\mathbb{P}_x$-a.s.\ nonnegative on $[0,T]$ if and only if $\mathbb{P}_x( \,\exists\, t \in [0,T]: \Delta X_t^i < -1)=0$. By \cite[II.1.8]{js.87} and Theorem \ref{t:2.12} this in turn is equivalent to
\begin{equation}\label{e:positivecomponent}
\begin{split}
0 &= \mathbb{E}_x\left( \sum_{t \leq T} 1_{(-\infty,-1)}(\Delta X_t^i)\right)\\
&= \mathbb{E}_x\left( 1_{(-\infty,-1)}(\xi_i)*\mu^X_T\right) \\
&= \mathbb{E}_x\left( 1_{(-\infty,-1)}(\xi_i)*\nu_T\right) \\
&= T\kappa_0(\{\xi \in D:\xi_i<-1\})+\sum_{j=1}^m \kappa_j(\{\xi \in D:\xi_i<-1\}) \int_0^T \mathbb{E}_x(X^j_{t-})dt.
\end{split}
\end{equation}
$\Leftarrow$: Evidently, \eqref{e:positive} implies \eqref{e:positivecomponent} for every $T$.\\
$\Rightarrow$: Since $X^j$ is nonnegative for $j=1,\ldots,m$,
\eqref{e:positivecomponent} implies that $\kappa_0(\{\xi \in D:
\xi_i<-1\})=0$ and $\kappa_j(\{\xi \in D:\xi_i<-1\})\int_0^T
\mathbb{E}_x(X^j_{t-})dt=0$ for all $x \in D$. As
in the proof of Lemma \ref{l:mloc}, it follows that $\int_0^T
\mathbb{E}_x(X^j_{t-})dt$ is strictly positive for any $x \in D$
with $x^j>0$. Hence $\kappa_j(\{\xi \in D:\xi_i<-1\})=0$, which
completes the proof.
\end{proof}
Every positive local martingale of the form $M=\scr E(X^i)$ is a true
martingale for processes $X^i$ with independent increments by
\cite[Proposition 3.12]{kallsen.muhlekarbe.08b}. In general, this
does not hold true for affine processes as exemplified by
\cite[Example 3.11]{kallsen.muhlekarbe.08b}, where the following
\emph{necessary} condition is violated.
\begin{lemma}\label{l:nec}
Let $i \in \{1,\ldots,d\}$ such that $M=\scr E(X^i)$ is
$\mathbb{P}_x$-a.s.\ nonnegative for all $x \in D$. If $M$ is a
local $\mathbb{P}_x$-martingale for all $x \in D$, the parameters
$(\alpha^\star,\beta^\star,0,\kappa^\star)$ given by
\begin{alignat}{2}
\alpha^{\star}_j&:=\alpha_j ,&\quad 0 &\leq j \leq d,\label{e:cstar}\\
\beta^{\star}_j&:=\beta_j+\alpha_j^{\cdot i}+\int_{D \backslash \{0\}} (\xi_i h(\xi)) \kappa_j(d\xi),&\quad 0 &\leq j \leq d,\label{e:bstar}\\
\kappa^{\star}_j(d\xi) &:= (1+\xi_i)\kappa_j(d\xi),&\quad 0 &\leq j
\leq d,\label{e:nustar}
\end{alignat}
are admissible. If $M$ is a true $\mathbb{P}_x$-martingale for all
$x \in D$, the corresponding affine process
$(X,\mathbb{P}^{\star}_x)_{x \in D}$ is conservative.
\end{lemma}
\begin{proof} The first part of the assertion follows from Lemmas \ref{l:mloc} and \ref{l:positive} as in the proof of \cite[Lemma 3.5]{kallsen.muhlekarbe.08b}. Let $M$ be a true martingale for all $x \in D$. Then for every $x \in D$, e.g.\ \cite{cherny.02} shows that there exists a probability measure $\mathbb{P}^M_x \stackrel{\mathrm{loc}}{\ll} \mathbb{P}_x$ on $(\mathbb{D}^d,\scr{D}^d,(\scr{D}^d_t))$ with density process $M$. Then the Girsanov-Jacod-Memin theorem as in \cite[Lemma 5.1]{kallsen.03} yields that $X$ admits affine $\mathbb{P}^M_x$-characteristics as in \eqref{e:b}-\eqref{e:nu} with $(\alpha,\beta,0,\kappa)$ replaced by $(\alpha^{\star},\beta^{\star},0,\kappa^{\star})$. Since $\mathbb{P}^M_x |_{\scr{D}_0} = \mathbb{P}_x |_{\scr{D}_0}$ implies $\mathbb{P}_x^M(X_0=x)=1$, we have $\mathbb{P}^M_x=\mathbb{P}^{\star}_x$ by Theorem \ref{t:2.12}. In particular, the transition function $p_t^{\star}(x,d\xi)$ of $(X,\mathbb{P}^{\star}_x)_{x \in D}$ satisfies $1=\mathbb
{P}_x^M(X_t \in D)=\mathbb{P}_x^{\star}(X_t \in D)=p^{\star}_t(x,D)$, which completes the proof.\end{proof}
If $M=\scr E(X^i)$ is only a local martingale, the affine process
$(X,\mathbb{P}_x^{\star})_{x \in D}$ does not necessarily have to be
conservative (see \cite[Example 3.11]{kallsen.muhlekarbe.08b}). A
careful inspection of the proof of \cite[Theorem
3.1]{kallsen.muhlekarbe.08b} reveals that conservativeness of
$(X,\mathbb{P}^{\star}_x)_{x \in D}$ is also a \emph{sufficient}
condition for $M$ to be a martingale. Combined with Lemma
\ref{l:mloc} and Theorem \ref{th: char can cons} this in turn allows
us to provide the following deterministic necessary and sufficient
conditions for the martingale property of $M$ in terms of the
parameters of $X$.
\begin{theorem}\label{t:main}
Let $i \in \{1,\ldots,d\}$ such that $\scr E(X^i)$ is
$\mathbb{P}_x$-a.s.\ nonnegative for all $x \in D$. Then we have
equivalence between:
\begin{enumerate}
\item $\scr E(X^i)$ is a true $\mathbb{P}_x$-martingale for all $x \in D$.\label{equiv:1}
\item $\scr E(X^i)$ is a local $\mathbb{P}_x$-martingale for all $x \in D$ and the affine process corresponding to the admissible parameters $(\alpha^{\star},\beta^{\star},0,\kappa^{\star})$ given by \eqref{e:cstar}-\eqref{e:nustar} is conservative.\label{equiv:2}
\item \eqref{e:integrable} and \eqref{e:drift} hold and $g=0$ is the only $\mathbb R^m_{-}$-valued local solution of
\begin{equation}\label{e:eberhard}
\partial_t g(t)=R^{\star}_{\mathcal I}(g(t),0), \quad g(0)=0,
\end{equation}
where $R^\star$ is given by \eqref{e:R} with $(\alpha^{\star},\beta^{\star},0,\kappa^{\star})$ instead of $(\alpha,\beta,\gamma,\kappa)$.\label{equiv:3}
\end{enumerate}
\end{theorem}
\begin{proof} \ref{equiv:1} $\Rightarrow$ \ref{equiv:2}: This is shown in Lemma \ref{l:nec}.\\
\ref{equiv:2} $\Rightarrow$ \ref{equiv:3}: This follows from Lemma
\ref{l:mloc} and Theorem \ref{th: char can cons}.\\\ref{equiv:3}
$\Rightarrow$ \ref{equiv:1}: By \eqref{e:integrable},
\eqref{e:drift} and Lemma \ref{l:positive}, Assumptions 1-3 of
\cite[Theorem 3.1]{kallsen.muhlekarbe.08b} are satisfied. Since we
consider time-homogeneous parameters here, Condition 4 of
\cite[Theorem 3.1]{kallsen.muhlekarbe.08b} also follows immediately
from \eqref{e:integrable}. The final Condition 5 of \cite[Theorem
3.1]{kallsen.muhlekarbe.08b} is only needed in \cite[Lemma
3.5]{kallsen.muhlekarbe.08b} to ensure that a semimartingale with
affine characteristics relative to
$(\alpha^{\star},\beta^\star,0,\kappa^\star)$ exists. In view of the
first part of Lemma \ref{l:nec}, Theorem \ref{th: char can cons} and
Theorem \ref{t:2.12} it can therefore be replaced by requiring that
$0$ is the unique $\mathbb R^m_{-}$-valued solution to \eqref{e:eberhard}.
The proof of \cite[Theorem 3.1]{kallsen.muhlekarbe.08b} can then be
carried through unchanged. \end{proof}
\begin{remark}\label{remarkmart}\rm
\begin{enumerate}
\item \label{Remark1} In view of \cite[Lemma 2.7]{kallsen.muhlekarbe.08b}, $\widetilde{M}:=\exp(X^i)$ can be written as $\widetilde{M}=\exp(X^i_0)\scr E(\widetilde{X}^i)$ for the $d+1$-th component of the $\mathbb R _+^m \times \mathbb R^{n+1}$-valued affine process $(X,\widetilde{X}^i)$ corresponding to the admissible parameters $(\widetilde{\alpha},\widetilde{\beta},0,\widetilde{\kappa})$ given by $(\widetilde{\alpha}_{d+1},\widetilde{\beta}_{d+1},\widetilde{\kappa}_{d+1})=(0,0,0)$ and
$$\qquad \quad (\widetilde{\alpha}_j,\widetilde{\beta}_j,\widetilde{\kappa}_j(G)):=\left(\begin{pmatrix} \alpha_j & \alpha_j^{\cdot i} \\ \alpha_j^{i \cdot} & \alpha_j^{ii} \end{pmatrix} , \begin{pmatrix} \beta_j \\ \widetilde{\beta}^{d+1}_j \end{pmatrix},\int_{D \backslash \{0\}} 1_G(\xi,e^{\xi_i}-1)\kappa_j(d\xi)\right)$$
for $G \in \scr B^{d+1}$, $j=0,\ldots,d$, and
\begin{equation*}
\widetilde{\beta}^{d+1}_j=\beta_j^i+\frac{1}{2}\alpha_j^{ii}+\int_{D
\backslash \{0\}} (h_i(e^{\xi_i}-1)-h_i(\xi))\kappa_j(d\xi).
\end{equation*}
This allows to apply Theorem \ref{t:main} in this situation as well.
\item Theorem \ref{t:main} is stated for the stochastic exponential $\scr E(X^i)$ of $X^i$, that is, the projection of $X$ to the $i$-th component. It can, however, also be applied to the stochastic exponential $\scr E(A(X))$ of a general affine functional $A:D \to \mathbb{R}: x \mapsto p+Px$, where $p \in \mathbb{R}$ and $P \in \mathbb{R}^d$. To see this, note that it follows from It\^o's formula and Theorem \ref{t:2.12} that the $\mathbb{R}^m_+ \times \mathbb{R}^{n+1}$-valued process $Y=(X,A(X))$ is affine with admissible parameters $(\widetilde{\alpha},\widetilde{\beta},0,\widetilde{\kappa})$ given by $(\widetilde{\alpha}_{d+1},\widetilde{\beta}_{d+1},\widetilde{\kappa}_{d+1})=(0,0,0)$ and
$$
\qquad \qquad \widetilde{\alpha}_j=\begin{pmatrix} \alpha_j & \alpha_j P \\ P^{\top} \alpha_j & P^{\top} \alpha_j P \end{pmatrix}, \quad \widetilde{\beta}_j = \begin{pmatrix} \beta_j \\ P^{\top} \beta_j + \int (h(P^{\top}x)-P^{\top}h(x))\kappa_j(dx) \end{pmatrix},
$$
as well as
$$\widetilde{\kappa}_j(G)=\int_{D\backslash \{0\}} 1_G(x,P^{\top}x)\kappa_j(dx) \quad \forall G \in \scr{B}^{d+1},$$
for $j=0,\ldots,d$. Therefore one can simply apply Theorem \ref{t:main} to $\scr E(Y^{d+1})$.
\item \label{expremark2} Conservativeness of $(X,\mathbb{P}^\star_x)_{x \in D}$ and uniqueness for solutions of ODE \eqref{e:eberhard1} can be ensured by requiring the moment condition \eqref{e:sufficient}
for $\kappa_j^\star$. The implication \ref{equiv:3} $\Rightarrow$ \ref{equiv:1} in Theorem \ref{t:main} therefore leads to the easy-to-check sufficient criterion \cite[Corollary 3.9]{kallsen.muhlekarbe.08b} for the martingale property of $M$.
\item \label{expremark3} By Remark \ref{consremark} \ref{expremark3cons} we know that in the case $m=1$, $(X,\mathbb{P}^\star_x)_{x \in D}$ is conservative if and only if either \eqref{e:sufficient} holds for $\kappa_j^\star$ or equation \eqref{eq osgood} holds for $R_{1}^{\star}$. Together with Remark \ref{Remark1}, this leads to the necessary and sufficient condition for the martingale property of ordinary exponentials $\exp(X^i)$ obtained in \cite[Theorem 2.5]{kellerressel.09}.
\end{enumerate}
\end{remark}
We conclude by providing an example of an exponentially affine local
martingale for which the sufficient conditions from
\cite{kallsen.muhlekarbe.08b} cannot be applied. Our main Theorem
\ref{t:main}, however, shows that is indeed a true martingale. This process is based on the one in Example \ref{crucial cons example} and therefore again somewhat artificial. Various extensions are possible, but we again restrict ourselves to the simplest possible specification here.
\begin{example}\rm
Consider the $\mathbb{R}_+ \times \mathbb{R}$-valued affine process
$(X^1,X^2)$ corresponding to the admissible parameters
$$\alpha=(0,0,0), \quad \beta=(0,\beta_1,0), \quad \gamma=(0,0,0),\quad \kappa=(0,\kappa_1,0),$$
where
$$\begin{pmatrix} \beta^1_1 \\ \beta^2_1 \end{pmatrix}=\begin{pmatrix} \sum_{n=1}^\infty \frac{1}{(1+n)n^2} \\ \sum_{n=1}^\infty \frac{1-n}{(1+n)n^2}\end{pmatrix} \quad \mbox{and} \quad \kappa_1=\sum_{n=1}^\infty \frac{\delta_{(n,n)}}{(1+n)n^{2}},$$
for the Dirac measures $\delta_{(n,n)}$ supported by $\{(n,n)\}$, $n \in \mathbb{N}$. Since
$X^2$ has only positive jumps, $\scr E(X^2)$ is positive. Moreover, it
is a local martingale by Lemma \ref{l:mloc}, because
$$\int_{\{|\xi_2|>1\}}|\xi_2|\kappa_1(d\xi)= \sum_{n=1}^\infty \frac{1}{(1+n)n}<\infty$$
and $\beta^2_1+\int_0^\infty (\xi_2-h_2(\xi_2))\kappa_1(d\xi)=0$. Note
that \cite[Corollary 3.9]{kallsen.muhlekarbe.08b} is not applicable, because
$$\int_{\{|\xi_2|>1\}}\xi_1(1+\xi_2) \kappa_1(d\xi)= \sum_{n=1}^\infty \frac{1}{n}=\infty.$$
However, by Theorem \ref{t:main} and Remark \ref{remarkmart}(iii),
$\scr E(X^2)$ is a true martingale, since we have shown in Example
\ref{crucial cons example} that \eqref{eq osgood} is satisfied for
$$R^\star_1(u_1,0)=\sum_{n=1}^\infty \frac{e^{u_1 n}-1}{n^2}.$$
\end{example}
\begin{appendix}
\section{ODE comparison results in non-Lipschitz setting}\label{sec: ODEs}
Let $C$ be a closed convex proper cone with nonempty interior
$C^\circ$ in a normed vector space $(E,\|\,\,\|)$. The partial order
induced by $C$ is denoted by $\preceqq$. For $x,y\in E$, we write
$x\ll y$ if $y-x\in C^\circ$. We denote by $C^*$ the dual cone of
$C$. Let $D_g$ be a set in $E$. A function $g\colon D_g\rightarrow
E$ is called \emph{quasimonotone increasing}, in short \emph{qmi},
if for all $l\in C^*$, and $x,y\in D_g$
\[
(x\preceqq y,\,\,l(x)=l(y))\Rightarrow(l(g(x))\leq l(g(y))).
\]
The next lemma is a special case of Volkmann's result \cite[Satz
1]{Volkmann}.
\begin{lemma}\label{th: Volkmann}
Let $0<T\leq \infty$, $D_f\subset E$, and $f\colon [0,T)\times
D_f\rightarrow E$ be such that $f(t,\cdot)$ is qmi on $D_f$ for all
$t\in [0,T)$. Let $\zeta,\eta:[0,T)\rightarrow D_f$ be curves that
are continuous on $[0,T)$ and differentiable on $(0,T)$. Suppose
$\zeta(0)\gg \eta(0)$ and $\dot {\zeta}(t)-f(t,\zeta(t))\gg
\dot{\eta}(t)-f(t,\eta(t))$ for all $t\in (0,T)$. Then $\zeta(t)\gg
\eta(t)$ for all $t\in[0,T)$.
\end{lemma}
A function $g:[0,T)\times D_g\rightarrow E$ is called \emph{locally
Lipschitz}, if for all $0<t<T$ and for all compact sets $K\subset
D_g$ we have
\[
L_{t,K}(g):=\sup_{0<\tau<t,\ x,y \in K: x \neq
y}\frac{\|g(\tau,x)-g(\tau,y)\|}{\|x-y\|}<\infty
\]
where $L_{t,K}(g)$ is usually called the Lipschitz constant.
We now use Lemma \ref{th: Volkmann} to prove the following general
comparison result.
\begin{proposition}\label{prop: essential comparison}
Let $T$, $D_f$, and $f$ be as in Lemma~$\ref{th: Volkmann}$.
Suppose, moreover, that $D_f$ has a nonempty interior and $f$ is
locally Lipschitz on $[0,T)\times D_f^\circ$. Let
$\zeta,\eta:[0,T)\rightarrow D_f$ be curves that are continuous on
$[0,T)$, differentiable on $(0,T)$, and satisfy the conditions
\begin{enumerate}
\item $\eta(t)\in D_f^\circ$
\item $\dot {\zeta}(t)-f(t,\zeta(t))\succeqq \dot{\eta}(t)-f(t,\eta(t))$
\item $\zeta(0)\succeqq \eta(0)$
\end{enumerate}
for all $t\in [0,T)$. Then $\zeta(t)\succeqq\eta(t)$ for all $t\in
[0,T)$.
\end{proposition}
\begin{proof}
Fix $t_0\in [0,T)$. Since $\eta$ is continuous, the image $S$ of the
segment $[0,t_0]$ under the map $\eta$ is a compact subset of
$D_f^\circ$. Let $\delta>0$ be such that the closed
$\delta$-neighborhood $S_\delta$ of $S$ is contained in $D_f^\circ$.
By the local Lipschitz continuity of $f$ on $D_f^\circ$, there
exists a constant $L>0$ such that
\begin{equation}\label{Lipschitz}
\|f(t,x)-f(t,y)\|\leq L\|x-y\|
\end{equation}
for any $t\in [0,t_0]$ and $x,y\in S_\delta$. Let $c\in C^\circ$ be
such that $\|c\|=1$ and let $d_c$ denote the distance from $c$ to
the boundary $\partial C$ of $C$. For $\varepsilon>0$, we set
$h_\varepsilon(t):=\varepsilon e^{2Lt/d_c} c$. If $\varepsilon \leq
e^{-2Lt_0/d_c}\delta$, then $\eta(t)-h_\varepsilon(t)\in S_{\delta}$
for any $t\in [0,t_0]$, and (\ref{Lipschitz}) gives
\begin{equation}\label{norm_estimate}
\|f(t,\eta(t)-h_\varepsilon(t))-f(t,\eta(t))\|\leq
L\|h_{\varepsilon}(t)\|,\quad t\in [0,t_0].
\end{equation}
Since $C$ is a cone, the distance from $Lh_\varepsilon(t)/d_c$ to
$\partial C$ is equal to $L\varepsilon
e^{2Lt/d_c}=L\|h_\varepsilon(t)\|$. In view
of~(\ref{norm_estimate}), it follows that
\[
Lh_\varepsilon(t)/d_c\succeqq
f(t,\eta(t)-h_\varepsilon(t))-f(t,\eta(t))
\]
and hence
\begin{equation}\label{eq: comp1}
-\dot h_\varepsilon(t)=-2L h_\varepsilon(t)/d_c\ll
f(t,\eta(t)-h_\varepsilon(t))-f(t,\eta(t)),\quad t\in [0,t_0],
\end{equation}
for $\varepsilon$ small enough. This implies that
\begin{equation*}
\dot{\zeta}(t)-f(t,\zeta(t))\succeqq \dot{\eta}(t)-f(t,\eta(t))\gg
\dot{\eta}(t)-\dot h_\varepsilon(t)-f(t,\eta(t)+h_\varepsilon(t)).
\end{equation*}
Applying Lemma \ref{th: Volkmann} to the functions $\zeta(t)$ and
$\eta(t)+h_\varepsilon(t)$ yields $\zeta(t)\gg
\eta(t)+h_\varepsilon(t)$, for all $t\in [0, t_0]$. Now letting
$\varepsilon\rightarrow 0$ yields the required inequality for all
$t\in [0,t_0]$. This proves the assertion, because $t_0< T$ can be
chosen arbitrarily.
\end{proof}
If we consider the differential equation
\begin{equation}\label{eq: ODE general}
\dot {\xi}=f(t,\xi(t)),\quad \xi(0)=u\in D_f,
\end{equation}
Proposition \ref{prop: essential comparison} allows the following
immediate conclusion, which is the key tool for proving Proposition
\ref{prop: extremality} and in turn Theorem \ref{th: char can cons}.
\begin{corollary}\label{th1}
Let $T$, $D_f$ and $f$ be as in Lemma \ref{prop: essential
comparison}. Suppose further that equation \eqref{eq: ODE general}
gives rise to a global solution $\psi^\circ(t,u)\colon\,\mathbb
R_+\times D^\circ_f\rightarrow D_f^\circ$. Let $u_2\in D_f^\circ$
and let $\xi\colon [0,T)\rightarrow D_f$ be a solution of \eqref{eq:
ODE general} such that $\xi(0)=u_1\succeqq u_2$. Then
$\xi(t)\succeqq \psi^\circ(t,u_2)$, for all $t\in [0,T)$.
\end{corollary}
\end{appendix}
\providecommand{\bysame}{\leavevmode\hbox
to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
} \providecommand{\href}[2]{#2}
| 2024-02-18T23:40:05.404Z | 2010-11-30T02:06:24.000Z | algebraic_stack_train_0000 | 1,295 | 8,761 |
|
proofpile-arXiv_065-6423 | \section{Mutual Unbiased Bases (MUB) - brief review}
We briefly summarize some of the MUB features of the continuous,
$d\rightarrow \infty$,
Hilbert space which will be used later.\\
The complete orthonormal eigenfunctions of the quadrature operator,
\begin{equation}\label{X}
\hat{X}_{\theta}\equiv
cos\theta\;\hat{x}+sin\theta\;\hat{p}=U^{\dagger}(\theta)\hat{x} U(\theta),
\end{equation}
with,
\begin{equation}
U(\theta)=e^{i\theta \hat{a}^{\dagger}\hat{a}};\;\;\hat{a}\equiv \frac{1}{\sqrt2}
(\hat{x}+ i\hat{p}),\;\hat{a}^{\dagger}\equiv \frac{1}{\sqrt2}(\hat{x}- i\hat{p}),
\end{equation}
are labelled by $|x,\theta\rangle$:
\begin{equation}\label{X1}
\hat{X}_{\theta}|x,\theta\rangle=x|x,\theta\rangle.
\end{equation}
(As is well known \cite{ulf} $\hat{a},\;\hat{a}^{\dagger}$ are referred to as
"creation" and annihilation" operators, respectively.) This (Eq.(\ref{X},\ref{X1})
allow us to relate bases of different labels much like time evolution governed by an
harmonic oscillators hamiltonian \cite{larry}: we may view the basis labelled by
$\theta$ as "evolved" from one labelled by $\theta=0$ i.e. for a vector in the
x-representation we have,
\begin{equation}\label{evol}
|x,\theta \rangle= U^{\dagger}(\theta)|x\rangle.
\end{equation}
We note that the state whose eigenvalue is x in a basis labelled by $\theta$, i.e.
$|x,\theta\rangle$, are related to the eigenfunction, $|p\rangle,$ of the momentum
operator via,
\begin{equation}\label{pi/2}
|x,\frac{\pi}{2}\rangle=|p\rangle.
\end{equation}
(In this equation $|p\rangle$ is the eigenfunction of the momentum operator,
$\hat{p}$, whose eigenvalue ,p, is numerically equal to x.) Similarly we have,
\begin{equation}\label{reverse}
|x,\pm \pi\rangle=|-x\rangle,
\end{equation}
i.e. evolution by $\pm \pi$ may be viewed as leading to a vector in the same basis
(i.e. $\theta$ intact) but evolves to a vector whose eigenvalue is of opposite sign.
Returning to Eq.(\ref{evol}) we utilize the known analysis of the evolution operator
$U(\theta)$ \cite{larry} to deduce that, in terms of the eigenfunction of
$\hat{Y}_0=\hat{x}$, viz the x-representation, $|y;\theta\rangle$ is given by,
\begin{equation} \label{ls}
\langle x|y,\theta \rangle=\langle
x|U^{\dagger}(\theta)|y\rangle=\frac{1}{\sqrt{2\pi sin \theta}}
e^{-\frac{i}{2sin\theta}\left([x^2+y^2]cos \theta-2xy\right)}.
\end{equation}
These states form a set of MUB each labelled by $\theta$:
\begin{equation}\label{mub}
|\langle x;\theta|y, \theta' \rangle|=\frac{1}{\sqrt{2\pi |sin(\theta-\theta'|}}.
\end{equation}
Thus the verification of the particle as being in the state of coordinate x in the
basis labelled by $\theta$ implies that it is equally likely to be in any coordinate
state x' in the basis labelled by $\theta'$ ($\theta'\ne \theta)$. Note however that
in the continuum $(d \rightarrow \infty)$ considered above, the inter-basis scalar
product, Eq.(\ref{mub}), retains, in general, their basis labels ($\theta,\theta')$.
For a finite, d dimensional Hilbert space the scalar inter MUB product is, in
absolute value, $\frac{1}{\sqrt d}$ and does not contain any information on the base
labels \cite{amir}. It was shown by Schwinger \cite{schwinger} that complete
operator basis (COB) for this problem constitute of $\hat{Z}$ and $\hat{X}$ with,
\begin{equation}
\hat{Z}|n\rangle=\omega^n |n\rangle,\;\;\omega=e^{i\frac{2\pi}{d}},\;\;\hat{X}|n\rangle=
|n+1\rangle\;\;|n+d\rangle =|n\rangle.
\end{equation}
It was further shown \cite{ivanovich,wootters,tal,klimov} that the maximal number of
MUB possible for a d dimensional Hilbert space is d+1. However only for d=prime (or
a power of a prime) d+1 such bases are known to exist. (3 such bases are known for
all values of d.) For the case of d=prime a general MUB basis is given in terms of
the computational basis,\cite{tal}
\begin{equation}
|m;b\rangle=\frac{1}{\sqrt d}\Sigma_0^{d-1}\omega^{\frac{b}{2}n(n-1)-nm}|n\rangle.
\end{equation}
These are the eigenfunction of $\hat{X}\hat{Z}^{b}$, $b=0,1,...d-1$. Here b may be
used to label the basis. (These d bases supplemented with the computational basis
form the d+1 MUB , \cite{tal}.)
\section{The continuum, $d\rightarrow \infty$, case}
The generic maximally entangled state is the EPR \cite{epr} state,
\begin{equation}
|\xi,\mu \rangle=\frac{1}{\sqrt{2\pi}}\int dx{_1}dx{_2}\delta\big(\frac{x_1-x_2}
{\sqrt2}-\xi\big)e^{i\mu\frac{x_1+x_2}{\sqrt2}}|x_1\rangle |x_2\rangle,
\end{equation}
($\sqrt2$ is introduced for later convenience.) We now consider an alternative means
of accounting for the two particles states to which we refer to as the "relative"
and "center of mass" coordinates (we assume equal masses for simplicity),
\begin{equation}
\xi=\frac{x{_1}-x{_2}}{\sqrt2};\;\;\eta=\frac{x_1+x_2}{\sqrt2}.
\end{equation}
The corresponding operators, each acting on one of these coordinates, are
\begin{equation}\label{xieta}
\hat{\xi}=\frac{\hat{x}_1-\hat{x}_2}{\sqrt2};\;\;\eta=\frac{\hat{x}_1+\hat{x}_2}{\sqrt2},
\end{equation}
with,
\begin{equation}
\hat{\xi}|\xi\rangle=\xi|\xi\rangle;\;\;\hat{\eta}|\eta\rangle=\eta|\eta\rangle.
\end{equation}
Using relations of the type,
\begin{equation}
\langle x_1x_2|\hat{\xi}|\xi\eta\rangle=\xi\langle x_1x_2|\xi\eta\rangle=
\langle x_1x_2|\frac{\hat{x}_1-\hat{x}_2}{\sqrt2}|\xi\eta\rangle=
\frac{x_1-x_2}{\sqrt2}\langle x_1x_2|\xi\eta\rangle,
\end{equation}
One may show that,
\begin{equation}
\langle x_1x_2|\xi\eta\rangle=\delta\big(\xi-\frac{x_1-x_2}{\sqrt2}\big)\delta\big
(\eta-\frac{x_1+x_2}{\sqrt2}\big).
\end{equation}
We note that $\hat{x}_1,\hat{p}_1$ form a complete operator basis (COB) for the
first particle Hilbert space (we do not involve spin) and similarly
$\hat{x}_2,\hat{p}_2$ for the second particle, i.e.,
\begin{eqnarray}
\left[\hat{x}_1,\hat{p}_1\right] &=& \left[\hat{x}_2,\hat{p}_2\right]=i,\;\;\nonumber \\
\left[\hat{x}_1,\hat{p}_2\right] &=& \left[\hat{x}_2,\hat{p}_1\right]=
\left[\hat{x}_2,\hat{x}_1\right]=\left[\hat{p}_2,\hat {p}_1\right]=0,
\end{eqnarray}
thus we have that the two pairs of operators form a COB for the
combined ($d^2$ dimensional) Hilbert space. Defining,
\begin{equation}\label{numu}
\hat{\nu}\equiv\frac{\hat{p}_1-\hat{p}_2}{\sqrt2},\;\;\hat{\mu}\equiv\frac{\hat{p}_1+
\hat{p}_2}{\sqrt2},
\end{equation}
we have
\begin{eqnarray}\label{comm}
\left[\hat{\xi},\hat{\nu}\right]&=&\left[\hat{\eta},\hat{\mu}\right]=i,\;\;\nonumber \\
\left[\hat{\xi},\hat{\mu}\right]&=&\left[\hat{\xi},\hat{\eta}\right]=
\left[\hat{\eta},\hat{\nu}\right]=\left[\hat{\mu},\hat{\nu}\right]=0.
\end{eqnarray}
These (viz: $\hat{\xi},\hat{\nu},\hat{\eta},\hat{\mu}$) form an
alternative COB for the (combined) Hilbert space with
$\hat{\xi},\hat{\nu}$ spanning the relative coordinates space
while $\hat{\eta},\hat{\mu}$ the "center of mass" one. By analogy
with the single particle state analysis we now define "creation"
and "annihilation" operators for the collective degrees of
freedom:
\begin{eqnarray}
\hat{A}&=&\frac{1}{\sqrt2}(\hat{\xi}+i\hat{\nu}),\;\;\hat{A}^{\dagger}=
\frac{1}{\sqrt2}(\hat{\xi}-i\hat{\nu}),\nonumber\\
\hat{B}&=&\frac{1}{\sqrt2}(\hat{\eta}+i\hat{\mu}),\;\;\hat{B}^{\dagger}=
\frac{1}{\sqrt2}(\hat{\eta}-i\hat{\mu}),
\end{eqnarray}
these abide by the commutation relations
\begin{equation}
\left[\hat{A},\hat{A}^{\dagger}\right]=\left[\hat{B},\hat{B}^{\dagger}\right]=1,
\end{equation}
with all other commutators vanishing, and the "evolution"
(Eq.(\ref{evol})) operators are,
\begin{equation}
V_A(\theta)=e^{i\theta
\hat{A}^{\dagger}\hat{A}};\;\;V_B(\theta)=e^{i\theta
\hat{B}^{\dagger}\hat{B}}.
\end{equation}
These operators are (as we shall see shortly) our entangling operators: each (pair)
act on different "collective" coordinate. We note that for $\theta=\theta'$ and only
in this case,
\begin{equation}\label{product}
V_A^{\dagger}(\theta)V_B^{\dagger}(\theta)=U_1^{\dagger}(\theta)U_2^{\dagger}(\theta),
\end{equation}
i.e. in this case a simple relation exists between the particles' operators and the
collective ones. The results of section II, Eq.(\ref{evol}), now read,
\begin{eqnarray}
|\xi,\theta\rangle&=& V_A^{\dagger}(\theta)|\xi\rangle, \nonumber\\
|\eta,\theta'\rangle&=& V_B^{\dagger}(\theta')|\eta\rangle.
\end{eqnarray}
The commutation relation, Eq.(\ref{comm}), implies that the basis
$|\eta\rangle,$ the eigenbasis of $\hat{\eta}$ (i.e.
$V_B^{\dagger}(0)|\eta\rangle$), and the basis $|\mu\rangle,$ the
eigenstates of $\hat{\mu},$ (i.e. the states
$V_B^{\dagger}(\frac{\pi}{2}))|\eta\rangle$, are MUB with,
\begin{equation}
\langle\eta|\mu\rangle=\frac{1}{\sqrt{2\pi}}e^{i\eta\mu},
\end{equation}
With similar expression for $\langle \xi|\nu\rangle$. Note that, in our approach,
these follow from the equations that corresponds to Eq.(\ref{ls}). We have then that the
maximally entangled state (the EPR state)
$$|\xi\rangle|\mu\rangle$$
is a product state in the collective variables. It is natural now to consider mutual
unbiased collective bases (MUCB) labelled, likewise, with $\theta$: The relative
coordinates bases $V_A^{\dagger}(\theta)|\xi\rangle$ is one such MUCB. The center of
mass $V_B^{\dagger}(\theta)|\eta\rangle$ is another. We now formulate our link
between MUB and (maximal) entanglement thus consider the (product) two particle
state $|x_1\rangle |x_2\rangle$. It may be written in terms of a product state in
the "collective" coordinates: (When clarity requires we shall mark henceforth the
eigenstates of the collective operators with double angular signs,
$\rangle\rangle,$.)
\begin{equation}
|x_1\rangle |x_2\rangle=\int d\xi'd\eta'\langle \xi',\eta'|x_1,x_2\rangle
|\xi'\rangle\eta'\rangle\;=\;|\xi=\frac{x_1-x_2}{\sqrt2}\rangle\rangle|\eta=
\frac{x_1+x_2}{\sqrt 2}\rangle\rangle.
\end{equation}
We now assert that replacing the basis $|\eta\rangle$ by any of the MUB bases,
$$|\eta\rangle\rightarrow|\eta,\theta\rangle =
V_B^{\dagger}(\theta)|\eta\rangle,\;\theta\ne0,$$
give a maximally entangled state: $|\xi\rangle|\eta,\theta\rangle$. (The EPR state,
$|\xi,\mu\rangle$ is the special case of $V_B^{\dagger}(\frac{\pi}{2})$.) The proof
is most informative with the state
$|\xi\rangle|\mu,\theta\rangle$: (Note: $|\mu,\theta\rangle=V_{B}(\theta)|\mu\rangle=
V_{B}(\theta+\frac{\pi}{2})|\eta\rangle$.)
\begin{eqnarray} \label{maxentg}
|\xi\rangle|\mu,\theta\rangle&=&\int dx_1dx_2|x_1,x_2\rangle\langle
x_1,x_2|\int d\eta d\bar{\eta}\;|\xi,\eta\rangle\langle\eta|\bar{\eta},\theta\rangle
d\bar{\eta}\langle\bar{\eta},\theta|\mu,\theta\rangle \nonumber \\
&=&\frac{\sqrt2}{2\pi\cos\theta}e^{\frac{i\mu}{2\cos\theta}\big(2\xi-\mu \sin\theta\big)}
\int dxe^{\frac{\sqrt2 i x\mu}{cos\theta}}
|x\rangle|x-\sqrt2 \xi\rangle.
\end{eqnarray}
The various matrix elements are given by,
\begin{eqnarray}
\langle x_1|\xi,\eta \rangle&=&\delta\left( x_1-\frac{\eta+\xi}{\sqrt2}\right)
\Big|x_2=\frac{\eta-\xi}{\sqrt2}\Big\rangle,\nonumber \\
\langle \eta|\bar{\eta},\theta \rangle&=&\frac{1}{\sqrt{2\pi|sin\theta|}}
e^{-\frac{i}{2sin\theta}\big[(\eta^2+\bar{\eta}^2)cos\theta -2\bar{\eta}\eta \big]},
\nonumber \\
\langle\bar{\eta},\theta|\mu,\theta\rangle&=&\frac{1}{\sqrt{2\pi}}e^{i\bar{\eta}\mu}.
\end{eqnarray}
The state, Eq.(\ref{maxentg}), upon proper normalization, is the maximally entangled
EPR state, as claimed (cf. Appendix B): it involves, with equal probability, all the
vectors of the x representation. It follows by inspection that this remain valid to
all states (exceptions are specific angles that are specified below) build with
MUCB,
\begin{equation}
|\xi,\theta\rangle|\eta,\theta'\rangle.
\end{equation}
We summarize our consideration thus far as follows: Consider two pairs of operators
(we assume that these two form a COB) pertaining to two Hilbert spaces. Each pair is
made up of {\it non commuting} operators, e.g. $\hat{x}_1,\hat{p}_1$ and
$\hat{x}_2,\hat{p}_2$. Now form two {\it commuting} pairs of operators with these
operators as their constituents, e.g. $\hat{R}_A(0)=\hat{x}_1-\hat{x}_2$ and
$\hat{R}_B(\frac{\pi}{2})= \hat{p}_1+\hat{p}_2$: the common eigenfunction of
$\hat{R}_A(0)$ and $\hat{R}_B(\frac{\pi}{2})$ is, necessarily, an entangled state.
This was generalized via the consideration of the common eigenfunction of the
commuting operators
\begin{eqnarray}
R_A(\theta)&\equiv& V^{\dagger}_{A}(\theta)\hat{\xi}V_{A}(\theta)=
cos\theta \hat{\xi}+sin\theta \hat{\nu}, \nonumber \\
R_B(\theta') &\equiv&V^{\dagger}_B(\theta')\hat{\eta}V_B(\theta')=
cos\theta' \hat{\eta}+sin\theta' \hat{\mu}.
\end{eqnarray}
These commute for all $\theta, \theta'$ and thus have common eigenfunctions. For
$\theta=\theta'\;and\;\theta=\theta'\pm \pi$ and only for these values, the common
eigenfuction is a product state (in these cases the constituents commute, e.g. $
\hat{x}_1-\hat{x}_2$ and $\hat{x}_1+\hat{x}_2$). This is shown in Appendix C. For
all other $\theta, \theta'$ the common eigenfunction is an entangled state. (
Moreover, these states are maximally entangled states. The proof is outlined in
Appendix B.) The definition of the "collective" coordinates is such as to assure the
decoupling of the combined Hilbert space to two independent subspaces whose
constituent (pairs) operators commute (e.g. $ \hat{x}_1-\hat{x}_2$ and
$\hat{x}_1+\hat{x}_2$) much as it (the Hilbert space) was decoupled with the
individual particles operators.
\section{Finite dimensional analysis - collective coordinates}
We now turn to the more intriguing cases of d dimensional Hilbert spaces.
We confine our study to (two) d-dimensional spaces with d a prime ($\ne2$). The indices
are elements of an algebraic field of order d. The computational, two particle, basis
states $$|n\rangle_1 |m\rangle_2\;\;n,m=0,1,..d-1.$$ spans the space. A COB (complete
operator basis) is defined via ($i=1,2$,
\begin{eqnarray}
Z_i|n\rangle_{i}&=&\omega^{n_i}|n\rangle_{i},\;\;\omega=e^{i\frac{2\pi}{d}} \nonumber \\
X_i|n\rangle_{i}&=&|n+1\rangle_{i},
\end{eqnarray}
We now define our collective coordinate operators via,
\begin{equation}\label{coll}
\bar{Z}_1\equiv Z_1^{\frac{1}{2}}Z_2^{-\frac{1}{2}};\;\bar{Z}_2\equiv Z_1^{\frac{1}{2}}
Z_2^{\frac{1}{2}}.
\end{equation}
(We remind the reader that the exponent value of $\frac{1}{2}$ is a field number such
that twice its value is 1 mode[d], e.g. for d=7, $\frac{1}{2}=4.$)
Eq.(\ref{coll}) implies that,
\begin{equation}
Z_1=\bar{Z}_1\bar{Z}_2,\;\;Z_2=\bar{Z}_1^{-1}\bar{Z}_2.
\end{equation}
The spectrum of $\bar{Z}_i$ is $\omega^{\bar{n}}, \bar{n}=0,1,..d-1$ since we have that
$\bar{Z}_i^{d}=1$
and we consider the bases that diagonalize $\bar{Z}_i$:
\begin{equation}
\bar{Z}_i|\bar{n}_i\rangle=\omega^{\bar{n}_i}|\bar{n}_i\rangle.
\end{equation}
To obtain the transformation function $\langle n_1, n_2|\bar{n}_1,\bar{n}_2\rangle$
we evaluate $\langle n_1,n_2|A|\bar{n}_1,\bar{n}_2\rangle$ with A equals
$Z_1,Z_2,\bar{Z}_1,\bar{Z}_2$ in succession. e.g. for $A=Z_1$,
\begin{equation}
\langle n_1, n_2|Z_1|\bar{n}_1,\bar{n}_2\rangle=\omega^{n_1}\langle n_1,n_2|\bar{n}_1,
\bar{n}_2\rangle=\omega^{\bar{n}_1+\bar{n}_2}\langle n_1,n_2|\bar{n}_1,\bar{n}_2\rangle.
\end{equation}
These give us the following relations (all equations are modular: mode[d]),
\begin{eqnarray}
n_1&=&\bar{n}_1+\bar{n}_2;\;\;n_2=-\bar{n}_1+\bar{n}_2, \nonumber \\
\bar{n}_1&=&\frac{n_1}{2}-\frac{n_2}{2};\;\;\bar{n}_2=\frac{n_1}{2}+\frac{n_2}{2}.
\end{eqnarray}
Whence we deduce,
\begin{equation}
\langle n_1,n_2|\bar{n}_1,\bar{n}_2\rangle=\delta_{n_1,\bar{n}_1+\bar{n}_2}
\delta_{n_2,\bar{n}_1-\bar{n}_2}.
\end{equation}
In a similar fashion we now define,
\begin{equation}
\bar{X}_1\equiv X_1X_2^{-1},\;\;\bar{X}_2\equiv X_1X_2\;\rightarrow X_1=
\bar{X}_1^{1/2}\bar{X}_2^{1/2},\;X_2=\bar{X}_1^{-1/2}\bar{X}_2^{1/2}.
\end{equation}
These entail,
\begin{eqnarray}
\bar{X}_i\bar{Z}_i&=&\omega\bar{Z}_i\bar{X}_i,\;i=1,2 \nonumber \\
\bar{X}_i\bar{Z}_j&=&\bar{Z}_j\bar{X}_i ,\;i \ne j.
\end{eqnarray}
Thence,
\begin{equation}
\bar{X}_i|\bar{n}_i\rangle=|\bar{n}_i+1\rangle,\;\;i=1,2
\end{equation}
and, denoting the eigenvectors of the barred operators (i.e. the collective coordinates)
with double angular sign we have that
\begin{eqnarray}
\bar{X}_1|n_1,n_2\rangle&=&\bar{X}_1|\frac{n_1-n_2}{2},\frac{n_1+n_2}{2}\rangle\rangle=
|\frac{n_1-n_2}{2}+1,\frac{n_1+n_2}{2}\rangle\rangle \nonumber \\
\bar{X}_2|n_1,n_2\rangle&=&\bar{X}_2|\frac{n_1-n_2}{2},\frac{n_1+n_2}{2}\rangle\rangle=
|\frac{n_1-n_2}{2},\frac{n_1+n_2}{2}+1\rangle\rangle .
\end{eqnarray}
Recalling, Eq.(10), the set of MUB associated with
$|\bar{n}_2\rangle\rangle$, viz $|\bar{n}_2,b\rangle\rangle$ (with
$b=0,1..d-1$):
\begin{equation}
|\bar{n}_2,b\rangle=\frac{1}{\sqrt
d}\Sigma_{\bar{n}}\omega^{\frac{b}{2}\bar{n}(\bar{n}+1)
-\bar{n}\bar{n}_2}|\bar{n}\rangle\rangle.
\end{equation}
This state is an eigenfunction of $\bar{X}_2\bar{Z}_2^b$, cf Eq. ().
Our association of maximally entangled states with MUB amounts to the following. Given a
product state. We write it as a product state of the collective coordinates, e.g.
\begin{equation}
|n_1\rangle|n_2\rangle=|\bar{n}_1\rangle\rangle|\bar{n}_2\rangle\rangle,\;n_1=
\bar{n}_1+\bar{n}_2;\;n_2=\bar{n}_2-\bar{n}_1.
\end{equation}
Now replace one of these (collective coordinates states) by a state (any one of which)
belonging to its MUB set, e.g.
\begin{equation}
|\bar{n}_1\rangle\rangle|\bar{n}_2\rangle\rangle \rightarrow|\bar{n}_1\rangle\rangle
|\bar{n}_2,b\rangle\rangle,\;\;b=1,2..d-1.
\end{equation}
The resultant state is a maximally entangled state. We prove it for a representative
example by showing that measuring in such state $Z_1$ that yield the value $n_1$ leaves
the state an eigenstate of $Z_2$ with a specific eigenvalue. To this end we consider the
projection of the state $\langle n_1|$ on the representative state . Somewhat lengthy
calculation yields,
\begin{equation}
\langle
n_1|\bar{n}_1\rangle\rangle|\bar{n}_2,b\rangle\rangle=\frac{1}{\sqrt
d}|n_2=-2\bar{n}_1+n_1\rangle \omega^{\frac{b}{2}(n_1-\bar{n}_1)(
n_1-\bar{n}_1-1)-\bar{n}_2(n_1-\bar{n}_1)}.
\end{equation}
Here the state $|n_2=-2\bar{n}_1+n_1\rangle$ is an eigenstate of $Z_2$ proving our point.\\
We discuss now the finite dimensional Hilbert space in a manner that stresses its analogy
with the
$d\rightarrow \infty$ case considered above: Given two, each d-dimensional, Hilbert spaces
and each
pertaining to one of two particles (systems) bases. The combined, $d^2$-dimensional space
is conveniently spanned by a basis made of product of computational bases,
$|n_1\rangle|n_2\rangle;\;n_i=0,1,...d-1$. Each of the computational basis may be
replaced by any of the d other available MUB bases (recall that we limit ourselves to
d=prime where d+1 MUB are available \cite{tal}). Each MUB basis is associated \cite{tal}
with a unitary operator, $X_iZ_i^b,\;\;b=0,1,..d-1$ (these supplemented by $Z_i$ account
for the d+1 MUB). We have shown above that the combined Hilbert space may be accounted
for by what we termed collective coordinates computational bases:
$|\bar{n}_1\rangle| \bar{n}_2\rangle,\;\;\bar{n}_i=0,1,...d-1.$ (Here $|\bar{n}_1\rangle$
relates to the "relative" while $|\bar{n}_2\rangle$ to the "center of mass" coordinate.)
These were defined such that $$|n_1\rangle|n_2\rangle=
|\bar{n}_1\rangle\rangle| \bar{n}_2\rangle\rangle.$$ We then noted that, in analogy
with the $R_i(\theta)$ of the $d\rightarrow \infty$ case each $|\bar{n}_i\rangle\rangle$
may be replaced by any of the d+1 MUB of the collective coordinates. These are
associated with $\bar{X}_i\bar{Z}_i^b,\;\;b=0,1,...d-1$. We now have the space spanned
by $|\bar{n}_1,b_1\rangle\rangle|\bar{n}_2,b_2\rangle\rangle$. These except for
"isolated" combination are maximally entangled states (cf. Appendix A). The isolated
values are the $b_1=b_2$ cases and the bases associated with
$\bar{X}_1\bar{Z}_1^b\;\;and\;\;\bar{X}_2^{-1}\bar{Z}_2^{-b}$ - the eigenstates of
which are product states.\\
Now while in the finite dimensional case the set of d+1 MUB states can be
constructed only for d a prime
(or a power of a prime - which is not studied here) no such limit holds for the continuous case. The intriguing
price being that in this ($d\rightarrow \infty$) case the definition of the MUB states involves a basis dependent
normalization. We have considered the cases with d=prime. The case d=2 need special treatment because, in this
case, +1=-1 [mode 2] (indeed 2=0 [mode 2]) hence the "center of mass" and "relative" coordinates are indistinguishable.
Here the operator vantage point may be used to interpret the known results \cite{schwinger}. The
operators vantage point involves the following: given two systems $\alpha,\beta$. Consider two non commuting
operators pertaining to $\alpha: \;A,A'$ and correspondingly two non-commuting operators B and B' that belong to
$\beta$. our scheme was to construct a common eigenfunction for AB and A'B' with (which we assume) $[AB,A'B']=0$.
This common eigenfunctions are maximally entangled. This is trivially accomplished: e.g. consider ($\alpha,\beta\rightarrow1,2$
$\sigma_{x1}\sigma_{x2}\;\;with\; \sigma_{z1}\sigma_{z2}$, and $\sigma_{x1}\sigma_{x2}\;\;with\;\sigma_{1}\sigma_{y2}$.
Their common eigenfunctions are the well known Bell states \cite{sam}.
\section{Concluding Remarks}
An association of maximally entangled states for two particles, each of
dimensionality d, with mutually unbiased bases (MUB) of d dimensional Hilbert space
inclusive of the continuous ($d\rightarrow \infty$) cases were established. The
analysis is based on the alternative forms for the two particle state: product of
computational based states, and a product of the state given in terms of collective
coordinates (dubbed center of mass and relative). A formalism allowing such an
alternative accounting for the states was developed for d a prime ($\ne 2$) which
applies the finite, d ($\ne2$), dimensional cases where the maximally allowed MUB
(d+1) is known to be available. Based on the alternative ways of writing the two
particle states we defined and demonstrated the use of mutually unbiased collective
bases (MUCB). The latter is generated by noting that replacing one of the states in
the collective coordinates product state with any of its MUCB states realizes a
maximal entangled state. Such state is, by construction, made of eigenfunctions of
commuting pairs of two particles operators with the single particle operators in the
different pair non commuting. Thus we shown that maximally entangled states both in
the continuum and some finite dimension Hilbert spaces may be viewed as product
states in collective variables and have demonstrated the intimate connection between
entanglement and operator non commutativity (i.e. the uncertainty principle).
\section* {Appendix A: Maximally Entangled State}
We prove here that the state $|\xi\rangle|\eta,\frac{\pi}{2}\rangle$ is a maximally
entangled state. (Note $|\eta,\theta+\frac{\pi}{2}\rangle =|\mu,\theta\rangle$).
This can be seen directly by calculating the $x$ representation of the state and
noting that it is of the same form of the EPR state, i.e. its Schmidt decomposition
contains all the states paired with coefficients of equal magnitude
\cite{peres,shim}:
\begin{eqnarray}
|\xi\rangle|\mu,\theta\rangle&=&\int dx_1dx_2|x_1,x_2\rangle \langle
x_1,x_2|\int d\eta d\bar{\eta}\;|\xi,\eta\rangle\langle\eta|\bar{\eta},\theta\rangle
d\bar{\eta}\langle\bar{\eta},\theta|\mu,\theta\rangle \nonumber \\
&=&\frac{\sqrt2}{2\pi\cos\theta}e^{\frac{i\mu}{2\cos
\theta}\big(2\xi-\mu \sin\theta\big)}
\int dxe^{\frac{\sqrt2 i x\mu}{cos\theta}}
|x\rangle|x-\sqrt2 \xi\rangle.
\end{eqnarray}
This is a maximally entangled state for $0\le\theta<\frac{\pi}{2}$. Now considering
the state for $\theta=\frac{\pi}{2}$ we have (c.f., Eq.(\ref{pi/2},\ref{reverse})
\begin{equation}
\langle x_1,x_2|\xi\rangle|\mu,\frac{\pi}{2}\rangle=\langle
x_1,x_2|\xi,-\eta\rangle=\delta\Big(x_1-\frac{\xi-\eta}{\sqrt2}\Big)\Big(x_2+\frac{\xi+
\eta}{\sqrt2}\Big),
\end{equation}
i.e. at $\theta=\frac{\pi}{2}$ the state is a product state. We interpret this to
mean that entanglement is not analytic \cite{amir}.\\
\section*{Appendix B: Maximal entanglement of the state $|\xi,\theta;\eta,\theta'\rangle$}
We now prove that the state $|\xi,\theta,\eta,\theta'\rangle$ is a maximally entangled
state for all
$\theta,\theta'$ (except for isolated points:$\theta=\theta'\pm \pi$, at these points
the state
is a product state). We note that
\begin{equation}\label{quad}
\hat{A}^{\dagger}\hat{A}+\hat{B}^{\dagger}\hat{B}=\hat{a}^{\dagger}_1\hat{a}_1+
\hat{a}^{\dagger}_2\hat{a}_2.
\end{equation}
Hence, cf. Eq. (\ref{product}), here the numerical subscripts refers to the
particles,
\begin{equation}
V_A^{\dagger}(\theta)V_B^{\dagger}(\theta)=U_1^{\dagger}(\theta)U_2^{\dagger}(\theta).
\end{equation}
Assuming, without loss of generality that $\theta' > \theta$ (when they are equal the
state is a product state), we may thus write ($\Delta=\theta'-\theta$),
\begin{equation}\label{maxent}
|\xi,\theta;\eta,\theta'\rangle=\int d\bar{\eta}\Big|x_1=\frac{\bar{\eta}+\xi}{\sqrt2},
\theta\big\rangle \Big|x_2=\frac{\bar{\eta}-\xi}{\sqrt2},\theta\big\rangle
\frac{1}{\sqrt{2\pi|sin\Delta|}}e^{-\frac{i}{2sin\Delta}\big[(\eta^2+\bar{\eta}^2)
cos\Delta -2\bar{\eta}\eta \big]}.
\end{equation}
Here the vectors ($|x_i\rangle$) are the single particle eigenvectors of
$U^{\dagger}(\theta)\hat{x}_iU(\theta)$.
Now our proof that the state $|\xi,\theta;\eta,\theta'\rangle$ is a maximally entangled
state is attained via "measuring" the position of the first particle (in the basis
labelled by $\theta$), i.e. calculating the projection $\langle x'_1,\theta|\xi,\theta;
\eta,\theta'\rangle$, and showing that the resultant state is the second particle in a
definite (up to a phase factor) one particle state, $|y_2,\bar{\theta}\rangle$ with $y_2$
linearly related to $x'_1$. Thus ($x'=x'_1$):
\begin{equation}
\langle x'_1|\xi,\theta;\eta,\theta'\rangle= \frac{1}{\sqrt{\pi|sin\Delta|}}
e^{\frac{i}{2sin\Delta}\big[2\xi\eta+(\xi^2+\eta^2)cos\Delta\big]}
e^{\frac{i}{sin\Delta}\big[(x'^2-\sqrt2 x'\xi)cos\Delta-\sqrt2 x'\eta\big]}|x'-
\sqrt2 \xi\rangle.
\end{equation}
QED\\
\section*{Appendix C: Angular labels for product states}
The proof that $|\xi,\theta\rangle|\eta,\theta'\rangle$ are
product states for $\theta=\theta'\;and\;\theta=\theta'\pm \pi$
utilizes the following preliminary observations:\\
a. $V_A^{\dagger}(\pm \pi)|\xi \rangle=|-\xi\rangle;\;V_B^{\dagger}(\pm
\pi)|\eta\rangle=|-\eta\rangle$ i.e. "evolution" by $\pm \pi$ may be viewed as
leaving the basis unchanged but "evolves" to a state whose eigenvalue is of opposite
sign. See Eq. (\ref{reverse}).\\
The states $|\xi \rangle|\pm \eta\rangle,\;|\pm \xi\rangle|\eta \rangle$ are product
states: e.g.
$$|\xi\rangle|-\eta\rangle=\int dx_1dx_2|x_1\rangle_1|x_2\rangle\langle
x_1|\langle x|\xi\rangle|-\eta\rangle=$$
$$\int
dx_1dx_2|x_1\rangle|x_2\rangle\delta\big(\xi-\frac{x_1-x_2}{\sqrt2}\big)\delta
\big(-\eta-\frac{x_1+x_2}{\sqrt2}\big)=\big|\frac{\xi-\eta}{\sqrt2}\big\rangle\big|-\frac{\eta+\xi}{\sqrt
2}\big\rangle.$$ QED.\\
These observations imply that, e.g.,
$$|\xi,\theta\rangle|\eta,\theta+\pi\rangle=V_A^{\dagger}(\theta)V_B^{\dagger}(\theta+\pi)|\xi\rangle|\eta\rangle=$$
$$U_1^{\dagger}(\theta)U_2^{\dagger}|\xi\rangle|-\eta\rangle=\big|\frac{\xi-\eta}{\sqrt2};\theta\big\rangle
\big|-\frac{\eta+\xi}{\sqrt2};\theta \big\rangle.$$ With similar results for $|\pm \xi\rangle
|\eta\rangle$. These are are product states each involves a distinct particle.\\
Acknowledgments: Informative comments by O. Kenneth and C. Bennett
are gratefully acknowledged.
| 2024-02-18T23:40:05.507Z | 2009-10-17T04:11:29.000Z | algebraic_stack_train_0000 | 1,303 | 4,816 |
|
proofpile-arXiv_065-6553 | \section{Introduction \label{sec:intro}}
It is by now well-established that neutrinos are massive and
mixed, and that these properties lead to the oscillations observed in
measurements of neutrinos produced in the
Sun~\cite{home2}--\cite{bor}, in
the atmosphere~\cite{SKatm}, by accelerators~\cite{minos,k2k}, and by
reactors~\cite{kam}. The mixing model predicts not only neutrino
oscillations in vacuum, but also the effects of matter on the
oscillation probabilities (the `MSW' effect)~\cite{wolf,msw}. To
date, the effects of matter have only been studied in the solar
sector, where the neutrinos' passage through the core of both the Sun
and the Earth can produce detectable effects. The model predicts
three observable consequences for solar neutrinos: a suppression of
the $\nu_e$ survival probability below the average vacuum value of
$1-\frac{1}{2}\sin^22\theta_{12}$ for high-energy ($^8$B) neutrinos, a
transition region between matter-dominated and vacuum-dominated
oscillations, and a regeneration of $\nu_e$s as the neutrinos pass
through the core of the Earth (the day/night effect). In addition to
improved precision in the extraction of the total flux of $^8$B
neutrinos from the Sun, an advantage of the low energy threshold
analysis (LETA) presented here is the enhanced ability to explore the
MSW-predicted transition region and, in addition, more stringent
testing of theories of non-standard interactions that affect the shape
and position of the predicted rise in survival
probability~\cite{solstatus}--\cite{nsiagain}.
We present in this article a joint analysis of the data from the first
two data acquisition phases of the Sudbury Neutrino Observatory (SNO),
down to an effective electron kinetic energy of $T_{\rm eff}=3.5$~MeV,
the lowest analysis energy threshold yet achieved for the extraction
of neutrino signals with the water Cherenkov technique. The previous
(higher threshold) analyses of the two data sets have been documented
extensively elsewhere~\cite{longd2o,nsp}, and so we focus here on the
improvements made to calibrations and analysis techniques to reduce
the threshold and increase the precision of the results.
We begin in Section~\ref{sec:detector} with an overview of the SNO
detector and physics processes, and provide an overview of the data
analysis in Section~\ref{sec:anal_overview}. In
Section~\ref{sec:dataset} we briefly describe the SNO Phase~I and
Phase~II data sets used here. Section~\ref{sec:montecarlo} describes
changes to the Monte Carlo detector model that provides the
distributions used to fit our data, and Section~\ref{sec:hitcal}
describes the improvements made to the hit-level calibrations of PMT
times and charges that allow us to eliminate some important
backgrounds.
Sections~\ref{sec:recon}-~\ref{sec:beta14} describe our methods for
determining observables like position and energy, and estimating their
systematic uncertainties. Section~\ref{sec:cuts} describes the cuts we
apply to our data set, while Section~\ref{sec:treff} discusses the
trigger efficiency and Section~\ref{sec:ncap} presents the neutron
capture efficiency and its systematic uncertainties. We provide a
detailed discussion of all background constraints and distributions in
Section~\ref{sec:backgrounds}.
Section~\ref{sec:sigex} describes our `signal extraction' fits to the
data sets to determine the neutrino fluxes, and
Section~\ref{sec:results} gives our results for the fluxes and mixing
parameters.
\section{The SNO Detector\label{sec:detector}}
SNO was an imaging Cherenkov detector using heavy water
($^2$H$_2$O, hereafter D$_2$O) as both the interaction and detection
medium~\cite{snonim}. SNO was located in Vale Inco's Creighton Mine,
at $46^{\circ} 28^{'} 30^{''}$ N latitude, $81^{\circ} 12^{'} 04^{''}$
W longitude. The detector was 1783~m below sea level with an
overburden of 5890 meters water equivalent, deep enough that the rate
of cosmic-ray muons passing through the entire active volume was just
3 per hour.
One thousand metric tons (tonnes) of D$_2$O was contained in a
12~m diameter transparent acrylic vessel (AV). Cherenkov light
produced by neutrino interactions and radioactive backgrounds was
detected by an array of 9456 Hamamatsu model R1408 20~cm
photomultiplier tubes (PMTs), supported by a stainless steel geodesic
sphere (the PMT support structure or PSUP). Each PMT was surrounded
by a light concentrator (a `reflector'), which increased the effective
photocathode coverage to nearly $55$\%. The channel discriminator
thresholds were set to 1/4 of a photoelectron of charge. Over seven
kilotonnes (7$\times 10^6$~kg) of H$_2$O shielded the D$_2$O from
external radioactive backgrounds: 1.7~kT between the AV and the PSUP,
and 5.7~kT between the PSUP and the surrounding rock. Extensive
purification systems were used to purify both the D$_2$O and the
H$_2$O. The H$_2$O outside the PSUP was viewed by 91 outward-facing
20~cm PMTs that were used to identify cosmic-ray muons. An additional
23 PMTs were arranged in a rectangular array and suspended in the
outer H$_2$O region to view the neck of the AV. They were used
primarily to reject events not associated with Cherenkov light
production, such as static discharges in the neck.
The detector was equipped with a versatile calibration-source
deployment system that could place radioactive and optical sources
over a large range of the $x$-$z$ and $y$-$z$ planes (where $z$ is the
central axis of the detector) within the D$_2$O volume. Deployed
sources included a diffuse multi-wavelength laser that was used to
measure PMT timing and optical parameters (the
`laserball')~\cite{laserball}, a $^{16}$N source that provided a
triggered sample of 6.13~MeV $\gamma$s~\cite{n16}, and a $^8$Li source
that delivered tagged $\beta$s with an endpoint near
14~MeV~\cite{li8}. In addition, 19.8~MeV $\gamma$s were provided by a
$^3{\rm H}(p,\gamma)^4{\rm He}$ (`pT') source~\cite{pt_nim} and
neutrons by a $^{252}$Cf source. Some of the sources were also
deployed on vertical lines in the H$_2$O between the AV and PSUP.
`Spikes' of radioactivity ($^{24}$Na and $^{222}$Rn) were added at
times to the light water and D$_2$O volumes to obtain additional
calibration data. Table~\ref{tbl:cal_sources} lists the primary
calibration sources used in this analysis. \begingroup \squeezetable
\begin{table*}[ht!]
\begin{center}
\begin{tabular}{lllcc}
\hline \hline Calibration source & Details & Calibration & Deployment
Phase & Ref. \\ \hline Pulsed nitrogen laser & 337, 369, 385, &
Optical \& & I \& II & \cite{laserball} \\ \qquad(`laserball') & 420,
505, 619~nm & \hspace{0.1in} timing calibration & & \\ \NS & 6.13~MeV
$\gamma$ rays & Energy \& reconstruction & I \& II & \cite{n16} \\
$^8$Li & $\beta$ spectrum & Energy \& reconstruction & I \& II &
\cite{li8} \\ $^{252}$Cf & neutrons & Neutron response & I \& II &
\cite{snonim} \\ Am-Be & neutrons & Neutron response & II only & \\
$^3$H$(p,\gamma)^4$He (`pT') & 19.8~MeV $\gamma$ rays & Energy
linearity & I only & \cite{pt_nim} \\ Encapsulated U, Th &
$\beta-\gamma$ & Backgrounds & I \& II & \cite{snonim} \\ Dissolved Rn
spike & $\beta-\gamma$ & Backgrounds & II only & \\ \textit{In-situ}
$^{24}$Na activation & $\beta-\gamma$ & Backgrounds & II only & \\
\hline \hline
\end{tabular}
\caption{\label{tbl:cal_sources} Primary calibration sources.}
\end{center}
\end{table*}
\endgroup
SNO detected neutrinos through three processes~\cite{herb}:
\begin{center}
\begin{tabular}{lcll}
$ \nu_x + e^-$ & $\rightarrow$ & $\nu_x + e^-$ & (ES)\\
$\nu_e + d$ & $\rightarrow$ & $p + p + e^-$\hspace{0.5in} & (CC)\\
$ \nu_x + d$ & $\rightarrow$ & $p + n + \nu_x'$ & (NC)\\ \\ \end{tabular}
\end{center}
For both the elastic scattering (ES) and charged current (CC)
reactions, the recoil electrons were detected directly through their
production of Cherenkov light. For the neutral current (NC) reaction,
the neutrons were detected via de-excitation $\gamma$ s following their
capture on another nucleus. In SNO Phase~I (the `D$_2$O phase'), the
detected neutrons captured predominantly on the deuterons in the
D$_2$O. Capture on deuterium releases a single 6.25~MeV $\gamma$ ray,
and it was the Cherenkov light of secondary Compton electrons or
$e^+e^-$ pairs that was detected. In Phase II (the `Salt phase'), 2
tonnes of NaCl were added to the D$_2$O, and the neutrons captured
predominantly on $^{35}$Cl nuclei, which have a much larger neutron
capture cross section than deuterium nuclei, resulting in a higher
neutron detection efficiency. Capture on chlorine also releases more
energy (8.6~MeV) and yields multiple $\gamma$s, which aids in
identifying neutron events.
The primary measurements of SNO are the rates of the three neutrino
signals, the energy spectra of the electrons from the CC and ES
reactions, and any asymmetry in the day and night interaction rates
for each reaction. Within the Phase~I and II data sets, we cannot
separate the neutrino signals on an event-by-event basis from each
other or from backgrounds arising from radioactivity in the detector
materials. Instead, we `extracted' the signals and backgrounds
statistically by using the fact that they are distributed differently
in four observables: effective kinetic energy ($T_{\rm eff}$), which
is the estimated energy assuming the event consisted of a single
electron, cube of the reconstructed radial position of the event
($R^3$), reconstructed direction of the event relative to the
direction of a neutrino arriving from the Sun ($\cos\theta_{\odot}$ ), and a measure of
event `isotropy' ($\beta_{14}$), which quantifies the spatial
distribution of PMT hits in a given event (Sec.~\ref{sec:beta14}).
Low values of $\beta_{14}$ indicate a highly isotropic distribution.
Figure~\ref{fig:pdfsnus} shows the one-dimensional projections of the
distributions of these observables for the three neutrino signals,
showing CC and ES in Phase~II and NC for both data sets. The Phase~II
distributions are normalized to integrate to 1 except in
Fig.~\ref{fig:pdfsnus}(c), in which the CC and NC distributions are
scaled by a factor of 10 relative to ES for the sake of clarity. The
Phase~I NC distributions are scaled by the ratio of events in the two
phases, to illustrate the increase in Phase~II. In the figure, and
throughout the rest of this article, we measure radial positions in
units of AV radii, so that $R^3 \equiv (R_{\rm fit}/R_{AV})^3$.
\begin{figure}
\begin{center}
\includegraphics[width=0.42\textwidth]{signal_e.eps}
\includegraphics[width=0.42\textwidth]{signal_r.eps}
\includegraphics[width=0.42\textwidth]{signal_c.eps}
\includegraphics[width=0.42\textwidth]{signal_b.eps}
\caption{(Color online) The Monte Carlo-generated distributions of (a)
energy ($T_{\rm eff}$), (b) radius cubed ($R^3$), (c) direction
($\cos\theta_{\odot}$), and (d) isotropy ($\beta_{14}$) for signal events. The same
simulation was used to build multi-dimensional PDFs to fit the
data. In calculating $R^3$, the radius $R$ is first normalized to the
600~cm radius of the AV. The CC and NC $\cos\theta_{\odot}$ distributions are scaled
by a factor of 10 for clarity against the ES
peak. \label{fig:pdfsnus}}
\end{center}
\end{figure}
Figure~\ref{fig:pdfsbkds} shows the same distributions for some of the
detector backgrounds, namely `internal' $^{214}$Bi and $^{208}$Tl
(within the D$_2$O volume) and `AV' $^{208}$Tl (generated within the
bulk acrylic of the vessel walls). While some of the $^{214}$Bi nuclei
came from decays of intrinsic $^{238}$U, the most likely source of
$^{214}$Bi was from decays of $^{222}$Rn entering the detector from
mine air. The $^{208}$Tl nuclei came largely from decays of intrinsic
$^{232}$Th. Near the $T_{\rm eff}=3.5$~MeV threshold the dominant
signal was from events originating from radioactive decays in the
PMTs. These events could not be generated with sufficient precision
using the simulation, and so were treated separately from other event
types, as described in Sec.~\ref{s:pmtpdf}. There were many other
backgrounds; these are described in Sec.~\ref{sec:backgrounds}.
\begin{figure}
\begin{center}
\includegraphics[width=0.42\textwidth]{bkg_e.eps}
\includegraphics[width=0.42\textwidth]{bkg_r.eps}
\includegraphics[width=0.42\textwidth]{bkg_c.eps}
\includegraphics[width=0.42\textwidth]{bkg_b.eps}
\caption{(Color online) The Monte Carlo-generated distributions of (a)
energy ($T_{\rm eff}$) on a log scale, (b) radius cubed ($R^3$), (c)
direction ($\cos\theta_{\odot}$), and (d) isotropy ($\beta_{14}$) for background
events. The same simulation was used to build multi-dimensional PDFs
to fit the background events. The backgrounds shown are internal
$^{214}$Bi, internal $^{208}$Tl, and AV
$^{208}$Tl. \label{fig:pdfsbkds}}
\end{center}
\end{figure}
The energy spectra provide a powerful method for separating different
event types. The CC and ES spectra depend on the shape of the
incident neutrino spectrum. We treated the CC and ES spectra in two
different ways: in one fit we made no model assumptions about the
underlying spectral shape, allowing the CC and ES spectra to vary in
the fit, and in a second fit we assumed that the underlying incident
neutrino spectrum could be modeled as a smoothly distorted $^8$B
spectrum. The shapes of NC and background spectra do not depend on
neutrino energy and so were fixed in the fit, to within the systematic
uncertainties derived later. Decays of $^{214}$Bi and $^{208}$Tl in
the detector both led to $\gamma$ rays above the deuteron binding
energy of 2.2~MeV, which created higher energy events when the
photodisintegration neutron was subsequently captured on either
deuterium (Phase~I) or predominantly $^{35}$Cl (Phase~II). A
significant fraction of $^{214}$Bi decays produce a 3.27~MeV-endpoint
$\beta$. These background events are therefore characterized by
steeply falling energy spectra with a photodisintegration tail, as
shown in Fig.~\ref{fig:pdfsbkds}(a).
CC and ES events produced single electrons and, hence, the observed
light from these events was fairly anisotropic, yielding a
correspondingly high value for the isotropy parameter, $\beta_{14}$. The
$\beta_{14}$ distributions show small differences due to the different energy
spectra of the two event types, which affects $\beta_{14}$ through the known
correlation between energy and isotropy of an event. The isotropy of
Phase~I NC events looks similar to that of CC and ES events, because
the $\gamma$ ray tended to produce light dominated by that from one
Compton electron. By contrast, the isotropy distribution of Phase~II
NC events is peaked noticeably lower because neutron capture on
$^{35}$Cl atoms nearly always resulted in multiple $\gamma$ s, which could
each scatter an electron and, hence, produce a more isotropic PMT hit
pattern. Therefore, $\beta_{14}$ provides a sensitive method for separation
of electron-like events from neutron capture events in this phase,
without requiring a constraint on the shapes of the CC and ES energy
spectra, thus providing an oscillation-model-independent measurement
of the flux of solar neutrinos. The isotropy distributions for
$^{214}$Bi events and $^{208}$Tl events inside the heavy water are
noticeably different because, above the $T_{\rm eff}=$ 3.5~MeV
threshold, Cherenkov light from $^{214}$Bi events was dominated by
that from the ground state $\beta$ branch while that from $^{208}$Tl
events was from a $\beta$ and at least one additional Compton
electron. The difference allowed these events to be separated in our
fit, as was done in previous SNO {\it in-situ} estimates of detector
radioactivity~\cite{longd2o,nsp}.
The $\cos\theta_{\odot}$ distribution is a powerful tool for distinguishing ES events
since the scattering of $\nu_e$ from the Sun resulted in electron
events whose direction is strongly peaked away from the Sun's
location. The direction of CC events displays a weaker correlation of
$\sim (1 - \frac{1}{3}$$\cos\theta_{\odot}$) relative to the direction of the Sun.
The NC distribution is flat since the $\gamma$ s generated by neutron
capture carried no information about the incident neutrino direction.
Background events had no correlations with the Sun's location and,
thus, also exhibit a flat distribution, as shown in
Fig.~\ref{fig:pdfsbkds}(c).
The radial position of events within the detector yields a weak
separation between the three neutrino interaction types, but a much
more powerful level of discrimination from external background events.
CC and ES events occurred uniformly within the detector and hence have
relatively flat distributions. NC events occurred uniformly, but
neutrons produced near the edge of the volume were more likely to
escape into the AV and H$_2$O\xspace regions, where the cross section for
neutron capture was very high due to the hydrogen content. Neutron
capture on hydrogen produced 2.2~MeV $\gamma$ s, below the analysis
threshold and thus less likely to be detected. Therefore, the radial
profile of NC events falls off at the edge of the volume. This effect
is more noticeable in Phase~I, since the neutron capture efficiency on
deuterium is lower than on $^{35}$Cl and, hence, the neutron mean-free
path was longer in Phase~I than in Phase~II.
\section{Analysis Overview \label{sec:anal_overview}}
The `LETA' analysis differs from previous SNO analyses in the joint
fit of two phases of data, the much lower energy threshold, (which
both result in increased statistics) and significantly improved
systematic uncertainties.
The neutrino signal rates were determined by creating probability
density functions (PDFs) from distributions like those in
Figs.~\ref{fig:pdfsnus} and~\ref{fig:pdfsbkds} and performing an
extended maximum likelihood fit to the data. The CC and ES spectra
were determined by either allowing the flux to vary in discrete energy
intervals (an `unconstrained fit') or by directly parameterizing the
$\nu_e$ survival probability with a model and fitting for the
parameters of the model.
There were three major challenges in this analysis: reduction
of backgrounds, creation of accurate PDFs (including determination of
systematic uncertainties on the PDF shapes), and extracting the
neutrino signals, energy spectra, and survival probabilities from the
low-threshold fits.
Three new techniques were applied to reduce backgrounds compared to
previous SNO analyses~\cite{longd2o,nsp}. First, we made substantial
improvements to energy reconstruction by developing a new algorithm
that included scattered and reflected light in energy estimation. The
inclusion of `late light' narrowed the detector's effective energy
resolution by roughly 6\%, substantially reducing the leakage of
low-energy background events into the analysis data set by $\sim$60\%.
Second, we developed a suite of event-quality cuts using PMT charge
and time information to reject external background events whose
reconstructed positions were within the fiducial volume. Third, we
removed known periods of high radon infiltration that occurred during
early SNO runs and when pumps failed in the water purification system.
Creation of the PDFs was done primarily with a Monte Carlo
(MC) simulation that included a complete model of physics processes
and a detailed description of the detector. We made substantial
improvements to the Monte Carlo model since our previous publications,
and we describe these improvements in detail in
Sec.~\ref{sec:montecarlo}.
Our general approach to estimating systematic uncertainties on the
Monte Carlo-simulated PDF shapes was based on a comparison of
calibration source data to Monte Carlo simulation, as in previous SNO
analyses. In cases where the difference between calibration data and
simulation was inconsistent with zero, and we had evidence that the
difference was not caused by a mis-modeling of the calibration source,
we corrected the PDF shapes to better match the data. For example, we
applied corrections to both the energy (Sec.~\ref{sec:energy}) and
isotropy (Sec.~\ref{sec:beta14}) of simulated events. Any residual
difference was used as an estimate of the uncertainty on the Monte
Carlo predictions. Corrections were verified with multiple
calibration sources, such as the distributed `spike' sources as well
as encapsulated sources, and additional uncertainties were included to
account for any differences observed between the various measurements.
Uncertainties were also included to take into account possible
correlations of systematic effects with the observable parameters.
So, for example, we allowed for an energy dependence in the fiducial
volume uncertainty, and the uncertainty on the energy scale was
evaluated in a volume weighted fashion to take into account possible
variations across the detector.
The final extraction of signal events from the data was a
multi-dimensional, many-parameter fit. Although marginal
distributions like those shown in Figs.~\ref{fig:pdfsnus}
and~\ref{fig:pdfsbkds} could be used as PDFs, in practice there are
non-trivial correlations between the observables that can lead to
biases in the fit results. We therefore used three-dimensional PDFs
for most of the backgrounds and for the NC signal, factoring out the
dimension in $\cos\theta_{\odot}$, which is flat for these events. The CC and ES
events had PDFs whose dimensionality depended on the type of fit. For
the unconstrained fit, we used three-dimensional PDFs in
$(R^3,\beta_{14},\cos\theta_{\odot})$, factoring out the $T_{\rm eff}$
dimension because the fit was done in discrete intervals, within which
the $T_{\rm eff}$ spectrum was treated as flat. For the direct fit for
the $\nu_e$ survival probability, we used fully four-dimensional PDFs
for the CC and ES signals.
The parameters of the `signal extraction' fits were the
amplitudes of the signals and backgrounds, as well as several
parameters that characterized the dominant systematic uncertainties.
\textit{A priori} information on backgrounds and systematic
uncertainties was included. To verify the results, we pursued two
independent approaches, one using binned and the other unbinned
PDFs. We describe both approaches in Sec.~\ref{sec:sigex}.
We developed and tuned all cuts using simulated events and calibration
source data. Signal extraction algorithms were developed on Monte
Carlo `fake' data sets, and tested on a 1/3-livetime sample of data.
Once developed, no changes were made to the analysis for the final fit
at our analysis threshold on the full data set.
In treating systematic uncertainties on the PDF shapes, we
grouped the backgrounds and signals into three classes:
`electron-like' events, which include true single-electron events as
well as those initiated via Compton scattering from a single $\gamma$;
neutron capture events on chlorine that produced a cascade of many
$\gamma$s with a complex branching table; and PMT $\beta$-$\gamma$
decays, which occurred in the glass or envelope of the PMT assembly and
support structure. The PMT $\beta$-$\gamma$ events were treated
separately from other $\beta$-$\gamma$ events because they were heavily
influenced by local optical effects near the PMT concentrators and
support structure, and were therefore difficult to model or simulate.
The analysis results presented here have substantially reduced
uncertainties on the neutrino interaction rates, particularly for
SNO's signature neutral current measurement. Although there are many
sources of improvement, the major causes are:
\begin{itemize}
\item The lower energy threshold increased the statistics of the CC
and ES events by roughly 30\%, and of the NC events by $\sim 70$\%;
\item In a joint fit, the difference in neutron detection sensitivity
in the two phases provided improved neutron/electron separation,
beyond that due to differences in the isotropy distributions;
\item Significant background reduction due to improved energy
resolution, removal of high radioactivity periods, and new event
quality cuts;
\item Use of calibration data to correct the PDF shapes.
\end{itemize}
\section{Data Sets \label{sec:dataset}}
The Phase~I and Phase~II data sets used here have been described
in detail elsewhere~\cite{longd2o,nsp}. We note only a few critical
details.
SNO Phase~I ran from November 2, 1999 to May 31, 2001. Periods
of high radon in Phase~I were removed for this analysis based on the
event rate. To minimize bias, we used Chauvenet's criterion to
eliminate runs in which the probability of a rate fluctuation as high
or higher than observed was smaller than 1/(2$N$), where $N$ is the
total number of runs in our data set ($\sim500$). With this cut, we
reduced the previously published 306.4 live days to 277.4. Most of
the runs removed were in the first two months of the phase, or during
a period in which a radon degassing pump was known to have failed.
This $\sim$9\% reduction in livetime removed roughly 50\% of all
$^{214}$Bi events from the Phase~I data set. SNO Phase~II ran from
July 2001 to August 2003, for a total of 391.4 live days.
SNO had several trigger streams, but the primary trigger for
physics data required a coincidence of $N_{\rm coinc}$ or more PMT
hits within a 93~ns window. From the start of Phase~I until December
20, 2000, $N_{\rm coinc}$ was set to 18; it was subsequently lowered
to 16 PMT hits. This hardware threshold is substantially below the
analysis threshold, and no efficiency correction was required, even at
3.5~MeV (see Sec.~\ref{sec:treff}).
\section{Monte Carlo Simulation \label{sec:montecarlo}}
SNO's Monte Carlo simulation played a greater role here than
in previous publications, as we used it to provide PDFs not only for
the neutrino signals but also for nearly all backgrounds. The
simulation included a detailed model of the physics of neutrino
interactions and of decays of radioactive nuclei within the detector.
Propagation of secondary particles was done using the EGS4 shower
code~\cite{egs}, with the exception of neutrons, for which the
MCNP~\cite{mcnp} neutron transport code developed at Los Alamos
National Laboratory was used. Propagation of optical photons in the
detector media used wavelength-dependent attenuations of D$_2$O\xspace and H$_2$O\xspace
that were measured {\it in situ} with laserball calibrations, and
acrylic attenuations measured {\it ex situ}. The simulation included
a detailed model of the detector geometry, including the position and
orientation of the PSUP and the PMTs, the position and thickness of
the AV (including support plates and ropes), the size and position of
the AV `neck', and a full three-dimensional model of the PMTs and
their associated light concentrators. SNO's data acquisition system
was also simulated, including the time and charge response of the PMTs
and electronics. Details of the simulation have been presented
in~\cite{longd2o,nsp}; we describe here the extensive upgrades and
changes that were made for this analysis.
Ultimately, SNO's ability to produce accurate PDFs depends on
the ability of the Monte Carlo simulation to reproduce the low-level
characteristics of the data, such as the distributions of PMT hit
times and charges. We therefore improved our timing model to more
correctly simulate the `late pulsing' phenomenon seen in the Hamamatsu
R1408s used by SNO. We also added a complete model of the PMT single
photoelectron charge distribution that includes PMT-to-PMT variations
in gain. Gain measurements were made monthly with the laserball source
at the center of the detector, and the simulation uses different
charge distributions for each PMT according to these gain
measurements.
Addition of the more complete charge spectrum also allowed us
to add a detailed model of each electronics channel's discriminator.
On average, the threshold voltage was near 1/4 of that for a single
photoelectron, but there were large variations among channels because
of variations in noise level. Over time, the channel thresholds were
adjusted as PMTs became quieter or noisier; these settings were used
in the simulation for each run. The discriminator model also provided
for channel-by-channel efficiencies to be included, thus improving
simulation of the detector's energy resolution.
We made several important changes to the optical model as
well. The first was a calibration of PMT efficiencies, which
accounted for tube-to-tube variations in the response of the
photomultipliers and light concentrators. These efficiencies are
distinct from the electronics discriminator efficiency described
above, as they depended on the PMT quantum efficiency, local magnetic
field, and individual concentrator reflectivity, while the
discriminator efficiency depended upon PMT channel gain and threshold
setting. The PMT efficiencies were measured using the laserball, as
part of the detector's full optical calibrations, which were performed
once in Phase~I and three times in Phase~II. The efficiencies in the
simulation were varied over time accordingly.
The light concentrators themselves are known to have degraded
over time and the three-dimensional model of the collection efficiency
of the PMT-concentrator assembly used in previous analyses had to be
modified. We developed for this analysis a phenomenological model of
the effects of the degradation to the concentrator efficiency. Rather
than modifying the concentrator model itself, we altered the PMT
response as a function of the position at which the photon struck the
photocathode. In effect, this produced a variation in the response of
the concentrator and PMT assembly as a function of photon incidence
angle. A simultaneous fit was performed to laserball calibration data
at six wavelengths, with each wavelength data set weighted by the
probability that a photon of that wavelength caused a successful PMT
hit. The extraction of optical calibration data was extended to a
larger radius than in previous analyses, in order to extract the PMT
response at wider angles. {\it Ex-situ} data were also included in
the fit to model the response at $>\,$40$^{\circ}$ for events in the
light water region. Time dependence was accommodated by performing
separate fits in time intervals defined by the available calibration
data: one interval in Phase~I and three in Phase~II. This change
improved the modeling of any position-dependence of the energy
response but did not affect the overall energy scale, which was
calibrated using the $^{16}$N source. We also made a global change to the
light concentrator reflectivity based on measurements with the
$^{16}$N source. Figure~\ref{fig:3dpmt} compares the new model of the
PMT-concentrator response as a function of incidence angle to that
used in earlier publications.
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{d2o365nm.eps}
\caption{(Color online) Comparison of new model of photomultiplier
angular response to data and the old model for Phase~I at
365$\,$nm.\label{fig:3dpmt}}
\end{center}
\end{figure}
The laserball calibration data were used as a direct input to the
energy reconstruction algorithms, providing media attenuations, PMT
angular response measurements, and PMT efficiencies. For wavelengths
outside the range in which data were taken, the Monte Carlo simulation
was used to predict the response.
\section{Hit-level Calibrations \label{sec:hitcal}}
The accuracy with which we know the charge and time of each PMT hit
directly affects event position and energy uncertainties. To
calibrate the digitized charges and time, we performed pulser
measurements twice weekly, measuring pedestals for the charges and the
mapping of ADC counts to nanoseconds for the times. The global
channel-to-channel time offsets and the calibration of the pulse
risetime corrections were done with the laserball source deployed near
the center of the detector. These calibrations have been described
elsewhere~\cite{longd2o}.
Four significant changes were made to the calibration of PMT
charges and times. The first was the removal of hits associated with
channel-to-channel crosstalk. Crosstalk hits in the SNO electronics
were characterized by having low charges, slightly late times, and
being adjacent to a channel with very high charge.
The second change was a correction to the deployed positions of the
laserball source to ensure that the time calibrations were consistent
between calibration runs. Prior to this correction, the global PMT
offsets had been sensitive to the difference between the nominal and
true position of the source, which varied from calibration run to
calibration run. The new correction reduced the time-variations of
the PMT calibrations noticeably, but there was a residual 5~cm offset
in the reconstructed $z$-position of events, for which a correction
was applied to all data.
There were a variety of ways in which PMTs could fail, and we
therefore applied stringent criteria for a PMT to be included in
position and energy reconstruction. The criteria were applied to both
calibration and `neutrino' data sets as well as to run simulations.
The last improvement was a calibration to correct for a
rate-dependence in the electronics charge pedestals. Crosstalk hits
were used to monitor the pedestal drift and a time-varying correction
was applied. With this correction we could use the PMT charge
measurements to remove certain types of background events, and to
substantially reduce systematic uncertainties on the energy scale
associated with variations in PMT gain, which affected the photon
detection probability.
Figure~\ref{fig:qt} shows the distributions of PMT
time-of-flight residuals and measured photoelectron charges for a
$^{16}$N calibration run at the center of the detector compared to a
simulation of that run. The simulation includes the upgrades
discussed in Sec.~\ref{sec:montecarlo}. The time residuals show
excellent agreement in the dominant prompt peak centered near $\Delta
t=0$~ns, as well as good agreement for the much smaller pre-pulsing
($\Delta t\sim -20$~ns) and late-pulsing ($\Delta t \sim 15$~ns and
$\Delta t \sim 35$~ns) features. For the charge distribution, the
agreement is also excellent above 10 ADC counts or so, which
corresponds to the majority of the charges used in the analysis.
Thus, we are confident that the simulation models the behavior of
reconstruction and cuts with sufficient accuracy.
\begin{figure}
\begin{center}
\includegraphics[height=0.2\textheight]{tres_datamc_comp.eps}
\includegraphics[height=0.2\textheight]{qhs_datamc_comp.eps}
\caption{Comparison of $^{16}$N simulation to data for (a) PMT hit
time-of-flight residuals and (b) photoelectron charge
spectra.\label{fig:qt}}
\end{center}
\end{figure}
\section{Position and Direction Reconstruction\label{sec:recon}}
The primary reconstruction algorithm used in this analysis was
the same as in previous Phase~I publications. We used reconstructed
event position and direction to produce the PDFs shown in
Figs.~\ref{fig:pdfsnus} and~\ref{fig:pdfsbkds}, and to reject
background events originating outside the AV. Knowledge of event
position and direction was also used in the estimation of event energy
(see Sec.~\ref{sec:energy}). Below we outline the reconstruction
method, and then discuss the uncertainties in our knowledge of event
positions and directions.
\subsection{Reconstruction Algorithm}
The vertex and direction reconstruction algorithm fitted event position,
time, and direction simultaneously using the hit times and locations
of the hit PMTs. These values were found by maximizing the
log-likelihood function,
\begin{equation}
\log {\cal L}(\vec{r}_e,\vec{v}_e,t_e) = \sum_{i=1}^{N_{\rm hit}} \log
{\cal P}(t^{\rm res}_i,\vec{r}_i;\vec{r}_e,\vec{v}_e,t_e),
\end{equation}
with respect to the reconstructed position ($\vec{r}_e$), direction
($\vec{v}_e$), and time ($t_e$) of the event. ${\cal
P}(t^{\rm res}_i,\vec{r}_i;\vec{r}_e,\vec{v}_e,t_e)$ is the
probability of observing a hit in PMT $i$ (located at $\vec{r}_i$)
with PMT time-of-flight residual $t^{\rm res}_i$
(Eq.~\eqref{eqn:ftp-tresid}), given a single Cherenkov electron track
occurring at time $t_e$ and position $\vec{r}_e$, with direction
$\vec{v}_e$. The sum is over all good PMTs for which a hit was
recorded. The PMT time-of-flight residuals relative to the
hypothesized fit vertex position are given by:
\begin{equation}
\label{eqn:ftp-tresid}
t^{\rm res}_i = t_{i} - t_{\rm e} - |\vec{r}_{\rm e} -
\vec{r}_{i}|\frac{{n_{\rm eff}}}{c},
\end{equation}
where $t_i$ is the hit time of the $i$th PMT. The photons are assumed
to travel at a group velocity $\frac{{c}}{n_{\rm eff}}$, with ${n_{\rm
eff}}$ an effective index of refraction averaged over the detector
media.
The probability ${\cal P}$ contains two terms to allow for the
possibilities that the detected photon arrived either directly from
the event vertex (${\cal P}_{\rm direct}$) or resulted from
reflections, scattering, or random PMT noise (${\cal P}_{\rm other}$).
These two probabilities were weighted based on data collected in the
laserball calibration runs.
The azimuthal symmetry of Cherenkov light about the event
direction dilutes the precision of reconstruction along the event
direction. Thus, photons that scattered out of the Cherenkov cone
tended to systematically drive the reconstructed event vertex along
the fitted event direction. After initial estimates of position and
direction were obtained, a correction was applied to shift the vertex
back along the direction of the event so as to compensate for this
systematic drive. The correction varied with the distance of the
event from the PSUP as measured along its fitted direction.
The reconstruction algorithm returned a quality-of-fit statistic
relative to the hypothesis that the event was a correctly
reconstructed single electron. This statistic was used later in the
analysis to remove backgrounds and reduce tails on the reconstruction
resolution. Details of the reconstruction algorithm can be found
in~\cite{longd2o}.
\subsection{Uncertainties on Position and Direction}
Many effects that could produce systematic shifts in reconstructed
positions were modeled in the simulation. Data from calibration
sources deployed within the detector were compared to Monte Carlo
predictions, and the differences were used to quantify the uncertainty
on the simulation. The observed differences were not deemed significant
enough to warrant applying a correction to the Monte Carlo-generated
positions, and so the full size of the difference was taken as the
magnitude of the uncertainty. The differences between data and Monte
Carlo events were parameterized as four types:
\begin{itemize}
\item vertex offset: a constant offset between an event's true and
reconstructed positions;
\item vertex scale: a position-dependent shift of events either inward
or outward;
\item vertex resolution: the width of the distribution of
reconstructed event positions;
\item angular resolution: the width of the distribution of
reconstructed event directions relative to the initial electron
direction.
\end{itemize}
These uncertainties can have an impact upon the flux and spectral
measurements in two ways: by altering the prediction for the number of
events reconstructing inside the fiducial volume and by affecting the
shape of the PDFs used in the signal extraction.
Reconstruction uncertainties were determined primarily from $^{16}$N
source data. In previous analyses \cite{longd2o}, the volume density
of Compton-scattered electrons relative to the source location was
modeled with the analytic function $S(r) \sim
e^{-r/\lambda}/r^2$. Model improvements for this analysis
allowed us to extract this distribution for each $^{16}$N source run from
the Monte Carlo simulation of that run, and take into account the
exact source geometry, effect of data selection criteria on the
distribution, and any time-dependent detector effects.
The distribution of electron positions was convolved with a Gaussian,
representing the detector response, and the resulting function was fit
to the one-dimensional reconstructed position distribution along each
axis, allowing both the mean and standard deviation of the Gaussian to
vary for each orthogonal axis independently. An example of such a fit
is shown in Figure~\ref{f:n16fit}. This fit was done separately for
the $^{16}$N data and the Monte Carlo simulation of each $^{16}$N run. The
difference in the Gaussian means gives the vertex offset for that run
and the square root of the difference in the variances represents the
difference in vertex resolution.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.48\textwidth]{n16fit.eps}
\caption{\label{f:n16fit}Fit of the $^{16}$N Compton-electron position
distribution convolved with a Gaussian to the reconstructed $z$
position of $^{16}$N data events for a typical central run in Phase~II.}
\end{center}
\end{figure}
\subsubsection{Vertex Offset}
\label{s:voff}
Analysis of the differences between the reconstructed and true event
vertex positions at the center of the detector, or `central vertex
offset', was done using runs with the source within 25$\,$cm of the
center, where the source position is known most accurately. This
avoids confusion with any position-dependent effects, which are taken
into account in the scale measurement (Sec.~\ref{s:vscale}). A
data$-$MC offset was determined for each run, along each detector
axis. The offsets from the runs were combined in weighted averages
along each axis, with the uncertainty for each run offset increased to
include the uncertainty in source position. Although the results
showed a small mean offset along each axis, the magnitude was
comparable to the source position uncertainty and therefore we did not
correct the PDFs based on this difference. Instead, asymmetric
double-sided uncertainties were formulated by using the uncertainty in
the weighted average, and increasing it by the magnitude of the
weighted average itself on the side on which the offset was measured.
The effects of these uncertainties were determined during signal
extraction by shifting the position of each event by the positive and
negative values of the uncertainty along each axis independently, and
recomputing the PDFs. The values of the uncertainties are given in
Table~\ref{t:recunc} in Sec.~\ref{s:recsum}.
\subsubsection{Vertex Scale}
\label{s:vscale}
A potential position-dependent bias in the reconstructed position
that can be represented as being proportional to the distance of the
event from the center of the detector is defined as a vertex scale
systematic.
In previous SNO analyses, uncertainty in the position of the
calibration source was a major contribution to reconstruction
uncertainties, especially away from the $z$-axis of the detector,
where sources were deployed in a less accurate mode. A new method was
derived for this analysis to reduce sensitivity to this effect.
Although the absolute source position was known only to $\sim2\,$cm on
the $z$-axis and $\sim5\,$cm away from this axis, changes in position
once the source was deployed were known with much greater precision.
By comparing the result from each $^{16}$N run to a run at the center of
the detector from the same deployment scan, possible offsets between
the recorded and true source position were removed, thus reducing
source position uncertainties. In addition, any constant offset in
vertex position, such as that measured in Sec.~\ref{s:voff}, was
inherently removed by this method, thus deconvolving the measurement
of scale from offset. This method allowed data from different scans
to be combined, providing a more representative sampling across the
time span of the data set and improving the statistics of the
measurement.
Vertex scale was investigated by using the data$-$MC
reconstructed position offset along each detector axis,
as shown in Figure~\ref{f:recxyz}, using only runs within 50~cm of
that axis to minimize correlations among the three. The runs were
grouped into 50$\,$cm bins along each axis by source position, and
the weighted average of the offsets for the runs within each bin was
found. A linear function was fit to the bins as a function of
position along that axis. Since the method was designed to remove any
central vertex offset, the function was defined to be zero at the
center of the detector.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.48\textwidth]{xyz.eps}
\caption{\label{f:recxyz}(Color online) Vertex offset along the three
detector axes as a function of position within the detector.}
\end{center}
\end{figure}
The slope from the fit provides the scaling required to bring
the simulation into agreement with data. We did not apply a
correction, but instead treated it as an asymmetric uncertainty on the
reconstructed positions of all events. The effects observed along the
$x$ and $y$ axes were of a very similar magnitude and, therefore, were
assumed to be due to a radial effect, possibly caused either by small
errors in the modeling of the wavelength-dependent refractive index or
residual PMT timing calibration errors. Conservatively, the larger of
the $x$ and $y$ values was used to bound this effect. The resulting
uncertainty was applied in our signal extraction fits by multiplying
the $x$, $y$ and $z$ position of each event in our PDFs by the value
of the scale uncertainty, thus shifting events either inwards or
outwards in the detector, and taking the difference from the nominal
fit. Since the effect observed along the $z$-axis was larger, the
difference of this from the radial effect was treated as an additional
uncertainty, applied only to the $z$ position of events. The values
used for each uncertainty are listed in Table~\ref{t:recunc} in
Sec.~\ref{s:recsum}.
Since only runs within 50$\,$cm of each Cartesian axis were used to
determine vertex scale, diagonal axis runs could be used for
verification. The method described above measured the scale for each
Cartesian axis independently. The values obtained for the $y$ and $z$
axes, for example, could therefore be combined to predict the scaling
for runs on the $y$-$z$ diagonal. The prediction was shown to agree
very well with the data, as illustrated in Figure~\ref{f:yztest},
demonstrating the robustness of the analysis and its applicability to
events everywhere in the fiducial volume.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.48\textwidth]{yz.eps}
\caption{\label{f:yztest}(Color online) Vertex offset along the
$y$-$z$ diagonal as a function of position along that diagonal. The
dashed line shows the prediction from the $y$- and $z$-axis values and
the solid line shows the best fit scaling value for these data points.
Observed variations at negative positions are likely associated with
systematics in source position.}
\end{center}
\end{figure}
A similar analysis was performed using $^{252}$Cf source data in Phase~II.
The results were consistent with those shown here, verifying that the
same uncertainties could be applied to both electron-like and neutron
capture events.
We investigated several other potential causes of variation in
reconstruction accuracy. The $^{16}$N-source event rate during most
calibration runs was high in comparison to our expected neutrino event
rate, so the results were checked using low-rate $^{16}$N data. The
stability over time was determined by comparing runs across the span
of the two phases. As in previous analyses~\cite{longd2o},
calibration-source dependence was investigated by verifying $^{16}$N
results using the $^{8}$Li source. This also provides a check on the
energy dependence because the $^{8}$Li data extended to higher energies
than the $^{16}$N data. The results were all consistent within the
uncertainties presented here.
\subsubsection{Vertex Resolution}
\label{s:vresn}
The position resolution achieved in this analysis was $\sim$20~cm for
data events. The difference in resolutions between data and Monte
Carlo events was modeled as a Gaussian of standard deviation (or
`width') $\sigma_{\rm extra}$, by which the Monte Carlo distribution
should be smeared to reproduce the data. $\sigma_{\rm extra}^2$ was
given by $(\sigma_{\rm Data}^2 - \sigma_{\rm MC}^2)$ for each $^{16}$N
run. This procedure is only valid for $\sigma_{\rm MC} < \sigma_{\rm
Data}$, which was the likely scenario since any minor detector
non-uniformities tend to cause a broader resolution in the data. In
some cases, the simulation and data were close enough to one another
that statistical variation caused $\sigma_{\rm Data}$ to appear to be
less than $\sigma_{\rm MC}$. In these cases, $ |(\sigma_{\rm Data}^2
- \sigma_{\rm MC}^2)|$ was taken to represent the uncertainty in the
comparison. The results from the runs were combined in a weighted
average, independently for each detector axis. The resulting values
for $\sigma_{\rm extra}$ are listed in Table~\ref{t:recunc} in
Sec.~\ref{s:recsum}. These were applied during the signal extraction
by smearing the positions of all Monte Carlo events by a Gaussian of
the appropriate width. This was achieved for the binned signal
extraction (Sec.~\ref{s:mxf}) by generating a random number for each
event from a Gaussian of the correct width and adding the result to
the event's position and, for the unbinned method, by a direct
analytic convolution (Sec.~\ref{s:kernel}).
\subsubsection{Angular Resolution}
The $^{16}$N source was used for this measurement by relying on the high
degree of colinearity of Compton scattered electrons with the initial
$\gamma$ direction. The mean of the distribution of reconstructed
event positions was used to estimate the source position. The
reconstructed event position was used as an estimate for the
scattering vertex. To reduce the effect of reconstruction errors,
only events reconstructing more than 120$\,$cm from the source were
used. The angle between the initial $\gamma$ direction (taken to be
the vector from the source position to the fitted scattering vertex)
and the reconstructed event direction was found and the distributions
of these angles were compared for data and Monte Carlo events.
The same functional form used in previous analyses~\cite{nsp} was fit
to the distributions for data and Monte Carlo events within each run.
The weighted average of the differences in the fitted parameters was
computed across the runs and the resulting value used as an estimate
of the uncertainty in angular resolution (given in
Table~\ref{t:recunc}, Sec.~\ref{s:recsum}).
\subsubsection{Energy Dependent Fiducial Volume}
The energy dependence of the vertex scaling is of particular
importance since it could affect the number of events that reconstruct
within the fiducial volume as a function of energy and, hence, distort
the extracted neutrino spectrum. Because the $^{16}$N source provided
monoenergetic $\gamma$s, giving rise to electrons around 5~MeV,
whereas the $^{8}$Li source sampled the full range of the neutrino energy
spectrum, the $^{8}$Li source was used for this measurement. The fraction
of events reconstructing inside the source's radial position, closer
to the detector center, was used as a measure of the number of events
reconstructing inside the fiducial volume to take into account both
vertex shift and resolution effects. Absolute offsets between data
and Monte Carlo events have already been characterized in Sections
\ref{s:voff}--\ref{s:vresn}, so a differential comparison of this
parameter between data and Monte Carlo events was used to evaluate any
energy dependence. A fit from Phase~II is shown in
Figure~\ref{f:efv}. The energy dependence is given by the slope of a
straight line fit to the ratio of the data and Monte Carlo parameters,
averaged across calibration runs. The final uncertainty is quoted as
an asymmetric, double-sided uncertainty to account for the non-zero
value of the slope and its uncertainty. The values for each phase are
given in Table~\ref{t:recunc}. The absolute shift, indicated in
Fig.~\ref{f:efv} by an intercept different from one, is a measure of
the global vertex scaling. This effect has already been evaluated in
Sec.~\ref{s:vscale}. It does not impact the energy dependence and
is therefore not relevant to the present measurement.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.48\textwidth]{efv.eps}
\caption{\label{f:efv}(Color online) Ratio of the fraction of events
reconstructing inside the source position for data and Monte Carlo
events, as a function of effective electron energy, for $^{8}$Li source
runs.}
\end{center}
\end{figure}
An additional check was performed using neutrino data from outside the
fiducial volume. All standard analysis cuts were applied, as
described in Sec.~\ref{s:cuts}, as well as a 5.5~MeV threshold to
select a clean sample of neutrino events. A Hill function was fit to
the radial distribution of the events, with the half-point of the
function representing the position of the AV. Statistics in the data
were limited, so the fit was performed in just three energy bins.
Monte Carlo simulation of the three types of neutrino interactions was
combined in the signal ratios found in a previous SNO analysis
\cite{nsp} and the same fit was performed. The ratio of the resulting
fitted AV position in the data and simulation is a measure of the
radial scaling and, therefore, the energy dependence of this ratio is
a check on the analysis described above. The results were in good
agreement. In Phase~II the energy dependence was $0.8\pm2.1\%$/MeV,
in comparison to $-0.07\pm0.41\%$/MeV measured using the $^{8}$Li source.
\subsubsection{Summary of Reconstructed Position Uncertainties}
\label{s:recsum}
Table~\ref{t:recunc} summarizes the uncertainties in reconstructed
position and direction.
\begin{table}[!ht]
\begin{center}
\begin{tabular}{lccc} \hline \hline
 & \multicolumn{2}{c}{Uncertainty, $\delta_i$} & Transformation \\
Parameter & Phase~I & Phase~II & of observables \\ \hline
$x$ Offset (cm) & $^{+1.15}_{-0.13}$ & $^{+0.62}_{-0.07}$ & $x+\delta_i$ \\
$y$ Offset (cm) & $^{+2.87}_{-0.17}$ & $^{+2.29}_{-0.09}$ & $y+\delta_i$ \\
$z$ Offset (cm) & $^{+2.58}_{-0.15}$ & $^{+3.11}_{-0.16}$ & $z+\delta_i$ \\
$R$ Scale (\%) & $^{+0.10}_{-0.57}$ & $^{+0.04}_{-0.34}$ & $(1 + \frac{\delta_i}{100})x_i$ \\
$z$ Scale (\%) & $^{+0.40}_{-0.0}$ & $^{+0.03}_{-0.25}$ & $(1 + \frac{\delta_i}{100})z$ \\
$x$ resn (cm) & $+3.3$ & $+3.1$ & $x + \mathcal{N}(0,\delta_i)$ \\
$y$ resn (cm) & $+2.2$ & $+3.4$ & $y + \mathcal{N}(0,\delta_i)$ \\
$z$ resn (cm) & $+1.5$ & $+5.3$ & $z + \mathcal{N}(0,\delta_i)$ \\
Angular resn & $\pm 0.11$ & $\pm 0.11$ & $1 + (\cos\theta_{\odot}-1)(1 + \delta_i)$ \\
EFV (\%/MeV) & $^{+0.85}_{-0.49}$ & $^{+0.41}_{-0.48}$ & $W=1+\frac{\delta_i}{100}(T_{\rm eff}-5.05)$ \\
\hline \hline
\end{tabular}
\caption{\label{t:recunc}Systematic uncertainties in the reconstructed
position and direction of events. EFV is the energy dependent
fiducial volume uncertainty. The column labeled ``Transformation of
observables'' refers to the formulae used to propagate these
uncertainties into the signal extraction fits.
$\mathcal{N}(0,\delta_i)$ refers to a convolution with a Gaussian
distribution of mean 0.0 and standard deviation $\delta_i$. Events
that are pushed past $\cos\theta_{\odot} = \pm1.0$ are randomly assigned a $\cos\theta_{\odot}$ value
in the interval $[-1.0, 1.0]$. $W$ is an energy-dependent fiducial
volume factor applied around the midpoint of the $^{16}$N energy, where
$T_{\rm eff}$ is the reconstructed effective electron kinetic energy
and 5.05~MeV is the central $T_{\rm eff}$ value for the $^{16}$N data.
This was applied as a weight for each event when creating the PDFs.
(``Resolution'' is abbreviated as ``resn'').}
\end{center}
\end{table}
It is worth noting that in previous analyses~\cite{nsp} the radial
scaling uncertainty was evaluated at $\pm\,$1\%, which translates to a
3\% uncertainty in fiducial volume. The improved analysis presented
here has reduced the scale uncertainty to a little over 0.5\% at its
maximum and significantly less in most dimensions. The resolution
differences observed previously were on the order of 9$\,$cm
\cite{longd2o}, whereas the differences measured here are roughly one
third of that in most dimensions. The angular resolution uncertainty
of 11\% is an improvement over the 16\% measured in previous work
\cite{nsp}.
\section{Energy Reconstruction}
\label{sec:energy}
We estimated the kinetic energy of an event after its position and
direction were reconstructed. The energy estimate was used both to
reject background events and to produce the PDFs shown in
Figs.~\ref{fig:pdfsnus} and \ref{fig:pdfsbkds}. Improving the
resolution of the energy estimation algorithm was critical because of
the low energy threshold of the analysis -- a 6\% improvement in
energy resolution reduces the number of background events
reconstructing above threshold by $\sim$60\%.
\subsection{Total-Light Energy Estimator}
\label{sec:ftk}
A new algorithm, called ``FTK'', was designed to use all the detected
PMT hits in the energy estimate, including scattered and reflected
light~\cite{dunfordthesis}. The look-up table approach of the
prompt-light fitter used in previous publications was abandoned in
favor of a maximum likelihood method, in which photon detection
probabilities were generated based on the reconstructed event position
and direction. The best value of the effective kinetic energy,
$T_{\rm eff}$, was found by maximizing the likelihood given the
observed number of hit PMTs, $N_{\rm hit}$, and taking into account
optical effects due to the reconstructed position and direction of the
event. In principle, one could consider a more sophisticated approach
in which both the number and distribution of all hit PMTs are used
along with the recorded time of each hit, but such an approach is much
more time intensive and was judged to be impractical for the present
analysis.
We considered five sources of PMT hits in an event, defined by the
following quantities:
\begin{itemize}
\item $n^{\rm dir}_{\rm exp}$ - the expected number of
detected photons that traveled directly to a PMT, undergoing
only refraction at the media boundaries;
\item $n^{\rm scat}_{\rm exp}$ - the expected number of
detected photons that were Rayleigh scattered once in the D$_2$O\xspace
or H$_2$O\xspace before detection (scattering in the acrylic is
neglected);
\item $n^{\rm av}_{\rm exp}$ - the expected number of detected
photons that reflected off the inner or outer surface of the
acrylic vessel;
\item $n^{\rm pmt}_{\rm exp}$ - the expected number of
detected photons that reflected off the PMTs or light
concentrators;
\item $n^{\rm noise}_{\rm exp}$ - the expected number of PMT
noise hits, based on run-by-run measurements.
\end{itemize}
FTK computed the probabilities of a single photon being detected by
any PMT via the four event-related processes: $\rho_{\rm dir}$,
$\rho_{\rm scat}$, $\rho_{\rm av}$, $\rho_{\rm pmt}$. The direct
light probability was found by tracing rays from the event vertex to
each PMT, and weighting each ray by the attenuation probability in
each medium, transmittance at each boundary, solid angle of each PMT,
and detection probability given the angle of entry into the light
concentrator. Scattering and reflection probabilities were found
using a combination of ray tracing and tables computed from Monte
Carlo simulation of photons propagating through the detector.
If $N_\gamma$ is the number of potentially detectable Cherenkov
photons produced in the event given the inherent PMT detection
efficiency, then the expected number of detected photons given these
probabilities is:
\begin{equation}
n_{\rm exp}(N_\gamma) = N_\gamma \times (\rho_{\rm dir} +
\rho_{\rm scat} + \rho_{\rm av} + \rho_{\rm pmt}).
\end{equation}
To be able to compare $n_{\rm exp}$ to the observed $N_{\rm hit}$, we
need to account for noise hits and convert from detected photons to
PMT hits, since multiple photons in the same PMT produced only one
hit. Given the rarity of multiple photons in a single PMT at solar
neutrino energies, FTK made a correction only to the dominant source
term, $n^{\rm dir}_{\rm exp}=N_\gamma\rho_{\rm dir}$. Letting $N_{\rm
MPC}(n^{\rm dir}_{\rm exp})$ be the multi-photon corrected number of
direct PMT hits, the total expected number of hits is:
\begin{eqnarray}
N_{\rm exp}(N_\gamma) & \approx & N_{\rm MPC}(n^{\rm dir}_{\rm
exp}) \nonumber \\ & & + N_\gamma \times (\rho_{\rm scat} +
\rho_{\rm av} + \rho_{\rm pmt}) + n^{\rm noise}_{\rm exp}.
\end{eqnarray}
The probability of observing $N_{\rm hit}$ hits when $N_{\rm exp}$ are
expected is given by the Poisson distribution:
\begin{equation}
P(N_{\rm hit}\,|\,N_\gamma) = \frac{(N_{\rm exp})^{N_{\rm
hit}} e^{-N_{\rm exp}}}{N_{\rm hit}!}.
\end{equation}
To obtain a likelihood function for $T_{\rm eff}$, rather than
$N_\gamma$, we integrate over the distribution of $N_\gamma$ given an
energy $T_{\rm eff}$:
\begin{equation}
\mathcal{L}(T_{\rm eff}) = \int \frac{(N_{\rm
exp}(N_\gamma))^{N_{\rm hit}} e^{-N_{\rm
exp}(N_\gamma)}}{N_{\rm hit}!}\times P(N_\gamma\,|\,T_{\rm
eff})\,dN_\gamma,
\end{equation}
where $P(N_\gamma\,|\,T_{\rm eff})$ is the probability of $N_\gamma$
Cherenkov photons being emitted in an event with energy $T_{\rm eff}$.
The negative log-likelihood was then minimized in one dimension to
give the estimated energy of the event.
\subsection{Energy Scale Corrections and Uncertainties}
\label{sec:ecorr}
We measured the energy scale of the detector by deploying the tagged
$^{16}$N $\gamma$ source at various locations in the $x$-$z$ and
$y$-$z$ planes within the D$_2$O\xspace volume. Although $^{16}$N was a nearly
monoenergetic $\gamma$ source, it produced electrons with a range of
energies through multiple Compton scattering and $e^+e^-$ pair
production. As a result, the single 6.13~MeV $\gamma$ produced an
`effective electron kinetic energy' ($T_{\rm eff}$) distribution that
peaked at approximately 5~MeV.
Using the $^{16}$N $\gamma$-ray source to determine the
detector's energy scale is complicated by its broad spectrum of
electron energies. To separate the detector's response from this
intrinsic electron energy distribution, we modeled the reconstructed
energy distribution with the integral
\begin{equation}
P(T_{\rm eff}) = N\int P_{\rm
source}(E_{e^{-}})\frac{1}{\sqrt{2\pi}\, \sigma}
e^{-\frac{(T_{\rm eff} - E_{e^{-}} -
p_3)^2}{2\sigma^2}}dE_{e^{-}},
\end{equation}
where $N$ is a normalization constant, $\sigma(E_{e^{-}}) = p_1 +
p_2\sqrt{E_{e^{-}}}$ is the detector resolution, and $P_{\rm source}$
is the apparent electron energy distribution from the $^{16}$N
$\gamma$ rays without including the detector optical response. $p_3$
sets the displacement of the $^{16}$N peak, and therefore the offset
in energy scale at that source location. The $P_{\rm source}$
distribution was computed from a Monte Carlo simulation of $\gamma$
propagation through the source container and production of Cherenkov
photons from Compton-scattered $e^-$ and pair-produced $e^+e^-$. We
translated the number of Cherenkov photons in each simulated event to
a most probable electron (MPE) kinetic energy with the same tables
that were used in the FTK energy estimation algorithm, and generated
the distribution, $P_{\rm source}$, of event
values~\cite{dunfordthesis}. Given this fixed distribution for the
$^{16}$N calibration source, we fit for $N$, $p_1$, $p_2$, and $p_3$
in each source run, for both data and for Monte Carlo simulation of
the same source position and detector state. The parameter
differences between data and Monte Carlo, run-by-run, determined the
energy corrections and uncertainties. Parameters $p_1$ and $p_2$
measure the detector energy resolution, and are discussed further in
Sec.~\ref{sec:eres}. Parameter $p_3$ was used here to define the
spatial energy scale correction and uncertainties.
The Monte Carlo was initially tuned by adjusting a global collection
efficiency parameter in the simulation to minimize the difference
between data and Monte Carlo energy scales for $^{16}$N runs at the
center of the detector. A series of additional corrections were then
applied to the estimated energy of all the data and Monte Carlo
events, to remedy known biases.
Approximations in FTK's handling of multiple hits on a single
tube led to a small energy non-linearity, and we derived a correction
for this by comparing the reconstructed energy for Monte Carlo events
to their true energies. Similarly, the simple PMT optical model used
by FTK produced a small radial bias in event energies and, again, a
comparison of reconstructed energies of Monte Carlo events to their
true values was used to provide a correction.
Two additional corrections were based on evaluations of
data. The first was to compensate for the degradation of the PMT light
concentrators, which changed the detector's energy response over time
during Phase I. The degradation affected the fraction of light that
was reflected off the PMT array. We tracked the variation using
$^{16}$N runs taken at the center of the detector, and created a
time-dependent correction to event energies that shifted their values
by up to 0.4\%~\cite{dunfordthesis}.
The final correction was applied to remove a variation in
energy with the detector $z$-coordinate. Figure~\ref{f:saltescale}(a)
shows the difference between the average reconstructed energies of
events from the $^{16}$N source for each calibration run, and the
Monte Carlo simulation of the run, as a function of the radial
position of the source. As can be seen, for events in the top
(positive $z$) hemisphere of the detector, the Monte Carlo
underestimated the event energies by as much as 3\% and, in the bottom
hemisphere, it overestimated the energies by almost the same amount.
The cause of the former was the simulation's poor optical model of the
acrylic in the neck of the AV. The latter was likely caused by
accumulation of residue at the bottom of the acrylic vessel and
variations in the degradation of the PMT light concentrators.
To correct for the $z$-dependence of the energy scale, we first split
the $^{16}$N calibration runs into two groups. One group contained
runs on the $x$-$z$ plane along with half of the runs on the $z$-axis,
and was used to construct the correction function. The second group
contained runs on the $y$-$z$ plane along with the other half of the
$z$-axis runs, and was used later to independently evaluate the
spatial component of the energy scale uncertainty.
We found that the variation in the energy scale best
correlated with the vertical position of the event ($z$) and the
direction cosine of the event relative to the $z$-axis ($u_z$). All
of the $^{16}$N events in the first group were binned in the $(z,
u_z)$ dimensions and the peak of the $^{16}$N energy distribution was
found for data and Monte Carlo events separately. We fit a
second-order polynomial in $z$ and $u_z$ to the ratio of the data and
Monte Carlo peak energies. This smooth function provided the spatial
energy correction for data events. Fig.~\ref{f:saltescale}(b) shows
the spatial variation after this energy correction.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.48\textwidth]{salt_escale.eps}
\caption{\label{f:saltescale}(Color online) Difference between
$^{16}$N data and Monte Carlo energy scales as a function of radius
for Phase~II $^{16}$N source runs in the upper hemisphere, on the
equatorial plane, and in the lower hemisphere. Panel (a) shows the
significant variation in these three regions before the spatial energy
correction. Panel (b) shows the same runs after the spatial energy
scale correction is applied. (The fiducial volume cut is at 550~cm).}
\end{center}
\end{figure}
To evaluate the spatial component of the energy scale uncertainty, we
assumed azimuthal symmetry in the detector, and divided the second
group of $^{16}$N calibration runs into regions based on radius and
polar angle. Within each region, the RMS of the individual run
differences between the corrected data and Monte Carlo energy scales
defined the uncertainty on the energy scale in that volume. All
regions were then combined into a volume-weighted measure of the
uncertainty on the overall energy scale in the detector due to spatial
variation and non-uniform sampling of the detector volume. As a
verification of the procedure, we reversed the roles of the two
calibration groups (using the $y$-$z$ plane to construct the
calibration function and the $x$-$z$ plane to evaluate the
uncertainties) and found very similar corrections and uncertainties.
The energy scale uncertainty of the detector also includes uncertainty
in modeling of energy loss in the $^{16}$N source itself, uncertainties in
the online status of PMTs, variation in the channel response between
high-rate calibration data and low-rate neutrino data, and
uncertainties in the data acquisition channel gains and thresholds,
which affect the photon detection probability. Many of these
uncertainties have been substantially reduced compared to previous
publications by the improvements to the Monte Carlo model described in
Sec.~\ref{sec:montecarlo} and the rate-dependent correction to the
channel pedestals described in Sec.~\ref{sec:hitcal}.
The components of the energy scale uncertainties are
summarized in Table \ref{tab:escale_uncert}. We take the source
uncertainty as 100\% correlated between phases, and the other
uncertainties as uncorrelated. To verify the validity of the
$^{16}$N-derived energy corrections and uncertainties over a wider
range of energies, we compared the data and Monte Carlo energy
distributions for $^{252}$Cf neutron source runs and the D$_2$O\xspace-volume
radon spike, for both of which events are more widely distributed in
the detector than for the $^{16}$N source. In both cases, the
agreement between the data and Monte Carlo was well within the
uncertainties stated in Table~\ref{tab:escale_uncert}.
\begin{table}[!h]
\begin{center}
\begin{tabular}{lcc}
\hline \hline
Uncertainty & Phase I & Phase II \\ \hline
PMT Status & $\pm 0.01$\% & $\pm 0.01$\% \\
Threshold/Gain & $+0.18\; -0.31$\% & $+0.13\; -0.07$\% \\
Rate & $\pm 0.3$\% & $\pm 0.05$\% \\
Source & $\pm 0.4$\% & $\pm 0.4$\% \\
Spatial Variation & $\pm 0.18$\% & $\pm 0.31$\% \\ \hline
Total & $+0.56\; -0.62$\% & $+0.52\; -0.51$\% \\
\hline \hline
\end{tabular}
\caption{Summary of energy scale uncertainties.}
\label{tab:escale_uncert}
\end{center}
\end{table}
\subsection{Energy Resolution}
\label{sec:eres}
Energy resolution was a significant systematic uncertainty because of
its impact on background acceptance above the 3.5~MeV energy
threshold. Due to differing event topologies in the two phases, the
resolution uncertainties were treated as three independent,
uncorrelated systematic parameters: Phase~I events (both electron-like
and neutron capture events), Phase~II electron-like events, and
Phase~II neutron capture events. In all cases, the resolution was
found to be slightly broader in the data than for Monte Carlo events.
The difference was parameterized as a Gaussian of width $\sigma_{\rm
extra}$, with which the Monte Carlo distribution was convolved to
reproduce the data. The width of the Gaussian was given by the
quadrature difference of the data and Monte Carlo resolutions:
$\sigma_{\rm extra} = \sqrt{ (\sigma_{\rm Data}^2 - \sigma_{\rm
MC}^2)}$. A resolution correction was formulated using calibration
source data and applied to the Monte Carlo events used in PDF
generation. The uncertainties on this correction were then taken from
the spread of the calibration data.
\subsubsection{Energy Resolution Uncertainties for Phase~II Electron-like
Events} \label{s:eres:saltelec}
The $^{16}$N source was the primary source for this measurement. We
evaluated the uncertainties in two ways by measuring the resolution
for the spectrum of Compton electrons differentially and integrally.
The MPE fit described in Sec.~\ref{sec:ecorr} unfolds source effects
from the event distribution, allowing the extraction of the intrinsic
monoenergetic electron resolution as a function of energy. The fit
was performed for both data and Monte Carlo simulation of $^{16}$N runs
and the resulting resolutions were compared differentially in energy.
The energy resolution at threshold is the dominant concern for
electron-like events, due to the exponential rise of the backgrounds,
and the value at 3.5~MeV was therefore used as representative of the
detector resolution. $\sigma_{\rm extra}$ at threshold was found to
be 0.152 $\pm$ 0.053~MeV. In terms of the fractional difference:
\begin{equation}
\sigma_{\rm frac} = \frac{(\sigma_{\rm Data} - \sigma_{\rm
MC})}{\sigma_{\rm MC}},
\end{equation}
this translates to $\sigma_{\rm frac} = 2.4 \pm 1.6\%$ at threshold.
To measure the integrated Compton electron resolution using the
monoenergetic $\gamma$ rays produced by the $^{16}$N source, the
reconstructed energy distribution for Monte Carlo-simulated $\gamma$s
was convolved with a smearing Gaussian and the result was fit directly
to the data, allowing the mean and width of the smearing Gaussian to
vary. The resulting $\sigma_{\rm extra}$ of the smearing Gaussian was
$0.0\pm 0.046$~MeV. This measurement represents a higher average
energy than the `unfolded' MPE value since the $^{16}$N provides $\gamma$s
at 6.13~MeV. The value of $\sigma_{\rm frac}$ from this $\gamma$-ray
measurement is $0.00\pm 0.08$\%.
Two $^{222}$Rn spikes were deployed during Phase~II, one in the D$_2$O\xspace
and one in the H$_2$O\xspace volume. These provided a low-energy source of
$\beta$s and $\gamma$s below the analysis threshold; any observed decays
therefore appeared above threshold only through the finite detector
energy resolution, making the spikes particularly sensitive to this
effect. The unbinned signal
extraction code (Sec.~\ref{s:kernel}) was used in a simplified
configuration to fit the data from each spike.
The internal spike was fit with 3 PDFs in two dimensions: energy and
isotropy. The PDFs were $^{214}$Bi electron-like events (primarily
$\beta$s) in the D$_2$O\xspace volume, $^{214}$Bi photodisintegration neutrons,
and a `quiet' data set drawn from neutrino runs near the date of the
spike. The latter provides the energy distribution of all
`background' events to the spike measurement, including other
radioactive decays such as PMT $\beta$-$\gamma$s as well as neutrino
interactions. An analytic convolution parameter was also floated,
defining the width of the convolving Gaussian applied to the Monte
Carlo electron-like events. The resulting $\sigma_{\rm extra}$ was
$0.139^{+0.023}_{-0.036}$~MeV, which is equivalent to $\sigma_{\rm
frac} = 2.0 \pm 1.0\%$ at threshold. Floating the $^{214}$Bi
electrons and neutrons independently also allowed a verification of
the Monte Carlo prediction for the photodisintegration rate. The
results were in good agreement, giving 0.91 $\pm$ 0.13 times the Monte
Carlo predicted rate.
The external spike was fit with two PDFs in just the energy dimension,
due to lower statistics. The electron to neutron ratio in the
$^{214}$Bi PDF was fixed to the Monte Carlo prediction and the overall
normalization of this PDF was taken as a free parameter, along with
the quiet data normalization. The Monte Carlo events were again
convolved with a Gaussian, whose width was allowed to vary in the fit.
The resulting value for $\sigma_{\rm extra}$ was
$0.273^{+0.030}_{-0.035}$~MeV, which gives $\sigma_{\rm frac} = 7.6 \pm
1.9\%$ at threshold. The broader resolution for external events,
which were generated in the H$_2$O\xspace region but either traveled or were
misreconstructed into the D$_2$O, is not unexpected since the
detector's energy response was modeled less well in the outer detector
regions.
These four measures were combined to give the resolution correction
and associated uncertainty for electron-like events in Phase~II.
Since the two $^{16}$N measurements are not independent, they were not
used together. The weighted mean of the MPE fit and the two spike
points was used to give the correction, with an associated
uncertainty. The difference of that value from the weighted mean of
the $^{16}$N $\gamma$ point and the two spike points was then taken as an
additional one-sided (negative) uncertainty, to take into account the
difference in the two $^{16}$N measurements. This results in a final
value of $\sigma_{\rm extra} = 0.168 ^{+0.041}_{-0.080}$~MeV, which
was applied as a constant smearing across the energy range. The four
measurements and the resulting one sigma band on the final correction
value for Phase~II electron-like events are shown in
Figure~\ref{f:salteres}.
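A minimal sketch of this combination is given below. The asymmetric
spike uncertainties are symmetrized and no further systematic terms are
included, so the quoted central value and uncertainties are not
reproduced exactly.
\begin{verbatim}
import numpy as np

def wmean(values, sigmas):
    # inverse-variance weighted mean and its uncertainty
    w = 1.0 / np.asarray(sigmas)**2
    return np.sum(w * values) / np.sum(w), 1.0 / np.sqrt(np.sum(w))

spike_vals, spike_errs = [0.139, 0.273], [0.030, 0.033]  # symmetrized
mpe, mpe_err = 0.152, 0.053                              # MPE fit
n16g, n16g_err = 0.000, 0.046                            # 16N gamma fit

central, err = wmean([mpe] + spike_vals, [mpe_err] + spike_errs)
alt, _ = wmean([n16g] + spike_vals, [n16g_err] + spike_errs)
print("central = %.3f +/- %.3f MeV" % (central, err))
print("one-sided (negative) term = %.3f MeV" % (central - alt))
\end{verbatim}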
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.48\textwidth]{salt_eresfrac.eps}
\caption{\label{f:salteres}(Color online) Measurements of energy
resolution in Phase~II. The solid area shows the one sigma band on
the energy resolution correction applied to Phase~II electron-like
events. The $^{252}$Cf and muon follower points show the measurements of the
energy resolution for neutron capture events, and were not used to
evaluate the total shift for electron-like events.}
\end{center}
\end{figure}
The MPE fit was also applied to the $^{8}$Li source, but the result was
not included in the calculation because of the low statistics of the
measurement. However, the energy dependence of both the $^{8}$Li and the
$^{16}$N MPE fits was used to demonstrate that a constant
$\sigma_{\rm extra}$ across the energy spectrum was consistent with
the available data.
\subsubsection{Energy Resolution Uncertainties for Phase~II
Neutron Capture Events}
The energy resolution for neutron capture events in Phase~II was
measured using the $^{252}$Cf source, with a verification performed using a
`muon follower' data set, consisting of neutron capture events
occurring within a defined time window after a muon passed through the
detector.
There are fewer uncertainties associated with the neutron measurement
since the $^{252}$Cf source produced neutrons whose captures on $^{35}$Cl and
deuterium resulted in the same $\gamma$ cascades as those from NC
events. The measurement was performed by numerically convolving a
spline-fit of the Monte Carlo energy distribution with a Gaussian and
fitting the resulting form to the data. The mean and width of the
convolving Gaussian were allowed to vary, in order to take into
account possible correlations between energy scale and resolution
effects. The result was $\sigma_{\rm extra} = 0.153 \pm 0.018$~MeV.
The observed energy scale from this measurement agreed very well with
that evaluated in Sec.~\ref{sec:ecorr}.
The statistics of the muon follower data set were low, and the
resulting uncertainty on the measurement was therefore relatively
large. Nevertheless, a similar analysis was performed, giving a
$\sigma_{\rm extra}$ of $0.237 \pm 0.144$~MeV.
The weighted mean of the two points was used for the final correction
to the energy resolution of neutron capture events in Phase~II, with
its associated uncertainty, with the value dominated by the $^{252}$Cf
measurement: $\sigma_{\rm extra} = 0.154 \pm 0.018$~MeV. Both points
are also shown on Fig.~\ref{f:salteres}.
\subsubsection{Energy Resolution
Uncertainties for Phase~I Electron-like Events}
No radon spikes were deployed in Phase~I, and so only the two $^{16}$N
measurements were available. Both the MPE fit and the Gaussian
convolution to the $\gamma$-ray energy distribution were performed for
Phase~I $^{16}$N runs, in the same manner as for Phase~II
(Sec.~\ref{s:eres:saltelec}). The central correction value was taken
from the MPE fit directly, giving $\sigma_{\rm extra} = 0.155 \pm
0.036$~MeV. The small number of energy resolution measurements in
Phase~I provided fewer handles on the uncertainty than the much
better-calibrated Phase~II, so the uncertainties were chosen to match
those of Phase~II. The width of the convolving Gaussian for Phase~I
events was therefore taken as $\sigma_{\rm extra}
= 0.155 ^{+0.041}_{-0.080}$~MeV. This was also applied to neutron
capture events in Phase~I, since the event topologies were similar.
\subsection{Energy Linearity}
\label{sec:enonlin}
The corrections derived in Sec.~\ref{sec:ecorr} were done primarily
using the $^{16}$N source, and therefore the uncertainty in the energy
scale at the $^{16}$N energy is very small. An additional uncertainty was
included to account for possible differential changes in the energy
scale that were not correctly modeled in the Monte Carlo simulation.
Such changes could be caused by residual crosstalk hits, or
mis-modeling of the multi-photon PMT hit probabilities in the energy
reconstruction algorithm. The differential changes were determined
relative to the $^{16}$N point, and used calibration sources whose
energies were substantially higher.
The pT source provided $\gamma$s roughly 14~MeV higher in energy than
those from $^{16}$N, resulting in a good lever-arm on any non-linear
effects. This source was only deployed in Phase~I since deployment in
Phase~II would have resulted in an overwhelming neutron signal. The
difference between data and Monte Carlo-reconstructed event energies
was measured to be $-$1.36 $\pm$ 0.01\% at the energy of the pT
source.
The MPE fit described in Sec.~\ref{sec:ecorr} was applied here to the
$^{8}$Li source, including an additional term in the parameterization to
model first-order differential changes in the energy scale. The fit
was done to both data and Monte Carlo events, and a difference of just
$-0.011\pm 0.004$\% was found, evaluated at the same energy as the pT
source $\gamma$ rays.
Giving the pT and $^{8}$Li sources equal weight, the average shift in
energy scale at the energy of the pT source was found to be $-$0.69\%.
Using this as a measure of the degree by which the Monte Carlo energy
scale could vary differentially from the data and assuming a linear
interpolation between the $^{16}$N and pT energies, the linearity
uncertainty was parameterized in terms of the difference of an event's
energy from the $^{16}$N source ($\sim$5.05~MeV). This results in a
scaling factor that can be applied to the energy of each Monte Carlo
event used to build the PDFs in the signal extraction procedure.
Conservatively, this was applied as a two-sided uncertainty:
\begin{eqnarray}
\label{e:enonlin}
T'_{\rm eff} = \left [1.0 \pm 0.0069 \times \left ( \frac{T_{\rm
eff}-5.05}{19.0-5.05}\right ) \right ] T_{\rm eff},
\end{eqnarray}
\noindent where 19$\,$MeV is the effective energy of the pT source,
$T_{\rm eff}$ is the original effective kinetic energy of an
individual event and $T'_{\rm eff}$ is the modified energy.
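The scaling of Eq.~\eqref{e:enonlin} reduces to a one-line function; a
minimal sketch, with the sign argument selecting either side of the
two-sided uncertainty, is:
\begin{verbatim}
def scale_energy(t_eff, direction=+1):
    """Return T'_eff (MeV) for a Monte Carlo event energy t_eff."""
    return (1.0 + direction * 0.0069
            * (t_eff - 5.05) / (19.0 - 5.05)) * t_eff

print(scale_energy(10.0, +1))   # upward variation at 10 MeV
print(scale_energy(10.0, -1))   # downward variation
\end{verbatim}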
Tests using both the $^{8}$Li and $^{252}$Cf sources suggested no evidence for
any linearity shift in Phase~II. We expect any source of linearity
shift to be common across the two phases, however, and therefore the
results from Phase~I were conservatively taken to apply to both phases
in a correlated fashion.
\section{Event Isotropy}
\label{sec:beta14}
As discussed in Sec.~\ref{sec:detector}, we used a measure of
event `isotropy' as one dimension of our PDFs to help distinguish
different types of events. By `isotropy' we mean the degree of
uniformity in solid angle of the hit PMTs relative to the fitted event
location.
Single electron events, like those created in neutrino CC and
ES reactions, had a Cherenkov cone that, at solar neutrino energies,
was somewhat diffuse due to electron multiple scattering in the water.
Nevertheless, even with the multiple scattering, these events were
characterized by a fairly tight cluster of PMT hits in a cone aligned
with the forward direction of the electron.
Neutron capture events on deuterium in Phase~I led to a single
6.25~MeV $\gamma$ ray. Although these events could produce multiple
Compton electrons and, hence, a number of Cherenkov cones that
distributed hits more widely than single electrons, Phase~I neutron
capture events in the data set were dominated by single Compton
scatters and, thus, isotropy was not useful in distinguishing them
from CC or ES events.
In contrast, neutrons in Phase~II captured primarily on
$^{35}$Cl, which led to a $\gamma$ cascade that looked very
different from a single electron. Neutron capture on $^{35}$Cl
typically produced several $\gamma$ rays, with energies totaling
$8.6$~MeV, which distributed PMT hits more uniformly in solid
angle. The isotropy distribution for these events is thus a
convolution of the isotropy distribution of single $\gamma$-ray events
with the directional distribution of the $\gamma$ rays emitted in the
possible $\gamma$-decay cascades.
The isotropy of background events could also be significantly
different from that of single-electron and neutron events. Decays of
$^{208}$Tl, for example, produced both a $\beta$ and a 2.614~MeV
$\gamma$ ray and, thus, resulted in a different distribution of hit
PMTs than either single electrons or single $\gamma$s.
The measure of isotropy was therefore critical to the
analysis, helping us to separate CC and ES events from NC events, and
both of these from low-energy background events.
We examined several measures of isotropy, including a full
correlation function, the average angle between all possible pairwise
combinations of hit PMTs, and constructions of several variables using
Fisher discriminants. We found that, for the most part, they all had
comparable separation power between the single electron (CC and ES)
and the neutron (NC) signals. As in our previous Phase~II
publications~\cite{nsp}, we opted to use a linear combination of
parameters, $\beta_{14}\equiv\beta_1+4\beta_4$, where:
\begin{equation}
\beta_l = \frac{2}{N(N-1)}\sum_{i=1}^{N-1} \sum_{j=i+1}^N
P_l(\cos\theta_{ij}).
\end{equation}
In this expression, $P_l$ is the Legendre polynomial of order $l$,
$\theta_{ij}$ is the angle between triggered PMTs $i$ and $j$ relative
to the reconstructed event vertex, and $N$ is the total number of
triggered PMTs in the event. Very isotropic events have low (even
negative) values of $\beta_{14}$.
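A direct transcription of this definition is straightforward. The
sketch below evaluates $\beta_{14}$ from the unit vectors pointing from
the fitted vertex to the hit PMTs, generated randomly here as a
stand-in for a reconstructed event.
\begin{verbatim}
import numpy as np
from scipy.special import eval_legendre

def beta_l(directions, ell):
    # mean P_l(cos theta_ij) over all unique PMT pairs i < j
    cos_ij = np.clip(directions @ directions.T, -1.0, 1.0)
    iu = np.triu_indices(len(directions), k=1)
    return eval_legendre(ell, cos_ij[iu]).mean()

def beta_14(directions):
    return beta_l(directions, 1) + 4.0 * beta_l(directions, 4)

rng = np.random.default_rng(0)
v = rng.normal(size=(40, 3))                    # 40 hit PMTs
v /= np.linalg.norm(v, axis=1, keepdims=True)   # unit vectors
print("beta_14 =", beta_14(v))                  # near 0: isotropic
\end{verbatim}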
\subsection{Uncertainties on the Isotropy Measure}
We parameterized the difference between the predicted $\beta_{14}$\xspace PDF
and the true PDF by a fractional shift in the mean,
$\bar{\beta}_{14}$, and a broadening of the width,
$\sigma_{\beta_{14}}$. We also allowed for an energy dependence in
the shifts.
Figure~\ref{fig:isotropy1} shows $\beta_{14}$\xspace distributions of Phase~II data
from $^{252}$Cf and \NS~sources and from corresponding MC simulations. The
\NS~source emitted a single 6.13~MeV $\gamma$ ray, which usually
underwent Compton scattering and produced one or more electron tracks,
while neutrons from the $^{252}$Cf source were typically captured in Phase~II
by the chlorine additive, leading to a cascade of several $\gamma$
rays. It is clear from the figure that the \NS~data and Monte Carlo
agree very well, while the Monte Carlo simulation of the $^{252}$Cf source
shows a very small shift toward higher $\beta_{14}$ values (less
isotropic events than in the data). This shift is discussed in
Sec.~\ref{sec:piib14neut}.
\begin{figure}
\begin{center}
\includegraphics[width=3.4in]{beta14_n16_cf_comp.eps}
\caption{\label{fig:isotropy1} $\beta_{14}$ isotropy distributions for
$^{252}$Cf data and MC and \NS~data and MC. There is a very small shift of
the Monte Carlo $^{252}$Cf $\beta_{14}$ distribution toward higher
(less isotropic) values.}
\end{center}
\vspace{-4ex}
\end{figure}
Errors in the simulated distributions of $\beta_{14}$ can have
several sources: incorrect modeling of the detector optics or
photomultiplier tubes, unmodeled event vertex reconstruction errors,
errors in the model of the production of Cherenkov light (including
the interactions of $\gamma$ rays and electrons in the detector) and,
for neutrons captured on $^{35}$Cl, uncertainties in our knowledge of
the $\gamma$ cascade sequences and correlations between the directions
of the multiple $\gamma$ rays.
Except for the last item, these errors affect all event types.
For Phase~I, in which neutrons were captured on deuterons, we allowed
for correlations among the uncertainties on all signals and most
backgrounds. For Phase~II, we treated the uncertainties on the mean
and width of the $\beta_{14}$ distribution for NC events and
photodisintegration neutrons separately from the other event types.
Uncertainties on the $\beta_{14}$ distributions of $\beta$s and
$\gamma$s from radioactive background events were treated the same as
for CC and ES events. The one exception to this was PMT
$\beta$-$\gamma$ events, whose location at the PMT array led to
effects on the $\beta_{14}$ distribution that are not present in the
other signals. The $\beta_{14}$ distribution and associated
uncertainties for PMT $\beta$-$\gamma$s are discussed in
Sec.~\ref{s:pmtpdf}.
As usual in this analysis, we derived uncertainties on the
mean, width, and energy dependence of the $\beta_{14}$ distribution by
comparing calibration source data to Monte Carlo simulations of the
calibration source runs. When we found a difference that was
corroborated by more than one source, or was caused by known errors in
the simulation, we adjusted the simulated distribution by shifting the
mean of the distribution and/or convolving the distribution with a
smearing function to better match the calibration data. In such
cases, additional uncertainties associated with the correction were
included.
\subsubsection{$\beta_{14}$ Uncertainties for Phase~II Electron-like Events}
\label{sec:b14iie}
The primary measure of isotropy uncertainties for Phase~II
electron-like events comes from comparisons of $^{16}$N calibration
source data to Monte Carlo simulation. We fit Gaussians to both the
data and simulated events for each run, and calculated the fractional
difference between the fitted parameters. Figure~\ref{fig:piin16}
shows the fractional difference in the means as a function of $R^3$.
Each point shown is the fractional difference for a single run, with
the error bar evaluated as the combination of the uncertainty on the
fit parameters for data and Monte Carlo events. The detector region
in which the source was deployed has been identified for each run.
Also shown in Fig.~\ref{fig:piin16} are the
averages of these differences, in several radial bins.
\begin{figure}
\begin{center}
\includegraphics[width=3.4in]{beta14_salt_n16_mean_bands.eps}
\caption{\label{fig:piin16}(Color online) Fractional differences in
the mean of the $\beta_{14}$ distributions for data and Monte Carlo,
for the Phase~II $^{16}$N calibration source. Also shown in the
figure are the averages in each radial bin, with the bands indicating
the volume-weighted uncertainty in each bin. }
\end{center}
\end{figure}
The uncertainty on each average is the standard deviation of the
points in that bin, weighted by the volume represented by the bin
(smaller volumes have larger uncertainties). The overall weighted
average within the entire 550~cm radius fiducial volume is consistent
with zero, with an uncertainty of $\pm 0.21$\%. Because the calibration
data were collected at a high rate relative to normal neutrino data
runs, we added an uncertainty of $\pm 0.1$\% for the difference in $\beta_{14}$\xspace
between high-rate and low-rate data, evaluated by comparing low-rate
and high-rate $^{16}$N source runs, as well as a small uncertainty of
$\pm 0.002$\% for a possible un-modeled time dependence, obtained by
comparing the data and Monte Carlo differences over time. The
quadrature combination of these uncertainties on the mean of the
$\beta_{14}$ distribution totaled $\pm 0.24$\%. A similar
analysis was performed for the width of the $\beta_{14}$ distribution,
yielding a total fractional uncertainty of $\pm 0.54$\%.
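The volume-weighted averaging used in this and the following
subsections can be sketched as follows; the radial binning and per-bin
values are illustrative placeholders.
\begin{verbatim}
import numpy as np

r_edges = np.array([0.0, 200.0, 350.0, 450.0, 550.0])  # cm (assumed)
bin_means = np.array([0.001, -0.002, 0.000, 0.002])    # bin averages
vol = r_edges[1:]**3 - r_edges[:-1]**3                 # shell volumes
print("volume-weighted average = %.4f"
      % (np.sum(vol * bin_means) / np.sum(vol)))
\end{verbatim}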
\subsubsection{$\beta_{14}$ Uncertainties for Phase~I Electron-like Events}
\label{sec:pib14}
We applied an identical analysis to the Phase~I $^{16}$N data
but, as shown in Figure~\ref{fig:pin16}, we found a difference of
$-0.81\pm 0.20$\% between the means of the $\beta_{14}$ distributions
for source data and source simulations. Comparison of $^{16}$N data
between Phase~I and Phase~II showed them to be consistent, and the
(data-Monte Carlo) difference seen in Fig.~\ref{fig:pin16} to be due
to a shift in the simulated events. Further investigation showed that
the difference was caused by the value of the Rayleigh scattering
length used in the Phase~I simulation. Explicit measurements of the
Rayleigh scattering had been made and used in the simulation for
Phase~II but no such measurements existed for Phase~I. Use of the
Phase~II Rayleigh scattering length in Phase~I simulations was found
to produce the desired magnitude of shift, and we therefore corrected
the $\beta_{14}$ values of all simulated Phase~I events by a factor of
$(1-0.0081)=0.9919$.
\begin{figure}
\begin{center}
\includegraphics[width=3.4in]{beta14_d2o_n16_mean_bands.eps}
\caption{\label{fig:pin16}(Color online) Fractional differences in the
mean of the $\beta_{14}$ distributions for data and Monte Carlo, for
the Phase~I $^{16}$N calibration source. Also shown in the figure are
the averages in each radial bin, with the bands indicating the
volume-weighted uncertainty in each bin. }
\end{center}
\end{figure}
We included three uncertainties associated with this correction. The
first was 0.20\% on the correction itself, evaluated from the
volume-weighted average of the data and Monte Carlo differences for
Phase~I, as shown in Fig.~\ref{fig:pin16}. To take into account the
fact that we used the consistency in the $^{16}$N data between the two
phases to support the correction of $-0.81$\%, we added in quadrature
the uncertainty on the difference between the means of the Phase~I
and Phase~II $^{16}$N $\beta_{14}$ distributions, which was 0.34\%.
Finally, because we used the consistency of the Phase~II data with the
Monte Carlo simulation as evidence that the Phase~I $\beta_{14}$
distribution was correct, aside from the Rayleigh scattering
correction, we included the volume-weighted Phase~II uncertainty on
the offset of the mean (0.21\% from Fig.~\ref{fig:piin16} in
Sec.~\ref{sec:b14iie}).
The evaluations of the uncertainties associated with rate
dependence and time dependence in Phase~I were 0.08\% and 0.03\%,
respectively, and the overall uncertainty on the mean of the
$\beta_{14}$ distribution in Phase~I thus totaled 0.42\%.
We evaluated the uncertainty on the width of the $\beta_{14}$
distribution for Phase~I in the same way as for Phase~II, finding a
fractional uncertainty which also totaled 0.42\%.
\subsubsection{$\beta_{14}$ Uncertainties for Phase~II Neutron Capture Events
\label{sec:piib14neut}}
Neutron capture events in Phase~II were distinct from other
neutrino-induced events and backgrounds in that the $\gamma$ cascade
was more isotropic than a single electron or $\gamma$ ray. The
primary measurement of the uncertainty on the mean of the $\beta_{14}$
distribution comes from deployments of the $^{252}$Cf source, which
produced several neutrons per fission decay. The $\beta_{14}$
distribution of the resulting neutron capture events was noticeably
non-Gaussian, and we therefore derived uncertainties on the mean and
width by fitting the $\beta_{14}$ distributions from simulated
$^{252}$Cf runs directly to the distributions of data. The fit allowed
for scaling as well as convolution with a Gaussian smearing function.
Figure~\ref{fig:cffit} shows the fit of a simulated $^{252}$Cf run to
data, in which the fitted scaling was $-$1.2\% and the smearing was an
additional 1.8\% of the width of the Monte Carlo distribution.
\begin{figure}
\begin{center}
\includegraphics[width=3.4in]{b14cfsplinefit.eps}
\caption{\label{fig:cffit}(Color online) Fit of Monte Carlo simulated
$\beta_{14}$ distribution for neutron capture events from $^{252}$Cf
to data taken with the $^{252}$Cf source. The fitted shift for this
sample is $-$1.2\%, and the additional smear is 1.8\%, before any
corrections for bias.}
\end{center}
\end{figure}
We derived scaling factors from fits like that in
Fig.~\ref{fig:cffit} for all $^{252}$Cf runs, and then volume-weighted
them in the same way as for the $^{16}$N data. The average of the
volume-weighted differences showed an overall offset between the means
of the $\beta_{14}$ distributions for data and Monte Carlo of $\sim
-1.4$\%. This result was not consistent with that from the $^{16}$N
data for Phase II (which, as discussed above, had no significant
offset), which indicated that the shift was not due to a detector
effect. To check whether the shift was caused by mis-modeling of the
$^{252}$Cf source in the simulation, we performed the same analysis on
several types of neutron capture events: neutrons produced by passage
of a muon through the detector (`muon followers'), neutrons from a
tagged Am-Be source, and neutrons produced by deuteron
photodisintegration during the deployment of a radon spike in the
detector. Figure~\ref{fig:b14src} shows results from these sources.
An energy-dependent fit to all sources except $^{252}$Cf showed an
offset of $-1.12\pm0.31$\%, consistent with the data from the
$^{252}$Cf source. This indicated that the offset was likely not a
source effect but was instead associated with the simulation of the
$\gamma$ cascade from neutron captures on chlorine, possibly with some
contribution from the energy-dependent correction of the Monte Carlo
value for $\beta_{14}$\xspace presented in Sec.~\ref{s:bofenergy}. All sources taken
together gave an overall offset of $-1.44$\%, and we therefore
corrected the $\beta_{14}$ PDF by multiplying each simulated event's
$\beta_{14}$ value by $(1 + \delta_{\beta_{14}}) = (1-0.0144)=0.9856$.
\begin{figure}
\begin{center}
\includegraphics[width=3.4in]{beta14_neutrons_plot.eps}
\caption{\label{fig:b14src}(Color online) Fractional difference in
mean $\beta_{14}$ between data and Monte Carlo events for several
neutron sources. The horizontal band indicates the error on the
overall $-$1.44\% correction.}
\end{center}
\end{figure}
The uncertainties on this correction came first from the
uncertainty on the overall average, which was 0.17\%. To this we
added in quadrature the same rate- and time-dependent uncertainties as
were calculated for the Phase~II $^{16}$N sources. We also added an
uncertainty of 0.09\% associated with the multiplicity of neutrons from
the $^{252}$Cf source (neutrons produced either by photodisintegration
of deuterons or by the NC reaction are single, whereas the $^{252}$Cf
source produced several neutrons per decay), and a 0.03\% uncertainty
to account for the relatively sparse sampling of the detector, giving a
total of 0.22\%. Conservatively, we included a
further uncertainty based on the difference between $^{252}$Cf and the
other neutron-source data, a one-sided uncertainty of 0.31\%. The
total uncertainty on the mean of the $\beta_{14}$ distribution for
Phase~II neutron captures was therefore $^{+0.38}_{-0.22}$\%.
As well as a measure of any required shift, the fit described above
also allowed for the widths of the data and Monte Carlo distributions
to differ. A resolution parameter was varied in the fit, as the
standard deviation of the Gaussian by which the Monte Carlo
distribution was analytically convolved. The results for each $^{252}$Cf run
were volume-weighted using the procedure described above to obtain
an average overall smearing value. The same fit was performed on a
sample of Monte Carlo-generated data, and the bias determined from
these fits was subtracted from the overall average. The result was a
fractional smearing correction to be applied to the PDFs of 0.43\%,
with an uncertainty (including all sources described above: time,
rate, multiplicity, and sampling) of 0.31\%.
\subsubsection{$\beta_{14}$ Uncertainties for Phase~I Neutron Capture Events}
Neutrons created in Phase~I captured on deuterons, releasing a
single 6.25~MeV $\gamma$ ray. The uncertainties on the mean
and width of the $\beta_{14}$ distribution were therefore
well-estimated by the measurements made with the $^{16}$N
6.13~MeV $\gamma$-ray source, already discussed in
Sec.~\ref{sec:pib14}. We therefore used the same
uncertainties for both event types, applied in a correlated
fashion.
\subsubsection{Energy Dependence of $\beta_{14}$ Uncertainties}
\label{s:bofenergy}
A final systematic uncertainty on the $\beta_{14}$
distributions is their energy dependence. In Figure~\ref{fig:b14vE}
we show the energy
\begin{figure}
\begin{center}
\includegraphics[width=3.4in]{b14vE_all.eps}
\caption{\label{fig:b14vE}(Color online) Fractional shift in mean
$\beta_{14}$ between data and Monte Carlo simulation in Phase~II for
several calibration sources as a function of kinetic energy, with the
fit to Eq.~\eqref{eq:b14vE} shown. }
\end{center}
\end{figure}
dependence of the fractional difference between Monte Carlo
predictions of the mean of the $\beta_{14}$ distribution and data from
several different sources: the Phase~II radon spike, low and high
energy $^{16}$N source events, the $^{252}$Cf source (with the data
corrected by the 1.44\% shift discussed above), and $^8$Li-source
$\beta$ events in three energy bins. There clearly is an energy
dependence in the data, which we fit with a function of the form:
\begin{equation}
f = \delta_{\beta_{14}}+m_{\beta_{14}}(T_{\rm eff}-5.6\rm ~MeV),
\label{eq:b14vE}
\end{equation}
where $T_{\rm eff}$ is kinetic energy, and 5.6~MeV is the kinetic
energy at the high-energy $^{16}$N point (the point used to determine
the offset in the mean of the Phase~II electron $\beta_{14}$
distribution). With this parameterization, the offset
($\delta_{\beta_{14}}$) and the slope ($m_{\beta_{14}}$) are
uncorrelated. Given that all the sources exhibited the same trend, we
applied the same slope to all event types, but used the different
offsets and uncertainties for $\delta_{\beta_{14}}$ described in the
previous sections. We performed a similar analysis for Phase~I,
although less calibration data were available, and found that the same
slope fit the $^{16}$N and $^8$Li data in this phase.
We found no energy dependence in the broadening of the width
of the $\beta_{14}$ distributions. These uncertainties were therefore
treated as independent of energy.
The corrections and uncertainties to the $\beta_{14}$ distributions
are listed in Tables~\ref{tbl:b14sum1} and~\ref{tbl:b14sum2}.
\begin{table}[!ht]
\centering
\begin{tabular}{lcc}
\hline
Phase/Particles & $\delta_{\beta_{14}}$ & $m_{\beta_{14}}$ (10$^{-3}$ MeV$^{-1}$)\\
\hline \hline
II/electrons & 0.0$\,\pm\,$0.0024 & 2.76$\,\pm\,$0.696 \\
II/neutrons & $-0.0144\,^{+0.0038}_{-0.0022}$ & 2.76$\,\pm\,$0.696 \\
I/electrons & $-$0.0081$\,\pm\,$0.0042 & 2.76$\,\pm\,$0.696 \\
I/neutrons & $-$0.0081$\,\pm\,$0.0042 & 2.76$\,\pm\,$0.696 \\
\hline
\end{tabular}
\caption{\label{tbl:b14sum1} Summary of uncertainties on the
$\beta_{14}$ scale. The $\beta_{14}$\xspace of each event was corrected by:
$\beta_{14}\rightarrow\beta_{14}
(1+(\delta_{\beta_{14}}+m_{\beta_{14}}(T_{\rm eff}-5.6{\rm ~MeV})))$.}
\end{table}
\begin{table}[!ht]
\centering
\begin{tabular}{lcc}
\hline
Phase/Particles & Correction (\%) & Uncertainty (\%) \\
\hline \hline
II/electrons & 0.0 & $\pm\,$0.42 \\
II/neutrons & 0.43 & $\pm\,$0.31 \\
I/electrons & 0.0 & $\pm\,$0.42 \\
I/neutrons & 0.0 & $\pm\,$0.42 \\
\hline
\end{tabular}
\caption{\label{tbl:b14sum2} Summary of uncertainties on the
$\beta_{14}$ width.}
\end{table}
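A minimal sketch of the scale correction of Table~\ref{tbl:b14sum1},
applied here with the Phase~II neutron values to an arbitrary example
event, is:
\begin{verbatim}
def correct_beta14(b14, t_eff, delta=-0.0144, slope=2.76e-3):
    # Phase II neutron values from the beta_14 scale summary table
    return b14 * (1.0 + delta + slope * (t_eff - 5.6))

print(correct_beta14(0.45, 7.0))  # corrected value for a 7 MeV event
\end{verbatim}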
\section{Cuts and Efficiencies\label{sec:cuts}}
\label{s:cuts}
The data set contains two main types of background events: physics
backgrounds, due to radioactive decays, and instrumental backgrounds,
caused by the detector itself. Two sets of cuts were developed to
remove these events, described in Sections~\ref{s:cutdescdamn}
and~\ref{s:cutdeschlc}. Each set of cuts had an associated level of
signal loss, which was taken into account in the measurement of
neutrino flux and spectra as described in Sec.~\ref{s:cutacc}.
\subsection{Low-Level (Instrumental) Cuts}
\label{s:cutdescdamn}
There were many sources of instrumentally-generated events in
the SNO detector, which produced hits originating either in the PMTs
or in the electronics channels. Static discharges in the nitrogen in
the neck of the acrylic vessel and `flasher' PMTs, in which discharges
occurred within a photomultiplier tube itself, produced light in the
detector. Electronic pickup generated by noise on the deck above the
detector or by high-voltage breakdown could produce hits in
electronics channels. We removed these instrumental backgrounds with
a suite of loose `low-level' cuts that rejected events before event
reconstruction. The cuts were based on event characteristics such as
the distribution of PMT hit times, the presence of unusually low or
high PMT charges, or unusual time correlations between events (such as
bursts of events with large numbers of hits). More details on these
low-level cuts can be found in~\cite{longd2o,nsp}. We used the same
cuts and cut criteria here, with the exception that the simple burst
cut used in~\cite{longd2o} was not used in this analysis because it
was redundant with other burst cuts.
The acceptance of these cuts was re-evaluated for this
analysis, particularly in the low-threshold region (below $T_{\rm
eff}=5.0$~MeV) where the cuts had not previously been examined in
detail. We discuss the results of these cut acceptance measurements in
Sec.~\ref{s:inssac}.
\subsection{High-Level Cuts} \label{s:cutdeschlc}
Background radioactivity events were produced primarily by the decays
of $^{214}$Bi and $^{208}$Tl. Lower energy ($T_{\rm eff}<3$~MeV)
decays of these nuclei in the heavy water could appear above our
$T_{\rm eff}=3.5$~MeV threshold because of the broad energy resolution
intrinsic to a Cherenkov detector. Decays within the walls of the
acrylic vessel, the light water surrounding the vessel, and the
photomultiplier tube array could pass the energy cut and have
misreconstructed vertex positions which falsely placed them within the
fiducial volume. The PMT array was, by far, the radioactively hottest
component of the SNO detector and, consequently, the largest source of
background events. We designed a suite of 13 loose cuts that used
`high-level' information (reconstructed event position, direction, and
energy) to remove events whose likely origin was either outside the
fiducial volume or whose true energy was below our threshold. All of
the cuts were adjusted based exclusively on simulated events and
calibration data. Several of the cuts had a high degree of redundancy
in order to maximize background rejection. The acceptance of the cuts
was therefore evaluated collectively, as described in
Sec.~\ref{s:cutacc}.
Five of the high-level cuts removed backgrounds using
Kolmogorov-Smirnov (KS) tests of the hypothesis that the event had a
single Cherenkov-electron track. Two of these tests compared
azimuthal and two-dimensional (polar vs azimuthal) angular
distributions to those expected for Cherenkov light produced by an
electron, and two others did the same for hits restricted to a narrow
prompt time window. The fifth of these KS tests was a comparison of
the distribution of fitted PMT time residuals (see
Eq.~\eqref{eqn:ftp-tresid}) with the expected distribution for direct
Cherenkov light.
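As an illustration of one such test: for direct Cherenkov light from a
single track, the azimuthal angles of hits about the fitted direction
should be uniform, so the hypothesis can be tested against a uniform
distribution. The rejection probability of 0.01 used below is an
assumed, illustrative value, not the threshold used in the analysis.
\begin{verbatim}
import numpy as np
from scipy.stats import kstest

def ks_azimuth_cut(phi_hits, p_min=0.01):
    # KS test of hit azimuths against the uniform hypothesis
    result = kstest(np.asarray(phi_hits) / (2 * np.pi), "uniform")
    return result.pvalue < p_min          # True: event is removed

rng = np.random.default_rng(2)
print(ks_azimuth_cut(rng.uniform(0, 2 * np.pi, 40)))  # False: kept
print(ks_azimuth_cut(rng.uniform(0, np.pi / 4, 40)))  # True: clustered
\end{verbatim}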
Three more of the cuts applied event `isotropy' to remove
misreconstructed events. Events whose true origins were well outside
the fiducial volume but which reconstructed inside tend to appear very
anisotropic. For one of these cuts we used the mean angle between
pairs of PMTs, $\theta_{ij}$, and for another the isotropy parameter
$\beta_{14}$, which is described in Sec.~\ref{sec:beta14}. Both of
these have been used in previous SNO analyses~\cite{longd2o,nsp}. The
third of these cuts was based on the charge-weighted mean pair angle,
$\theta_{ij}$, in which each pair angle is weighted by the product of
the detected charges of the two PMTs in the pair.
Further cuts used information from the energy reconstruction
algorithm discussed in Sec.~\ref{sec:ftk}. Two cuts removed events
whose reported energy uncertainty was well outside of the range
expected from the known energy resolution. These are referred to in
Sections~\ref{sec:hlcsac}--\ref{sec:desac} as the `energy-uncertainty'
cuts. The third was a comparison of the energy estimated with FTK
(which used all hits) with that from a prompt-light-only energy
estimator. Events whose origins were outside the acrylic vessel and
which pointed outward often had a larger fraction of prompt hits
because the direct light was not attenuated by the acrylic vessel.
Such an event would have a higher energy as measured by a prompt-light
energy estimator than by the total-light energy reconstruction of FTK.
We normalized the ratio of these two energy estimates by the ratio of
prompt to total hits in the event. The cut itself was
two-dimensional: events were removed if the normalized ratio of energy
estimates was unusually large and the charge-weighted $\theta_{ij}$
was unusually low (the latter indicating an outward-pointing event
with a tight cluster of hits).
The last two high-level cuts were also used in determining the
PDFs for radioactive backgrounds from the PMTs. The first of these,
the in-time ratio (ITR) cut, removed events based on the ratio of the
prompt hits to the total hits. The prompt time window for the ITR cut
extended from 2.5~ns before the reconstructed event time to 5.0~ns
after, and the full-event window was roughly 250~ns long. The mean of
the ITR distribution for SNO events was 0.74. Events that were
reconstructed at positions far from their true origin tend to have
small ITR values, because the PMT hits were spread across the entire
time window. In previous analyses~\cite{longd2o,nsp,snoncd} we used
the ITR cut with a fixed threshold, rejecting events with an in-time
ratio smaller than 0.55. For the lower-energy events included in this
analysis, the lower number of hits caused the distribution of ITR to
broaden and introduced a large, energy-dependent bias in the
acceptance of the cut. We therefore changed the cut threshold to
scale with the number of hits ($N_{\rm hit}$) in an event. The fixed
value of 0.55 used in earlier publications corresponded to cutting
events that fell more than 2.7$\sigma$ below the mean of the
distribution, and we retained this criterion, so that the new version
of the ITR cut rejected events that were more than 2.7$\sigma$ below
the mean of 0.74, where now $\sigma = 0.43/\sqrt{N_{\rm hit}}$.
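With the values quoted above, the hit-scaled ITR criterion reduces to a
one-line test:
\begin{verbatim}
import numpy as np

def passes_itr(itr, n_hit):
    threshold = 0.74 - 2.7 * 0.43 / np.sqrt(n_hit)
    return itr >= threshold

print(passes_itr(0.60, 40))    # True:  threshold ~ 0.556 at 40 hits
print(passes_itr(0.50, 300))   # False: threshold ~ 0.673 at 300 hits
\end{verbatim}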
The last cut was aimed directly at removing events produced by
radioactive decays in the PMTs themselves. Such events produced light
either in the PMT glass or in the light water, just in front of the
PMTs. Although only a tiny fraction of such events were
misreconstructed inside the fiducial volume, the PMT array was
relatively hot, with a total decay rate from uranium and thorium chain
daughters of a few kHz. Because of their origin within or near the
PMTs, these events were characterized by a large charge in one PMT (or
distributed over a few nearby PMTs) with hit times that preceded the
reconstructed event time. The `early charge' (EQ) cut therefore
examined PMT hits in a window that ran from $-$75~ns to $-$25~ns
before the event time. If a PMT hit in this window had an unusually
high charge, or there was an unusually large number of hits in this
window, then the event was cut. To account for variations in PMT
gain, `unusually high charge' was defined by using the known charge
spectrum of the PMT in question to calculate the probability of
observing a charge as high as observed or higher. If more than one
hit was in the window, a trials penalty was imposed on the tube with
the lowest probability, and an event was cut if this trials-corrected
probability was smaller than 0.01. We defined `unusually large number
of hits' in a similar way, by comparing the number of hits observed in
the early time window to the expected number, given the total number
of hits in the event. If the Poisson probability of having the
observed number in the early time window was below 0.002, the event
was cut.
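The logic of the EQ cut can be sketched as follows. The exact form of
the trials penalty is not specified above, so the form
$1-(1-p_{\rm min})^{n}$ used here is an assumption.
\begin{verbatim}
from scipy.stats import poisson

def eq_cut(charge_probs, n_early, n_expected):
    """charge_probs: P(q >= observed) for each early-window hit."""
    if charge_probs:
        # trials penalty on the lowest probability (assumed form)
        p_corr = 1.0 - (1.0 - min(charge_probs))**len(charge_probs)
        if p_corr < 0.01:
            return True               # unusually high early charge
    # P(N >= n_early) given the expected early-window occupancy
    if poisson.sf(n_early - 1, n_expected) < 0.002:
        return True                   # unusually many early hits
    return False

print(eq_cut([0.5, 0.001], n_early=2, n_expected=1.5))  # True: cut
print(eq_cut([0.4], n_early=1, n_expected=1.5))         # False: kept
\end{verbatim}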
\subsection{Burst Removal}
\label{sec:bursts}
Atmospheric neutrinos, spontaneous fission, and cosmic-ray
muons could all produce bursts of events that were clearly not due to
solar neutrinos. Most of these bursts had a detectable primary event
(like a high-energy atmospheric-neutrino event) followed by several
neutron events. In addition, many instrumentally-generated events
came in bursts, such as those associated with high-voltage breakdown
in a PMT.
We therefore applied several cuts to the data set to remove
most of these time-correlated events. Four of these were part of the
suite of instrumental cuts described in Sec.~\ref{s:cutdescdamn}. The
first removed events that were within 5~$\mu$s of a previous event
and, therefore, eliminated events associated with PMT afterpulsing or
Michel electrons from decays of stopped muons. The second removed all
events within 20 seconds of an event that had been tagged as a muon.
Most of these `muon followers' were neutrons created by passage of a
cosmic-ray muon through the heavy water, which captured either on
deuterons or, in Phase II, on $^{35}$Cl, but the cut also removed
longer-lived cosmogenic activity. The muon follower cut resulted in a
very small additional overall detector deadtime because of the very
low rate of cosmic rays at SNO's depth. Atmospheric neutrinos could
also produce neutrons, either directly or by creating muons which, in
turn, disintegrated deuterons. We therefore removed any event within
250~ms of a previous event that had $N_{\rm hit}>60$ (Phase~I) or
$N_{\rm hit}>150$ (Phase~II). The fourth cut was aimed primarily at
residual instrumental bursts, and removed events that were part of a
set of six or more with $N_{\rm hit}>40$ that occurred within an
interval of six seconds.
Because of the relatively loose criteria used, after these
cuts were applied there were still time-correlated events in the SNO
data set that were very unlikely to be solar neutrinos, but were
primarily low-multiplicity neutrons created by atmospheric neutrino
interactions. We therefore applied a final `coincidence cut' that
removed events if two or more occurred within a few neutron capture
times of each other. For Phase~I this window was 100~ms; a shorter
window of 15~ms was used for Phase~II because of the shorter neutron
capture time on chlorine compared to deuterium. The cut was
`retriggerable', in that the window was extended for its full length
past the last event found. If a new event was thus `caught', the
window was again extended. We calculated that this cut removed less
than one pair of events from each data set due to accidental
coincidences.
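The retriggerable window reduces to a simple scan over time-ordered
events; a minimal sketch:
\begin{verbatim}
def coincidence_cut(times, window):
    """times: sorted event times; returns indices of survivors."""
    keep, i = [], 0
    while i < len(times):
        j = i
        while j + 1 < len(times) and times[j + 1] - times[j] < window:
            j += 1                 # retrigger: extend past caught event
        if j == i:
            keep.append(i)         # isolated event survives
        i = j + 1
    return keep

times = [0.0, 0.005, 0.012, 1.0, 2.0, 2.010]     # seconds
print(coincidence_cut(times, window=0.015))      # [3]: only 1.0 s kept
\end{verbatim}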
\subsection{Cut Summary}
The numbers of events in the data sets after successive application of
each set of cuts are shown in Table~\ref{t:cuts}. The burst cuts
described in Sec.~\ref{sec:bursts} are included in instrumental cuts,
except for the final coincidence cut, which appears in the last line
of the table.
\begin{table}[!ht]
\begin{center}
\begin{tabular}{lrr}
\hline \hline
Events & \multicolumn{1}{c}{Phase~I} & \multicolumn{1}{c}{Phase~II} \\
\hline
Full data set & 128421119 & 115068751 \\
Instrumental & 115328384 & 102079435 \\
Reconstruction & 92159034 & 77661692 \\
Fiducial volume ($<$550~cm) & 11491488 & 8897178 \\
Energy range (3.5--20$\,$MeV) & 25570 & 40070 \\
High-level cuts & 9346 & 18285 \\
Coincidence cut & 9337 & 18228 \\
\hline \hline
\end{tabular}
\caption{\label{t:cuts}Number of events remaining in the data set
after successive application of each set of cuts.}
\end{center}
\end{table}
\subsection{Cut Acceptance}
\label{s:cutacc}
As in previous analyses \cite{longd2o}, the fraction of signal events
expected to pass the full set of analysis cuts (the `cut acceptance')
was determined by separating the cuts into three groups: instrumental,
reconstruction, and high-level cuts. Correlations between these
groups had been shown to be minimal \cite{nsp}, and it was verified
that this was still true after the addition of new high-level cuts for
this analysis.
The $^{16}$N and $^{8}$Li calibration sources were used for the primary
measurements of cut acceptance and the $^{252}$Cf source was used for neutron
capture events in Phase~II. Neutron events in Phase~I are
well-modeled by $^{16}$N events since capture on deuterium resulted in a
single $\gamma$ at 6.25~MeV and $^{16}$N was a source of 6.13~MeV
$\gamma$s.
\subsubsection{Instrumental Cut Acceptance}
\label{s:inssac}
The instrumental cuts were not simulated in the Monte Carlo code and,
therefore, we could not make a relative estimate of their acceptance
by comparing simulation to data. Instead, an absolute measure of their
acceptance was made using calibration data and applied as a correction
(with uncertainties) to the PDFs.
Being a near-perfect source of CC-like electron events, the $^{8}$Li
source was used to evaluate the signal loss for electron-like events,
and $^{252}$Cf was used for Phase~II neutron capture events. The $^{16}$N source
was used as a check and any difference in the values obtained was
conservatively taken as a two-sided systematic uncertainty.
Figure~\ref{f:n16li8inssac} shows the $^{16}$N and $^{8}$Li measurements in
Phase~I. The weighted mean of the $^{8}$Li signal loss shown in the
figure was taken as the correction to the PDFs, and the median
deviation of the points from this value was used to represent the
energy-dependent uncertainty.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.48\textwidth]{d2on16li8inssac.eps}
\caption{\label{f:n16li8inssac}(Color online) Signal loss due to the
instrumental cuts for the $^{16}$N and $^{8}$Li calibration sources as a
function of reconstructed kinetic energy, in Phase~I.}
\end{center}
\end{figure}
The $^{16}$N source, which was deployed more frequently and at more
positions than $^8$Li, was used to determine time- and
position-dependent uncertainties. Runs were binned by position and
date, and the median deviation of the bin values from the best-fit
value was taken as the measure of systematic uncertainty.
After combination of the systematic uncertainties in quadrature,
the final estimates of signal loss due to the instrumental cuts were:
\begin{itemize}
\item Phase~I: $(0.214 \pm 0.026\,\mbox{(stat)} \pm 0.094\,\mbox{(syst)})$\%
\item Phase~II $e^-$: $(0.291 \pm 0.028\,\mbox{(stat)} \pm 0.202\,\mbox{(syst)})$\%
\item Phase~II $n$: $(0.303 \pm 0.003\,\mbox{(stat)} \pm 0.186\,\mbox{(syst)})$\%
\end{itemize}
where ``$e^-$'' refers to electron-like events and ``$n$'' to neutron
captures. The acceptance is given by one minus the fractional signal
loss and was applied as an adjustment to the normalization of the
PDFs.
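Converting a measured signal loss into a PDF normalization factor, with
the statistical and systematic terms combined in quadrature, is
illustrated below using the Phase~II neutron values.
\begin{verbatim}
import numpy as np

loss, stat, syst = 0.00303, 0.00003, 0.00186   # Phase II neutrons
acceptance = 1.0 - loss
err = np.hypot(stat, syst)                     # quadrature combination
print("normalization = %.5f +/- %.5f" % (acceptance, err))
\end{verbatim}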
\subsubsection{Acceptance of Reconstruction}
Occasionally, the reconstruction algorithm failed to converge and
returned no vertex for an event. In past analyses, an upper bound was
placed on the resulting signal loss using calibration source data, but
a different approach was used in this analysis. What matters is how
well the effect is reproduced in the simulation; therefore, the
acceptances of data and Monte Carlo events were compared, and the
difference of the ratio from unity was taken as a systematic
uncertainty on the PDF normalization.
Results from the $^{16}$N source, and the $^{252}$Cf source for Phase~II
neutrons, demonstrated that the signal loss in the data was reproduced
by the simulation to within the statistical uncertainties. Analysis
of runs taken during the two phases showed no significant deviation
with time. A position-dependent uncertainty was evaluated by taking
the ratio of the acceptance of $^{16}$N data and Monte Carlo events as a
function of source deployment position. The difference of the
weighted average of the points from 1.0 was taken as the value of the
uncertainty. The $^{8}$Li source was used to investigate energy
dependence. As expected, the signal loss decreased at higher
energies, where more information was available to reconstruct an
event. The simulation was shown to reproduce this effect very
accurately and the uncertainty was therefore treated in the same
manner as the position-dependent uncertainty.
Combining the systematic uncertainties in quadrature, we obtained the
final uncertainties associated with reconstruction acceptance:
\begin{itemize}
\item Phase~I: $\pm 0.034\%$ (stat) $\pm 0.060\%$ (syst)
\item Phase~II $e^-$: $\pm 0.037\%$ (stat) $\pm 0.090\%$ (syst)
\item Phase~II $n$: $\pm 0.000\%$ (stat) $\pm 0.009\%$ (syst)
\end{itemize}
\subsubsection{High-Level Cut Acceptance}
\label{sec:hlcsac}
To take into account the acceptance of the high-level cuts, the ratio
of the cut acceptance for data and Monte Carlo events was calculated
and applied to the PDFs as a normalization correction. This ratio was
evaluated as a function of energy, position and time.
The energy-uncertainty cuts described in Sec.~\ref{s:cutdeschlc} were
observed to have much stronger variations in signal loss as a function
of position and energy than the other high-level cuts and were
therefore treated separately. It was verified that the correlations
between the two resulting subsets of high-level cuts were minimal, so
that treating them independently was a valid approach. The following
sections describe the analysis for each subset of cuts, where `reduced
high-level cuts' refers to the subset that does not include the
energy-uncertainty cuts.
\subsubsection{Reduced High-Level Cut Acceptance}
\label{s:redhlcsac}
The data/Monte Carlo acceptance ratio and its uncertainty were
calculated for each calibration source run. The runs were divided
into radial bins, and the error-weighted mean and standard deviation
were calculated in each bin. Finally, the volume-weighted average of
the bin values was calculated.
The energy dependence of the acceptance ratio was investigated using
$^{16}$N and $^{8}$Li data for electron-like events and $^{252}$Cf for Phase~II
neutron capture events. The $^{16}$N data were restricted to energies
below 9~MeV to avoid complications associated with event pile-up
caused by the high rate of the calibration source.
The measurements from $^{16}$N and $^{8}$Li were in very good agreement, and
were both consistent with the acceptance ratio having no dependence on
energy. The normalization correction for the PDFs was therefore
evaluated using the $^{16}$N source data by taking the weighted mean of
the values in each energy bin. The median deviation of the $^{8}$Li
points from the best-fit value was taken as a systematic uncertainty
on the energy dependence.
The acceptance ratio for Phase~II neutron capture events was
evaluated using $^{252}$Cf data. To avoid pile-up of fission $\gamma$s, the
events were required to have energies in the interval 4.5--9.5~MeV. An
energy-dependent uncertainty was included to account for any variation
of individual energy bins from the overall average.
The stability of the acceptance as a function of time was studied
using $^{16}$N runs taken in the center of the detector. No trend was
observed, but the time variability was incorporated as an additional
systematic uncertainty.
The $^{16}$N source was also used to evaluate a systematic uncertainty
associated with a possible position dependence of the acceptance
ratio. Runs were binned by position in the detector, the
volume-weighted average of the bins was found and the mean deviation
of the ratio in each bin from this value was calculated. A comparison
of $^{16}$N and $^{252}$Cf source data showed that they exhibited statistically
equivalent position dependences, so the more widely deployed $^{16}$N
source was used to quantify this effect for both electron-like and
neutron capture events.
The acceptance corrections and associated uncertainties derived from
the difference between the high-level cut acceptances for data and
Monte Carlo events are summarized in Table~\ref{t:hlcsac}.
\begin{table}[!ht]
\begin{center}
\begin{tabular}{lccc}
\hline \hline
 & Phase~I & Phase~II $e^-$ & Phase~II $n$ \\
\hline
Correction & 0.9945 & 0.9958 & 0.9983 \\
\hline
Stat uncert (\%) & 0.0273 & 0.0159 & 0.0196 \\
Energy dep (\%) & 0.1897 & 0.1226 & 0.0005--2.3565 \\
Position dep (\%) & 0.1630 & 0.3144 & 0.3144 \\
Time dep (\%) & 0.0805 & 0.0130 & 0.0130 \\
\hline \hline
\end{tabular}
\caption{\label{t:hlcsac}Correction and associated uncertainties for
the high-level cut acceptance ratio. The Phase~II neutron
energy-dependent uncertainty was treated differentially with energy;
the quoted range covers the value across the energy spectrum.}
\end{center}
\end{table}
\subsubsection{Energy-Uncertainty Cut Acceptance}
\label{sec:desac}
We expect that the effect of placing cuts on the uncertainty on the
estimate of an event's energy reported by the energy reconstruction
algorithm should be the same for data and Monte Carlo events.
Nevertheless, uncertainties on this assumption were evaluated using
the $^{16}$N and $^{252}$Cf source data, applying the same energy ranges as in
the reduced high-level cut analysis (Sec.~\ref{s:redhlcsac}).
Differential uncertainties were evaluated using the same method as for
the reduced high-level cuts. The stability over time was measured
using $^{16}$N data. The acceptance ratio was observed to be stable, but
an additional uncertainty was included based on the spread of the
points.
The $^{16}$N and $^{252}$Cf data showed statistically equivalent
position-dependent behavior in the acceptance of the
energy-uncertainty cuts, and we therefore evaluated position-dependent
uncertainties using the more widely-deployed $^{16}$N source. $^{16}$N source
data were divided into 50~cm slices along the $z$-axis, and the
acceptance ratios calculated in the slices were combined in a
volume-weighted average. The uncertainty on this average was derived
from the deviation of the points from unity.
The energy-uncertainty cuts were even more sensitive to the effects of
pile-up than were the other high-level cuts. Therefore, to evaluate
an energy-dependent uncertainty on the acceptance ratio for
electron-like events, events from the $^{16}$N source were restricted to
energies below 7~MeV, and the lower rate $^{8}$Li source was used for
measurements at higher energies. $^{252}$Cf data were used for Phase~II
neutron capture events, with the deviations from unity measured in the
8.5--9$\,$MeV bin also applied to higher energy events. This resulted
in energy-dependent uncertainties for both electron-like and neutron
capture events.
The uncertainties in acceptance were applied as uncertainties in
normalization of the PDFs. The values are summarized in
Table~\ref{t:desac}.
\begin{table}[!ht]
\begin{center}
\begin{tabular}{lrrr}
\hline \hline
 & \multicolumn{1}{c}{Phase~I} & \multicolumn{1}{c}{Phase~II $e^-$} & \multicolumn{1}{c}{Phase~II $n$} \\
\hline
Stat uncert (\%) & 0.0377 & 0.0668 & 0.0322 \\
Position dep (+) (\%) & +0.0750 & +0.0838 & +0.0838 \\
Position dep ($-$) (\%) & $-$1.0760 & $-$0.9897 & $-$0.9897 \\
Time dep (\%) & 0.0834 & 0.0531 & 0.0531 \\
\hline \hline
\end{tabular}
\caption{\label{t:desac}Uncertainties on the energy-uncertainty cut
acceptance ratio. Energy-dependent uncertainties were treated
differentially with energy and are not shown. The uncertainty in
position is asymmetric.}
\end{center}
\end{table}
\subsubsection{Overall Cut Acceptance}
The final correction to the PDF normalization comes from combination
of the high-level cut correction (Table~\ref{t:hlcsac}) and the
instrumental cut correction (Sec.~\ref{s:inssac}). The various
contributions to uncertainty on signal loss were treated as
uncorrelated and combined in quadrature to give the final uncertainty
on the cut acceptance correction. Table~\ref{t:finalsac} lists the
final corrections and uncertainties.
\begin{table}[!ht]
\begin{center}
\begin{tabular}{lccc}
\hline \hline
 & Phase~I & Phase~II $e^-$ & Phase~II $n$ \\
\hline
Correction & 0.9924 & 0.9930 & 0.9954 \\
Pos uncertainty (\%) & 0.34--0.45 & 0.41--0.80 & 0.38--2.70 \\
Neg uncertainty (\%) & 1.12--1.17 & 1.07--1.08 & 1.06--1.65 \\
\hline \hline
\end{tabular}
\caption{\label{t:finalsac}Corrections applied to the Monte
Carlo-generated PDFs due to cut acceptance. The uncertainties were
evaluated differentially with energy; the quoted range covers their
values across the energy spectrum.}
\end{center}
\end{table}
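As a cross-check, the overall corrections are consistent with the
product of the instrumental-cut acceptance (Sec.~\ref{s:inssac}) and
the high-level cut correction (Table~\ref{t:hlcsac}); the residual
differences at the $10^{-4}$ level reflect rounding of the inputs.
\begin{verbatim}
instrumental_loss = {"I": 0.00214, "II e-": 0.00291, "II n": 0.00303}
hlc_correction = {"I": 0.9945, "II e-": 0.9958, "II n": 0.9983}
for key in instrumental_loss:
    total = (1.0 - instrumental_loss[key]) * hlc_correction[key]
    print("%-6s overall correction = %.4f" % (key, total))
\end{verbatim}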
Figure~\ref{f:sacrifice} shows a comparison of the cut acceptance for
data and Monte Carlo events from a single $^{252}$Cf run in Phase~II. The
full set of analysis cuts was applied to both data and simulation, and
the Monte Carlo-predicted acceptance was corrected by the value from
Table~\ref{t:finalsac}. As the figure shows, the Monte Carlo
simulation reproduces the shape of the data distribution very closely.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.48\textwidth]{acceptance_srccut.eps}
\caption{\label{f:sacrifice}(Color online) Acceptance of the full set
of analysis cuts for both data and Monte Carlo events from a single
$^{252}$Cf run in Phase~II, as a function of kinetic energy. }
\end{center}
\end{figure}
\section{Trigger Efficiency \label{sec:treff}}
As discussed in Sec.~\ref{sec:dataset}, the primary trigger
for SNO was a coincidence of PMT hits within a 93~ns time window, set
to $N_{\rm coinc}=18$ hits for the early part of Phase~I and to
$N_{\rm coinc}=16$ hits for the remainder of Phase~I and all of
Phase~II. We define the `efficiency' of the trigger as the
probability that an event with $N_{\rm coinc}$ hits actually triggered
the detector. Small shifts in the analog (DC-coupled) baseline,
noise, and disabled trigger electronics channels could all lead to a
non-unity efficiency. We measured the efficiency using the isotropic
laser source, by triggering on the laser pulse and comparing an
offline evaluation of the trigger (by counting hits in a sliding 93~ns
window) to the output of the hardware trigger. We found that for the
$N_{\rm coinc}=18$ hit threshold, events with 23 or more hits in
coincidence triggered the detector with an efficiency greater than
99.9\% and, for the $N_{\rm coinc}=16$ hit threshold, the efficiency
reached 99.9\% at 21 hits. Figure~\ref{fig:trigturn} shows the
efficiency measured as a function of $N_{\rm coinc}$, for Phase~I at
the higher $N_{\rm coinc}=18$ threshold, and for Phase~II at the lower
$N_{\rm coinc}=16$ hit threshold.
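The offline trigger evaluation reduces to a sliding-window count over
sorted hit times; the sketch below uses illustrative hit-time
distributions.
\begin{verbatim}
import numpy as np

def offline_trigger(hit_times_ns, n_coinc=16, window_ns=93.0):
    t = np.sort(np.asarray(hit_times_ns))
    for i in range(len(t) - n_coinc + 1):
        if t[i + n_coinc - 1] - t[i] <= window_ns:
            return True        # some 93 ns window holds n_coinc hits
    return False

rng = np.random.default_rng(3)
prompt = rng.normal(100.0, 5.0, 25)        # clustered physics hits
noise = rng.uniform(0.0, 400.0, 3)         # random noise hits
print(offline_trigger(np.concatenate([prompt, noise])))   # True
\end{verbatim}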
\begin{figure}
\begin{center}
\includegraphics[height=0.26\textheight]{treff_leta.eps}
\caption{\label{fig:trigturn} Comparison of the trigger efficiencies
in the two data-taking phases and for the two different thresholds
used.}
\end{center}
\end{figure}
For events at our $T_{\rm eff}=3.5$~MeV analysis threshold, the mean
number of hits in an event over the full 400~ns event window was
$\sim$30 for Phase~I and $\sim$27 for Phase~II, with RMS values of 1.8
hits and 1.7 hits, respectively. The numbers of hits in the 400~ns
event window and in the 93~ns trigger coincidence window differed
primarily in the contribution from random PMT noise which, for both
phases, contributed on average roughly 1 additional hit in the 400~ns
event window. Thus, for both phases, the trigger efficiency was above
99.9\% for all but a negligible fraction of events with a high enough
$N_{\rm coinc}$ to pass the analysis cuts.
Because our PDFs and overall normalization were derived from
simulation, we compared the trigger-efficiency estimate from the data
to the simulation's prediction. We also compared the idealized
simulated trigger to a simulation that included variations in the
trigger baseline as measured by an online monitor. We found that the
Monte Carlo simulation's prediction of trigger efficiency was in
excellent agreement with our measurement for both SNO phases, and that
the measured variations contributed a negligible additional
uncertainty to our overall acceptance.
\section{Uncertainties on the Neutron Capture Efficiencies \label{sec:ncap}}
In Phase~I, neutrons produced through the NC reaction and
background processes were captured on deuterons within the heavy
water, releasing a single 6.25~MeV $\gamma$ ray. In Phase~II, the
neutrons were captured primarily on $^{35}$Cl, releasing a $\gamma$
cascade of total energy 8.6~MeV. The absolute cross sections for
these capture reactions, along with detector acceptance, determined
the rate of detected neutron events. The uncertainty on the neutron
capture efficiency for Phase~II overwhelmingly dominates that for
Phase~I in the final flux determinations because of the larger capture
cross section.
In this analysis, we used the Monte Carlo simulation to define the
central values of the neutron capture efficiencies. Included in our
simulation were the measured isotopic purity of the heavy water, as
well as its density and temperature and, for Phase~II, the measured
density of salt added to the D$_2$O.
To assess the systematic uncertainties on the neutron capture
efficiencies, we used data taken with the $^{252}$Cf source deployed
at many positions throughout the detector, and compared the observed
counting rates to simulations of the source runs. The differences
between data and simulated events provide an estimate of the
simulation's accuracy. The Phase~I and Phase~II data sets are
noticeably different in their neutron detection efficiency because of
the much larger capture cross section in Phase~II, and the
higher-energy $\gamma$ cascade from neutron capture on chlorine. We
therefore assessed the uncertainties in the two phases slightly
differently, as discussed below. We also compared the results of this
`direct counting' approach with a `time series analysis', in which the
relative times of events were used to extract the capture efficiency.
The two methods were in excellent agreement for both phases. Our
capture efficiency uncertainty for Phase~II is $\pm 1.4$\%, and for
Phase~I it is $\pm$2\%.
\subsection{Phase~II Neutron Capture Efficiency Uncertainties}
For the Phase~II analysis, neutron events from the $^{252}$Cf
source were selected using the same burst algorithm that was used in
previous SNO publications~\cite{nsp}. Neutrons were identified by
looking for prompt fission $\gamma$ events from the $^{252}$Cf decay,
and tagging subsequent events that occurred within 40~ms.
Figure~\ref{fig:funcomps} plots the neutron detection efficiency for
each source run as a function of radial position of the source in the
detector, for both data and Monte Carlo simulated events. The source
position for a run was determined by finding the mean reconstructed
position of the prompt fission $\gamma$ events, to eliminate the large
positioning uncertainties of the source deployment mechanism. The
efficiencies shown in Fig.~\ref{fig:funcomps} were each fitted to a
phenomenologically-motivated neutron detection efficiency function:
\begin{equation}
\epsilon(s) = A ({\rm tanh}(B(s-C)) - 1),
\label{eqn:tanh}
\end{equation}
where $\epsilon(s)$ gives the neutron capture efficiency at source
radius $s$.
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{mc_data_fits.eps}
\caption{\label{fig:funcomps}(Color online) Data and Monte Carlo
neutron detection efficiencies in Phase~II fitted to the
phenomenologically-motivated neutron detection efficiency function. }
\end{center}
\vspace{-4ex}
\end{figure}
To determine the uncertainty on the simulation's prediction of
capture efficiency, we first calculated the mean capture efficiency in
the D$_2$O\xspace volume, given the two functions shown in
Fig.~\ref{fig:funcomps}, as follows:
\begin{equation}
\epsilon = \frac{\int_{0}^{600.5}s^{2}\epsilon(s) ds
}{\int_{0}^{600.5}s^{2}ds}.
\end{equation}
We took the difference of 0.8\% between data and simulation as a
baseline uncertainty. (The mean detection efficiency measured this
way was 35.6\%).
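As a numerical illustration, the volume-weighted average of the
fitted function can be computed as follows; the parameter values of
Eq.~\eqref{eqn:tanh} used in this Python sketch are hypothetical,
chosen only to give an efficiency of roughly the measured size:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# eps(s) = A*(tanh(B*(s-C)) - 1), with illustrative
# (not fitted) parameter values; s is in cm.
A, B, C = -0.18, 0.006, 700.0

def eps(s):
    return A * (np.tanh(B * (s - C)) - 1.0)

# Volume-weighted mean efficiency over the D2O volume.
num, _ = quad(lambda s: s**2 * eps(s), 0.0, 600.5)
den, _ = quad(lambda s: s**2, 0.0, 600.5)
print(num / den)  # ~0.33 for these illustrative values
\end{verbatim}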
The normalization of the curves shown in
Fig.~\ref{fig:funcomps} depends on the strength of the $^{252}$Cf
source, which we know to 0.7\% based on {\it ex-situ} measurements.
An overall shift in reconstructed event positions, discussed in
Sec.~\ref{sec:hitcal}, also changed the measured efficiency in data
relative to the simulation results. By varying the value of this
shift within its range of uncertainty, we found that it resulted in
an additional 0.3\% uncertainty in the capture efficiency. The uncertainty
in the fit parameters of the neutron detection efficiency function was
included conservatively by taking the entire statistical uncertainty
on the data efficiency measurements of Fig.~\ref{fig:funcomps}, which
yields another 0.9\%. Lastly, we included a 0.1\% uncertainty to
account for the fraction of $^{250}$Cf in the $^{252}$Cf source (only
$^{252}$Cf is simulated by the Monte Carlo code). The overall
uncertainty on the neutron capture efficiency, calculated by adding
these in quadrature, was 1.4\%.
We checked these results by performing an independent time
series analysis, in which we fit directly for the efficiency at each
source deployment point based on the rates of neutron capture and
$\gamma$ fission events (the source strength is not an input
parameter). The fit included parameters associated with the overall
fission rate, backgrounds from accidental coincidences, and the mean
capture time for neutrons. We obtained the efficiency as a function
of source radial position, to which we fit the same efficiency
function from Eq.~\ref{eqn:tanh}, and extracted the volume-weighted
capture efficiency directly (rather than by comparison to Monte
Carlo). The mean efficiency calculated this way was $35.3\pm 0.6$\%,
in excellent agreement with the value of 35.6\% from the direct
counting method, and well within the uncertainties on both
measurements.
\subsection{Phase~I Neutron Capture Efficiency Uncertainties}
The measurement of neutron capture efficiency uncertainty for
Phase~I is more difficult than for Phase~II, primarily because the
lower capture cross section in Phase~I made identification of neutron
events from the $^{252}$Cf source difficult. The number of detected
neutrons per fission was small (less than one on average), and the
long capture time (roughly 50~ms) made coincidences more likely to be
accidental pile-up of prompt fission $\gamma$s than neutrons following
the $\gamma$s.
Instead of using the burst algorithm, we separated neutron
events from fission $\gamma$s based on their differing energies and mean
free paths in D$_2$O. Events were required to be more than 150~cm
from the source position and to have energies above the mean energy
expected for a neutron capture event, for both data and Monte Carlo
events. The detected rate of events after these cuts was used for the
data and Monte Carlo simulation comparison.
An additional parameter was added to the neutron detection
efficiency function for these data, as follows:
\begin{equation}
\epsilon(s) = A ({\rm tanh}(B(s-C)) - D),
\label{eqn:tanhprime}
\end{equation}
and the resulting fits to data and Monte Carlo are shown in
Figure~\ref{fig:cfmcdata_d2o}.
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{cf_d2o_mc_data_fit.eps}
\caption{\label{fig:cfmcdata_d2o}(Color online) Comparison of the fit
functions to the data and Monte Carlo in Phase~I.}
\end{center}
\vspace{-4ex}
\end{figure}
The difference of the volume-weighted integrals of the two
curves is just 0.9\%, but this small value results from the
cancellation of differences of opposite sign at different radii. The
shape difference is driven by small discrepancies between the data and
Monte Carlo fits at large radii, which are likely due to unassessed
systematic errors on
the data points themselves. We included additional uncertainties to
account for these. In particular, we included a 0.6\% uncertainty
associated with the statistical uncertainties of the data and Monte
Carlo neutron detection efficiency function parameters, and an
additional 0.6\% uncertainty associated with knowledge of the source
position. We also included a further uncertainty of 0.9\% to account
for data and Monte Carlo differences in the energy cut applied to
select neutrons.
We applied the same source-strength uncertainties as for the
Phase~II analysis, namely the 0.7\% absolute source strength
calibration, and 0.1\% from the (unmodeled) contamination of
$^{250}$Cf in the $^{252}$Cf source. The total uncertainty on the
neutron capture efficiency for Phase~I comes to 2\%.
To check our estimates, we also performed a time series
analysis of the $^{252}$Cf data. Unlike Phase~II, for Phase~I we
cannot extract the absolute efficiency to compare with that derived
from the direct counting method because of the 150~cm reconstruction
cut. Instead, we performed the time series analysis on both Monte
Carlo and source data runs, and compared them. We found the fractional
difference between the source-derived and Monte Carlo-derived
efficiencies to be just 0.3\%, well within the 2\% uncertainty
obtained from the direct counting method. One output of the time
series analysis is the neutron capture time: the time between neutron
emission from the $^{252}$Cf source and capture on a deuteron.
Figure~\ref{fig:d2otau} shows the neutron capture time as a function
of source radial position for both data and Monte Carlo. As the
$^{252}$Cf source approaches the acrylic vessel and light water
region, the capture time decreases significantly. The overall
agreement between the measured capture times in data and Monte Carlo
is very good throughout most of the volume.
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{tau_vs_rad_d2o.eps}
\caption{\label{fig:d2otau}(Color online) Mean neutron capture time
from the time series analysis in Phase~I as a function of source
position. The line shows the best fit to the simulation using a cubic
polynomial.}
\end{center}
\vspace{-4ex}
\end{figure}
\section{Backgrounds \label{sec:backgrounds}}
Lowering the energy threshold opened the analysis window to additional
background contamination, predominantly from radioactive decays of
$^{214}$Bi and $^{208}$Tl in the $^{238}$U and $^{232}$Th chains, respectively.
In Phase~II, neutron capture on $^{23}$Na produced a low level of
$^{24}$Na in the detector which, in its decay to $^{24}$Mg, produced a
low-energy $\beta$ and two $\gamma$s. One of these $\gamma$s has an
energy of 2.75~MeV, which could photodisintegrate a deuteron. The
result was some additional electron-like and neutron capture
background events. In addition, radon progeny that accumulated on the
surface of the AV during construction could have created neutrons
through ($\alpha$,$n$) reactions on isotopes of carbon and oxygen
within the acrylic.
In the past, most of these backgrounds were estimated using
separate self-contained analyses and then subtracted from the measured
neutrino fluxes. In this analysis, the Monte Carlo simulation was
used to create PDFs for each of 17 sources of background events
(except for PMT $\beta$-$\gamma$ events, for which an analytic PDF was
used in each phase, as described in Sec.~\ref{s:pmtpdf}), and the
numbers of events of each type were parameters in the signal
extraction fits. Table~\ref{t:bkgs} lists the sources of
physics-related backgrounds that were included in the fits.
\begin{table}[!ht]
\begin{center}
\begin{tabular}{lcc}
\hline \hline
Detector Region & Phase~I & Phase~II \\ \hline \hline
D$_2$O\xspace volume & Internal $^{214}$Bi\xspace & Internal $^{214}$Bi\xspace \\
 & Internal $^{208}$Tl\xspace & Internal $^{208}$Tl\xspace \\
 & & $^{24}$Na \\ \hline
Acrylic vessel & Bulk $^{214}$Bi\xspace & Bulk $^{214}$Bi\xspace \\
 & Bulk $^{208}$Tl\xspace & Bulk $^{208}$Tl\xspace \\
 & Surface ($\alpha$,$n$) $n$s & Surface ($\alpha$,$n$) $n$s \\ \hline
H$_2$O\xspace volume & External $^{214}$Bi\xspace & External $^{214}$Bi\xspace \\
 & External $^{208}$Tl\xspace & External $^{208}$Tl\xspace \\
 & PMT $\beta$-$\gamma$s & PMT $\beta$-$\gamma$s \\ \hline \hline
\end{tabular}
\caption[Sources of background events in the LETA analysis.]{The
sources of physics-related background events in the LETA analysis.}
\label{t:bkgs}
\end{center}
\end{table}
All of the Monte Carlo-generated PDFs were verified using calibration
sources. {\it Ex-situ} measurements~\cite{htio, mnox} of background
levels in the D$_2$O\xspace and H$_2$O\xspace provided {\it a priori} information for
several of them, which were used as constraints in the signal
extraction fits. In addition, corrections were applied after the
signal extraction fits to account for a number of background event
types that contributed much smaller levels of contamination. The
following sections describe these procedures.
\subsection{Background PDFs}
Most of the PDFs used in the signal extraction were created from Monte
Carlo simulations of the specific event types. However, because of
the limited number of simulated PMT $\beta$-$\gamma$ events available in
the radial range of interest, an analytic parameterization of the PDF
was used, as described in Sec.~\ref{s:pmtpdf}. This was verified by
comparison to the simulation and uncertainties associated with the
value of each parameter were propagated in the signal extraction fits.
The remainder of the background PDFs were verified by comparison of
calibration data to simulated events. The D$_2$O\xspace and H$_2$O\xspace backgrounds
were verified using the D$_2$O\xspace- and H$_2$O-region radon spikes in
Phase~II and calibration sources deployed in these regions. Bulk AV
backgrounds were verified using the $^{238}$U and $^{232}$Th sources,
and surface ($\alpha$,$n$) neutrons using the $^{252}$Cf source deployed near
the AV.
In all cases, the data and Monte Carlo event distributions agreed to
within the systematic uncertainties already defined for the PDFs.
Figure~\ref{f:spikefits} shows the energy dimension of a fit to the
internal radon spike. The fit was performed using the unbinned signal
extraction code (see Sec.~\ref{s:kernel}) in a simplified
configuration, as described in Sec.~\ref{s:eres:saltelec}. The result
is a good fit to the data, in particular at low energy.
Figure~\ref{f:avtl} shows a comparison of data to simulation for the
$^{232}$Th source deployed near the AV. A band is shown for the
simulated events, representing the quadrature sum of the statistical
uncertainties with the effect of applying the dominant systematic
uncertainties. The distributions in $T_{\rm eff}$, $R^3$ and $\beta_{14}$\xspace
show good agreement within the 1$\sigma$ uncertainties.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.48\textwidth]{spike_int_paper_zoom.eps}
\caption{\label{f:spikefits}(Color online) One dimensional projection
of the fit to the internal radon spike data.}
\end{center}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.48\textwidth]{AVtlenergy_3.eps}
\includegraphics[width=0.48\textwidth]{AVtlr3_3.eps}
\includegraphics[width=0.48\textwidth]{AVtlb14_3.eps}
\caption{\label{f:avtl}(Color online) Comparison of data to simulation
for $^{232}$Th source runs near the AV in Phase~II, in (a) $T_{\rm
eff}$, (b) $R^3$, and (c) $\beta_{14}$. The band represents the 1$\sigma$
uncertainty on the Monte Carlo-prediction, taking the quadrature sum
of the statistical uncertainties with the effect of applying the
dominant systematic uncertainties.}
\end{center}
\end{figure}
The cross section for photodisintegration affects the relative
normalization of the neutron and electron parts of the background
PDFs. The simulation used a theoretical value for the cross section
and the associated 2\% uncertainty was propagated in the signal
extraction fits.
The simulation of $^{24}$Na events used to generate a PDF was done
under the assumption of a uniform distribution of events within the
detector, since a primary source of $^{24}$Na was the capture of
neutrons produced by deployed calibration sources on $^{23}$Na.
$^{24}$Na was also introduced via the neck, and via the water systems,
which connected near the top and bottom of the AV. Therefore, the
signal extraction fits were redone with different spatial
distributions, in which the events originated either at the neck of
the AV or at the bottom, with a conservatively chosen 10\% linear
gradient along the $z$-axis. The difference from the baseline
(uniform distribution) fit was taken as a systematic uncertainty.
\subsection{Low Energy Background Constraints}
\label{s:bkgconst}
Several radioassays were performed during data taking to measure the
concentrations of radon and radium in the D$_2$O\xspace and H$_2$O\xspace regions, as
described in previous publications \cite{longd2o, htio, mnox}.
Although equilibrium was broken in the decay chains, the results are
expressed in terms of equivalent amounts of $^{238}$U and $^{232}$Th
assuming equilibrium for ease of comparison with other measurements.
The results were used to place constraints on the expected number of
background events in the analysis window. During Phase II, there was
a leak in the assay system used to measure the $^{238}$U chain
contamination that was not discovered until after data taking had
ended, so there is no accurate constraint on the $^{238}$U level in
the D$_2$O\xspace during that phase. Other limits based on secondary assay
techniques were found to be too loose to have any impact on the signal
extraction results and so were disregarded. The results of the assays
are given in Tables~\ref{t:exsitud} and~\ref{t:exsituh}.
\begin{table}[!ht]
\begin{center}
\begin{tabular}{lcc}
\hline \hline
Phase & Isotope & Concentration ($\times 10^{-15}$~g/g of D$_2$O\xspace) \\ \hline
I & $^{238}$U & 10.1$\,^{+3.4}_{-2.0}$ \\
 & $^{232}$Th & 2.09$\,\pm\,$0.21(stat)$\,^{+0.96}_{-0.91}$(syst) \\ \hline
II & $^{238}$U & --- \\
 & $^{232}$Th & 1.76$\,\pm\,$0.44(stat)$\,^{+0.70}_{-0.94}$(syst) \\ \hline \hline
\end{tabular}
\caption[{\it Ex-situ} constraints on background events.]{$^{238}$U
and $^{232}$Th concentrations in the D$_2$O\xspace volume, determined from {\it
ex-situ} radioassays in Phases I and II.}
\label{t:exsitud}
\end{center}
\end{table}
\begin{table}[!ht]
\begin{center}
\begin{tabular}{lcc}
\hline \hline
Phase & Isotope & Concentration ($\times 10^{-14}$~g/g of H$_2$O\xspace) \\ \hline
I & $^{238}$U & 29.5$\,\pm\,$5.1 \\
 & $^{232}$Th & 8.1$\,^{+2.7}_{-2.3}$ \\ \hline
II & $^{238}$U & 20.6$\,\pm\,$5.0 \\
 & $^{232}$Th & 5.2$\,\pm\,$1.6 \\ \hline \hline
\end{tabular}
\caption[{\it Ex-situ} constraints on background events.]{$^{238}$U
and $^{232}$Th concentrations in the H$_2$O\xspace volume, determined from {\it
ex-situ} radioassays in Phases I and II.}
\label{t:exsituh}
\end{center}
\end{table}
These concentrations were converted into an expected number of events
and were applied as constraints in the signal extraction fits, as
described in Sec.~\ref{s:penaltyterms}.
{\it In-situ} analyses \cite{simsthesis} were used to predict the
number of background events from $^{24}$Na decays in Phase II. The
predicted value of $392\pm117.6$ events was applied as a constraint in
the signal extraction fits.
\subsection{PMT $\beta$-$\gamma$ PDF}
\label{s:pmtpdf}
We use the term ``PMT events'' to refer to all radioactive decays in
the spherical shell region encompassing the PMTs and the PSUP. These
events were primarily $^{208}$Tl decays originating from $^{232}$Th
contamination in the PMT/PSUP components.
PMT events occurred at a high rate, but only a tiny fraction of them
reconstructed inside the signal box and within the fiducial volume: in
Phase~I, the acceptance was only $1.7\times 10^{-8}$ and in Phase~II
it was $5.9\times 10^{-8}$. Therefore, an enormous amount of computer
time would be needed to generate enough events to create a PDF.
Creation of a multi-dimensional PDF based entirely on simulation was
therefore deemed to be impractical.
A high-rate thorium source was deployed near the PSUP in both phases
to help model these events. However, interpretation of these data was
complicated by the fact that a point source with a sufficiently high
rate tends to produce significant `pile-up' of multiple events that
trigger in the same time window. This pile-up changes the topology of
the events to the extent that they are not characteristic of PMT
$\beta$-$\gamma$s, so they cannot be used directly as a model.
Therefore, an analytic parameterization of the PDF, given in
Eq.~\eqref{e:pmtpdf}, was used. For this, the $\cos\theta_{\odot}$ dimension was
assumed to be flat; the remaining three-dimensional PDF was of the
form:
\begin{eqnarray}
\label{e:pmtpdf}
P_{PMT}(T_{\rm eff},& \beta_{14}&, R^3) = e^{A\,T_{\rm eff}} \times
(e^{B\,R^3} + C)\nonumber \\ &\times& \mathcal{N}(\beta_{14}\, |\,
\bar{\beta}_{14}=D + ER^3, \sigma = F),
\end{eqnarray}
where $\mathcal{N}(x|\bar{x}, \sigma)$ is a Gaussian distribution in
$x$ with mean $\bar{x}$ and standard deviation $\sigma$. The $\beta_{14}$
dimension was determined from a Gaussian fit to Monte Carlo events, in
which $\bar{\beta}_{14}$ was allowed a linear dependence on $R^3$.
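For concreteness, Eq.~\eqref{e:pmtpdf} can be evaluated directly, as
in the Python sketch below; the parameter values are hypothetical (of
the same order as those in Table~\ref{t:bafin}), the PDF is
unnormalized, and the flat $\cos\theta_{\odot}$ dimension is omitted:
\begin{verbatim}
import numpy as np

def pmt_pdf(T_eff, b14, r3, A, B, C, D, E, F):
    # exp(A*T) * (exp(B*R^3) + C) * N(b14 | D + E*R^3, F)
    gauss = np.exp(-0.5 * ((b14 - (D + E * r3)) / F)**2) \
            / (F * np.sqrt(2.0 * np.pi))
    return np.exp(A * T_eff) * (np.exp(B * r3) + C) * gauss

# Hypothetical parameter values:
print(pmt_pdf(3.7, 0.45, 0.70,
              A=-6.0, B=5.5, C=-0.4, D=0.45, E=0.1, F=0.15))
\end{verbatim}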
The source location of the PMT events, their large number, and the
fact that they must reconstruct nearly 3~m from their origin to appear
inside the fiducial volume means that they have features that
distinguish them from other sources of backgrounds. Therefore, we
were able to extract a prediction for the total number of PMT events,
as well as for the shape of the energy and radial dimensions of the
PDF, from the data itself, by performing a bifurcated analysis.
In a bifurcated analysis, two independent cuts are selected that
discriminate signal from background. The behavior of these cuts when
applied both separately and in combination is used to assess the
number of signal and background events in the analysis window. We
assume that the data set consists of $\nu$ signal events and $\beta$
background events, so that the total number of events is $S=\beta +
\nu$. The background contamination in the final signal sample is just
the fraction of $\beta$ that passes both cuts. If the acceptances for
background and signal events by cut $i$ are $y_i$ and $x_i$,
respectively, the contamination is $y_1 y_2 \beta$ and the number of
signal events is $x_1 x_2 \nu$.
Given the number, $a$, of events that pass both cuts, the number, $b$,
that fail cut 1 but pass cut 2, and the number, $c$, that pass cut 1
but fail cut 2, we then relate these with a system of equations:
\begin{eqnarray}
a+c&=&x_1 \nu+y_1 \beta,\\
\label{eq1}
a+b&=&x_2 \nu+y_2 \beta,\\
\label{eq2}
a&=&x_1 x_2\nu+y_1 y_2 \beta,\\
\label{eq3}
\beta+\nu&=&S,
\label{eq3prime}
\end{eqnarray}
\noindent
which we solve analytically, using Monte Carlo-predictions for the cut
acceptances, to determine the contamination, $K= y_1 y_2 \beta$, in
the signal sample. A feature of this method is that it produces a
contamination estimate without including events from the signal box
(those that pass both cuts) in the analysis.
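A minimal numerical sketch of this procedure is given below; the
event counts and signal acceptances are hypothetical, and a
root-finder stands in for the analytic solution:
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

# Hypothetical counts: a (pass both cuts), b (fail cut 1 but
# pass cut 2), c (pass cut 1 but fail cut 2), S (total).
a, b, c, S = 7822.0, 218.0, 398.0, 9000.0
x1, x2 = 0.99, 0.98   # signal acceptances (from Monte Carlo)

def equations(p):
    y1, y2, beta = p
    nu = S - beta
    return (a + c - (x1 * nu + y1 * beta),
            a + b - (x2 * nu + y2 * beta),
            a - (x1 * x2 * nu + y1 * y2 * beta))

y1, y2, beta = fsolve(equations, x0=(0.5, 0.5, 500.0))
print("contamination K =", y1 * y2 * beta)
\end{verbatim}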
In this analysis, the `background' comprised the PMT events and the
`signal' all other events, including both neutrino interactions and
non-PMT radioactive decays. The cuts chosen were the in-time ratio
(ITR) cut, because it selected events that were reconstructed far from
their true origin, and the early charge (EQ) cut because it selected
events in which a large amount of light produced hits early in time in
a small number of tubes. These tend to be characteristics of PMT
events (see Sec.~\ref{s:cutdeschlc}).
For a bifurcated analysis to work, the probabilities of passing the
cuts must be statistically independent. To demonstrate this, we
loosened the cuts, and found that the increase in the number of
background events agreed well with what would be expected if they were
independent.
One result of the bifurcated analysis is a prediction for the
number of PMT events in the analysis window, which was used as a
constraint in the binned likelihood signal extraction fits, as
described in Sec.~\ref{s:penaltyterms}.
The acceptance of signal events was not unity ($x_1 x_2 \neq 1.0$),
and therefore some non-PMT events were also removed by the cuts. Such
events falsely
increase the count of background events in the three `background
boxes'. We limited the impact of this effect by restricting the
analysis to the 3.5--4.5~MeV region, which was overwhelmingly
dominated by PMT events. We also included a correction for the number
of non-PMT events in each of the background boxes by using estimates
from the Monte Carlo simulation for the acceptance of all other
signals and backgrounds, and verifying these predictions with radon
spike data. ($^{214}$Bi, a radon daughter, is the dominant background
other than the PMT events in this region).
To estimate the number of non-PMT events in each of the three
background boxes, we multiplied the Monte Carlo-predicted acceptances
of non-PMT events by the expected total number of these events in the
data set. The procedure was therefore iterative: a PMT PDF was
created using initial estimates for the total number of non-PMT events
in the data set and their acceptances; the bifurcated analysis was
used to predict the number of PMT events in the signal box; the data
were re-fit with this new PMT constraint; the total number of non-PMT
events in the data set, based upon the new fit, was then used to
update the non-PMT event correction in the background boxes in the
bifurcated analysis, and so on. In practice, the bifurcated analysis
itself was simply included within the signal extraction fit, so the
prediction for the number of PMT events could be recalculated as the
fit progressed, and the penalty factor in the likelihood calculation
from the resulting constraint could be varied accordingly. To
determine systematic uncertainties on this overall procedure, we
tested the analysis on sets of fake data and compared the prediction
of the bifurcated analysis to the known true number of PMT
$\beta$-$\gamma$ events in the signal box.
We verified the bifurcated analysis results by comparing the
prediction of the total number of PMT $\beta$-$\gamma$ events in the
signal box to an estimate made with an independent analysis performed
outside the fiducial volume. This independent analysis looked for
events that occurred at high radius and were inward-pointing, which
are characteristics of PMT $\beta$-$\gamma$ events, and extrapolated
that count into the fiducial volume. The measurements agreed with the
bifurcated analysis to well within the uncertainties on the two
methods.
To predict the shape of the PMT PDF, the bifurcated analysis was
performed in discrete bins in $T_{\rm eff}$ and $R^3$. Unlike the
prediction for the total number of PMT events in the data set, this
calculation was not included in the signal extraction, so a fixed
estimate of the contamination of non-PMT events in the three
background boxes was applied. This estimate was derived from a signal
extraction fit performed on a small subset of the data. To take
uncertainties into account, bifurcated analyses were performed on
Monte Carlo-generated `fake' data sets with the dominant systematic
and statistical uncertainties applied in turn, to determine the effect
of each on the extracted shape for the PMT PDF. The differences of
the results from the unshifted version were added in quadrature to
obtain an additional uncertainty on the shape.
A number of functional forms were fit to the $T_{\rm eff}$ and $R^3$
distributions to determine the best parameterizations for the shapes.
An exponential was found to be a good fit to the energy profile and an
exponential plus a constant offset to the radial distribution (see
Eq.~\eqref{e:pmtpdf}). The fit results for Phase~II are shown in
Figure~\ref{f:saltpmtpdf}.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.48\textwidth]{BAsalt_rdata.eps}
\includegraphics[width=0.48\textwidth]{BAsalt_edata.eps}
\caption{\label{f:saltpmtpdf}(Color online) Predicted shapes for the
PMT PDF in (a) $R^3$ and (b) $T_{\rm eff}$ in Phase~II.}
\end{center}
\end{figure}
The parameters from the fits shown in Fig.~\ref{f:saltpmtpdf} were
varied in the signal extraction by applying a Gaussian penalty factor
to the likelihood function, as described in Sec.~\ref{s:penaltyterms}.
The mean of the Gaussian was the central fit value from
Fig.~\ref{f:saltpmtpdf} and the standard deviation was taken as the
total uncertainty in this value, including both the fit uncertainty
from Fig.~\ref{f:saltpmtpdf} and the additional systematic
uncertainties described above. Results for both phases are shown in
Table~\ref{t:bafin}. The fits to the bifurcated analysis prediction
for the $R^3$ distribution showed a significant correlation between
the exponent and the offset, with correlation coefficients of 0.846
and 0.883 in Phases~I and~II, respectively. This correlation was
included in the Gaussian penalty factor in the signal extraction fits.
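Such a correlated constraint corresponds to a bivariate Gaussian
penalty added to the negative log likelihood. A minimal sketch using
the Phase~II values from Table~\ref{t:bafin}, with arbitrary trial
parameter values, is:
\begin{verbatim}
import numpy as np

def penalty(p, p0, sig, rho):
    # -ln of a bivariate Gaussian constraint (up to a
    # constant) for two correlated parameters.
    cov = np.array([[sig[0]**2, rho * sig[0] * sig[1]],
                    [rho * sig[0] * sig[1], sig[1]**2]])
    d = np.asarray(p) - np.asarray(p0)
    return 0.5 * d @ np.linalg.solve(cov, d)

# Phase II R^3 exponent B and offset C, correlation 0.883:
print(penalty(p=(5.5, -0.1), p0=(5.28, -0.32),
              sig=(0.79, 1.16), rho=0.883))
\end{verbatim}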
\begin{table}[!ht]
\begin{center}
\begin{tabular}{lrr}
\hline \hline
Parameter & \multicolumn{1}{c}{Phase~I} & \multicolumn{1}{c}{Phase~II} \\ \hline
Energy exponent, $A$ (MeV$^{-1}$) & $-$5.94 $\pm$ 0.96 & $-$6.37 $\pm$ 0.81 \\
$R^3$ exponent, $B$ & 5.83 $\pm$ 0.96 & 5.28 $\pm$ 0.79 \\
$R^3$ offset, $C$ & $-$0.40 $\pm$ 1.43 & $-$0.32 $\pm$ 1.16 \\ \hline \hline
\end{tabular}
\caption{Parameters defining the PMT PDF shape, as defined in
Eq.~\eqref{e:pmtpdf}.}
\label{t:bafin}
\end{center}
\end{table}
\subsection{Limits on Instrumental Backgrounds}
Because instrumental background events were not modeled by the
simulation, their contamination in the analysis window was determined
directly from the data. A bifurcated analysis was used, similar to
that described in Sec.~\ref{s:pmtpdf}. In this instance, two sets of
cuts were used to define the analysis: the instrumental cuts and the
high-level cuts, described in Sec.~\ref{s:cuts}. The numbers of
events in the data set failing each and both sets of cuts were used to
estimate the contamination by instrumental backgrounds.
As was done in Sec.~\ref{s:pmtpdf}, a prediction of the number of good
(physics) events that failed the instrumental cuts was used to correct
the number of events in each of the background boxes. We obtained
this prediction using the cut acceptances given in Sec.~\ref{s:inssac}
and an estimate of the numbers of signal and radioactive background
events in the data set. The analysis was performed at two energy
thresholds in order to study the energy dependence of the
contamination. Results are given in Table~\ref{t:instcon}.
\begin{table}[!ht]
\begin{center}
\begin{tabular}{lcc}
\hline \hline
 & \multicolumn{2}{c}{Threshold} \\
Phase & 3.5~MeV & 4.0~MeV \\ \hline
I & 2.64 $\pm$ 0.22 & 0.09 $\pm$ 0.42 \\
II & 4.48 $\pm$ 0.27 & 0.52 $\pm$ 0.23 \\ \hline \hline
\end{tabular}
\caption[Instrumental contamination.]{Estimated number of instrumental
contamination events in the full data set at different analysis
thresholds.}
\label{t:instcon}
\end{center}
\end{table}
Since these events were not modeled in the simulation, it is difficult
to predict their effect on the signal extraction fit results directly.
However, because virtually all of them fall into the lowest energy
bin, they are unlikely to mimic neutron events. Because the
$T_{\rm eff}$ distributions of the CC and ES signals were
unconstrained in the signal extraction fit, the instrumental events
could instead mimic those event types. Therefore, a conservative
approach was taken, in which the estimated contamination from the
3.5~MeV analysis was applied as an additional uncertainty in the
lowest energy bin for both the CC and ES signals.
\subsection{Atmospheric Backgrounds}\label{s:atmbkg}
The NUANCE neutrino Monte Carlo simulation package~\cite{nuance} was
used to determine the contribution of atmospheric neutrino events to
the data set. The estimated number of atmospheric neutrino events was
not large enough to merit introducing an additional event type into
the already complex signal extraction procedure. Instead, 15
artificial data sets were created that closely represented the best
estimate for the real data set, including all neutrino signals and
radioactive backgrounds in their expected proportions. The NUANCE
simulation was used to predict the distribution of atmospheric
neutrino events in each of the four observable parameters used to
distinguish events in the signal extraction fit (see
Sec.~\ref{sec:sigex}), and a number of such events were included in
each artificial data set, drawn from the estimate for the number in
the true data. Signal extraction was performed on these sets to
determine which signal the events would mimic in the extraction. This
resulted in a small correction to the NC flux of $4.66\pm0.76$ and
$17.27\pm2.83$ events to be subtracted in Phases I and II,
respectively, and small additional uncertainties for the CC and ES
rates, mostly at the sub-percent level.
Atmospheric events were often characterized by a high-energy primary
followed by several neutrons. Therefore, there was significant
overlap with events identified by the `coincidence cut', which removed
events that occurred within a fixed time period of each other. This
overlap was exploited to verify the predicted number of atmospheric
events. Without application of the coincidence cut, a total of
$28.2\pm5.4$ and $83.9\pm15.9$ atmospheric neutrino events were
predicted in Phases I and II, respectively. The coincidence cut
reduced these numbers to $21.3\pm4.0$ and $29.8\pm5.7$ events, which
were the numbers used in the creation of the initial artificial data
sets. A second group of sets was created, using the pre-coincidence
cut estimates for the number of events, to determine the change in the
NC flux due to the additional events. The signal extraction was then
performed on a subset of the real data, both with and without the
application of the coincidence cut, and the observed difference in the
NC flux was entirely consistent with the predictions, thus verifying
the method used to derive the NC flux correction.
\subsection{Isotropic Acrylic Vessel Background (IAVB)}
Early in the SNO analyses, a type of instrumental background was
discovered that reconstructed near the AV and was characterized by
very isotropic events ($\beta_{14}<0.15$). At higher energies
($N_{\rm hit}>60$), these events form a distinct peak in a histogram
of $\beta_{14}$, and they are easily removed from the data by a combination of
the fiducial volume and isotropy cuts. However, at lower energies,
position reconstruction errors increase and the isotropy distributions
of the IAVB and other events broaden and join, so that removal of the
IAVB events by these cuts is no longer assured.
Accurate simulation of these events is difficult because the physical
mechanism that produces the IAVB events has not been identified and
crucial IAVB event characteristics cannot be predicted. These include
the light spectrum, photon timing distribution, location, and
effective event energy. To circumvent this problem, simulated events
were generated that covered a wide range of possibilities. Three
event locations were modeled: on the exterior and interior AV
surfaces, and uniformly distributed within the AV acrylic. Events
were generated at three different photon wavelengths that cover the
range of SNO detector sensitivity: 335, 400, and 500~nm. The photons
were generated isotropically, with the number of photons in an event
chosen from a uniform distribution with a maximum above the energy
range used in the neutrino analysis. The photon time distribution was
a negative exponential, with the time constant for an event chosen
from a truncated Gaussian with mean and standard deviation of 5~ns.
Using PDFs derived from the simulated event samples, maximum
likelihood signal extraction code was used to estimate the number of
IAVB events in the data in the vicinity of the AV, between 570 and
630~cm from the detector center, in accompaniment with the CC, ES, and
NC neutrino event types and $^{208}$Tl\xspace and $^{214}$Bi\xspace backgrounds in the D$_2$O\xspace, AV,
H$_2$O\xspace, and PMTs. This was done separately for each of the nine
simulated photon wavelength/event location combinations. Because the
energy distribution of the IAVB events was unknown, the IAVB
extractions were done as a function of $N_{\rm hit}$ in 11 bins. The ratio
of the number of IAVB events that passed all the neutrino cuts to
those that fit near the AV in each $N_{\rm hit}$ bin was calculated for each
simulated IAVB case as a function of event energy. These ratios were
used, together with the estimated numbers of such events near the AV,
to estimate the IAVB contamination in the neutrino sample as a
function of energy for each of the simulated IAVB cases.
The polar-angle distributions of hit PMTs in the simulated IAVB events
were studied in a coordinate system centered on the middle of the AV,
with its $z$-axis along the radial vector through the fitted event
location. There are marked differences in these distributions among
the different simulated cases due to optical effects of the AV.
Comparisons of these distributions were made between simulated events
and high-$N_{\rm hit}$, high-isotropy events in the data that reconstruct
near the AV (presumed to be IAVB events). A fit was made to find the
weighted combination of the simulated cases that best fit the
high-$N_{\rm hit}$ data. The resulting weights were assumed to be valid at all
energies, and were used together with the contamination ratios
discussed above: first, to estimate the total IAVB background expected
in the neutrino analysis data set as a function of energy (totaling 27
and 32 events above 3.5~MeV in Phases I and II, respectively) and,
second, to generate a set of simulated IAVB events representative of
those expected to contaminate the neutrino data.
A test similar to that described in Sec.~\ref{s:atmbkg} was performed.
Fifteen artificial data sets were created that also contained
simulated IAVB events based on estimates of the weighted contributions
of the simulated cases and their energy distributions. It was found
that the majority of the IAVB events were extracted as other
background event types, so that adding the simulated IAVB background
resulted in only small additional uncertainties for each of the
neutrino flux parameters, with no corrections required. The increase
in uncertainty for the NC flux was evaluated at $0.26\%$. The
increases in the CC uncertainties were also mostly at the sub-percent
level, and the increases in the uncertainties on the ES rates were so
small as to be negligible ($< 0.01\%$).
\subsection{Additional Neutron Backgrounds}
A full study of other possible sources of neutron background events,
such as ($\alpha$,$n$) reactions and terrestrial
and reactor antineutrino interactions, was presented in previous
publications~\cite{longd2o, nsp}. The full set of simulated NC events
was used to adjust these numbers for the lowered energy threshold and
for the live times and detection efficiencies in the two phases to
give a final correction to the NC flux of $3.2\pm0.8$ and $12.0\pm3.1$
neutron capture events in Phases~I and II, respectively.
\section{Signal Extraction Methods \label{sec:sigex}}
An extended maximum likelihood method was used to separate event types
based on four observable parameters: the effective electron kinetic
energy, $T_{\rm eff}$; the angle of the event direction with respect
to the vector from the Sun, $\cos\theta_{\odot}$; the normalized cube of the
radial position in the detector, $R^3$; and the isotropy of the PMT
hits, $\beta_{14}$. Two independent techniques were used, as described in
Sections~\ref{s:mxf} and~\ref{s:kernel}. One method used binned PDFs
and the other an unbinned, ``kernel estimation'' approach.
We performed two distinct types of fit. The first extracted the
detected electron energy spectra for CC and ES events in individual
$T_{\rm eff}$ bins, without any model constraints on the shape of the
underlying neutrino spectrum. We refer to this as an `unconstrained'
fit. The second fit exploited the unique capabilities of the SNO
detector to directly extract the energy-dependent $\nu_e$ survival
probability (Sec.~\ref{s:kerpoly}). The survival probability was
parameterized as a polynomial function and applied as a distortion to
the $^8$B neutrino energy spectrum (taken from~\cite{winter}). The
shapes of the CC and ES $T_{\rm eff}$ spectra were recomputed from the
distorted $^8$B spectrum as the fit progressed, allowing the
polynomial parameters to vary in the fit. The overall fluxes were
also constrained in this fit through the requirement of unitarity.
The features in common for the two signal extraction approaches are
described below.
The types of events included in the fit were the three neutrino
interaction types (CC, ES and NC) and 17 background event types across
the two phases of data, as defined in Table~\ref{t:bkgs}. The
likelihood was maximized with respect to the number of events of each
signal type, and several systematic parameters affecting the shapes of
the PDFs, as described in Sections~\ref{s:mxf} and~\ref{s:kernel}.
To extract energy spectra for the CC and ES neutrino signals in the
unconstrained fits, CC and ES PDFs were created in discrete $T_{\rm
eff}$ intervals and the fitted numbers of events in these intervals
were allowed to vary independently. The energy spectra for events
from the NC interaction and from radioactive backgrounds have no
dependence on the neutrino oscillation model, and so the shapes of
these spectra were fixed within their systematic uncertainties.
The flux of solar neutrinos was assumed to be constant, so a single
set of neutrino-related fit parameters was applied to both phases.
Therefore, the neutrino signal parameters varied in the fit were an NC
rate and a number of CC and ES rates in discrete energy intervals, as
defined in Sections~\ref{s:mxf} and~\ref{s:kernel}. Although SNO was
primarily sensitive to $^8$B solar neutrinos, we included a fixed
contribution of solar hep neutrinos, which was not varied in the fit.
Based on results from a previous SNO analysis~\cite{nsp}, we used
0.35, 0.47, and 1.0 times the Standard Solar Model (SSM) prediction
for CC, ES, and NC hep neutrinos, respectively. Taken together, these
correspond to 16.4 events in Phase~I and 33.3 events in Phase~II.
To take into account correlations between parameters,
multi-dimensional PDFs were used for all signals. In the
unconstrained fits, CC and ES were already divided into discrete
energy bins, and three-dimensional PDFs were created in each bin for
the other observables: $P(\beta_{14}, R^3, \cos \theta_{\odot})$. In
the survival probability fits, fully four-dimensional PDFs were used
for CC and ES events. For the NC and background PDFs the $\cos\theta_{\odot}$
distribution is expected to be flat, since there should be no
dependence of event direction on the Sun's position, but correlations
exist between the other observables. For these event types, the PDFs
were factorized as $P(T_{\rm eff}, \beta_{14}, R^3)\times P(\cos
\theta_{\odot})$.
Uncertainties in the distributions of the observables were treated as
parameterized distortions of the Monte Carlo PDF shapes. The dominant
systematic uncertainties were allowed to vary in the fit in both
signal extraction methods. Less significant systematics were treated
as in previous SNO analyses~\cite{longd2o}, using a `shift-and-refit'
approach: the data were refit twice for each systematic uncertainty,
with the model PDFs perturbed by the estimated positive and negative
1~$\sigma$ values for the uncertainty in a given parameter. The
differences between the nominal flux values and those obtained with
the shifted PDFs were taken to represent the 68\% C.L. uncertainties,
and the individual systematic uncertainties were then combined in
quadrature to obtain total uncertainties for the fluxes.
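Schematically, the quadrature combination works as in the short
sketch below, in which the individual flux shifts are hypothetical
placeholders rather than assessed values:
\begin{verbatim}
import numpy as np

# Hypothetical flux shifts (%) from refits with each
# systematic perturbed by +1 sigma and -1 sigma:
shifts = {"energy scale": (+1.2, -1.1),
          "beta14 width": (+0.4, -0.5),
          "z offset":     (+0.3, -0.2)}

up = np.sqrt(sum(max(p, m, 0.0)**2
                 for p, m in shifts.values()))
dn = np.sqrt(sum(min(p, m, 0.0)**2
                 for p, m in shifts.values()))
print(f"total systematic: +{up:.2f}% / -{dn:.2f}%")
\end{verbatim}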
\subsection{Systematic Uncertainties: Phase Correlations}
\label{s:correl}
Uncertainties related to theoretical quantities that are unaffected by
detector conditions (such as the photodisintegration cross section
uncertainty) were applied to both phases equally. Uncertainties in
quantities dependent on detector conditions (such as energy
resolution) were treated independently in each phase. Uncertainties in
quantities that partly depend on the operational phase (such as
neutron capture efficiency, which depends both on a common knowledge
of the $^{252}$Cf source strength and on the current detector
conditions) were treated as partially correlated. For the latter, the
overall uncertainty associated with each phase thus involved a common
contribution in addition to a phase-specific uncertainty. Since
neutron capture events were more similar to electron-like events in
Phase~I than in Phase~II, several of the neutron-related uncertainties
applied to Phase~II only. The correlations are summarized in
Table~\ref{tab:systcorr}.
\begin{table}[!h]
\begin{center}
\begin{tabular}{lc}
\hline \hline
Systematic uncertainty & Correlation \\ \hline
Energy scale & Both \\
Electron energy resolution & Uncorrelated \\
Neutron energy resolution & Phase~II only \\
Energy linearity & Correlated \\
$\beta_{14}$\xspace electron scale & Correlated \\
$\beta_{14}$\xspace neutron scale & Phase~II only \\
$\beta_{14}$\xspace electron width & Correlated \\
$\beta_{14}$\xspace neutron width & Phase~II only \\
$\beta_{14}$\xspace energy dependence & Correlated \\
Axial scaling & Uncorrelated \\
$z$ scaling & Uncorrelated \\
$x$, $y$, $z$ offsets & Uncorrelated \\
$x$, $y$, $z$ resolutions & Uncorrelated \\
Energy-dependent fiducial volume & Uncorrelated \\
$\cos\theta_{\odot}$ resolution & Uncorrelated \\
PMT $T_{\rm eff}$ exponent & Uncorrelated \\
PMT $R^3$ exponent & Uncorrelated \\
PMT $R^3$ offset & Uncorrelated \\
PMT $\beta_{14}$\xspace intercept & Uncorrelated \\
PMT $\beta_{14}$\xspace radial slope & Uncorrelated \\
PMT $\beta_{14}$\xspace width & Uncorrelated \\
Neutron capture & Both \\
Photodisintegration & Correlated \\
$^{24}$Na distribution & Phase~II only \\
Sacrifice & Uncorrelated \\
IAVB & Uncorrelated \\
Atmospheric backgrounds & Uncorrelated \\
Instrumental contamination & Uncorrelated \\
Other neutrons & Uncorrelated \\ \hline \hline
\end{tabular}
\caption{Phase correlations of the systematic
uncertainties. ``Correlated'' refers to a correlation coefficient of
1.0 between the phases and ``uncorrelated'' refers to a coefficient of
0.0. ``Both'' means an uncertainty was treated as partially
correlated between the phases.}
\label{tab:systcorr}
\end{center}
\end{table}
\subsection{Binned-Histogram Unconstrained Fit}
\label{s:mxf}
In this approach, the PDFs were created as three-dimensional
histograms binned in each observable dimension, as summarized in
Table~\ref{t:mxfbins}. For CC and ES, three-dimensional PDFs were
created in each $T_{\rm eff}$ interval, to fully account for
correlations between all four observable dimensions. Fifty rate
parameters were fitted: the CC and ES rates in each of 16 spectral
bins, the NC normalization and 17 background PDF normalizations.
Dominant systematic uncertainties were allowed to vary within their
uncertainties, or `floated', by performing one-dimensional scans of
the likelihood in the value of each systematic parameter. This
involved performing the fit multiple times at defined intervals in
each systematic parameter and extracting the value of the likelihood,
which included a Gaussian factor whose width was defined by the
independently estimated uncertainty on that parameter, as described in
Sec.~\ref{s:penaltyterms}. This combined \textit{a priori} knowledge
from the calibration data and Monte Carlo studies used to parameterize
systematic uncertainties with information inherent in the data itself.
If a new likelihood maximum was found at an offset from the existing
best estimate of a particular systematic parameter, then the offset
point was defined as the new best estimate. An iterative procedure
was used to take into account possible correlations between
parameters. The final uncertainties on each parameter were defined by
where the log likelihood was 0.5 less than at the best-fit point, and
the differences in each fitted flux parameter between these points and
the best-fit point were taken as the associated systematic
uncertainties for that parameter. For more details of this approach,
see~\cite{orebithesis}.
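The essence of such a one-dimensional scan is illustrated by the
following sketch, in which both the data term and the Gaussian
penalty are toy functions, and the $1\sigma$ interval is defined by a
drop of 0.5 in the log likelihood:
\begin{verbatim}
import numpy as np

x = np.linspace(-3.0, 3.0, 601)    # systematic parameter scan
nll = 0.5 * ((x - 0.8) / 0.6)**2   # toy data likelihood term
nll += 0.5 * (x / 1.0)**2          # Gaussian penalty, 0 +- 1
nll -= nll.min()

best = x[np.argmin(nll)]
inside = x[nll <= 0.5]             # delta(-lnL) = 0.5 interval
print(best, inside.min(), inside.max())
\end{verbatim}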
\begin{table}[!ht]
\begin{center}
\begin{tabular}{lcccc}
\hline \hline
Observable & Min & Max & Bins & Bin width \\ \hline
CC, ES $T_{\rm eff}$ & 3.5~MeV & 11.5~MeV & 16 & 0.5~MeV \\
\multirow{2}{*}{Other $T_{\rm eff}$} & 3.5~MeV & 5.0~MeV & 6 & 0.25~MeV \\
 & 5.0~MeV & 11.5~MeV & 13 & 0.5~MeV \\
$\cos \theta_{\odot}$ & $-$1.0 & 1.0 & 8 & 0.25 \\
$R^3$ & 0.0 & 0.77025 & 5 & 0.15405 \\
$\beta_{14}$ & $-$0.12 & 0.95 & 15 & 0.0713 \\ \hline \hline
\end{tabular}
\caption{\label{t:mxfbins}PDF configurations used for the
binned-histogram signal extraction approach.}
\end{center}
\end{table}
The parameters floated using this approach, along with their relevant
correlations, as described in Sec.~\ref{s:correl}, were:
\begin{itemize}
\item Energy scale (both correlated and uncorrelated in each phase)
\item Energy resolution (uncorrelated in each phase)
\item $\beta_{14}$ scale for electron-like events (correlated between phases)
\item PMT $\beta$-$\gamma$ $R^3$ exponent (uncorrelated in each
phase, see Sec.~\ref{s:pmtpdf})
\item PMT $\beta$-$\gamma$ $R^3$ offset (uncorrelated in each phase,
see Sec.~\ref{s:pmtpdf})
\item PMT $\beta$-$\gamma$ $T_{\rm eff}$ exponent (uncorrelated in
each phase, see Sec.~\ref{s:pmtpdf})
\end{itemize}
The remaining systematic uncertainties were applied using the
`shift-and-refit' approach.
\subsection{Unbinned Unconstrained Fit Using Kernel Estimation}
\label{s:kernel}
In this approach, the PDFs were created by kernel estimation. Like
standard histogramming techniques, kernel estimation starts with a
sample of event values, $t_i$, drawn from an unknown distribution,
$P(x)$. Based on this finite sample, the parent distribution is
approximated by $\hat{P}(x)$, which is a sum of kernel functions,
$K_i(x)$, each centered at an event value from the sample:
\begin{equation}
\label{eq:kernel_basic}
\hat{P}(x) = \frac{1}{n}\sum_{i=1}^n K_i(x - t_i).
\end{equation}
The most common choice of form of kernel functions is the normalized
Gaussian distribution,
\begin{equation}
K(x/h) = \frac{1}{h\sqrt{2\pi}}e^{-(x/h)^2/2},
\end{equation}
where $h$ is called the \emph{bandwidth} of the kernel. One can pick
a different bandwidth, $h_i$, for the kernel centered over each event.
Kernel-estimated density functions have many useful properties. If
the kernel functions are continuous, then the density function will
also be continuous. In one dimension, kernel estimation can also be
shown to converge to the true distribution slightly more quickly than
a histogram with bin size the same as the kernel bandwidth.
Generalizing the kernel estimation method to multiple dimensions is
done by selecting a kernel with the same dimensionality as the PDF.
We used a multi-dimensional Gaussian kernel that was simply the
product of one-dimensional Gaussians. We followed the prescription
given in~\cite{cranmer} for the selection of bandwidths for each event
in each dimension.
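A one-dimensional sketch of Eq.~\eqref{eq:kernel_basic} with Gaussian
kernels is shown below; for brevity it uses Silverman's rule-of-thumb
bandwidth as a stand-in for the prescription of Ref.~\cite{cranmer},
and the event sample is simulated:
\begin{verbatim}
import numpy as np

def kernel_pdf(x, events, bandwidths):
    # Sum of normalized Gaussians, one per event value t_i,
    # each with its own bandwidth h_i.
    t = np.asarray(events)[:, None]
    h = np.asarray(bandwidths)[:, None]
    k = np.exp(-0.5 * ((x - t) / h)**2) \
        / (h * np.sqrt(2 * np.pi))
    return k.mean(axis=0)

rng = np.random.default_rng(1)
sample = rng.normal(6.0, 1.0, 1000)          # toy T_eff values
h = 1.06 * sample.std() * len(sample)**-0.2  # Silverman's rule
grid = np.linspace(3.5, 11.5, 161)
pdf = kernel_pdf(grid, sample, np.full(len(sample), h))
print(grid[np.argmax(pdf)])                  # peaks near 6 MeV
\end{verbatim}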
By varying the values associated with the events in the PDF sample
individually, kernel estimation can very naturally be extended to
incorporate systematic variation of PDF shapes. For example, energy
scale is incorporated by a transformation of the simulated event
values, $t_i\rightarrow (1 + \alpha) \times t_i$, where $\alpha$ is a
continuously variable parameter. Such transformations preserve the
continuity and analyticity of the PDF. We can then add these
systematic distortion parameters to the likelihood function, and also
optimize with respect to them using a gradient descent method. This
allows correlations between systematics and neutrino signal
parameters, as well as between systematics themselves, to be naturally
handled by the optimization algorithm. In addition, the information
in the neutrino data set itself helps to improve knowledge of detector
systematics.
Three kinds of systematic distortions can be represented within this
formalism. Transformations like energy scale and position offset have
already been mentioned. A Gaussian resolution systematic can be
floated by transforming the bandwidth, $h$, through analytic
convolution. Finally, re-weighting systematics, such as the neutron
capture efficiency, are represented by varying the weight of events in
the sum.
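The three kinds of distortion can be written compactly in the same
notation; in this self-contained sketch the transformation parameters
are arbitrary illustrative values rather than fitted systematics:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
sample = rng.normal(6.0, 1.0, 1000)           # toy T_eff values
h0 = 1.06 * sample.std() * len(sample)**-0.2  # baseline bandwidth

alpha = 0.01                       # energy-scale parameter
t = (1.0 + alpha) * sample         # 1) scale transformation
h = np.sqrt(h0**2 + 0.10**2)       # 2) extra resolution, via
                                   #    analytic convolution
w = np.where(sample > 6.0, 1.02, 1.0)  # 3) re-weighting

grid = np.linspace(3.5, 11.5, 161)[:, None]
k = np.exp(-0.5 * ((grid - t) / h)**2) / (h * np.sqrt(2 * np.pi))
pdf = (k * w).sum(axis=1) / w.sum()    # distorted kernel PDF
\end{verbatim}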
The main challenge in using kernel estimation with large data sets is
the computational overhead associated with repeatedly re-evaluating
the PDFs as the parameters associated with detector response vary. We
made several algorithmic improvements to make kernel estimation more
efficient and did much of the calculation on off-the-shelf 3D graphics
processors. For more detail on the implementation of the fit on the
graphics processors, see~\cite{seibertthesis}.
The kernel-estimated PDFs had the same dimensionality over the same
ranges of the observables as the binned fit, except with an upper
energy limit of 20~MeV instead of 11.5~MeV. CC rates were extracted
in 0.5~MeV intervals up to 12~MeV, with a large 12--20~MeV interval at
the end of the spectrum. To reduce the number of free parameters in
the fit, ES rates were extracted in a 3.5--4.0~MeV interval, in 1~MeV
intervals from 4~MeV to 12~MeV, and in a final 12--20~MeV interval.
The CC and ES PDFs were fixed to be flat in the $T_{\rm eff}$
dimension within each $T_{\rm eff}$ interval. During fitting, the
following parameters, corresponding to the dominant systematic
uncertainties, were allowed to vary continuously:
\begin{itemize}
\item Energy scale (both correlated and uncorrelated in each
phase)
\item Energy resolution (uncorrelated in each phase)
\item $\beta_{14}$ electron and neutron scales
\item PMT $\beta$-$\gamma$ $R^3$ exponent (uncorrelated in
each phase)
\item PMT $\beta$-$\gamma$ $R^3$ offset (uncorrelated in each
phase)
\item PMT $\beta$-$\gamma$ $T_{\rm eff}$ exponent
(uncorrelated in each phase)
\end{itemize}
Altogether there were 18 CC parameters, 10 ES parameters, 1 NC
parameter, 17 background normalization parameters, and 16 detector
systematic parameters. The remaining systematic uncertainties were
applied using the `shift-and-refit' approach.
\subsection{Energy-Dependent $\nu_e$ Survival Probability Fit Using Kernel
Estimation}
\label{s:kerpoly}
The unique combination of CC, ES, and NC reactions detected by
SNO allowed us to fit directly for the energy-dependent $\nu_e$
survival probability without any reference to flux models or other
experiments. Such a fit has several advantages over fitting for the
neutrino mixing parameters using the NC rate and the `unconstrained'
CC and ES spectra described in the previous sections.
The unconstrained fits described in Secs.~\ref{s:mxf}
and~\ref{s:kernel} produce neutrino signal rates for CC and ES in
intervals of reconstructed energy, $T_{\rm eff}$, with the free
parameters in the fit directly related to event counts in each $T_{\rm
eff}$ interval. Although this simplifies implementation of the signal
extraction fit, physically-relevant quantities, such as total $^8$B
neutrino flux and neutrino energy spectra, are entangled with the
energy response of the SNO detector. Comparing the unconstrained fit
to a particular model therefore requires convolving a distorted $^8$B
neutrino spectrum with the differential cross sections for the CC and
ES interactions, and then further convolving the resulting electron
energy spectra with the energy response of the SNO detector to obtain
predictions for the $T_{\rm eff}$ spectra.
Moreover, the unconstrained fits of Secs.~\ref{s:mxf}
and~\ref{s:kernel} have more degrees of freedom than are necessary to
describe the class of MSW distortions that are observable in the SNO
detector. For example, the RMS width of $T_{\rm eff}$ for a 10~MeV
neutrino interacting via the CC process is nearly 1.5~MeV. Therefore,
adjacent $T_{\rm eff}$ bins in the unconstrained fit are correlated,
but this information is not available to the minimization routine to
constrain the space of possible spectra. By fitting for an
energy-dependent survival probability, we enforce continuity of the
energy spectrum and thereby reduce covariances with backgrounds, most
notably $^{214}$Bi events. Events from the CC reaction can no longer
easily mimic the steep exponential shape of the background energy
distribution. In addition, systematic uncertainties that are
correlated between the CC and NC events will naturally cancel in this
approach within the fit itself.
We therefore performed a signal extraction fit in which the free
parameters directly described the total $^8$B neutrino flux and the
energy-dependent $\nu_e$ survival probabilities. We made the
following assumptions:
\begin{itemize}
\item The observed CC and ES $T_{\rm eff}$ spectra come from a
fixed distribution of neutrino energies, $E_{\nu}$, with the
standard differential cross sections;
\item The $\nu_e$ survival probability can be described by
a smooth, slowly varying function of $E_\nu$ over the range of
neutrino energies to which the SNO detector is sensitive;
\item The CC, ES and NC rates are directly related
through unitarity of the neutrino mixing matrix;
\item $\nu_e$ regeneration in the Earth at night
can be modeled as a linear perturbation
to the daytime $\nu_e$ survival probability.
\end{itemize}
Given these assumptions, we performed a fit in which the neutrino
signal was described by six parameters:
\begin{itemize}
\item $\Phi_{^8{\rm B}}$ - the total $^8$B neutrino flux;
\item $c_0$, $c_1$, $c_2$ - coefficients in a quadratic expansion
of the daytime $\nu_e$ survival probability around $E_\nu = 10$~MeV;
\item $a_0$, $a_1$ - coefficients in a linear expansion of the
day/night asymmetry around $E_\nu = 10$~MeV.
\end{itemize}
The day/night asymmetry, $A$, daytime $\nu_e$ survival probability,
$P_{ee}^{\rm day}$, and nighttime $\nu_e$ survival probability,
$P_{ee}^{\rm night}$, that correspond to these parameters are:
\begin{eqnarray}
A(E_\nu) & = & a_0 + a_1(E_\nu - 10\;{\rm MeV}) \label{eq:dn}
\\ P_{ee}^{\rm day}(E_\nu) & = & c_0 + c_1 (E_\nu - 10\;{\rm
MeV}) \label{eq:poly} \nonumber \\ & & \; + c_2 (E_\nu -
10\;{\rm MeV})^2 \\ P_{ee}^{\rm night}(E_\nu) & = &
P_{ee}^{\rm day} \times \frac{1 + A(E_\nu)/2}{1 - A(E_\nu)/2}
\end{eqnarray}
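The nighttime expression follows from the conventional rate asymmetry
definition used later in Sec.~\ref{res:sprob}, $A = 2(P_{ee}^{\rm
night} - P_{ee}^{\rm day})/(P_{ee}^{\rm night} + P_{ee}^{\rm day})$;
solving this for $P_{ee}^{\rm night}$ gives
\begin{displaymath}
P_{ee}^{\rm night}\left(1 - \frac{A}{2}\right) =
P_{ee}^{\rm day}\left(1 + \frac{A}{2}\right),
\end{displaymath}
which rearranges to the ratio form above.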
The survival probabilities were parameterized in this way to reduce
correlations between $c_0$ and the higher order terms by expanding all
functions around the detected $^8$B spectrum peak near 10~MeV. The
simulated neutrino energy spectrum after application of the analysis
cuts, shown in Figure~\ref{f:b8spec}, rapidly drops in intensity away
from 10~MeV. The broad $T_{\rm eff}$ resolution of the detector in
combination with the limited range of detectable neutrino energies
limits our sensitivity to sharp distortions. For this reason, we
chose to fit for a smooth, polynomial expansion of the survival
probability. By using a generic form, we allow arbitrary models of
neutrino propagation and interaction to be tested, including standard
MSW effects, as long as they meet the assumptions described above.
Monte Carlo studies demonstrated that this analytical form was
sufficient to model the class of MSW distortions to which the SNO
detector was sensitive. We propagated the uncertainty in the shape of
the undistorted $^8$B energy spectrum as an additional
`shift-and-refit' systematic uncertainty to ensure the extracted
survival probability incorporated this model dependence.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.48\textwidth]{b8specwin.eps}
\caption{\label{f:b8spec}(Color online) Simulation of the undistorted
energy spectrum of $^8$B neutrinos that trigger the detector, before
the application of the $T_{\rm eff}$ threshold, and after a $T_{\rm
eff} > 3.5$~MeV cut is applied, normalized to the SSM prediction. The
sharp cut in $T_{\rm eff}$ results in a smooth roll-off in detection
efficiency for energies less than the peak energy. Also shown is the
spectrum of incident neutrinos predicted by~\cite{winter}, arbitrarily
normalized, to illustrate the effect of detector sensitivity.}
\end{center}
\end{figure}
To implement this fit, we performed a `four phase' signal extraction,
dividing the data and the PDFs into Phase~I-day, Phase~I-night,
Phase~II-day, and Phase~II-night groups. Background decay rates from
solid media, such as the acrylic vessel and the PMTs, were constrained
to be identical day and night. Decay rates in the D$_2$O\xspace and H$_2$O\xspace
regions were free to vary between day and night to allow for day/night
variations in the water circulation and filtration schedules. We
floated the same detector systematics as in the unconstrained fit
described in Sec.~\ref{s:kernel}. The fit has 6 neutrino parameters,
26 background normalization parameters, and 16 detector systematic
parameters, for a total of 48 free parameters.
We constructed the PDFs in the same way as described in
Sec.~\ref{s:kernel}, with the exception of the CC and ES signals.
Instead of creating a 3D PDF ($\beta_{14}$, $R^3$, $\cos
\theta_\odot$) for intervals in $T_{\rm eff}$ in the undistorted
spectrum, we created 4D PDFs ($T_{\rm eff}$, $\beta_{14}$, $R^3$,
$\cos \theta_\odot$) for separate $E_\nu$ intervals in the undistorted
spectrum. There were 9 CC and 9 ES PDFs in each of the 4 day/night
phases, with $E_\nu$ boundaries at $4, 6, 7, 8, 9, 10, 11, 12, 13,$
and $15$~MeV.
During optimization, the signal rates associated with the 76 CC, ES
and NC PDFs were not allowed to vary freely, but were determined by
the 6 neutrino parameters. We defined an `ES survival probability':
\begin{eqnarray}
P_{\rm ES}^{\rm day}(E_\nu) & = & P_{ee}^{\rm day} + \epsilon
(1 - P_{ee}^{\rm day}(E_\nu)) \\ P_{\rm ES}^{\rm night}(E_\nu)
& = & P_{ee}^{\rm night} + \epsilon (1 - P_{ee}^{\rm
night}(E_\nu))
\end{eqnarray}
where $\epsilon = 0.156$ is the approximate ratio between the
$\nu_{\mu,\tau}$ and $\nu_e$ ES cross sections. The ES cross-section
ratio is not constant as a function of neutrino energy, so we took the
variation with energy as an additional systematic uncertainty. The
signal rates were defined in terms of $\Phi_{^8{\rm B}}$, $P_{ee}$ and
$P_{\rm ES}$ to be:
\begin{eqnarray}
R_{\mathrm{NC}} & = & \Phi_{^8{\rm B}} \\ R_{\mathrm{CC}, i}^{\rm day}
& = & \frac{\Phi_{^8{\rm B}}}{E_{i} -
E_{i-1}}\int_{E_{i-1}}^{E_{i}}dE_\nu \; P_{ee}^{\rm day}(E_\nu) \\
R_{\mathrm{CC}, i}^{\rm night} & = & \frac{\Phi_{^8{\rm B}}}{E_{i} -
E_{i-1}}\int_{E_{i-1}}^{E_{i}}dE_\nu \; P_{ee}^{\rm night}(E_\nu) \\
R_{\mathrm{ES}, i}^{\rm day} & = & \frac{\Phi_{^8{\rm B}}}{E_{i} -
E_{i-1}}\int_{E_{i-1}}^{E_{i}}\; dE_\nu \; P_{\rm ES}^{\rm
day}(E_\nu)\\ R_{\mathrm{ES}, i}^{\rm night} & = & \frac{\Phi_{^8{\rm
B}}}{E_{i} - E_{i-1}}\int_{E_{i-1}}^{E_{i}} dE_\nu \; P_{\rm ES}^{\rm
night}(E_\nu)
\end{eqnarray}
where $E_0$ is 4~MeV and $E_i$ is the upper energy boundary of the
$i$-th $E_\nu$ interval.
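As a simple numerical illustration (using a purely representative
value of the survival probability, not a fit result), if $P_{ee}^{\rm
day} = 0.32$ then
\begin{displaymath}
P_{\rm ES}^{\rm day} = 0.32 + 0.156\times(1 - 0.32) \simeq 0.43,
\end{displaymath}
so elastic scattering of the converted $\nu_{\mu,\tau}$ component
increases the ES rate by roughly a third relative to the
$\nu_e$-only expectation.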
The survival probability fit included the same `shift-and-refit'
systematics as the unconstrained fit, along with all of the day/night
systematics used in previous analyses~\cite{longd2o,nsp}. These
systematics accounted for diurnal variations in reconstructed
quantities, such as energy scale and vertex resolution, as well as
long-term variation in detector response which could alias into a
day/night asymmetry. In addition, the non-uniformity of the $\cos
\theta_\odot$ distributions of CC and ES events can also alias into a
day/night asymmetry, so we incorporated additional day/night
systematic uncertainties on all observables in the CC and ES PDFs.
\subsection{Application of Constraints}
\label{s:penaltyterms}
\textit{A priori} information from calibrations and background
measurements was included in the fits to constrain some of the fit
parameters, in particular several of the radioactive backgrounds
(discussed in Sec.~\ref{s:bkgconst}) and any systematic parameters
floated in the fit.
The extended likelihood function had the form:
\begin{equation}
\mathcal{L}(\vec{\alpha},\vec{\beta}) =
\mathcal{L}_{data}(\vec{\alpha} | \vec{\beta})
\mathcal{L}_{calib}(\vec{\beta})
\end{equation}
where $\vec{\alpha}$ represents the set of signal parameters being fit
for, $\vec{\beta}$ represents the nuisance parameters for the
systematic uncertainties that were floated in the fits,
$\mathcal{L}_{data}(\vec{\alpha} | \vec{\beta})$ is the extended
likelihood function for the neutrino data given the values of those
parameters, and $\mathcal{L}_{calib}(\vec{\beta})$ is a constraint
term representing prior information on the systematic parameters,
obtained from calibration data and {\it ex-situ} measurements. The
contribution to $\mathcal{L}_{calib}(\vec{\beta})$ for each systematic
parameter had the form:
\begin{equation}
\mathcal{L}_{calib}({\beta_i}) =
e^{\frac{-(\beta_i-\mu_i)^2}{2\sigma_i^2} }
\end{equation}
where $\beta_i$ is the value of parameter $i$, and $\mu_i$ and $\sigma_i$
are the estimated value and uncertainty determined from external
measurements (with asymmetric upper and lower values for $\sigma_i$
where required). This results in a reduction of the likelihood as the
parameter value moves away from the \textit{a priori} estimate.
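Equivalently, in terms of the negative log-likelihood, each
constrained parameter contributes a quadratic penalty:
\begin{displaymath}
-\ln \mathcal{L}(\vec{\alpha},\vec{\beta}) =
-\ln \mathcal{L}_{data}(\vec{\alpha} | \vec{\beta}) +
\sum_i \frac{(\beta_i - \mu_i)^2}{2\sigma_i^2},
\end{displaymath}
so a one-standard-deviation excursion of a constrained parameter
costs one half unit of log-likelihood.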
\subsection{Bias Testing}
To verify that the signal extraction methods were unbiased, we used
half the Monte Carlo events to create `fake data' sets, and the
remaining events to create PDFs used in fits to the fake data sets. A
fit was performed for each set and the results were averaged to
evaluate bias and pull in the fit results.
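Here the pull for each parameter is defined in the usual way,
\begin{displaymath}
{\rm pull} = \frac{x_{\rm fit} - x_{\rm true}}{\sigma_{\rm fit}},
\end{displaymath}
which should be distributed as a unit Gaussian if the fit is unbiased
and the uncertainties are correctly estimated.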
We created 100 sets containing only neutrino events, 45 sets also
containing internal background events, and 15 sets containing the full
complement of neutrino events and internal and external backgrounds.
The numbers of fake data sets were limited by the available computing
resources.
The two signal extraction methods gave results that were in excellent
agreement for every set. The biases for the neutrino fluxes were
consistent with zero, and the Gaussian pull distributions were
consistent with a mean of zero and standard deviation of 1.
Additional tests were performed in which one or more systematic shifts
were applied to the event observables in the fake data sets, and the
corresponding systematic parameters were floated in the fit, using
\textit{a priori} inputs as in the final signal extraction fits, to
verify that the two independent methods for propagating systematic
uncertainties were also unbiased. In all cases, the true values for
the neutrino fluxes were recovered with biases consistent with zero.
\subsection{Corrections to PDFs}
\label{sec:correc}
A number of corrections were required to account for residual
differences between data and PDFs derived by simulation. An offset of
the laserball position along the $z$-axis during calibration of PMT
timing introduced an offset to reconstructed positions along this axis
in the data. A correction was therefore applied to all data events,
as described in Sec.~\ref{sec:hitcal}. In addition, a number of
corrections were applied to the reconstructed energy and isotropy of
events (see Sections~\ref{sec:ecorr} and~\ref{sec:beta14},
respectively).
The Monte Carlo simulation was used to link the neutrino rates between
the two phases, thus taking into account variations in detector
efficiency and livetime. Several corrections were applied to the
Monte Carlo flux predictions, as described below.
The predicted number of events for signal type $i$ per unit of
incident flux, including all correction factors, is:
\begin{eqnarray}
\label{e:se:corr}
N_i & = & N^{\rm MC}_{i} \, \delta^{\rm sim} \, \delta^{\rm acc}_{i}\,
N^{\rm iso}_{i}\, N^{\rm D}_{i} \, N^e_{i}\, R_i \, \tau,
\end{eqnarray}
\noindent where:
\begin{itemize}
\item $N^{\rm MC}_i$ is the number of events predicted by the Monte
Carlo simulation for signal $i$ per unit incident flux. This is
recalculated as needed to account for any systematic shifts applied to
the PDFs.
\item $\delta^{\rm sim}$ corrects for events aborted in the simulation
due to photon tracking errors. This correction increases with the
number of photons in an event.
\item $\delta^{\rm acc}_{i}$ corrects for differences in the acceptances
of the instrumental and high level cuts for data and Monte Carlo
events (Sec.~\ref{s:cutacc}).
\item $N^{\rm iso}_{i}$ is a correction to account for CC interactions
on chlorine and sodium nuclei in the D$_2$O\xspace volume that are not
modeled in the simulation. This correction is relevant only to the CC
signal in Phase~II.
\item $N^{\rm D}_{i}$ is a correction to the number of target
deuterons and hence is relevant to CC and NC only.
\item $N^e_{i}$ is a correction to the number of target electrons and
hence is relevant to ES only.
\item $R_{i}$ accounts for radiative corrections to the
neutrino-deuteron interaction cross section for NC. Radiative
corrections relevant to the CC and ES interactions were included in
the simulation.
\item $\tau$ corrects for deadtime introduced into the data set by the
instrumental cuts.
\end{itemize}
These corrections are summarized in Table~\ref{t:se:fluxc}.
\begin{table}[!ht]
\begin{center}
\begin{tabular}{lccccc}
\hline Correction & Phase && CC & ES & NC \\ \hline \hline
$\delta^{\rm sim}$ & I, II&& \multicolumn{3}{c}{(1.0 -
0.0006238$\times T_{\rm eff}$)$^{-1}$} \\ $\delta^{\rm acc}_{i}$ &I &&
0.9924 & 0.9924 & 0.9924 \\ $\delta^{\rm acc}_{i}$ &II && 0.9930 &
0.9930 & 0.9954 \\ $N^{\rm iso}_{i}$ &II && 1.0002 & --- & --- \\
$N^{\rm D}_{i}$ & I, II&&1.0129 & --- & 1.0129 \\ $N^e_{i}$ &I, II&&
--- & 1.0131 & --- \\ $R_{i}$ &I, II&& --- & --- & 0.977 \\ $\tau$ &I
&& 0.979 & 0.979 & 0.979 \\ $\tau$ &II && 0.982 & 0.982 & 0.982 \\
\hline
\end{tabular}
\caption[Flux corrections.]{Corrections applied to the expected number
of CC, ES and NC events used in the signal extraction fits.}
\label{t:se:fluxc}
\end{center}
\end{table}
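To illustrate the overall size of these factors, consider ES events in
Phase~I at a representative energy of $T_{\rm eff} = 10$~MeV (the
numbers below are simply the product of the relevant table entries):
the combined correction is
\begin{displaymath}
\delta^{\rm sim}\,\delta^{\rm acc}\,N^{e}\,\tau \simeq
1.0063 \times 0.9924 \times 1.0131 \times 0.979 \simeq 0.990,
\end{displaymath}
i.e.\ the net effect on the predicted number of events is at the one
percent level.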
\section{ Results \label{sec:results}}
The detailed improvements made to this analysis, as described in
previous sections, allow a more precise extraction of the neutrino
flux parameters and, as a result, of the MSW oscillation parameters.
Results from the unconstrained fit are given in Sec.~\ref{res:uncon}
and from the energy-dependent fit to the $\nu_e$ survival probability
in Sec.~\ref{res:sprob}. This new method for directly extracting the
form of the $\nu_e$ survival probability from the signal extraction
fit produces results that are straightforward to interpret. A direct
comparison can be made of the shape of the extracted survival
probability to model predictions, such as the LMA-predicted low-energy
rise.
Sec.~\ref{res:mixp} describes the measurements of the neutrino
oscillation parameters. As has been observed in a number of recent
publications~\cite{t131,t132,t133}, the different dependence of the
$\nu_e$ survival probability on the mixing parameters $\theta_{12}$
and $\theta_{13}$ between solar and reactor neutrino experiments means
that a comparison of solar data to reactor antineutrino data from the
KamLAND experiment allows a limit to be placed on the value of
$\sin^2\theta_{13}$. The new precision achieved with the LETA
analysis in the measurement of $\tan^2\theta_{12}$ results in a better
handle on the value of $\sin^2\theta_{13}$ in such a three-flavor
oscillation analysis. Results of this analysis are presented in
Sec.~\ref{res:mixp}, including a constraint on the value of
$\sin^2\theta_{13}$.
\subsection{Unconstrained Fit}
\label{res:uncon}
Our measurement of the total flux of active $^8$B solar neutrinos,
using the NC reaction ($\Phi_{\textrm{NC}}$) is found to be:
\begin{itemize}
\item Binned-histogram method
\end{itemize}
$\Phi_{\textrm{NC}}^{binned} = 5.140 \,^{+0.160}_{-0.158}
\textrm{(stat)} \,^{+0.132}_{-0.117} \textrm{(syst)} \times 10^6\,\rm
cm^{-2}\,s^{-1} $
\begin{itemize}
\item Kernel estimation method
\end{itemize}
$\Phi_{\textrm{NC}}^{kernel} = 5.171 \,^{+0.159}_{-0.158}
\textrm{(stat)} \,^{+0.132}_{-0.114} \textrm{(syst)} \times 10^6\,\rm
cm^{-2}\,s^{-1} $
\newline
This represents $^{+4.0}_{-3.8}$\% total uncertainty on the flux,
which is more than a factor of two smaller than the best of previous
SNO results. The statistical uncertainty has been reduced by nearly
$\sqrt2$, to 3.1\%. However, the largest improvement is in the
magnitude of the systematic uncertainty, which has been reduced from
7.3\% and 6.3\% in previous analyses of Phase~II~\cite{nsp} and
Phase~III~\cite{snoncd} data, respectively, to 2.4\% (taking the
average of the upper and lower values).
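The quoted total uncertainty is consistent with adding the statistical
and systematic components in quadrature; for the binned-histogram
result,
\begin{displaymath}
\frac{\sqrt{(0.160)^2 + (0.132)^2}}{5.140} \simeq 4.0\%, \qquad
\frac{\sqrt{(0.158)^2 + (0.117)^2}}{5.140} \simeq 3.8\%.
\end{displaymath}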
Figure~\ref{f:nccomp} shows a comparison of these results to those
from previous analyses of SNO data. Note that the $^8$B spectral
shape used in the previous Phase~I and Phase~II
analyses~\cite{oldb8spec} differs from that used here~\cite{winter}.
The bands represent the size of the systematic uncertainties on each
measurement, thus illustrating the improvements achieved with this
analysis.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.48\textwidth]{nccomp.eps}
\caption{\label{f:nccomp}(Color online) Total $^8$B neutrino flux
results using the NC reaction from both unconstrained signal
extraction fits in comparison to unconstrained fit results from
previous SNO analyses. `LETA I' refers to the binned-histogram method
and `LETA II' to the kernel estimation method.}
\end{center}
\end{figure}
Throughout this analysis, the quoted `statistical' uncertainties
represent the uncertainty due to statistics of all signals and
backgrounds in the fit, with correlations between event types taken
into account. Therefore, they include uncertainties in the separation
of signal events from backgrounds in the fits. For example, the
statistical uncertainties on the quoted results for
$\Phi_{\textrm{NC}}$ include both the Poisson uncertainty in the
number of NC events, and covariances with other event types. This is
different from previous SNO analyses, in which the background events
were not included in the signal extraction fits and any uncertainty in
the level of background events was propagated as an additional
systematic uncertainty.
The two independent signal extraction fit techniques are in excellent
agreement, both in the central NC flux value and in the magnitude of
the uncertainties. The result from the binned-histogram method is
quoted as the final unconstrained fit result for ease of comparison to
previous analyses, which used a similar method for PDF creation.
This result is in good agreement with the prediction from the BS05(OP)
SSM of 5.69$\times 10^6\,\rm cm^{-2}\,s^{-1} $~\cite{bs05}, to within
the theoretical uncertainty of $\pm16$\%. It is also in good
agreement with the BS05(AGS,OP) model prediction of 4.51$\times
10^6\,\rm cm^{-2}\,s^{-1} \pm16$\%~\cite{bs05}, which was constructed
assuming a lower heavy-element abundance in the Sun's surface.
The extracted CC and ES electron spectra from both signal extraction
fits, in terms of the fraction of one unoscillated SSM, using the
BS05(OP) model flux of 5.69$\times 10^6\,\rm cm^{-2}\,s^{-1}
$~\cite{bs05}, are shown in Figure~\ref{f:overlayspec}. An
unsuppressed, undistorted spectrum would correspond to a flat line at
1.0. A greater suppression is observed for CC events than ES, since
the ES spectrum includes some contribution from $\nu_{\mu}$ and
$\nu_{\tau}$ whereas CC is sensitive only to $\nu_e$. Both spectra
are consistent with the hypothesis of no distortion. The results from
the two independent signal extraction fits are again in excellent
agreement for both the central fit values and the uncertainties.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.48\textwidth]{jointCCspectrum2.eps}
\includegraphics[width=0.48\textwidth]{jointESspectrum2.eps}
\caption{\label{f:overlayspec}(Color online) Extracted a) CC and b) ES
electron spectra as a fraction of one unoscillated SSM (BS05(OP)),
from both signal extraction fits, with total uncertainties. The final
12--20~MeV bin in the kernel estimation fit is plotted at the mean of
the spectrum in that range. Both spectra are consistent with the
hypothesis of no distortion (a flat line).}
\end{center}
\end{figure}
Figure~\ref{f:mxfspec} shows the CC electron spectrum extracted from
the binned-histogram signal extraction fit with the errors separated
into the contributions from statistical and systematic uncertainties.
As for the NC flux result, the uncertainties are dominated by those
due to statistics (which includes the ability to distinguish signal
from background). This demonstrates the effect of the significant
improvements made both in the determination of the individual
systematic uncertainties, as presented in previous sections, and in
the improved treatment of the dominant systematic uncertainties,
whereby the self-consistency of the data itself was used to further
constrain the allowed ranges of these parameters. It is worth noting
that correlations between bins, which are not shown, tend to reduce
the significance of any observed shape. Fitting to an undistorted
spectrum (the flat line on Fig.~\ref{f:mxfspec}) gives a $\chi^2$
value of 21.52 for 15 degrees of freedom, which is consistent with the
hypothesis of no distortion. The prediction for the $T_{\rm eff}$
spectrum for CC events taken from the best fit LMA point from a
previous global analysis of solar data~\cite{snoncd} is also overlaid
on Fig.~\ref{f:mxfspec}. The $\chi^2$ value of the fit of the
extracted spectrum to this prediction is 22.56 for 15 degrees of
freedom, demonstrating that the data are also consistent with the LMA
prediction.
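For orientation, the corresponding $\chi^2$ probabilities, estimated
here from the standard $\chi^2$ distribution for 15 degrees of freedom
(and therefore ignoring the bin-to-bin correlations noted above), are
\begin{displaymath}
P(\chi^2_{15} > 21.52) \approx 0.12, \qquad
P(\chi^2_{15} > 22.56) \approx 0.09,
\end{displaymath}
both acceptable.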
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.48\textwidth]{CCspectrumLMA.eps}
\caption{\label{f:mxfspec}(Color online) Extracted CC electron
spectrum as a fraction of one unoscillated SSM (BS05(OP)) from the
binned-histogram signal extraction fit, with the uncertainties
separated into statistical (blue bars) and systematic (red band)
contributions. The predictions for an undistorted spectrum, and for
the LMA point $\Delta m^2_{21} = 7.59\times 10^{-5}\,{\rm eV}^2$ and
$\tan^2 \theta_{12} = 0.468$ (taken from a previous global
solar+KamLAND fit~\cite{snoncd} and floating the $^8$B flux scale) are
overlaid for comparison. }
\end{center}
\end{figure}
The one-dimensional projections of the fits in each observable
parameter from the binned-histogram signal extraction are shown for
each phase in Figures~\ref{f:mxffitsd} and~\ref{f:mxffitss}. Of
particular note is the clear ES peak observed in the $\cos\theta_{\odot}$ fits for
both phases (Figs.~\ref{f:mxffitsd}(c) and~\ref{f:mxffitss}(c)),
demonstrating the extraction of ES events over the integrated energy
spectrum, even with the low 3.5~MeV threshold. The error bars
represent statistical uncertainties; systematic uncertainties are not
shown. Figure~\ref{f:mxffits2} shows the one-dimensional projection
in $T_{\rm eff}$ from Phase~II (as in Fig.~\ref{f:mxffitss}(a)) but
with the fitted contributions from individual signal types separated
into six categories: CC, ES, and NC neutrino events, internal
backgrounds (within the D$_2$O\xspace volume), external backgrounds (in the AV,
H$_2$O, and PMTs) and hep neutrino events.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.35\textwidth]{d2oenergyfitfull.eps}
\includegraphics[width=0.35\textwidth]{d2oradfitfull.eps}
\includegraphics[width=0.35\textwidth]{d2ocosthfitfull.eps}
\includegraphics[width=0.35\textwidth]{d2obetafitfull.eps}
\caption{\label{f:mxffitsd}(Color online) One dimensional projections
of the fit in each observable parameter in Phase~I, from the
binned-histogram signal extraction. The panels show the fit projected
onto (a) energy ($T_{\rm eff}$), (b) radius cubed ($R^3$), (c)
direction ($\cos\theta_{\odot}$), and (d) isotropy ($\beta_{14}$).}
\end{center}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.35\textwidth]{saltenergyfitfull.eps}
\includegraphics[width=0.35\textwidth]{saltradfitfull.eps}
\includegraphics[width=0.35\textwidth]{saltcosthfitfull.eps}
\includegraphics[width=0.35\textwidth]{saltbetafitfull.eps}
\caption{\label{f:mxffitss}(Color online) One dimensional projections
of the fit in each observable parameter in Phase~II, from the
binned-histogram signal extraction. The panels show the fit projected
onto (a) energy ($T_{\rm eff}$), (b) radius cubed ($R^3$), (c)
direction ($\cos\theta_{\odot}$), and (d) isotropy ($\beta_{14}$).}
\end{center}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.48\textwidth]{signal_breakdown.eps}
\caption{\label{f:mxffits2}(Color) One dimensional projection of the
fit in $T_{\rm eff}$ in Phase~II from the binned-histogram signal
extraction, with the individual signals separated into the three
neutrino interactions, internal backgrounds (within the D$_2$O\xspace volume),
external backgrounds (in the AV, H$_2$O, and PMTs) and hep neutrino
events.}
\end{center}
\end{figure}
\noindent The $\chi^2$ values for the one-dimensional projections of the fit
are given in Table~\ref{t:mxfchi}. These were evaluated using
statistical uncertainties only and are, therefore, a conservative test
of goodness-of-fit in the one-dimensional projections. In all
dimensions, the final result is a good fit to the data.
\begin{table}[!h]
\begin{center}
\begin{tabular}{lcccc}
\hline \hline Phase & Observable &&& $\chi^2$ (data points) \\ \hline I
& $T_{\rm eff}$ &&& 8.17 (16) \\ & $\cos\theta_{\odot}$ &&& 3.69 (8) \\ & $\rho$ &&& 2.61
(5) \\ & $\beta_{14}$ &&& 20.99 (15) \\ \hline II & $T_{\rm eff}$ &&& 13.64 (16)
\\ & $\cos\theta_{\odot}$ &&& 3.07 (8) \\ & $\rho$ &&& 2.98 (5) \\ & $\beta_{14}$ &&& 26.25 (15)
\\ \hline \hline
\end{tabular}
\caption[$\chi^2$ values for the fit of extracted signals to the
data.]{$\chi^2$ values for the fit of the extracted signals from the
binned-histogram signal extraction to the data set for one-dimensional
projections in each of the four observables, in each phase. These were
evaluated using statistical uncertainties only. The number of data
points used for the $\chi^2$ calculations are given afterwards in
parentheses. Because these are one-dimensional projections of a fit
in four observables, the probability of obtaining these $\chi^2$
values cannot be simply evaluated; these are simply quoted as a
qualitative demonstration of goodness-of-fit.}
\label{t:mxfchi}
\end{center}
\end{table}
Table~\ref{t:neutflux} in Appendix~\ref{a:tables} shows the extracted
number of events for the neutrino fit parameters from the
binned-histogram signal extraction fit, with total statistical plus
systematic uncertainties.
Table~\ref{t:bkgcomp} shows the total number of background events
extracted by each signal extraction in each phase, and a breakdown of
the number of background neutron events occurring within each region
of the detector. The two methods are in good agreement based on
expectations from studies of Monte Carlo-generated `fake' data sets.
For comparison, the total number of events in each data set is also
given (taken from Table~\ref{t:cuts}). Due to the exponential shape
of the energy spectra of most sources of background in this fit, the
majority of the background events fit out in the lowest two bins in
$T_{\rm eff}$, illustrating one of the major challenges of the low
energy analysis.
\begin{table}[!h]
\begin{center}
\begin{tabular}{lcccc}
\hline \hline & \multicolumn{2}{c}{Phase~I} &
\multicolumn{2}{c}{Phase~II} \\ Background & LETA~I & LETA~II &
LETA~I & LETA~II \\ \hline Total background events & 6148.9 & 6129.8
& 11735.0 & 11724.6 \\ \hline D$_2$O\xspace neutrons & 29.7 & 34.0 & 122.4 &
133.5 \\ AV neutrons & 214.9 & 191.4 & 295.7 & 303.4 \\ H$_2$O\xspace neutrons
& 9.9 & 8.4 & 27.7 & 26.3 \\ \hline Total data events &
\multicolumn{2}{c}{9337} & \multicolumn{2}{c}{18228} \\ \hline \hline
\end{tabular}
\caption{Number of background events extracted from the signal
extraction fits for each method. `LETA~I' refers to the
binned-histogram signal extraction, and `LETA~II' refers to the kernel
estimation method. The total number of events in each data set is
also given, taken from Table~\ref{t:cuts}.}
\label{t:bkgcomp}
\end{center}
\end{table}
Tables~\ref{t:sigsyst}--\ref{t:sigsyst1} in Appendix~\ref{a:tables}
show the effects of the individual systematic uncertainties on the
extracted NC rate, the CC rate in two energy intervals (4.0--4.5~MeV
and 9.5--10.0~MeV) and the ES rate in the 3.5--4.0~MeV interval, all
taken from the binned-histogram fit. The dominant source of
uncertainty on the total neutrino flux measured with the NC reaction
is the neutron capture uncertainty. Further significant contributions
come from the Phase~II energy resolution, the $\beta_{14}$\xspace scale for neutron
capture events, the energy-dependent fiducial volume, and the
cut-acceptance uncertainties.
Figure~\ref{f:ccsyst} shows the effects of several groups of
systematic uncertainties on the extracted CC electron spectrum, taken
from the binned-histogram fit. Four groups cover systematic effects
that apply to the observables ($T_{\rm eff}$, $\cos\theta_{\odot}$, $R^3$ and
$\beta_{14}$), in which the individual contributions are summed in
quadrature (for example the $T_{\rm eff}$ group includes the effect of
energy scale, resolution and linearity); `normalization' uncertainties
include neutron capture, cut-acceptance, energy-dependent fiducial
volume and photodisintegration uncertainties; the final group consists
of uncertainties in the shape of the PMT $\beta$-$\gamma$ PDF. The
dominant sources of the systematic uncertainties on the shape of the
CC electron spectrum are energy resolution and the shape of the PMT
$\beta$-$\gamma$ PDF, particularly as a function of $T_{\rm eff}$.
The $\beta_{14}$\xspace scale for electron-like events is also a significant
contributor. It is worth noting that the contribution from the
fiducial volume uncertainty, which was significant in previous
analyses~\cite{nsp}, is now relatively small.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.48\textwidth]{paper_systs.eps}
\caption{\label{f:ccsyst}(Color) Effect of systematic uncertainties on
the extracted CC electron spectrum. The inset shows the same plot on
a larger scale.}
\end{center}
\end{figure}
The two signal extraction methods are in excellent agreement for all
the neutrino flux parameters, as well as the sources of background
events. This is a stringent test of the result, since the two methods
differed in several fundamental ways:
\begin{itemize}
\item Formation of the PDFs
The methods used to create the PDFs were entirely independent: one
using binned histograms, and the other using smooth, analytic,
kernel-estimated PDFs.
\item Treatment of systematic uncertainties
The dominant systematics in the fits were `floated' using different
approaches: in the kernel method they were floated directly, whereas
an iterative likelihood scan was used in the binned-histogram
approach.
\item PMT $\beta$-$\gamma$ constraint
In the binned-histogram method, a constraint on the total number of
PMT events was implemented using a bifurcated analysis of the data
(Sec.~\ref{s:pmtpdf}), whereas no constraint was applied in the kernel
method.
\end{itemize}
That these independent approaches give such similar results
demonstrates the robust nature of the analysis and the final results.
\subsection{Survival Probability Fit}
\label{res:sprob}
Under the assumption of unitarity (for example, no oscillations
between active and sterile neutrinos), the NC, CC, and ES rates can be
directly related. Based on this premise, a signal extraction fit was
performed in which the free parameters directly described the total
$^8$B neutrino flux and the $\nu_e$ survival probability. This fit
therefore produces a measure of the total flux of $^8$B neutrinos that
naturally includes information from all three interaction types.
Applying this approach, the uncertainty on the flux was reduced in
comparison to that from the unconstrained fit (Sec.~\ref{res:uncon}).
The total flux measured in this way ($\Phi_{^8{\rm B}}$) is found to
be:
\begin{displaymath}
\Phi_{^8{\rm B}} = 5.046 \,^{+0.159}_{-0.152} \textrm{(stat)}
\,^{+0.107}_{-0.123} \textrm{(syst)} \times 10^6\,\rm
cm^{-2}\,s^{-1},
\end{displaymath}
which represents $^{+3.8}_{-3.9}$\% total uncertainty. This is the
most precise measurement of the total flux of $^8$B neutrinos from the
Sun ever reported.
The survival probability was parameterized as a quadratic function in
$E_{\nu}$, representing $P_{ee}^{\rm day}$, and a linear day/night
asymmetry, as defined in Eqs.~\eqref{eq:dn} and~\eqref{eq:poly} of
Sec.~\ref{s:kerpoly}. The best-fit polynomial parameter values and
uncertainties are shown in Table~\ref{t:poly_pars}, and the
correlation matrix is shown in Table~\ref{t:poly_corr}, both presented
in Appendix~\ref{a:tables}. For all the extracted parameters, the
total uncertainty is dominated by that due to statistics.
Figure~\ref{f:poly_band} shows the RMS spread in the best fit survival
probabilities, $P_{ee}^{\rm day}(E_\nu)$ and $P_{ee}^{\rm
night}(E_\nu)$, and day/night asymmetry, $A(E_\nu)$. The bands were
computed by sampling the parameter space 1000 times, taking into
account the parameter uncertainties and correlations. Overlaid on
Fig.~\ref{f:poly_band} are the predicted shapes of the day and night
survival probabilities and the day/night asymmetry for the best-fit
point from a previous global analysis of solar data~\cite{snoncd}.
The advantage of this direct parameterization for the survival
probability is that model testing becomes straightforward. We can
test the goodness-of-fit to an undistorted spectrum by setting $c_1 =
c_2 = 0.0$ in Eq.~\eqref{eq:poly}, and we can test the goodness-of-fit
to a model with no day/night asymmetry by setting $a_0 = a_1 = 0.0$ in
Eq.~\eqref{eq:dn}. Requiring both simultaneously, we find a $\Delta
\chi^2 = 1.94$ for 4 degrees of freedom, demonstrating that the
extracted survival probabilities and day/night asymmetry are
consistent with the hypothesis of no spectral distortion and no
day/night asymmetry. For comparison, the $\Delta \chi^2$ value of the
fit to the LMA point shown in Fig.~\ref{f:poly_band} is 3.9 for 4
degrees of freedom, showing that the data are also consistent with
LMA.
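For four degrees of freedom the $\chi^2$ tail probability has the
closed form $P(\chi^2_4 > x) = e^{-x/2}(1 + x/2)$, so these values
correspond to
\begin{displaymath}
P(\chi^2_4 > 1.94) \simeq 0.75, \qquad
P(\chi^2_4 > 3.9) \simeq 0.42,
\end{displaymath}
confirming that neither hypothesis is disfavored.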
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.48\textwidth]{poly_band_with_lma_solkam.eps}
\caption{\label{f:poly_band}(Color online) Best fit and RMS spread in
the (a) $P_{ee}^{\rm day}(E_\nu)$, (b) $P_{ee}^{\rm night}(E_\nu)$,
and (c) $A(E_\nu)$ functions. The survival probabilities and
day/night asymmetry for the LMA point $\Delta m^2_{21} = 7.59\times
10^{-5}\,{\rm eV}^2$ and $\tan^2 \theta_{12} = 0.468$, taken from a
previous global solar+KamLAND fit~\cite{snoncd}, are shown for
comparison.}
\end{center}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.35\textwidth]{d2o_day_ke_1d.eps}
\includegraphics[width=0.35\textwidth]{d2o_day_r3_1d.eps}
\includegraphics[width=0.35\textwidth]{d2o_day_cstsun_1d.eps}
\includegraphics[width=0.35\textwidth]{d2o_day_beta14_1d.eps}
\caption{\label{f:poly_1d_d2o}(Color online) One dimensional
projections of the fit in Phase~I-day, from the polynomial survival
probability fit. The panels show the fit projected onto (a) energy
($T_{\rm eff}$), (b) radius cubed ($R^3$), (c) direction ($\cos\theta_{\odot}$),
and (d) isotropy ($\beta_{14}$). The binning of data is purely for
display purposes; the fits were performed unbinned.}
\end{center}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.35\textwidth]{salt_night_ke_1d.eps}
\includegraphics[width=0.35\textwidth]{salt_night_r3_1d.eps}
\includegraphics[width=0.35\textwidth]{salt_night_cstsun_1d.eps}
\includegraphics[width=0.35\textwidth]{salt_night_beta14_1d.eps}
\caption{\label{f:poly_1d_salt}(Color online) One dimensional
projections of the fit in Phase~II-night, from the polynomial survival
probability fit. The panels show the fit projected onto (a) energy
($T_{\rm eff}$), (b) radius cubed ($R^3$), (c) direction ($\cos\theta_{\odot}$),
and (d) isotropy ($\beta_{14}$). The binning of data is purely for
display purposes; the fits were performed unbinned.}
\end{center}
\end{figure}
This method for parameterizing the day/night asymmetry differs from
previous SNO analyses, which quoted an asymmetry for each interaction
type:
\begin{eqnarray}
A = 2\frac{(\phi_N - \phi_D)}{(\phi_N + \phi_D)},
\end{eqnarray}
where $\phi_D$ and $\phi_N$ are the interaction rates measured for the
day and night data sets, respectively. A combined analysis of the
results from Phase~I and Phase~II, assuming an undistorted neutrino
spectrum, gave a result of $A = 0.037 \pm 0.040$~\cite{nsp}. For
comparison, the current analysis made no assumption about the shape of
the underlying neutrino spectrum, except that it is a smooth, slowly
varying function of $E_\nu$ over the range of neutrino energies to
which the SNO detector is sensitive. The value of $a_0$ extracted
under this assumption was $a_0 = 0.032 \pm 0.040$. Uncertainty on the
day/night asymmetry measurement has always been dominated by
statistics, so the improvements made to systematic uncertainties in
this analysis have a small effect. The effect of the additional
statistics gained by going lower in energy appears to be balanced by
the additional degrees of freedom allowed in the shape of the neutrino
energy spectrum.
The one-dimensional projections of the fits in the observable
parameters for Phase~I-day and Phase~II-night are shown in Figures
\ref{f:poly_1d_d2o} and \ref{f:poly_1d_salt}.
\subsection{Mixing Parameters}
\label{res:mixp}
A three-flavor, active solar neutrino oscillation model has four
parameters: $\theta_{12}$ and $\theta_{13}$, which quantify the
strength of the mixing between flavor and mass eigenstates, and
$\Delta m^2_{21}$ and $\Delta m^2_{31}$, the differences between the
squares of the masses of the neutrino propagation eigenstates. The
approximation of $\Delta m^2_{31} \sim \Delta m^2_{32}$ can be made
because $|\Delta m^2_{32}| \gg |\Delta m^2_{21}|$, while the remaining
mixing angle, $\theta_{23}$, and the CP-violating phase, $\delta$, are
irrelevant for the oscillation analysis of solar neutrino data.
For the sake of comparison with other oscillation analyses, this work
employed $\tan^2\theta_{12}$ to quantify the leading effects of the
mixing angles for solar neutrino oscillations. Smaller effects due to
$\theta_{13}$ are quantified with $\sin^2\theta_{13}$. The value of
$\Delta m^2_{31}$ was fixed to $+2.3\times 10^{-3}\
\mathrm{eV^2}$~\cite{pdg08}, an assumption that was necessary for the
numerical determination of the three-flavor survival probabilities,
but whose precise value had very little impact on our calculation.
The parameters describing the $P_{ee}(E_\nu)$ function for solar
neutrinos are, in order of importance, $\theta_{12}$, $\Delta
m^2_{21}$, $\theta_{13}$, and $\Delta m^2_{31}$. For experiments
sensitive to neutrinos from terrestrial sources near the detector,
the survival probabilities were accurately calculated using the
vacuum oscillation formula, without matter effects.
survival probability calculation for solar neutrino experiments
involves the numerical integration of a system of coupled differential
equations:
\begin{equation}
i \, \frac{d}{dx} \psi_{\alpha}(x) = H_f \, \psi_{\alpha}(x) \, ,
\end{equation}
where $H_f$ is the Hamiltonian in flavor space, including matter
effects in both the Sun and the Earth, $x$ is the position along the
propagation direction, and $\psi_{\alpha}(x)$ is a vector containing
the real and imaginary coefficients of the wave function, where
$\alpha = $ (e, $\mu$, $\tau$). The system was solved for each new
value of $x$ as the wave function was propagated from the Sun to a
given detector on the Earth. The probabilities were then calculated
from the magnitudes of the wave function coefficients. The
integration was performed with an adaptive Runge-Kutta algorithm.
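For concreteness, $H_f$ takes the standard MSW form in the flavor
basis (up to a term proportional to the identity, which does not
affect the probabilities):
\begin{displaymath}
H_f = \frac{1}{2E_\nu}\, U\, {\rm diag}(0, \Delta m^2_{21},
\Delta m^2_{31})\, U^{\dagger} +
{\rm diag}(\sqrt{2}\, G_F\, n_e(x), 0, 0),
\end{displaymath}
where $U$ is the neutrino mixing matrix, $G_F$ the Fermi constant,
and $n_e(x)$ the local electron density.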
Radial profiles of the electron density and neutrino production in the
Sun were taken from the BS05(OP) model~\cite{bs05}. The matter
density inside the Earth was taken from the Preliminary Reference
Earth Model (PREM)~\cite{prem}, the most widely accepted model, whose
density profile is inferred from seismological data. For more details
on the survival probability
calculation, see~\cite{olivier}.
Constraints on neutrino mixing parameters can be derived by comparing
neutrino oscillation model predictions with experimental data, as has
been done in previous SNO analyses~\cite{longd2o,nsp,snoncd}. The
approach for the interpretation of the solar and reactor neutrino data
used the covariance $\chi^2$ method. From a series of observables
with an associated set of measured parameters from a number of
experiments, the corresponding theoretical expectations were
calculated for a given neutrino oscillation parameter hypothesis. In
order to calculate the model prediction for the neutrino yield at a
given detector, each of the neutrino fluxes that the detector was
sensitive to was weighted with the neutrino survival probabilities,
convolved with the cross-sections for the neutrino-target interactions
as well as with the detector response function, and then considered
above the experiment's energy threshold. The $\chi^2$ function
quantifies the difference between the experimental data and
theoretical model expectation for the observable under study.
In the results presented here, the free parameters were the neutrino
mixing parameters and the total flux of the $^8$B and hep
neutrinos. The survival probabilities and, hence, the fluxes and
spectra of solar neutrinos and reactor antineutrinos were fully
constrained by the mixing parameters. The $\chi^2$ function in each
case was minimized over a fine grid of points with respect to
$\tan^2\theta_{12}$, $\sin^2\theta_{13}$, and $\Delta m^2_{21}$. The
$\Delta \chi^2 = \chi^2 - \chi^2_{\rm{min}}$ differences were the
indicators of the confidence levels (C.L.) in the one- and
two-dimensional projections. The 68\%, 95\%, and 99.73\% C.L. regions
in two-dimensional parameter projections were drawn following the
standard definitions: $\Delta \chi^2 =$ 2.279, 5.99, and 11.83,
respectively. For one-dimensional projections the errors on the
parameter were the standard $1 \sigma$ C.L. at $\Delta \chi^2 =
1$. For all projections shown in this section, the $\chi^2$ was
minimized with respect to the undisplayed parameters at each point in
the MSW space.
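These threshold values are the standard ones for two free parameters,
for which ${\rm C.L.} = 1 - e^{-\Delta\chi^2/2}$; for example,
\begin{displaymath}
1 - e^{-2.279/2} \simeq 0.68, \qquad
1 - e^{-11.83/2} \simeq 0.9973.
\end{displaymath}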
The information from the LETA survival probability measurement was
included by evaluating the polynomial survival probability and
day/night asymmetry (as defined in Eqs.~\eqref{eq:dn}
and~\eqref{eq:poly} of Sec.~\ref{s:kerpoly}) that best represented the
model prediction at each point in the MSW plane. To do this, it was
necessary to take into account the sensitivity of the SNO detector
(including effects such as the energy dependence of the cross
sections, reaction thresholds, and analysis cuts) so that the
parameterization of the model prediction at each point in the MSW
plane sampled the neutrino energy spectrum in the same manner and over
the same range as the data. We calculated the number of detected
events that passed all the cuts as a function of neutrino energy using
the Monte Carlo simulation; the resulting `detected neutrino energy
spectrum' (given in Table~\ref{t:enu_spec} in Appendix~\ref{a:tables})
was then distorted by the model-predicted
survival probability at each point in the MSW plane. This was fit to
a similarly obtained spectrum, now distorted by the polynomial
parameterization, allowing the five polynomial parameters to vary in
the fit. At each point in the plane, we then calculated the $\chi^2$
value of the fit of the model-predicted polynomial parameters ($c_0$,
$c_1$, $c_2$, $a_0$, and $a_1$) to the result from the signal
extraction, taking into account all uncertainties and correlations as
output by the signal extraction fit. The SNO rates from
Phase~III~\cite{snoncd} were treated as a separate data set.
Figure~\ref{f:contour-12-snoleta} shows the allowed regions of the
$(\tan^2\theta_{12},\Delta m^2_{21})$ parameter space when the LETA
data were analyzed in combination with the rates from
Phase~III~\cite{snoncd}. The $2\nu$ contours were projected from the
parameter space at a constant value of $\sin^2\theta_{13}=0.0$, making
them equivalent to an effective two-flavor analysis. While the best
fit point falls in the so-called `LOW' region, with $\Delta m^2_{21} =
1.15\,^{+0.38}_{-0.18}\times 10\,^{-7}(\mathrm{eV}^2)$ and
$\tan^2\theta_{12} =0.437\,^{+0.058}_{-0.058}$, the significance
levels of the LOW and the higher mass Large Mixing Angle (LMA) regions
are very similar. The predicted shape for the survival probability is
very flat in both regions, and the day/night asymmetry is expected to
be small, so the SNO-only analysis has little handle on distinguishing
the two regions. A notable difference between LOW and LMA is in the
predicted sign of the slope of the energy dependence of the day/night
asymmetry, with LOW predicting a negative slope, as was extracted in
the polynomial survival probability signal extraction fit reported in
Sec.~\ref{res:sprob}.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.48\textwidth]{contour_snoonly.eps}
\caption{(Color) SNO (LETA~+~Phase~III) two-flavor oscillation
parameter analysis.\label{f:contour-12-snoleta}}
\end{center}
\end{figure}
As described above, the observables from the SNO LETA fit used in the
hypothesis testing were the polynomial parameters of the survival
probability. In a full global analysis, event yields were used for
the other solar neutrino experiments, including the SNO Phase~III
results. For each set of parameters, the oscillation model was used
to predict the rates in the Chlorine~\cite{home2},
Gallium~\cite{sage}, and Borexino~\cite{bor} experiments, the
Super-Kamiokande Phase~I zenith spectra~\cite{sk1} and Phase~II
day/night spectra~\cite{sksol}, and the KamLAND rates and
spectrum~\cite{kam}, as well as the SNO rates~\cite{snoncd} and
spectra. The expected rates and spectra were divided by the
respective predictions, calculated without oscillations, to remove the
effects of the model scaling factors. The unitless rates were then
used in the global $\chi^2$ calculation.
Although the $\Phi_{^8{\rm B}}$ scale was determined in the LETA
signal extraction, we re-introduced it as a free parameter in the
$\chi^2$ minimization at each point in the parameter space to
constrain it with all solar data. The uncertainty of the scale was
retrieved from its marginal distribution, as was done for the
oscillation parameters.
The SNO LETA covariance matrix was taken from the signal extraction
output given in Table~\ref{t:poly_corr}, as before. For other
experiments, the total covariance matrix was assembled from the
individual statistical and systematic components, as described
in~\cite{nsp}. Correlations between SNO's LETA and other solar
experimental results were allowed via the floated $\Phi_{^8{\rm B}}$
scale parameter.
The KamLAND rates and spectrum were predicted using three-flavor
vacuum oscillations. Publicly available information about the KamLAND
detector and nearby reactors was included in our calculation, which
reproduced the unoscillated spectrum of Fig.~1 of Ref.~\cite{kam} with
good accuracy. To include the effects of three-flavor oscillations,
we then compared the $\chi^2$ obtained with non-zero values of
$\theta_{13}$ with those obtained with $\theta_{13}=0$, for each set
of ($\tan^{2}\theta_{12}$,\,$\Delta m^2_{21}$) values. In this way,
we built a $\Delta\chi^2$ function to parameterize the change of the
$\chi^2$ map in Fig.~2 of Ref.~\cite{kam} due to a non-zero value of
$\theta_{13}$. This allowed us to include the KamLAND experiment in
our three-flavor neutrino oscillation analysis and to precisely
reproduce KamLAND's two-flavor neutrino contours. When including the
KamLAND antineutrino spectrum we assumed CPT invariance, and we used
the KamLAND data only to constrain the oscillation parameters (as
opposed to the $^8$B flux scale), whereas all other solar neutrino
rates were used to collectively determine the absolute scale of the
${}^{8}\mathrm{B}$ neutrino flux as well as the oscillation
parameters.
Figure~\ref{f:contour-12-solar} shows the allowed regions of the
$(\tan^2\theta_{12},\Delta m^2_{21})$ parameter space when the global
solar data and the KamLAND data were analyzed, both separately and
together, in a two-flavor analysis. It is interesting to note that
the global solar analysis does not significantly alter the constraints
in the LMA region relative to the SNO-only analysis.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.42\textwidth]{contour_2solar4.eps}
\includegraphics[width=0.42\textwidth]{contour_2solkam4.eps}
\caption{(Color) Two-flavor oscillation parameter analysis for a)
global solar data and b) global solar + KamLAND data. The solar data
includes: SNO's LETA survival probability day/night curves; SNO
Phase~III integral rates; Cl; SAGE; Gallex/GNO; Borexino; SK-I zenith
and SK-II day/night spectra.
\label{f:contour-12-solar}}
\end{center}
\end{figure}
Figure~\ref{f:contour-3nu} shows the results of a three-flavor
oscillation analysis. Fig.~\ref{f:contour-3nu}(a) shows an overlay of
the global solar and the KamLAND allowed regions in
$(\tan^2\theta_{12},\Delta m^2_{21})$ parameter space, under a
two-flavor hypothesis. Fig.~\ref{f:contour-3nu}(b) shows the same
overlay for the three-flavor hypothesis. Allowing the value of
$\sin^2\theta_{13}$ to be non-zero clearly brings the two regions into
much better agreement. The three-flavor contours show the effect of
allowing both $\Phi_{^8{\rm B}}$ and $\sin^2\theta_{13}$ to float at
each point in space. Allowing these extra degrees of freedom worsens
the uncertainties on the two dominant oscillation parameters,
$\tan^2\theta_{12}$ and $\Delta m^2_{21}$. The regions obtained with
all solar data are consistent with the SNO-only data and show an
extension of the space towards larger values of $\tan^2\theta_{12}$
when $\sin^2\theta_{13}$ is allowed to vary. In contrast, the
three-flavor KamLAND contours show an extension towards smaller values
of $\tan^2\theta_{12}$.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.42\textwidth]{contour_2nuoverlay4.eps}
\includegraphics[width=0.42\textwidth]{contour_3nuoverlay4.eps}
\caption{(Color) Solar and KamLAND oscillation parameter analysis for
a) a two-flavor oscillation hypothesis and b) a three-flavor
hypothesis. The solar data includes SNO's LETA survival probability
day/night curves, SNO Phase~III integral rates, Cl, SAGE, Gallex/GNO,
Borexino, SK-I zenith and SK-II day/night spectra. The $\chi^2$ is
minimized with respect to all undisplayed parameters, including
$\sin^2\theta_{13}$ and $\Phi_{^8{\rm B}}$.\label{f:contour-3nu} }
\end{center}
\end{figure}
Figure~\ref{f:contour-13-solarkam} shows the confidence regions in the
$(\tan^2\theta_{12},\sin^2\theta_{13})$ space. The directionality of
the contours explains the excellent agreement of $\tan^2\theta_{12}$
between the solar and KamLAND experiments when $\sin^2\theta_{13}$ is
allowed to vary in the fit.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.42\textwidth]{contour_3nu_angles4.eps}
\caption{(Color) Solar oscillation parameter analysis, identical to
Fig.~\ref{f:contour-3nu}(b), but projected in the mixing angle space.
The $\chi^2$ is minimized with respect to all undisplayed parameters,
including $\Delta m^2_{21}$ and $\Phi_{^8{\rm
B}}$.\label{f:contour-13-solarkam}}
\end{center}
\end{figure}
Tables~\ref{t:oscpars} and~\ref{t:oscpars3} summarize the oscillation
parameter results from the various two- and three-flavor oscillation
analyses, respectively. When all solar experiments are combined with
data from the KamLAND reactor antineutrino experiment in a two-flavor
fit, the best fit point is found to be at
$\theta_{12}=34.06\,^{+1.16}_{-0.84}$ degrees and $\Delta
m^2_{21}=7.59\,^{+0.20}_{-0.21}\times 10^{-5}$~eV$^2$. The
uncertainty on the mixing angle has been noticeably reduced in
comparison to SNO's previous analyses, resulting in the world's best
measurement of $\theta_{12}$ to date. The global value of
$\Phi_{^8{\rm B}}$ from this fit is extracted to a precision of
$^{+2.38}_{-2.95}$\%. The combination with KamLAND in a three-flavor
fit has allowed us to constrain $\sin^2\theta_{13}$, giving a value of
$\sin^2\theta_{13}=2.00^{+2.09}_{-1.63}\times 10^{-2}$. This implies
an upper bound of $\sin^2\theta_{13}< 0.057$ (95\% C.L.).
\begin{table}[!h]
\begin{center}
\begin{tabular}{lcc}
\hline \hline Oscillation analysis & $\tan^2\theta_{12}$ & $\Delta
m^2_{21}(\mathrm{eV}^2)$\\ \hline SNO (LOW) &
$0.437\,^{+0.058}_{-0.058}$ & $1.15\,^{+0.38}_{-0.18}\times
10\,^{-7}$\\ SNO (LMA) & $0.457\,^{+0.038}_{-0.042}$ &
$5.50\,^{+2.21}_{-1.62}\times 10\,^{-5}$\\ Solar &
$0.457\,^{+0.038}_{-0.041}$ & $5.89\,^{+2.13}_{-2.16}\times
10\,^{-5}$\\ Solar+KamLAND & $0.457\,^{+0.040}_{-0.029}$ &
$7.59\,^{+0.20}_{-0.21}\times 10\,^{-5}$\\ \hline &
$\chi^2_{\mathrm{min}}/\mathrm{ndf}$ & $\Phi_{^8{\rm B}}$ ($\times
10^6\,\rm cm^{-2}\,s^{-1} $)\\ \hline SNO (LOW) & $6.80/9$ &
$5.013\,^{+0.176}_{-0.199}$ \\ SNO (LMA) & $8.20/9$ &
$4.984\,^{+0.205}_{-0.182}$ \\ Solar & $67.5/89$ &
$5.104\,^{+0.199}_{-0.148}$\\ Solar+KamLAND & $82.8/106$ &
$5.013\,^{+0.119}_{-0.148}$\\ \hline \hline
\end{tabular}
\caption{Best-fit neutrino oscillation parameters and extracted $^8$B
flux from a two-flavor oscillation analysis. The `SNO' results are
from the combined LETA + Phase~III oscillation analysis.
Uncertainties listed are $\pm 1\sigma$ after the $\chi^2$ was
minimized with respect to all other parameters.}
\label{t:oscpars}
\end{center}
\end{table}
\begin{table}[!h]
\begin{center}
\begin{tabular}{lcc}
\hline \hline Oscillation analysis & $\tan^2\theta_{12}$ & $\Delta
m^2_{21}(\mathrm{eV}^2)$\\ \hline Solar & $0.468\,^{+0.052}_{-0.050}$
& $6.31\,^{+2.49}_{-2.58}\times 10\,^{-5}$\\ Solar+KamLAND &
$0.468\,^{+0.042}_{-0.033}$ & $7.59\,^{+0.21}_{-0.21}\times
10\,^{-5}$\\ \hline & $\chi^2_{\mathrm{min}}/\mathrm{ndf}$ &
$\Phi_{^8{\rm B}}$ ($\times 10^6\,\rm cm^{-2}\,s^{-1} $)\\ \hline
Solar & $67.4/89$ & $5.115\,^{+0.159}_{-0.193}$\\ Solar+KamLAND &
$81.4/106$ & $5.087\,^{+0.171}_{-0.159}$\\ \hline
&\multicolumn{2}{c}{$\sin^2\theta_{13}(\times 10\,^{-2})$} \\ \hline
Solar & \multicolumn{2}{c}{$< 8.10\, (95\%\, {\rm\, C.L.)}$}\\
Solar+KamLAND& \multicolumn{2}{c}{$2.00\,^{+2.09}_{-1.63}$}\\ \hline
\hline
\end{tabular}
\caption{Best-fit neutrino oscillation parameters and extracted $^8$B
flux from a three-flavor oscillation analysis. Uncertainties listed
are $\pm 1\sigma$ after the $\chi^2$ was minimized with respect to all
other parameters.}
\label{t:oscpars3}
\end{center}
\end{table}
\section{Summary and Conclusions}
\label{s:summary}
We have described here a joint low energy threshold analysis
of SNO's Phase~I and Phase~II data sets down to an effective kinetic
energy threshold of $T_{\rm eff}=3.5$~MeV. The low threshold
increased the statistics of the CC and ES events by roughly 30\%, and
of NC events by $\sim$70\%. A new energy estimator improved the
energy resolution by 6\%, thus reducing the number of background
events reconstructing above threshold by $\sim$60\%. Separation of
electron-like and neutron capture events was improved by the joint fit
of data from Phases~I and~II, due to the difference in neutron
detection sensitivity in the two phases. In addition, use of
calibration data to correct the Monte Carlo-generated PDF shapes, and
reduction of systematic uncertainties, have all contributed to
increased precision on both the total $^8$B solar neutrino flux and
the derived neutrino mixing parameters. Fitting our data without
constraints on the shape of the underlying neutrino energy spectrum or
the unitarity of the mixing matrix gives a total $^8$B neutrino flux
of $\phi_{\rm NC} =
5.14^{+0.21}_{-0.20}\mbox{\,(stat\,$\oplus$\,syst)} \times
10^6$cm$^{-2}$ s$^{-1}$, measured by the NC reaction only, where
$\oplus$ refers to the quadrature sum. This is in good agreement with
the predictions of recent Standard Solar Models. The uncertainties on
this result are more than a factor of two better than in our previous
publications. The CC and ES reconstructed electron spectra for this
fit are consistent with the hypothesis of no spectral distortion, and
with the best fit LMA point.
We have also used the unique capabilities of the SNO detector
to perform the first direct fit to data for the energy-dependent
$\nu_e$ survival probability, without any reference to flux models or
other experiments. The fit for the survival probability assumes
unitarity of the neutrino mixing matrix, and that the underlying
neutrino spectrum follows a smoothly-distorted $^8$B shape. We have
parameterized the survival probability as a second-order polynomial,
allowing for a linear energy-dependent asymmetry between day and night
spectra. The fit gives us a total $^8$B neutrino flux of
$\Phi_{^8{\rm B}} =
5.05^{+0.19}_{-0.20}\mbox{\,(stat\,$\oplus$\,syst)} \times
10^6$cm$^{-2}$ s$^{-1}$. No evidence for either a significant
spectral distortion or a day/night asymmetry was found.
With the results of the survival probability fit, we have
created contours that show the allowed regions of the mixing
parameters, finding that for SNO data alone the best fit point is in
the LOW region of parameter space, but consistent with the LMA region
at the 68.3\% confidence level. Combining all solar experiments and
the KamLAND reactor antineutrino experiment in a two-flavor fit, we
find the best fit point is at $\theta_{12}=34.06\,^{+1.16}_{-0.84}$
degrees and $\Delta m^2_{21}=7.59\,^{+0.20}_{-0.21}\times
10^{-5}$~eV$^2$. The uncertainty on the mixing angle has been
noticeably reduced from SNO's previous analyses, resulting in the
world's best measurement of $\theta_{12}$. The global value of
$\Phi_{^8{\rm B}}$ from this fit was extracted to a precision of
$^{+2.38}_{-2.95}$\%. In a three-flavor fit, we find
$\sin^2\theta_{13}=2.00^{+2.09}_{-1.63}\times 10^{-2}$. This
implies an upper bound of $\sin^2\theta_{13}< 0.057$
at the 95\% confidence level.
\section{Acknowledgments}
This research was supported by: Canada: Natural Sciences and
Engineering Research Council, Industry Canada, National Research
Council, Northern Ontario Heritage Fund, Atomic Energy of Canada,
Ltd., Ontario Power Generation, High Performance Computing Virtual
Laboratory, Canada Foundation for Innovation, Canada Research Chairs;
US: Department of Energy, National Energy Research Scientific
Computing Center, Alfred P. Sloan Foundation; UK: Science and
Technology Facilities Council; Portugal: Funda\c{c}\~{a}o para a
Ci\^{e}ncia e a Tecnologia. We thank the SNO technical staff for
their strong contributions. We thank the University of Liverpool and
the Texas Advanced Computing Center for their grants of CPU time. We
thank NVIDIA for the donation of a Tesla graphics card. We thank Vale
Inco, Ltd. for hosting this project.
\bibliographystyle{apsrev}
\section{Introduction}
In \cite{lp} we obtain almost sure limits for the
$L^{ p}$ moduli of continuity of local times of a very wide class of
symmetric L\'evy processes. More specifically, if $\{L^{ x }_{ t}\,;\,(x,t)\in R^{ 1}\times R^{ 1}_{ +}\}$ denotes Brownian local time then for all
$ p\ge 1$, and all
$t\in R_+$
\begin{equation}
\lim_{ h\downarrow 0} \int_{a}^{ b} \bigg|{ L^{ x+h}_{ t} -L^{ x }_{
t}\over\sqrt{h}}\bigg|^p\,dx =2^pE(|\eta|^p)
\int_a^b |L^{ x }_{ t}|^{ p/2}\,dx\label{as.1}
\end{equation} for all
$a,b
$ in the extended real line almost surely, and also in $L^m$, $m\ge 1$.
(Here $\eta$ is a normal random variable with mean zero and variance
one.) In particular when $p=2$ we have
\begin{equation}
\lim_{ h\downarrow 0} \int { (L^{ x+h}_{ t} -L^{ x }_{
t})^{ 2}\over h}\,dx =4t, \hspace{.2 in}\mbox{ almost surely.} \label{rp3.1}
\end{equation}
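Indeed, (\ref{rp3.1}) is just the case $p=2$ of (\ref{as.1}) with $a=-\infty$ and $b=\infty$: since $E(\eta^{2})=1$ and $\int L^{ x}_{ t}\,dx=t$ by the occupation time formula, the right hand side of (\ref{as.1}) reduces to $4t$.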
We refer to $\int (L^{ x+h}_{ t} -L^{ x }_{
t})^{ 2} \,dx$ as the $L^{2}$ modulus of continuity of Brownian local time.
In our recent paper \cite{CLMR} we obtain the central limit theorem corresponding to (\ref{rp3.1}).
\begin{theorem}\label{theo-clt2} For each fixed $t$
\begin{equation} { \int ( L^{ x+h}_{t}- L^{ x}_{ t})^{ 2}\,dx- 4ht\over h^{ 3/2}}
\stackrel{\mathcal{L}}{\Longrightarrow}c\(\int ( L^{ x}_{ t})^{ 2}\,dx\)^{1/2}\,\,\eta\label{5.0weak}
\end{equation} as $h\rightarrow 0$, with $c=\({64 \over 3}\)^{ 1/2}$. Equivalently
\begin{equation} { \int ( L^{ x+1}_{t}- L^{ x}_{ t})^{ 2}\,dx- 4t\over t^{ 3/4}}
\stackrel{\mathcal{L}}{\Longrightarrow}c\(\int ( L^{ x}_{ 1})^{ 2}\,dx\)^{1/2}\,\,\eta\label{5.0tweak}
\end{equation}
as $t\rightarrow\infty$. Here $\eta$ is an independent normal random variable with mean zero and variance
one.
\end{theorem}
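In outline, the equivalence of (\ref{5.0weak}) and (\ref{5.0tweak}) comes from the Brownian scaling relation $\{L^{ hx}_{ h^{2}s}\,;\,x\in R^{ 1}\}\stackrel{\mathcal{L}}{=}\{hL^{ x}_{ s}\,;\,x\in R^{ 1}\}$: changing variables $x=hy$ gives, jointly in law,
\begin{equation}
\int ( L^{ x+h}_{h^{2}s}- L^{ x}_{ h^{2}s})^{ 2}\,dx\stackrel{\mathcal{L}}{=}h^{3}\int ( L^{ y+1}_{s}- L^{ y}_{ s})^{ 2}\,dy,\hspace{.3 in}\int ( L^{ x}_{h^{2}s})^{ 2}\,dx\stackrel{\mathcal{L}}{=}h^{3}\int ( L^{ y}_{s})^{ 2}\,dy,
\end{equation}
and applying the second identity with $s$ replaced by $1$ and $h$ by $s^{1/2}$ gives $\int ( L^{ y}_{s})^{ 2}\,dy\stackrel{\mathcal{L}}{=}s^{3/2}\int ( L^{ y}_{1})^{ 2}\,dy$. Writing $t=h^{2}s$, the statement (\ref{5.0weak}) for fixed $t$ as $h\rightarrow 0$ thus becomes (\ref{5.0tweak}) as $s\rightarrow\infty$.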
It can be shown that
\begin{equation}
E\(\int ( L^{ x+1}_{ t}- L^{ x}_{ t})^{ 2}\,dx\)=4\( t-{2t^{ 1/2} \over \sqrt{2\pi } }\)+O( 1),\label{9.13}
\end{equation}
so that (\ref{5.0tweak}) can be written as
\begin{equation} { \int ( L^{ x+1}_{t}- L^{ x}_{ t})^{ 2}\,dx- E\(\int ( L^{ x+1}_{ t}- L^{ x}_{ t})^{ 2}\,dx\)\over t^{ 3/4}}
\stackrel{\mathcal{L}}{\Longrightarrow}c\(\int ( L^{ x}_{ 1})^{ 2}\,dx\)^{1/2}\,\,\eta\label{5.0tweake}
\end{equation}
with a similar statement for (\ref{5.0weak}).
Our proof of Theorem \ref{theo-clt2} in \cite{CLMR} is rather long and involved. We use the method of moments, but rather than study the asymptotics of the moments of (\ref{5.0weak}), which seem intractable, we study the moments of the analogous expression where the fixed time $t$ is replaced by an independent exponential time of mean $1/\lambda$. An important part of the proof is then to `invert the Laplace transform' to obtain the asymptotics of the moments for fixed $t$.
The purpose of this paper is to give a new and shorter proof of Theorem \ref{theo-clt2} using stochastic integrals, following the approach of \cite{Yor, YW}. Our proof makes use of certain differentiability properties of the double and triple intersection local times, $ \alpha_{2,t}(x)$ and $\alpha_{3,t}(x,y)$, which are formally given by
\begin{equation}
\alpha_{2,t}(x)=\int_{0}^{t} \int_{0}^{s} \delta (W_{s}-W_{r}-x ) \,dr \,ds \label{6.3a}
\end{equation}
and
\begin{equation}
\alpha_{3,t}(x,y)= \int_{0}^{t} \int_{0}^{s}\int_{0}^{r} \delta (W_{r}-W_{r'}-x) \delta (W_{s}-W_{r}-y ) \,dr' \,dr \,ds.\label{6.14a}
\end{equation}
More precisely, let $f(x)$ be a smooth positive symmetric function with compact support and
$\int f(x)\,dx=1$. Set $f_{\epsilon}(x)={1 \over \epsilon}f(x/\epsilon)$. Then
\begin{equation}
\alpha_{2,t}(x)=\lim_{\epsilon\rightarrow 0}\int_{0}^{t} \int_{0}^{s} f_{\epsilon} (W_{s}-W_{r}-x ) \,dr \,ds \label{6.3}
\end{equation}
and
\begin{eqnarray}
&&
\alpha_{3,t}(x,y)\nonumber\\
&&= \lim_{\epsilon\rightarrow 0}\int_{0}^{t} \int_{0}^{s}\int_{0}^{r} f_{\epsilon} (W_{r}-W_{r'}-x) f_{\epsilon} (W_{s}-W_{r}-y ) \,dr' \,dr \,ds\label{6.14}
\end{eqnarray}
exist almost surely and in all $L^{p}$, are independent of the particular choice of $f$, and are continuous in $(x,y,t)$ almost surely, \cite{jc}. It is easy to show, see \cite[Theorem 2]{djcrilt}, that for any measurable $\phi(x)$
\begin{eqnarray}
&&
\int_{0}^{t} \int_{0}^{s} \phi (W_{s}-W_{r}) \,dr \,ds= \int \phi(x)\alpha_{2,t}(x)\,dx\label{6.14r2}
\end{eqnarray}
and for any measurable $\phi(x,y)$
\begin{eqnarray}
&&
\int_{0}^{t} \int_{0}^{s}\int_{0}^{r} \phi (W_{r}-W_{r'},\,W_{s}-W_{r}) \,dr' \,dr \,ds\nonumber\\
&&= \int \phi(x,y)\alpha_{3,t}(x,y)\,dx\,dy.\label{6.14r}
\end{eqnarray}
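For instance, taking $\phi\equiv 1$ in (\ref{6.14r2}) and (\ref{6.14r}) gives the sanity checks
\begin{equation}
\int \alpha_{2,t}(x)\,dx={t^{ 2} \over 2},\hspace{.3 in}\int\int \alpha_{3,t}(x,y)\,dx\,dy={t^{ 3} \over 6}.
\end{equation}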
To express the differentiability properties of $\alpha_{2,t}(x)$ and $\alpha_{3,t}(x,y)$ which we need, let us set
\begin{equation}
v(x)=\int_{0}^{\infty}e^{-s/2}p_{s}(x)\,ds=e^{-|x|}.\label{6.14s}
\end{equation}
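Here $p_{s}(x)=(2\pi s)^{ -1/2}e^{-x^{2}/2s}$ is the Brownian transition density, and the closed form in (\ref{6.14s}) is the standard one-dimensional resolvent formula
\begin{equation}
\int_{0}^{\infty}e^{-\lambda s}p_{s}(x)\,ds={1 \over \sqrt{2\lambda}}\,e^{-\sqrt{2\lambda}\,|x|},\hspace{.3 in}\lambda>0,
\end{equation}
evaluated at $\lambda=1/2$.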
The following result is \cite[Theorem 1]{djcrilt}.
\begin{theorem}\label{lem-diff}
\begin{equation}
\gamma_{2,t}(x):=\alpha_{2,t}(x)-tv(x)\label{6.18b}
\end{equation}
and
\begin{equation}
\gamma_{3,t}(x,y):=\alpha_{3,t}(x,y)-\gamma_{2,t}(x)v(y)-\gamma_{2,t}(y)v(x)-tv(x)v(y)\label{6.18}
\end{equation}
are $C^{1}$ in $x$ and in $(x,y)$ respectively, and $\nabla \gamma_{2,t}(x)$, $\nabla \gamma_{3,t}(x,y)$ are continuous in $(x,t)$ and $(x,y,t)$ respectively.
\end{theorem}
Our new proof of Theorem \ref{theo-clt2} is given in Section \ref{sec-stoch}.
Our original motivation for studying the
asymptotics of
$\int (L^{ x+h}_{ t} -L^{ x }_{
t})^{ 2}\,dx$ comes from our interest in the Hamiltonian
\begin{equation} H_{ n}=\sum_{ i,j=1,\,i\neq j}^{ n}1_{ \{S_{ i}=S_{ j} \}}-{1
\over 2}\sum_{ i,j=1,\,i\neq j}^{ n}1_{
\{|S_{ i}-S_{ j}|=1 \}},\label{rp5c.4}
\end{equation}
for the critical attractive
random polymer in dimension one,
\cite{HK}, where $\{S_{ n} \,;\,n=0,1,2,\ldots\}$ is a simple random walk on
$Z^{ 1}$. Note that
$ H_{ n}={1 \over 2}\sum_{x\in Z^{ 1}}\(l_{ n}^{ x}-l_{ n}^{ x+1}\)^{ 2}-n$, where $l_{ n}^{
x}=\sum_{ i=1}^{ n}1_{ \{S_{ i}=x \}}$ is the local time for the random walk
$S_{ n}$.
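To check this, note that $\sum_{x\in Z^{ 1}}(l_{ n}^{ x})^{ 2}=\sum_{ i,j=1}^{ n}1_{ \{S_{ i}=S_{ j} \}}=n+\sum_{ i\neq j}1_{ \{S_{ i}=S_{ j} \}}$, while $2\sum_{x\in Z^{ 1}}l_{ n}^{ x}l_{ n}^{ x+1}=\sum_{ i\neq j}1_{ \{|S_{ i}-S_{ j}|=1 \}}$, so that
\begin{equation}
H_{ n}=\sum_{x\in Z^{ 1}}(l_{ n}^{ x})^{ 2}-n-\sum_{x\in Z^{ 1}}l_{ n}^{ x}l_{ n}^{ x+1}={1 \over 2}\sum_{x\in Z^{ 1}}\(l_{ n}^{ x}-l_{ n}^{ x+1}\)^{ 2}-n.
\end{equation}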
\section{A stochastic calculus approach}\label{sec-stoch}
By \cite[Lemma 2.4.1]{book} we have that
\begin{equation}
L^{ x}_{ t}=\lim_{\epsilon\rightarrow 0}\int_{0}^{t} f_{\epsilon} (W_{s} -x ) \,ds \label{6.3l}
\end{equation}
almost surely, with convergence locally uniform in $x$. Hence
\begin{eqnarray}
&& \int L^{ x+h}_{ t}L^{ x}_{ t} \,dx\nonumber\\
&&= \int \lim_{\epsilon\rightarrow 0} \( \int_{0}^{t} f_{\epsilon} (W_{s}-(x+h))\,ds \)
\( \int_{0}^{t} f_{\epsilon} (W_{r}-x)\,dr \)\,dx
\label{6.2}\\
&&= \lim_{\epsilon\rightarrow 0}\int \( \int_{0}^{t} f_{\epsilon} (W_{s}-(x+h))\,ds \)
\( \int_{0}^{t} f_{\epsilon} (W_{r}-x)\,dr \)\,dx
\nonumber\\
&&= \lim_{\epsilon\rightarrow 0}\int_{0}^{t} \int_{0}^{t} f_{\epsilon}\ast f_{\epsilon} (W_{s}-W_{r}-h ) \,dr \,ds \nonumber\\
&&= \lim_{\epsilon\rightarrow 0}\int_{0}^{t} \int_{0}^{s} f_{\epsilon}\ast f_{\epsilon} (W_{s}-W_{r}-h ) \,dr \,ds\nonumber\\
&&\hspace{1.5 in}+ \lim_{\epsilon\rightarrow 0} \int_{0}^{t} \int_{0}^{r}f_{\epsilon}\ast f_{\epsilon} (W_{r}-W_{s}+h )\,ds \,dr \nonumber\\
&&=\alpha_{2,t}(h)+\alpha_{2,t}(-h).\nonumber
\end{eqnarray}
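In particular, taking $h=0$ in (\ref{6.2}) shows that $\int (L^{ x}_{ t})^{ 2}\,dx=2\,\alpha_{2,t}(0)$.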
Note that
\begin{equation}
\int ( L^{ x+h}_{ t}- L^{ x}_{ t})^{ 2}\,dx=2\( \int (L^{ x}_{ t} )^{ 2}\,dx-
\int L^{ x+h}_{ t}L^{ x}_{ t} \,dx\)\label{6.1}
\end{equation}
and thus
\begin{equation}
\int ( L^{ x+h}_{ t}- L^{ x}_{ t})^{ 2}\,dx=2\( 2\alpha_{2,t}(0)- \alpha_{2,t}(h)- \alpha_{2,t}(-h)\).\label{6.4}
\end{equation}
Hence we can prove Theorem \ref{theo-clt2} by showing that for each fixed $t$
\begin{equation} { 2\( 2\alpha_{2,t}(0)- \alpha_{2,t}(h)- \alpha_{2,t}(-h)\)- 4ht\over h^{ 3/2}}
\stackrel{\mathcal{L}}{\Longrightarrow}c\sqrt{\alpha_{2,t}(0)}\,\,\eta\label{5.0weaksi}
\end{equation}
as $h\rightarrow 0$, with $c=\({128 \over 3}\)^{ 1/2}$. Here we used the fact, which follows from (\ref{6.2}) with $h=0$, that $\int ( L^{ x}_{ t})^{ 2}\,dx= 2\,\,\alpha_{2,t}(0)$.
In proving (\ref{5.0weaksi}) we will need the following Lemma. Compare Tanaka's formula, \cite[Chapter VI, Theorem 1.2]{RY}.
\begin{lemma}\label{lem-Ito}For any $a\in R^{1}$
\begin{eqnarray}
&&
\alpha_{2,t}(a)=2\int_{0}^{t} (W_{t}-W_{s}-a)^{+}\,ds -2\int_{0}^{t} (W_{0}-W_{s}-a)^{+}\,ds \nonumber\\
&& \hspace{1.5 in}-2(-a)^{+}t-2 \int_{0}^{t} \int_{0}^{s} 1_{\{W_{s}-W_{r}>a \}}\,dr\,dW_{s}. \label{it.1}
\end{eqnarray}
\end{lemma}
{\bf Proof of Lemma \ref{lem-Ito}: }Set
\begin{equation}
g_{\epsilon}(x)=\int_{0}^{\infty} yf_{\epsilon}(x-y)\,dy\label{it.2}
\end{equation}
so that, integrating by parts,
\begin{equation}
g'_{\epsilon}(x)=\int_{0}^{\infty} yf'_{\epsilon}(x-y)\,dy=\int_{0}^{\infty} f_{\epsilon}(x-y)\,dy\label{it.3}
\end{equation}
and consequently
\begin{equation}
g''_{\epsilon}(x)=f_{\epsilon}(x). \label{it.4}
\end{equation}
Let
\begin{equation}
F_{a}(t,x)= \int_{0}^{t} g_{\epsilon}(x-W_{s}-a)\,ds.
\label{6.5}
\end{equation}
Then by It\^{o}'s formula applied to $F_{a}(t,W_{t})$ we have
\begin{eqnarray}
&&\int_{0}^{t} g_{\epsilon}(W_{t}-W_{s}-a)\,ds -\int_{0}^{t} g_{\epsilon}(W_{0}-W_{s}-a)\,ds
\label{6.6}\\
&& = \int_{0}^{t} g_{\epsilon}(-a)\,ds+ \int_{0}^{t} \int_{0}^{s} g'_{\epsilon}(W_{s}-W_{r} -a)\,dr\,dW_{s}\nonumber\\
&&\hspace{1 in}+{1 \over 2}\int_{0}^{t} \int_{0}^{s} g''_{\epsilon}(W_{s}-W_{r} -a)\,dr\,ds.\nonumber
\end{eqnarray}
It is easy to check that locally uniformly
\begin{equation}
\lim_{\epsilon\rightarrow 0}g_{\epsilon}(x)=x^{+}\label{it.5}
\end{equation}
and hence using (\ref{it.4}) we obtain
\begin{eqnarray}
&&
\alpha_{2,t}(a)=2\int_{0}^{t} (W_{t}-W_{s}-a)^{+}\,ds -2\int_{0}^{t} (W_{0}-W_{s}-a)^{+}\,ds \nonumber\\
&& \hspace{1.2 in}-2(-a)^{+}t-2\lim_{\epsilon\rightarrow 0}\int_{0}^{t} \int_{0}^{s} g'_{\epsilon}(W_{s}-W_{r} -a)\,dr\,dW_{s}. \label{it.1a}
\end{eqnarray}
From (\ref{it.3}) we see that $g'_{\epsilon}(x)=\int_{-\infty}^{x} f_{\epsilon}(u)\,du$, so that $\sup_{x} |g'_{\epsilon}(x)|\leq 1$ and, by the symmetry of $f$,
\begin{equation}
\lim_{\epsilon\rightarrow 0}g'_{\epsilon}(x)=1_{\{x>0\}}+{1 \over 2}1_{\{x=0\}}.\label{it.5a}
\end{equation}
Thus by the dominated convergence theorem
\begin{equation}
\lim_{\epsilon\rightarrow 0}\int_{0}^{t}E\(\( \int_{0}^{s} \left\{ g'_{\epsilon}(W_{s}-W_{r} -a)- 1_{\{W_{s}-W_{r}>a \}} \right\}\,dr \)^{2}\)\,ds =0\label{it.6}
\end{equation}
from which, by the It\^{o} isometry, it follows that the approximating stochastic integrals in (\ref{it.1a}) converge in $L^{2}$ to $\int_{0}^{t} \int_{0}^{s} 1_{\{W_{s}-W_{r}>a \}}\,dr\,dW_{s}$. This completes the proof of our Lemma.{\hfill $\square$ \bigskip}
If we now set
\begin{eqnarray}
&&
J_{h}(x)=2x^{+}-(x-h)^{+}-(x+h)^{+}\nonumber\\
&&\hspace{.45in} =\left\{\begin{array}{ll}
-x-h&\mbox{ if }-h\leq x\leq 0\\
x-h&\mbox{ if }0\leq x\leq h\\
0&\mbox{ if }|x|>h
\end{array}
\right.\label{6.8a}
\end{eqnarray}
and
\begin{eqnarray}
K_{h}(x) &=&21_{\{x>0\}}-1_{\{x>h\}}-1_{\{x>-h\}}\label{6.9}\\
&=&1_{\{0<x\leq h\}}-1_{\{-h<x\leq 0\}}\nonumber
\end{eqnarray}
we see from Lemma \ref{lem-Ito} that
\begin{eqnarray}
&&2\left\{ 2\alpha_{2,t}(0)- \alpha_{2,t}(h)- \alpha_{2,t}(-h)\right\}-4ht
\label{6.10}\\
&&= 4 \int_{0}^{t} J_{h}(W_{t}-W_{s})\,ds -4\int_{0}^{t} J_{h}(W_{0}-W_{s})\,ds \nonumber\\
&&\hspace{1 in}-4\int_{0}^{t} \int_{0}^{s} K_{h}(W_{s}-W_{r} )\,dr\,dW_{s}.\nonumber
\end{eqnarray}
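Note for later use that, by (\ref{6.8a}) and (\ref{6.9}), $|J_{h}(x)|=(h-|x|)^{+}$ so that $\int |J_{h}(x)|\,dx=h^{2}$, and that $K_{h}(x)=K_{1}(x/h)$ with $\int K_{h}(x)\,dx=0$.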
By (\ref{6.8a})
\begin{equation}
\int_{0}^{t} J_{h}(W_{t}-W_{s})\,ds= \int J_{h}(W_{t}-x)L^{x}_{t}\,dx=O(h^{2}\sup_{x}L^{x}_{t})\label{6.8b}
\end{equation}
and similarly for $\int_{0}^{t} J_{h}(W_{0}-W_{s})\,ds$, so that both of these terms are $o( h^{ 3/2})$ almost surely. Hence to prove (\ref{5.0weaksi}) it suffices to show that for each fixed $t$
\begin{equation} { \int_{0}^{t} \int_{0}^{s} K_{h}(W_{s}-W_{r} )\,dr\,dW_{s}\over h^{ 3/2}}
\stackrel{\mathcal{L}}{\Longrightarrow}\({8 \over 3}\)^{ 1/2}\sqrt{\alpha_{2,t}(0)}\,\,\eta\label{5.0weaksia}
\end{equation}
as $h\rightarrow 0$. Let
\begin{equation}
M^{h}_{t}=h^{- 3/2}\int_{0}^{t} \int_{0}^{s} K_{h}(W_{s}-W_{r} )\,dr\,dW_{s}.\label{6.11}
\end{equation}
It follows from the proof of Theorem 2.6 in \cite[Chapter XIII]{RY}
(the theorem of Papanicolaou, Stroock, and Varadhan)
that to establish (\ref{5.0weaksia})
it suffices to show that
\begin{equation}
\lim_{h\rightarrow 0} \langle M^{h}, W\rangle_{t} =0\label{6.26h1}
\end{equation}
and
\begin{equation}
\lim_{h\rightarrow 0} \langle M^{h}, M^{h}\rangle_{t} ={8 \over 3} \alpha_{2,t}(0)\label{6.26h}
\end{equation}
uniformly in $t$ on compact intervals.
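Indeed, (\ref{6.26h1}) says that the continuous martingales $M^{h}$ are asymptotically orthogonal to $W$, while (\ref{6.26h}) identifies their limiting quadratic variation; together these imply that $M^{h}$ converges in law to $\beta_{{8 \over 3} \alpha_{2,\cdot}(0)}$, where $\beta$ is a Brownian motion independent of $W$, and evaluating at time $t$ gives the mixed Gaussian limit $\({8 \over 3}\alpha_{2,t}(0)\)^{1/2}\eta$ of (\ref{5.0weaksia}).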
By (\ref{6.14r2}), and using the fact that $ K_{h}(x) = K_{1}(x/h) $, we have that
\begin{eqnarray}
\langle M^{h}, W\rangle_{t}&=&h^{- 3/2}\int_{0}^{t} \int_{0}^{s} K_{h}(W_{s}-W_{r} )\,dr\,ds
\label{6.14r5}\\
&=&h^{- 3/2}\int K_{h}(x)\alpha_{2,t}(x)\,dx \nonumber\\
&=&h^{- 1/2}\int K_{1}(x)\alpha_{2,t}(hx)\,dx \nonumber\\
&=&\int_{0}^{1}{ \alpha_{2,t}(hx)- \alpha_{2,t}(-hx) \over h^{1/2}}\,dx. \nonumber
\end{eqnarray}
But $v(hx)=v(-hx)$, so by Theorem \ref{lem-diff} we have that
\begin{eqnarray}
&&\alpha_{2,t}(hx)- \alpha_{2,t}(-hx)=\gamma_{2,t}(hx)- \gamma_{2,t}(-hx)=O(h) \label{6.14r6}
\end{eqnarray}
uniformly in $t$ on compact intervals, since $\nabla \gamma_{2,t}(x)$ is continuous in $(x,t)$. Dividing by $h^{1/2}$ and integrating over $x\in [0,1]$ completes the proof of (\ref{6.26h1}).
We next analyze
\begin{eqnarray}
&&
\langle M^{h}, M^{h}\rangle_{t}=h^{-3}\int_{0}^{t}\( \int_{0}^{s} K_{h}(W_{s}-W_{r} )\,dr\)^{2}\,ds \label{6.12}\\
&&=h^{-3}\int_{0}^{t}\( \int_{0}^{s} K_{h}(W_{s}-W_{r} )\,dr\)\( \int_{0}^{s} K_{h}(W_{s}-W_{r'} )\,dr'\) \,ds \nonumber\\
&&=h^{-3}\int_{0}^{t}\( \int_{0}^{s}\int_{0}^{r} K_{h}(W_{s}-W_{r'} )K_{h}(W_{s}-W_{r} )\,dr' \,dr\) \,ds \nonumber\\
&&+h^{-3}\int_{0}^{t}\( \int_{0}^{s}\int_{0}^{r'} K_{h}(W_{s}-W_{r} )K_{h}(W_{s}-W_{r'} )\,dr \,dr'\) \,ds. \nonumber
\end{eqnarray}
By (\ref{6.14r}) we have that
\begin{eqnarray}
&&\int_{0}^{t} \int_{0}^{s}\int_{0}^{r} K_{h}(W_{s}-W_{r'} )K_{h}(W_{s}-W_{r} )\,dr' \,dr \,ds
\label{6.13}\\
&&= \int_{0}^{t} \int_{0}^{s}\int_{0}^{r} K_{h}(W_{s}-W_{r} +W_{r}-W_{r'})K_{h}(W_{s}-W_{r} )\,dr' \,dr \,ds \nonumber\\
&&= \int \int K_{h}(x+y)K_{h}(y)\alpha_{3,t}(x,y) \,dx \,dy. \nonumber
\end{eqnarray}
Using $ K_{h}(x) = K_{1}(x/h) $ we have
\begin{eqnarray}
&&h^{-3}\int_{0}^{t} \int_{0}^{s}\int_{0}^{r} K_{h}(W_{s}-W_{r'} )K_{h}(W_{s}-W_{r} )\,dr' \,dr \,ds
\label{6.15}\\
&&= h^{-3} \int \int K_{h}(x+y)K_{h}(y)\alpha_{3,t}(x,y) \,dx \,dy \nonumber \\
&&= h^{-1} \int \int K_{1}(x+y) K_{1}(y)\alpha_{3,t}(hx,hy) \,dx \,dy \nonumber\\
&&= h^{-1}\int \int K_{1}(x)K_{1}( y)\alpha_{3,t}(h(x-y),hy) \,dx \,dy \nonumber\\
&&= h^{-1} \int_{0}^{1} \int_{0}^{1} A_{3,t}(h,x,y) \,dx \,dy \nonumber
\end{eqnarray}
where
\begin{eqnarray}
&&
A_{3,t}(h,x,y)=\alpha_{3,t}(h(x-y),hy)-\alpha_{3,t}(h(-x-y),hy)\label{6.16}\\
&&\hspace{1 in}-\alpha_{3,t}(h(x+y),-hy)+\alpha_{3,t}(-h(x-y),-hy).\nonumber
\end{eqnarray}
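To verify the last step in (\ref{6.15}), write $K_{1}(x)=1_{\{0<x\leq 1\}}-1_{\{-1<x\leq 0\}}$, split the integral into the four quadrants, and substitute $x\rightarrow -x$ and/or $y\rightarrow -y$ in the quadrants with negative coordinates; the four resulting integrals over $[0,1]^{2}$ are precisely the four terms of (\ref{6.16}).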
It remains to consider
\begin{equation}
\lim_{h\rightarrow 0}{ A_{3,t}(h,x,y)\over h}.\label{6.17}
\end{equation}
We now use Theorem \ref{lem-diff}.
Using the fact that $\gamma_{3,t}(x,y), \gamma_{2,t}(x)$ are continuously differentiable
\begin{eqnarray}
&& \gamma_{3,t}(h(x-y),hy)- \gamma_{3,t}(h(-x-y),hy)
\label{6.19}\\
&& =h(x-y){\partial \over \partial x}\gamma_{3,t}(0,hy)-h(-x-y){\partial \over \partial x} \gamma_{3,t}(0,hy) +o(h)\nonumber\\
&& =2hx{\partial \over \partial x} \gamma_{3,t}(0,0) +o(h)\nonumber
\end{eqnarray}
and similarly
\begin{eqnarray}
&& \gamma_{3,t}(-h(x-y),-hy)- \gamma_{3,t}(h(x+y),-hy)
\label{6.19b}\\
&& =-h(x-y){\partial \over \partial x} \gamma_{3,t}(0,-hy)-h(x+y){\partial \over \partial x} \gamma_{3,t}(0,-hy) +o(h)\nonumber\\
&& =-2hx{\partial \over \partial x} \gamma_{3,t}(0,0) +o(h)\nonumber
\end{eqnarray}
and these two terms cancel up to $o(h)$.
Next,
\begin{eqnarray}
&& \gamma_{2,t}(h(x-y))v(hy)- \gamma_{2,t}(h(-x-y))v(hy)
\label{6.20}\\
&& \hspace{1 in}+ \gamma_{2,t}(-h(x-y))v(-hy)-\gamma_{2,t}(h(x+y))v(-hy) \nonumber\\
&& = h(x-y)\gamma'_{2,t}(0)v(0)-h(-x-y) \gamma'_{2,t}(0)v(0)\nonumber\\
&& \hspace{.6 in}-h(x-y) \gamma'_{2,t}(0)v(0)-h(x+y)\gamma'_{2,t}(0)v(0)+o(h) \nonumber\\
&& =o(h). \nonumber
\end{eqnarray}
On the other hand, using $v(x)=e^{-|x|}=1-|x|+ O(x^{2})$ we have
\begin{eqnarray}
&& v(h(x-y))\gamma_{2,t}(hy)- v(h(-x-y))\gamma_{2,t}(hy)
\label{6.21}\\
&& \hspace{1 in}+ v(-h(x-y))\gamma_{2,t}(-hy)-v(h(x+y))\gamma_{2,t}(-hy) \nonumber\\
&& =-| h(x-y)|\gamma_{2,t}(0)+|h(-x-y)|\gamma_{2,t}(0)\nonumber\\
&& \hspace{.6 in}-|h(x-y)|\gamma_{2,t}(0)+|h(x+y)|\gamma_{2,t}(0)+o(h) \nonumber\\
&& =2h(|x+y|-|x-y|)\gamma_{2,t}(0)+o(h) \nonumber
\end{eqnarray}
and similarly
\begin{eqnarray}
&& v(h(x-y))v(hy)- v(h(-x-y))v(hy)
\label{6.22}\\
&& \hspace{1 in}+ v(-h(x-y))v(-hy)-v(h(x+y))v(-hy) \nonumber\\
&& =-| h(x-y)|v(0)+|h(-x-y)|v(0)\nonumber\\
&& \hspace{.6 in}-|h(x-y)|v(0)+|h(x+y)|v(0)+O(h^{2}) \nonumber\\
&& =2h(|x+y|-|x-y|)v(0)+O(h^{2}). \nonumber
\end{eqnarray}
Putting this all together and using the fact that $\alpha_{2,t}(0)=\gamma_{2,t}(0)+tv(0)$ we see that
\begin{eqnarray}
&&\int_{0}^{1} \int_{0}^{1} A_{3,t}(h,x,y) \,dx \,dy \label{6.23}\\
&&
=2h\alpha_{2,t}(0)\int_{0}^{1} \int_{0}^{1}(|x+y|-|x-y|)\,dx \,dy+o(h).\nonumber
\end{eqnarray}
Of course, since $|x+y|-|x-y|=2(x\wedge y)$ for $x,y\geq 0$, \begin{eqnarray}
&&\int_{0}^{1} \int_{0}^{1}(|x+y|-|x-y|)\,dx \,dy
\label{6.24}\\
&& =\int_{0}^{1} \int_{0}^{x}2y\,dy \,dx+\int_{0}^{1} \int_{0}^{y}2x\,dx \,dy={2 \over 3} \nonumber
\end{eqnarray}
so that
\begin{equation}
\lim_{h\rightarrow 0}{\int_{0}^{1} \int_{0}^{1} A_{3,t}(h,x,y) \,dx \,dy \over h}={4 \over 3} \alpha_{2,t}(0).\label{6.25}
\end{equation}
Since the two terms in (\ref{6.12}) are equal, (\ref{6.15}) and (\ref{6.25}) give (\ref{6.26h}), again uniformly in $t$ on compact intervals.{\hfill $\square$ \bigskip}
\def\noopsort#1{} \def\printfirst#1#2{#1}
\def\singleletter#1{#1}
\def\switchargs#1#2{#2#1}
\def\bibsameauth{\leavevmode\vrule height .1ex
depth 0pt width 2.3em\relax\,}
\makeatletter
\renewcommand{\@biblabel}[1]{\hfill#1.}\makeatother